From 885b8709bb72519f28d90f97d63372dba17c763f Mon Sep 17 00:00:00 2001
From: eopXD
Date: Wed, 1 Nov 2023 02:42:14 -0700
Subject: [PATCH 001/151] Add specifications for the bfloat16 intrinsics

Extracted the references as a separate adoc file.

Signed-off-by: eop Chen
---
 doc/header.adoc               |  4 +++
 doc/references.adoc           | 67 +++++++++++++++++++++++++++++++++++
 doc/vector-bfloat16-spec.adoc | 32 +++++++++++++++++
 3 files changed, 103 insertions(+)
 create mode 100644 doc/references.adoc
 create mode 100644 doc/vector-bfloat16-spec.adoc

diff --git a/doc/header.adoc b/doc/header.adoc
index f55edf3a4..407d4179c 100644
--- a/doc/header.adoc
+++ b/doc/header.adoc
@@ -46,6 +46,10 @@ may not conform to the future standard.
 include::preface.adoc[]
 
 include::rvv-intrinsic-spec.adoc[]
 
+include::vector-bfloat16-spec.adoc[]
+
+include::references.adoc[]
+
 include::rvv-intrinsic-examples.adoc[]

diff --git a/doc/references.adoc b/doc/references.adoc
new file mode 100644
index 000000000..d5197e47f
--- /dev/null
+++ b/doc/references.adoc
@@ -0,0 +1,67 @@
+== References
+
+^0^https://github.com/riscv/riscv-v-spec/blob/master/v-spec.adoc[Github - riscv/riscv-v-spec/v-spec.adoc]
+
+NOTE: Standard extensions are merged into `riscv/riscv-isa-manual` after ratification. An ongoing pull request ^26^ proposes merging the "V" extension. For now, this intrinsics specification still references the frozen draft ^0^; the reference will be updated once the pull request is merged.
+
+^1^https://github.com/riscv-non-isa/riscv-c-api-doc/blob/master/riscv-c-api.md[Github - riscv-non-isa/riscv-c-api-doc/riscv-c-api.md]
+
+^2^https://llvm.org/docs/RISCVUsage.html[User Guide for RISC-V Target]
+
+^3^https://gcc.gnu.org/onlinedocs/gcc/RISC-V-Options.html[RISC-V Options (Using the GNU Compiler Collection (GCC))]
+
+^4^Section 3.4.1 (Vector selected element width `vsew[2:0]`) in the specification ^0^
+
+^5^Section 3.4.2 (Vector Register Grouping `vlmul[2:0]`) in the specification ^0^
+
+^6^Section 3.4.3 (Vector Tail Agnostic and Vector Mask Agnostic `vta` and `vma`) in the specification ^0^
+
+^7^Section 5.3 (Vector Masking) in the specification ^0^
+
+^8^Section 3.8 (Vector Fixed-Point Rounding Mode Register `vxrm`) in the specification ^0^
+
+^9^https://github.com/riscv-non-isa/riscv-elf-psabi-doc/blob/master/riscv-cc.adoc#vector-register-convention[psABI: Vector Register Convention]
+
+^10^https://riscv.org/wp-content/uploads/2017/05/riscv-spec-v2.2.pdf[The RISC-V Instruction Set Manual: 8.2 Floating-Point Control and Status Register]
+
+^11^Section 3.5 (Vector Length Register) in the specification ^0^
+
+^12^Section 3.4.2 in the specification ^0^
+
+^13^Section 11.13, 11.14, 13.6, 13.7 in the specification ^0^
+
+^14^Section 4.5 (Mask Register Layout) in the specification ^0^
+
+^15^Section 7.5 in the specification ^0^
+
+^16^Section 7.8 in the specification ^0^
+
+^17^Section 5.2 (Vector Operands) in the specification ^0^
+
+^18^Section 6 (Configuration-Setting Instructions) in the specification ^0^
+
+^19^Section 18 (Standard Vector Extensions) in the specification ^0^
+
+^20^Section 18.2 (Zve*: Vector Extensions for Embedded Processors) in the specification ^0^
+
+^21^Section 12 (Vector Fixed-Point Arithmetic Instructions) in the specification ^0^
+
+^22^Section 3.9 (Vector Fixed-Point Saturation Flag `vxsat`) in the specification ^0^
+
+^23^Section 13 (Vector Floating-Point Instructions) in the specification ^0^
+
+^24^Section 16.3.1 (Vector Slideup Instructions) in the specification ^0^
+
+^25^Section 3.7 (Vector Start Index CSR `vstart`) in the specification ^0^
+
+^26^https://github.com/riscv/riscv-isa-manual/pull/1088[riscv/riscv-isa-manual#1088]
+
+^27^Section 6.3 (Constraints on Setting `vl`) in the specification ^0^
+
+^28^Section 6.4 (Example of stripmining and changes to SEW) in the specification ^0^
+
+^29^Section 3.6 (Vector Byte Length `vlenb`) in the specification ^0^
+
+^30^Section 16.6 (Whole Vector Register Move) in the specification ^0^
+
+^31^https://github.com/riscv/riscv-bfloat16/releases[RISC-V BFloat16 Specification]
\ No newline at end of file

diff --git a/doc/vector-bfloat16-spec.adoc b/doc/vector-bfloat16-spec.adoc
new file mode 100644
index 000000000..523577759
--- /dev/null
+++ b/doc/vector-bfloat16-spec.adoc
@@ -0,0 +1,32 @@
+== Intrinsics for BFloat16 (Brain Float 16) instruction set extensions
+
+The RISC-V vector C intrinsics support intrinsics that expose control of the BFloat16 (Brain Float 16) instruction set extensions ^31^.
+
+[[bf16-naming-scheme]]
+=== Naming scheme
+
+The BFloat16 intrinsics follow the naming scheme defined under <>, with `bf` as the abbreviation for the BFloat16 types in the function suffix.
+
+[[bf16-vector-programming-model]]
+=== Control of the vector extension programming model
+
+The BFloat16 intrinsics provide the same control of the vector programming model defined under <>. Intrinsics that represent BFloat16 instructions affected by `frm` (`vfncvtbf16.f.f.w` and `vfwmaccbf16`) follow what is defined under <> and provide the variants of <> and <>.
+
+[[bf16-type-system]]
+=== Type system
+
+Floating-point types have EEW and EMUL encoded into the type. In the table below, the first row gives the EMUL and the first column gives the data type and element width of the scalar type.
+
+Floating-point types with an element width of 16 (type `bfloat16_t`) require the `zfbfmin` and `zvfbfmin` extensions to be specified in the architecture.
+
+.BFloat16 types
+[options="autowidth,header",float="center",align="center",cols="<1,<2,<2,<2,<2,<2,<2,<2"]
+|===
+| Types | EMUL=1/8 | EMUL=1/4 | EMUL=1/2 | EMUL=1 | EMUL=2 | EMUL=4 | EMUL=8
+| bfloat16_t | N/A | vbfloat16mf4_t | vbfloat16mf2_t | vbfloat16m1_t | vbfloat16m2_t | vbfloat16m4_t | vbfloat16m8_t
+|===
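+
+Below is a minimal usage sketch (an editorial illustration, not part of the generated listings). It assumes a toolchain whose `-march` string enables the vector extension together with `zvfbfmin`, and combines the unit-stride load/store intrinsics from this specification with the standard `__riscv_vsetvl_e16m1` intrinsic from the base vector intrinsics.
+
+[,c]
+----
+#include <riscv_vector.h>
+#include <stddef.h>
+
+// Copy n BFloat16 elements from src to dst, strip-mined with vsetvl.
+void bf16_copy(__bf16 *dst, const __bf16 *src, size_t n) {
+  for (size_t vl; n > 0; n -= vl, src += vl, dst += vl) {
+    vl = __riscv_vsetvl_e16m1(n);
+    vbfloat16m1_t v = __riscv_vle16_v_bf16m1(src, vl);
+    __riscv_vse16_v_bf16m1(dst, v, vl);
+  }
+}
+----
+
+[[bf16-pseudo-intrinsics]]
+=== Pseudo intrinsics
+
+The RISC-V vector BFloat16 types (provided under <>) also have the pseudo intrinsic variants from <> to help with variable declaration and manipulation across intrinsic types.

From 068b10ebba567374f9eee58fc1c015ba40b0cc90 Mon Sep 17 00:00:00 2001
From: eopXD
Date: Sat, 4 Nov 2023 06:27:36 -0700
Subject: [PATCH 002/151] Define intrinsics that enable the use of BFloat16
 types

Define (non-segment/segment) load/store intrinsics for bfloat16 values and
also pseudo utility functions for manipulation across types.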
Signed-off-by: eop Chen
---
 .../rvv_intrinsic_gen/bfloat16_inst.py        | 138 ++++++++++++++++++
 .../rvv_intrinsic_gen/generator.py            |   1 +
 .../templates/reint_op_template.py            |  18 ++-
 .../rvv_intrinsic_gen/utils.py                |   3 +
 4 files changed, 155 insertions(+), 5 deletions(-)
 create mode 100644 rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py
new file mode 100644
index 000000000..771f4fb14
--- /dev/null
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py
@@ -0,0 +1,138 @@
+"""
+--------------------------------------------------------------------------------
+Copyright 2023 SiFive Inc
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+--------------------------------------------------------------------------------
+
+Declares the BFloat16 intrinsics and links them to the templates that realize
+them into function prototypes. The documents are generated following the
+sequence and grouping defined here.
+"""
+
+from intrinsic_decorator import IntrinsicDecorators
+from generator import CompatibleHeaderGenerator
+from templates import load_template
+from templates import seg_load_template
+from templates import store_template
+from templates import seg_store_template
+from templates import reint_op_template
+from templates import get_set_diff_lmul_op_template
+from templates import misc_op_template
+from constants import LMULS
+
+SEWS = [16]
+TYPES = ["bfloat"]
+
+
+def gen(g):
+  if isinstance(g, CompatibleHeaderGenerator):
+    assert False, "BFloat16 intrinsics are only supported after v1.0"
+  decorators = IntrinsicDecorators(g.has_tail_policy)
+
+  ####################################################################
+  g.start_group("BFloat16 Vector Loads and Stores Intrinsics")
+
+  g.function_group(load_template, "Vector Unit-Stride Load Intrinsics",
+                   "bf16-vector-unit-stride-load", ["vle"], TYPES, SEWS, LMULS,
+                   decorators.has_masking_maskedoff_policy)
+
+  g.function_group(store_template, "Vector Unit-Stride Store Intrinsics",
+                   "bf16-vector-unit-stride-store", ["vse"], TYPES, SEWS, LMULS,
+                   decorators.has_masking_no_maskedoff)
+
+  g.function_group(load_template, "Vector Strided Load Intrinsics",
+                   "vector-strided-load", ["vlse"], TYPES, SEWS, LMULS,
+                   decorators.has_masking_maskedoff_policy)
+
+  g.function_group(store_template, "Vector Strided Store Intrinsics",
+                   "vector-strided-store", ["vsse"], TYPES, SEWS, LMULS,
+                   decorators.has_masking_no_maskedoff)
+
+  g.function_group(load_template, "Vector Indexed Load Intrinsics",
+                   "vector-indexed-load", ["vloxei", "vluxei"], TYPES, SEWS,
+                   LMULS, decorators.has_masking_maskedoff_policy)
+
+  g.function_group(store_template, "Vector Indexed Store Intrinsics",
+                   "vector-indexed-store", ["vsoxei", "vsuxei"], TYPES, SEWS,
+                   LMULS, decorators.has_masking_no_maskedoff)
+
+  g.function_group(load_template,
+                   "Unit-stride Fault-Only-First Loads Intrinsics",
+                   "unit-stride-fault-only-first-loads", ["vleff"], TYPES, SEWS,
+                   LMULS,
decorators.has_masking_maskedoff_policy) + + #################################################################### + g.start_group("BFloat16 Vector Loads and Stores Segment Intrinsics") + + g.function_group(seg_load_template, + "Vector Unit-Stride Segment Load Intrinsics", + "vector-unit-stride-segment-load", ["vlseg", "vlsegff"], + TYPES, SEWS, LMULS, decorators.has_masking_maskedoff_policy) + + g.function_group(seg_store_template, + "Vector Unit-Stride Segment Store Intrinsics", + "vecrtor-unit-stride-segment-store", ["vsseg"], TYPES, SEWS, + LMULS, decorators.has_masking_no_maskedoff) + + g.function_group(seg_load_template, "Vector Strided Segment Load Intrinsics", + "vector-strided-segment-load", ["vlsseg"], TYPES, SEWS, + LMULS, decorators.has_masking_maskedoff_policy) + + g.function_group(seg_store_template, + "Vector Strided Segment Store Intrinsics", + "vector-strided-segment-store", ["vssseg"], TYPES, SEWS, + LMULS, decorators.has_masking_no_maskedoff) + + g.function_group(seg_load_template, "Vector Indexed Segment Load Intrinsics", + "vector-indexed-segment-load", ["vloxseg", "vluxseg"], TYPES, + SEWS, LMULS, decorators.has_masking_maskedoff_policy) + + g.function_group(seg_store_template, + "Vector Indexed Segment Store Intrinsics", + "vector-indexed-segment-store", ["vsoxseg", "vsuxseg"], + TYPES, SEWS, LMULS, decorators.has_masking_no_maskedoff) + + #################################################################### + g.start_group("BFloat16 Miscellaneous Vector Utility Intrinsics") + + g.function_group(reint_op_template, "Reinterpret Cast Conversion Intrinsics", + "reinterpret-cast-conversion", ["reinterpret"], "bfloat16", + SEWS, LMULS, decorators.has_no_masking) + + g.function_group(misc_op_template, "Vector LMUL Extension Intrinsics", + "vector-lmul-extensionn", ["vlmul_ext_v"], TYPES, SEWS, + LMULS, decorators.has_no_masking) + + g.function_group(misc_op_template, "Vector LMUL Truncation Intrinsics", + "vector-lmul-truncation", ["vlmul_trunc_v"], TYPES, SEWS, + LMULS, decorators.has_no_masking) + + g.function_group(misc_op_template, "Vector Initialization Intrinsics", + "vector-initialization", ["vundefined"], TYPES, SEWS, LMULS, + decorators.has_no_masking) + + g.function_group(get_set_diff_lmul_op_template, "Vector Insertion Intrinsics", + "vector-insertion", ["vset"], TYPES, SEWS, LMULS, + decorators.has_no_masking) + + g.function_group(get_set_diff_lmul_op_template, + "Vector Extraction Intrinsics", "vector-extraction", + ["vget"], TYPES, SEWS, LMULS, decorators.has_no_masking) + + g.function_group(misc_op_template, "Vector Creation Intrinsics", + "vector-creation", ["vcreate"], TYPES, SEWS, LMULS, + decorators.has_no_masking) + + #################################################################### + g.gen_prologue() diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 6acf8402f..949ad08a1 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -82,6 +82,7 @@ def func_name(name): name = name.replace("_int", "_i") name = name.replace("_float", "_f") name = name.replace("_bool", "_b") + name = name.replace("_bfloat", "_bf") # Follows the naming guideline under riscv-c-api-doc to add the `__riscv_` # suffix for all RVV intrinsics. 
name = "__riscv_" + name diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py index 1f67b5a7a..987f48b63 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py @@ -30,8 +30,6 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. - # FIXME: Argument 'type_list' is unused but required for interface - # consistency. We can prune it in the future. G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) @@ -39,9 +37,15 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): G.write("// Reinterpret between different type under the same SEW/LMUL\n") # Variable in list means # [dst type, dst short type, src type, src short type] - convert_set = [["float", "f", "int", "i"], ["float", "f", "uint", "u"], - ["uint", "u", "int", "i"], ["int", "i", "uint", "u"], - ["int", "i", "float", "f"], ["uint", "u", "float", "f"]] + if type_list == "bfloat16": + convert_set = [["bfloat", "bf", "int", + "i"], ["bfloat", "bf", "uint", "ui"], + ["int", "i", "bfloat", "bf"], + ["uint", "ui", "bfloat", "bf"]] + else: + convert_set = [["float", "f", "int", "i"], ["float", "f", "uint", "u"], + ["uint", "u", "int", "i"], ["int", "i", "uint", "u"], + ["int", "i", "float", "f"], ["uint", "u", "float", "f"]] for args in prod( OP=op_list, SEW=sew_list, TYPES=convert_set, LMUL=lmul_list): @@ -73,6 +77,10 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): **decorator.mask_args(type_helper.m, rt), src=src_type) + # Bfloat16 reinterpretations do not have variants below + if type_list == "bfloat16": + continue + G.write("// Reinterpret between different SEW under the same LMUL\n") # Variable in list means # [dst type, dst short type, src type, src short type] diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/utils.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/utils.py index 190b0b426..6433eff12 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/utils.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/utils.py @@ -139,6 +139,9 @@ def s(self): return "double" else: assert False, "Unhandled SEW under float type" + if self.args["TYPE"] == "bfloat": + assert self.args["SEW"] == 16, "BFloat16 only, no other SEW allowed" + return "__bf16" return "{TYPE}{SEW}_t".format_map(self.args) @property From 677d6a563d2d415dd28eef1ae6622c97559101ca Mon Sep 17 00:00:00 2001 From: eopXD Date: Sat, 4 Nov 2023 08:10:06 -0700 Subject: [PATCH 003/151] [Makefile] Add BFloat16 targets Signed-off-by: eop Chen --- rvv-intrinsic-generator/Makefile | 87 +++++++++++++++++++++++++++++++- 1 file changed, 85 insertions(+), 2 deletions(-) diff --git a/rvv-intrinsic-generator/Makefile b/rvv-intrinsic-generator/Makefile index 3d26481ae..5044f51ff 100644 --- a/rvv-intrinsic-generator/Makefile +++ b/rvv-intrinsic-generator/Makefile @@ -51,6 +51,8 @@ PYTHONPATHS = $(RVV_INTRINSIC_GEN_PATH):$(ABS_VENDOR_PATH) PY3 := PYTHONPATH=$$PYTHONPATH:$(PYTHONPATHS) python3 # Main entry script of the generator MAIN := rvv_intrinsic_gen.main +# BFloat16 instruction scripts +BF16_INST := $(RVV_INTRINSIC_GEN_PATH)/bfloat16_inst.py # Script to clang-format the auto-generated adoc files CLANG_FORMAT_ADOC = clang_format_autogen # Main output directory is default to 
auto-generated @@ -60,6 +62,10 @@ OUTPUT_DIR := ../auto-generated DIR := $(abspath $(OUTPUT_DIR)) # Output directory for policy intrinsics POLICY_DIR := $(DIR)/policy_funcs +# Output directory for bfloat16 non-policy intrinsics +BF16_DIR := $(DIR)/bfloat16 +# Output directory for bfloat16 policy intrinsics +BF16_POLICY_DIR := $(BF16_DIR)/policy_funcs # Directory that stores the v0.10 unit tests LEGACY_API_TESTS_DIR := $(abspath ../legacy-api-unit-tests) # Derived variable to trigger option --vendor-inst @@ -140,15 +146,20 @@ endef # If VENDOR_GENERATOR_SCRIPT is defined, also trigger it in all. # NOTE: A possible enhancement to this is allow multiple targets be added here ifdef VENDOR_GENERATOR_SCRIPT -all: gen-document gen-test gen-compatible-header vendor-generator +all: gen-document gen-test gen-compatible-header bf16-all vendor-generator else -all: gen-document gen-test gen-compatible-header +all: gen-document gen-test gen-compatible-header bf16-all endif +bf16-all: gen-bf16-document gen-bf16-test + gen-document: non-overloaded-doc non-overloaded-docs overloaded-doc overloaded-docs +gen-bf16-document: bf16-non-overloaded-doc bf16-non-overloaded-docs bf16-overloaded-doc bf16-overloaded-docs gen-test: non-overloaded-test overloaded-test gen-llvm-test gen-gnu-test +gen-bf16-test: bf16-non-overloaded-test bf16-overloaded-test gen-bf16-llvm-test gen-compatible-header: non-policy-compatible-header policy-compatible-header non-policy-overloaded-compatible-header policy-overloaded-compatible-header gen-llvm-test: llvm-non-overloaded-test llvm-overloaded-test +gen-bf16-llvm-test: bf16-llvm-non-overloaded-test bf16-llvm-overloaded-test gen-gnu-test: gnu-overloaded-test gnu-non-overloaded-test # Generate all-in-one document for non-overloaded intrinsics @@ -221,6 +232,64 @@ gnu-overloaded-test: $(call gen_tests,$(DIR)/gnu-overloaded-tests,overloaded-test,--toolchain-type gnu) $(call gen_tests,$(POLICY_DIR)/gnu-overloaded-tests,overloaded-test,--toolchain-type gnu --has-policy) +# BFloat16 documents +bf16-non-overloaded-doc: + $(call gen_doc, $(BF16_DIR),intrinsic_funcs.adoc,non-overloaded-doc,--skip-default-inst --vendor-inst $(BF16_INST)) + $(call gen_doc, $(BF16_POLICY_DIR),intrinsic_funcs.adoc,non-overloaded-doc,--has-policy --skip-default-inst --vendor-inst $(BF16_INST)) + $(call clang_format_adoc, --file, $(BF16_DIR)/intrinsic_funcs.adoc) + $(call clang_format_adoc, --file, $(BF16_POLICY_DIR)/intrinsic_funcs.adoc) + +bf16-non-overloaded-docs: + $(call gen_doc, $(BF16_DIR),intrinsic_funcs,non-overloaded-docs,--skip-default-inst --vendor-inst $(BF16_INST)) + $(call gen_doc, $(BF16_POLICY_DIR),intrinsic_funcs,non-overloaded-docs,--has-policy --skip-default-inst --vendor-inst $(BF16_INST)) + $(call clang_format_adoc, --folder, $(BF16_DIR)/intrinsic_funcs) + $(call clang_format_adoc, --folder, $(BF16_POLICY_DIR)/intrinsic_funcs) + +bf16-overloaded-doc: + $(call gen_doc, $(BF16_DIR),overloaded_intrinsic_funcs.adoc,overloaded-doc,--skip-default-inst --vendor-inst $(BF16_INST)) + $(call gen_doc, $(BF16_POLICY_DIR),overloaded_intrinsic_funcs.adoc,overloaded-doc,--has-policy --skip-default-inst --vendor-inst $(BF16_INST)) + $(call clang_format_adoc, --file, $(BF16_DIR)/overloaded_intrinsic_funcs.adoc) + $(call clang_format_adoc, --file, $(BF16_POLICY_DIR)/overloaded_intrinsic_funcs.adoc) + +bf16-overloaded-docs: + $(call gen_doc, $(BF16_DIR),overloaded_intrinsic_funcs,overloaded-docs,--skip-default-inst --vendor-inst $(BF16_INST)) + $(call gen_doc, 
$(BF16_POLICY_DIR),overloaded_intrinsic_funcs,overloaded-docs,--has-policy --skip-default-inst --vendor-inst $(BF16_INST))
+	$(call clang_format_adoc, --folder, $(BF16_DIR)/overloaded_intrinsic_funcs)
+	$(call clang_format_adoc, --folder, $(BF16_POLICY_DIR)/overloaded_intrinsic_funcs)
+
+# BFloat16 tests
+# Generate non-overloaded intrinsic testing C source files
+bf16-non-overloaded-test:
+	$(call gen_tests,$(BF16_DIR)/api-testing,non-overloaded-test,--skip-default-inst --vendor-inst $(BF16_INST))
+	$(call gen_tests,$(BF16_POLICY_DIR)/api-testing,non-overloaded-test,--has-policy --skip-default-inst --vendor-inst $(BF16_INST))
+	clang-format -i $(BF16_DIR)/api-testing/*
+	clang-format -i $(BF16_POLICY_DIR)/api-testing/*
+
+# Generate overloaded intrinsic testing C source files
+bf16-overloaded-test:
+	$(call gen_tests,$(BF16_DIR)/overloaded-api-testing,overloaded-test,--skip-default-inst --vendor-inst $(BF16_INST))
+	$(call gen_tests,$(BF16_POLICY_DIR)/overloaded-api-testing,overloaded-test,--has-policy --skip-default-inst --vendor-inst $(BF16_INST))
+	clang-format -i $(BF16_DIR)/overloaded-api-testing/*
+	clang-format -i $(BF16_POLICY_DIR)/overloaded-api-testing/*
+
+# Generate non-overloaded intrinsic testing C source files for the LLVM toolchain
+bf16-llvm-non-overloaded-test:
+	$(call gen_tests,$(BF16_DIR)/llvm-api-tests,non-overloaded-test,--toolchain-type llvm --skip-default-inst --vendor-inst $(BF16_INST))
+	$(call gen_tests,$(BF16_POLICY_DIR)/llvm-api-tests,non-overloaded-test,--toolchain-type llvm --has-policy --skip-default-inst --vendor-inst $(BF16_INST))
+	$(call replace_float, $(BF16_DIR)/llvm-api-tests)
+	$(call replace_float, $(BF16_POLICY_DIR)/llvm-api-tests)
+	clang-format -i $(BF16_DIR)/llvm-api-tests/*
+	clang-format -i $(BF16_POLICY_DIR)/llvm-api-tests/*
+
+# Generate overloaded intrinsic testing C source files for the LLVM toolchain
+bf16-llvm-overloaded-test:
+	$(call gen_tests,$(BF16_DIR)/llvm-overloaded-tests,overloaded-test,--toolchain-type llvm --skip-default-inst --vendor-inst $(BF16_INST))
+	$(call gen_tests,$(BF16_POLICY_DIR)/llvm-overloaded-tests,overloaded-test,--toolchain-type llvm --has-policy --skip-default-inst --vendor-inst $(BF16_INST))
+	$(call replace_float, $(BF16_DIR)/llvm-overloaded-tests)
+	$(call replace_float, $(BF16_POLICY_DIR)/llvm-overloaded-tests)
+	clang-format -i $(BF16_DIR)/llvm-overloaded-tests/*
+	clang-format -i $(BF16_POLICY_DIR)/llvm-overloaded-tests/*
+
 # Generate the adaptor header for v0.10
 non-policy-compatible-header:
 	$(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,non-policy.h,non-overloaded-compatible-header,)
@@ -251,18 +320,32 @@ git-commit-all:
 	make git-commit-autogen-doc OUTPUT_DIR=${OUTPUT_DIR}
 	make git-commit-autogen-test OUTPUT_DIR=${OUTPUT_DIR}
 
+git-commit-bf16-all:
+	make git-commit-autogen-bf16-doc OUTPUT_DIR=${OUTPUT_DIR}
+	make git-commit-autogen-bf16-test OUTPUT_DIR=${OUTPUT_DIR}
+
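+# Example invocation (an editorial sketch; targets and variables as defined
+# above): regenerate the BFloat16 documents and tests, then commit them:
+#   make bf16-all OUTPUT_DIR=../auto-generated
+#   make git-commit-bf16-all OUTPUT_DIR=../auto-generated
+
 # Update and commit all documents under auto-generated
 git-commit-autogen-doc:
 	make gen-document OUTPUT_DIR=${OUTPUT_DIR}
 	git add ${DIR}/*
 	git commit -m "[Auto-gen] Update documents under ${OUTPUT_DIR}. (make git-commit-autogen-doc)"
 
+git-commit-autogen-bf16-doc:
+	make gen-bf16-document OUTPUT_DIR=${OUTPUT_DIR}
+	git add ${BF16_DIR}/*
+	git commit -m "[Auto-gen] Update bfloat16 documents under ${OUTPUT_DIR}. (make git-commit-autogen-bf16-doc)"
+
 # Update and commit all testing C source files under auto-generated
 git-commit-autogen-test:
 	make gen-test
 	git add ${DIR}/*
 	git commit -m "[Auto-gen] Update tests under ${OUTPUT_DIR}. 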
(make git-commit-autogen-test)" +git-commit-autogen-bf16-test: + make gen-bf16-test + git add ${BF16_DIR}/* + git commit -m "[Auto-gen] Update bfloat16 tests under ${OUTPUT_DIR}. (make git-commit-autogen-bf16-test)" + # Update and commit compatible headers under auto-generated git-commit-autogen-compatible-header: make gen-compatible-header From 7f96a1d1f663384cfae2176e0a76028622531b7a Mon Sep 17 00:00:00 2001 From: eopXD Date: Sat, 4 Nov 2023 08:10:26 -0700 Subject: [PATCH 004/151] [Auto-gen] Update bfloat16 documents under ../auto-generated. (make git-commit-autogen-bf16-doc) --- auto-generated/bfloat16/intrinsic_funcs.adoc | 1698 ++++++++++++ ...16_vector_loads_and_stores_intrinsics.adoc | 262 ++ ...r_loads_and_stores_segment_intrinsics.adoc | 1077 ++++++++ ...scellaneous_vector_utility_intrinsics.adoc | 359 +++ .../bfloat16/overloaded_intrinsic_funcs.adoc | 1145 ++++++++ ...16_vector_loads_and_stores_intrinsics.adoc | 202 ++ ...r_loads_and_stores_segment_intrinsics.adoc | 750 ++++++ ...scellaneous_vector_utility_intrinsics.adoc | 193 ++ .../policy_funcs/intrinsic_funcs.adoc | 2393 +++++++++++++++++ ...16_vector_loads_and_stores_intrinsics.adoc | 372 +++ ...r_loads_and_stores_segment_intrinsics.adoc | 1991 ++++++++++++++ ...scellaneous_vector_utility_intrinsics.adoc | 30 + .../overloaded_intrinsic_funcs.adoc | 1708 ++++++++++++ ...16_vector_loads_and_stores_intrinsics.adoc | 334 +++ ...r_loads_and_stores_segment_intrinsics.adoc | 1344 +++++++++ ...scellaneous_vector_utility_intrinsics.adoc | 30 + 16 files changed, 13888 insertions(+) create mode 100644 auto-generated/bfloat16/intrinsic_funcs.adoc create mode 100644 auto-generated/bfloat16/intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc create mode 100644 auto-generated/bfloat16/intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc create mode 100644 auto-generated/bfloat16/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc create mode 100644 auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc create mode 100644 auto-generated/bfloat16/overloaded_intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc create mode 100644 auto-generated/bfloat16/overloaded_intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc create mode 100644 auto-generated/bfloat16/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc create mode 100644 auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc create mode 100644 auto-generated/bfloat16/policy_funcs/intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc create mode 100644 auto-generated/bfloat16/policy_funcs/intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc create mode 100644 auto-generated/bfloat16/policy_funcs/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc diff --git a/auto-generated/bfloat16/intrinsic_funcs.adoc b/auto-generated/bfloat16/intrinsic_funcs.adoc new file mode 100644 index 000000000..1ac981ad9 --- 
/dev/null +++ b/auto-generated/bfloat16/intrinsic_funcs.adoc @@ -0,0 +1,1698 @@ + +=== BFloat16 Vector Loads and Stores Intrinsics + +[[bf16-vector-unit-stride-load]] +==== Vector Unit-Stride Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vle16_v_bf16mf4(const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2(const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1(const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2(const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4(const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8(const __bf16 *rs1, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + size_t vl); +---- + +[[bf16-vector-unit-stride-store]] +==== Vector Unit-Stride Store Intrinsics + +[,c] +---- +void __riscv_vse16_v_bf16mf4(__bf16 *rs1, vbfloat16mf4_t vs3, size_t vl); +void __riscv_vse16_v_bf16mf2(__bf16 *rs1, vbfloat16mf2_t vs3, size_t vl); +void __riscv_vse16_v_bf16m1(__bf16 *rs1, vbfloat16m1_t vs3, size_t vl); +void __riscv_vse16_v_bf16m2(__bf16 *rs1, vbfloat16m2_t vs3, size_t vl); +void __riscv_vse16_v_bf16m4(__bf16 *rs1, vbfloat16m4_t vs3, size_t vl); +void __riscv_vse16_v_bf16m8(__bf16 *rs1, vbfloat16m8_t vs3, size_t vl); +// masked functions +void __riscv_vse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vbfloat16mf4_t vs3, + size_t vl); +void __riscv_vse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vbfloat16mf2_t vs3, + size_t vl); +void __riscv_vse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vbfloat16m8_t vs3, + size_t vl); +---- + +[[vector-strided-load]] +==== Vector Strided Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vlse16_v_bf16m2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +---- + 
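+An editorial note on usage: the stride argument `rs2` is measured in bytes, so
+loading every other BFloat16 element takes a stride of `2 * sizeof(__bf16)`.
+A minimal sketch using the prototypes above (assuming `src` and `vl` are
+already set up, e.g. via `__riscv_vsetvl_e16m1`):
+
+[,c]
+----
+// Gather the even-indexed elements of src: byte stride 4 skips one
+// 2-byte bf16 element between consecutive loaded elements.
+vbfloat16m1_t even = __riscv_vlse16_v_bf16m1(src, 2 * sizeof(__bf16), vl);
+----
+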
+[[vector-strided-store]] +==== Vector Strided Store Intrinsics + +[,c] +---- +void __riscv_vsse16_v_bf16mf4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4_t vs3, + size_t vl); +void __riscv_vsse16_v_bf16mf2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2_t vs3, + size_t vl); +void __riscv_vsse16_v_bf16m1(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vsse16_v_bf16m2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsse16_v_bf16m4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsse16_v_bf16m8(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m8_t vs3, + size_t vl); +// masked functions +void __riscv_vsse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1_t vs3, size_t vl); +void __riscv_vsse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2_t vs3, size_t vl); +void __riscv_vsse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m4_t vs3, size_t vl); +void __riscv_vsse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m8_t vs3, size_t vl); +---- + +[[vector-indexed-load]] +==== Vector Indexed Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + 
vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +---- + +[[vector-indexed-store]] +==== Vector Indexed Store Intrinsics + +[,c] +---- +void __riscv_vsoxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vsoxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsoxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsoxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3, + size_t vl); +void __riscv_vsuxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vsuxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsuxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsuxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3, + size_t vl); +// masked functions +void __riscv_vsoxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2, + vbfloat16m1_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2, + vbfloat16m2_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2, + vbfloat16m4_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2, + vbfloat16m8_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2, + vbfloat16m1_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2, + vbfloat16m2_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2, + vbfloat16m4_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2, + vbfloat16m8_t vs3, size_t vl); +---- + +[[unit-stride-fault-only-first-loads]] +==== Unit-stride Fault-Only-First Loads Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vle16ff_v_bf16mf4(const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2(const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1(const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2(const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4(const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8(const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4_t 
__riscv_vle16ff_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +---- + +=== BFloat16 Vector Loads and Stores Segment Intrinsics + +[[vector-unit-stride-segment-load]] +==== Vector Unit-Stride Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2(const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3(const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4(const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5(const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6(const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7(const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8(const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2(const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3(const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4(const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5(const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6(const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7(const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8(const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2(const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3(const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4(const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5(const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6(const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7(const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8(const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_v_bf16m2x2(const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3(const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4(const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2(const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_v_bf16mf4x7(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t 
__riscv_vlseg3e16ff_v_bf16mf2x3(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_v_bf16mf2x4(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_v_bf16mf2x5(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_v_bf16mf2x6(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_v_bf16m1x2(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_v_bf16m2x3(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2(const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7_m(vbool16_t 
vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_v_bf16m1x2_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +---- + +[[vecrtor-unit-stride-segment-store]] +==== Vector Unit-Stride Segment Store Intrinsics + +[,c] +---- +void __riscv_vsseg2e16_v_bf16mf4x2(__bf16 
*rs1, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vsseg3e16_v_bf16mf4x3(__bf16 *rs1, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vsseg4e16_v_bf16mf4x4(__bf16 *rs1, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vsseg5e16_v_bf16mf4x5(__bf16 *rs1, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vsseg6e16_v_bf16mf4x6(__bf16 *rs1, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vsseg7e16_v_bf16mf4x7(__bf16 *rs1, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vsseg8e16_v_bf16mf4x8(__bf16 *rs1, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vsseg2e16_v_bf16mf2x2(__bf16 *rs1, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vsseg3e16_v_bf16mf2x3(__bf16 *rs1, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vsseg4e16_v_bf16mf2x4(__bf16 *rs1, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vsseg5e16_v_bf16mf2x5(__bf16 *rs1, vbfloat16mf2x5_t vs3, + size_t vl); +void __riscv_vsseg6e16_v_bf16mf2x6(__bf16 *rs1, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vsseg7e16_v_bf16mf2x7(__bf16 *rs1, vbfloat16mf2x7_t vs3, + size_t vl); +void __riscv_vsseg8e16_v_bf16mf2x8(__bf16 *rs1, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vsseg2e16_v_bf16m1x2(__bf16 *rs1, vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16m1x3(__bf16 *rs1, vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16m1x4(__bf16 *rs1, vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsseg5e16_v_bf16m1x5(__bf16 *rs1, vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsseg6e16_v_bf16m1x6(__bf16 *rs1, vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsseg7e16_v_bf16m1x7(__bf16 *rs1, vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsseg8e16_v_bf16m1x8(__bf16 *rs1, vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16m2x2(__bf16 *rs1, vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16m2x3(__bf16 *rs1, vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16m2x4(__bf16 *rs1, vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16m4x2(__bf16 *rs1, vbfloat16m4x2_t vs3, size_t vl); +// masked functions +void __riscv_vsseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vsseg5e16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vsseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vsseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vsseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vsseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vsseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vsseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vsseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x2_t 
vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, + vbfloat16m4x2_t vs3, size_t vl); +---- + +[[vector-strided-segment-load]] +==== Vector Strided Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlsseg2e16_v_bf16mf4x2(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_v_bf16m1x7(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_v_bf16m1x8(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_v_bf16m2x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_v_bf16m4x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t 
__riscv_vlsseg2e16_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +---- + +[[vector-strided-segment-store]] +==== Vector Strided Segment Store Intrinsics + +[,c] +---- +void __riscv_vssseg2e16_v_bf16mf4x2(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16mf4x3(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16mf4x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16mf4x5(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16mf4x6(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16mf4x7(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x7_t vs3, size_t vl); +void 
__riscv_vssseg8e16_v_bf16mf4x8(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16mf2x2(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16mf2x3(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16mf2x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16mf2x5(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16mf2x6(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16mf2x7(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vssseg8e16_v_bf16mf2x8(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16m1x2(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16m1x3(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16m1x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16m1x5(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16m1x6(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16m1x7(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vssseg8e16_v_bf16m1x8(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16m2x2(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16m2x3(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16m2x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16m4x2(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m4x2_t vs3, size_t vl); +// masked functions +void __riscv_vssseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vssseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vssseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x8_t vs3, size_t vl); 
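+// Illustrative sketch (not part of the generated listing): a masked strided
+// segment store writes only the active segments, leaving memory at
+// masked-off positions unmodified; rs2 is the byte offset between the
+// start of consecutive segments. Assuming hypothetical objects `vm`,
+// `dst`, `stride`, `v2` (a vbfloat16m1x2_t tuple), and `vl`:
+//   __riscv_vssseg2e16_v_bf16m1x2_m(vm, dst, stride, v2, vl);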
+void __riscv_vssseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vssseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m4x2_t vs3, size_t vl); +---- + +[[vector-indexed-segment-load]] +==== Vector Indexed Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_v_bf16mf4x4(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_v_bf16mf4x5(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2(const __bf16 *rs1, + 
vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3(const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4(const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_v_bf16m4x2(const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_v_bf16mf2x5(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_v_bf16mf2x8(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2(const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_v_bf16m2x3(const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4(const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2(const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, + const 
__bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_v_bf16m4x2_m(vbool4_t vm, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t 
__riscv_vluxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_v_bf16m2x3_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2_m(vbool4_t vm, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +---- + +[[vector-indexed-segment-store]] +==== Vector Indexed Segment Store Intrinsics + +[,c] +---- +void __riscv_vsoxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl); +void 
__riscv_vsoxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl); +void __riscv_vsuxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vsuxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl); +// masked 
functions +void __riscv_vsoxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vsoxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, + size_t vl); +void __riscv_vsoxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x7_t vs3, + size_t vl); +void __riscv_vsoxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, + vuint16m2_t vs2, vbfloat16m2x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, + vuint16m2_t vs2, vbfloat16m2x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, + vuint16m2_t vs2, vbfloat16m2x4_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, + vuint16m4_t vs2, vbfloat16m4x2_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vsuxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, 
+                                        size_t vl); +void __riscv_vsuxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, +                                        vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, +                                        size_t vl); +void __riscv_vsuxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, +                                        vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, +                                        size_t vl); +void __riscv_vsuxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, +                                        vuint16mf4_t vs2, vbfloat16mf4x8_t vs3, +                                        size_t vl); +void __riscv_vsuxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, +                                        vuint16mf2_t vs2, vbfloat16mf2x2_t vs3, +                                        size_t vl); +void __riscv_vsuxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, +                                        vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, +                                        size_t vl); +void __riscv_vsuxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, +                                        vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, +                                        size_t vl); +void __riscv_vsuxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, +                                        vuint16mf2_t vs2, vbfloat16mf2x5_t vs3, +                                        size_t vl); +void __riscv_vsuxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, +                                        vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, +                                        size_t vl); +void __riscv_vsuxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, +                                        vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, +                                        size_t vl); +void __riscv_vsuxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, +                                        vuint16mf2_t vs2, vbfloat16mf2x8_t vs3, +                                        size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, +                                       vuint16m1_t vs2, vbfloat16m1x2_t vs3, +                                       size_t vl); +void __riscv_vsuxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, +                                       vuint16m1_t vs2, vbfloat16m1x3_t vs3, +                                       size_t vl); +void __riscv_vsuxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, +                                       vuint16m1_t vs2, vbfloat16m1x4_t vs3, +                                       size_t vl); +void __riscv_vsuxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, +                                       vuint16m1_t vs2, vbfloat16m1x5_t vs3, +                                       size_t vl); +void __riscv_vsuxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, +                                       vuint16m1_t vs2, vbfloat16m1x6_t vs3, +                                       size_t vl); +void __riscv_vsuxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, +                                       vuint16m1_t vs2, vbfloat16m1x7_t vs3, +                                       size_t vl); +void __riscv_vsuxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, +                                       vuint16m1_t vs2, vbfloat16m1x8_t vs3, +                                       size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, +                                       vuint16m2_t vs2, vbfloat16m2x2_t vs3, +                                       size_t vl); +void __riscv_vsuxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, +                                       vuint16m2_t vs2, vbfloat16m2x3_t vs3, +                                       size_t vl); +void __riscv_vsuxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, +                                       vuint16m2_t vs2, vbfloat16m2x4_t vs3, +                                       size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, +                                       vuint16m4_t vs2, vbfloat16m4x2_t vs3, +                                       size_t vl); +---- + +=== BFloat16 Miscellaneous Vector Utility Intrinsics + +[[reinterpret-cast-conversion]] +==== Reinterpret Cast Conversion Intrinsics + +[,c] +---- +// Reinterpret between different types under the same SEW/LMUL +vbfloat16mf4_t __riscv_vreinterpret_v_i16mf4_bf16mf4(vint16mf4_t src); +vbfloat16mf2_t __riscv_vreinterpret_v_i16mf2_bf16mf2(vint16mf2_t src); +vbfloat16m1_t __riscv_vreinterpret_v_i16m1_bf16m1(vint16m1_t src); +vbfloat16m2_t __riscv_vreinterpret_v_i16m2_bf16m2(vint16m2_t src); +vbfloat16m4_t __riscv_vreinterpret_v_i16m4_bf16m4(vint16m4_t src); +vbfloat16m8_t __riscv_vreinterpret_v_i16m8_bf16m8(vint16m8_t src); +vbfloat16mf4_t __riscv_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src); +vbfloat16mf2_t __riscv_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src); +vbfloat16m1_t __riscv_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src); +vbfloat16m2_t __riscv_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src); +vbfloat16m4_t __riscv_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src); +vbfloat16m8_t __riscv_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src); +vint16mf4_t __riscv_vreinterpret_v_bf16mf4_i16mf4(vbfloat16mf4_t src); +vint16mf2_t __riscv_vreinterpret_v_bf16mf2_i16mf2(vbfloat16mf2_t src); +vint16m1_t __riscv_vreinterpret_v_bf16m1_i16m1(vbfloat16m1_t src); +vint16m2_t __riscv_vreinterpret_v_bf16m2_i16m2(vbfloat16m2_t src); +vint16m4_t __riscv_vreinterpret_v_bf16m4_i16m4(vbfloat16m4_t src); +vint16m8_t __riscv_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src); +vuint16mf4_t __riscv_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src); +vuint16mf2_t __riscv_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src); +vuint16m1_t __riscv_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src); +vuint16m2_t __riscv_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src); +vuint16m4_t __riscv_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src); +vuint16m8_t __riscv_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src); +----
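+
+The reinterpret casts expose the raw bit pattern of bfloat16 vectors to the
+existing integer intrinsics, which is useful for bit-level operations such as
+sign manipulation. Below is a minimal illustrative sketch (not part of the
+generated listing) that negates every element by toggling the sign bit;
+`vbf16_negate` is a hypothetical helper, and `__riscv_vxor_vx_u16m1` comes
+from the base vector extension intrinsics.
+
+[,c]
+----
+static inline vbfloat16m1_t vbf16_negate(vbfloat16m1_t vs, size_t vl) {
+  // View the bfloat16 elements as raw 16-bit integers.
+  vuint16m1_t bits = __riscv_vreinterpret_v_bf16m1_ui16m1(vs);
+  // Toggle the sign bit (bit 15) of every element.
+  bits = __riscv_vxor_vx_u16m1(bits, 0x8000, vl);
+  // View the modified bits as bfloat16 again.
+  return __riscv_vreinterpret_v_ui16m1_bf16m1(bits);
+}
+----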
+ +[[vector-lmul-extension]] +==== Vector LMUL Extension Intrinsics + +[,c] +---- +vbfloat16mf2_t __riscv_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value); +---- + +[[vector-lmul-truncation]] +==== Vector LMUL Truncation Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value); +vbfloat16m2_t __riscv_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value); +vbfloat16m2_t __riscv_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value); +vbfloat16m4_t __riscv_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value); +---- + +[[vector-initialization]] +==== Vector Initialization Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vundefined_bf16mf4(); +vbfloat16mf2_t __riscv_vundefined_bf16mf2(); +vbfloat16m1_t __riscv_vundefined_bf16m1(); +vbfloat16m2_t 
__riscv_vundefined_bf16m2(); +vbfloat16m4_t __riscv_vundefined_bf16m4(); +vbfloat16m8_t __riscv_vundefined_bf16m8(); +vbfloat16mf4x2_t __riscv_vundefined_bf16mf4x2(); +vbfloat16mf4x3_t __riscv_vundefined_bf16mf4x3(); +vbfloat16mf4x4_t __riscv_vundefined_bf16mf4x4(); +vbfloat16mf4x5_t __riscv_vundefined_bf16mf4x5(); +vbfloat16mf4x6_t __riscv_vundefined_bf16mf4x6(); +vbfloat16mf4x7_t __riscv_vundefined_bf16mf4x7(); +vbfloat16mf4x8_t __riscv_vundefined_bf16mf4x8(); +vbfloat16mf2x2_t __riscv_vundefined_bf16mf2x2(); +vbfloat16mf2x3_t __riscv_vundefined_bf16mf2x3(); +vbfloat16mf2x4_t __riscv_vundefined_bf16mf2x4(); +vbfloat16mf2x5_t __riscv_vundefined_bf16mf2x5(); +vbfloat16mf2x6_t __riscv_vundefined_bf16mf2x6(); +vbfloat16mf2x7_t __riscv_vundefined_bf16mf2x7(); +vbfloat16mf2x8_t __riscv_vundefined_bf16mf2x8(); +vbfloat16m1x2_t __riscv_vundefined_bf16m1x2(); +vbfloat16m1x3_t __riscv_vundefined_bf16m1x3(); +vbfloat16m1x4_t __riscv_vundefined_bf16m1x4(); +vbfloat16m1x5_t __riscv_vundefined_bf16m1x5(); +vbfloat16m1x6_t __riscv_vundefined_bf16m1x6(); +vbfloat16m1x7_t __riscv_vundefined_bf16m1x7(); +vbfloat16m1x8_t __riscv_vundefined_bf16m1x8(); +vbfloat16m2x2_t __riscv_vundefined_bf16m2x2(); +vbfloat16m2x3_t __riscv_vundefined_bf16m2x3(); +vbfloat16m2x4_t __riscv_vundefined_bf16m2x4(); +vbfloat16m4x2_t __riscv_vundefined_bf16m4x2(); +---- + +[[vector-insertion]] +==== Vector Insertion Intrinsics + +[,c] +---- +vbfloat16m2_t __riscv_vset_v_bf16m1_bf16m2(vbfloat16m2_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m4_t __riscv_vset_v_bf16m1_bf16m4(vbfloat16m4_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m4_t __riscv_vset_v_bf16m2_bf16m4(vbfloat16m4_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m8_t __riscv_vset_v_bf16m1_bf16m8(vbfloat16m8_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m8_t __riscv_vset_v_bf16m2_bf16m8(vbfloat16m8_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m8_t __riscv_vset_v_bf16m4_bf16m8(vbfloat16m8_t dest, size_t index, + vbfloat16m4_t value); +vbfloat16mf4x2_t __riscv_vset_v_bf16mf4_bf16mf4x2(vbfloat16mf4x2_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x3_t __riscv_vset_v_bf16mf4_bf16mf4x3(vbfloat16mf4x3_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x4_t __riscv_vset_v_bf16mf4_bf16mf4x4(vbfloat16mf4x4_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x5_t __riscv_vset_v_bf16mf4_bf16mf4x5(vbfloat16mf4x5_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x6_t __riscv_vset_v_bf16mf4_bf16mf4x6(vbfloat16mf4x6_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x7_t __riscv_vset_v_bf16mf4_bf16mf4x7(vbfloat16mf4x7_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x8_t __riscv_vset_v_bf16mf4_bf16mf4x8(vbfloat16mf4x8_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf2x2_t __riscv_vset_v_bf16mf2_bf16mf2x2(vbfloat16mf2x2_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x3_t __riscv_vset_v_bf16mf2_bf16mf2x3(vbfloat16mf2x3_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x4_t __riscv_vset_v_bf16mf2_bf16mf2x4(vbfloat16mf2x4_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x5_t __riscv_vset_v_bf16mf2_bf16mf2x5(vbfloat16mf2x5_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x6_t __riscv_vset_v_bf16mf2_bf16mf2x6(vbfloat16mf2x6_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x7_t __riscv_vset_v_bf16mf2_bf16mf2x7(vbfloat16mf2x7_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x8_t 
__riscv_vset_v_bf16mf2_bf16mf2x8(vbfloat16mf2x8_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16m1x2_t __riscv_vset_v_bf16m1_bf16m1x2(vbfloat16m1x2_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x3_t __riscv_vset_v_bf16m1_bf16m1x3(vbfloat16m1x3_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x4_t __riscv_vset_v_bf16m1_bf16m1x4(vbfloat16m1x4_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x5_t __riscv_vset_v_bf16m1_bf16m1x5(vbfloat16m1x5_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x6_t __riscv_vset_v_bf16m1_bf16m1x6(vbfloat16m1x6_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x7_t __riscv_vset_v_bf16m1_bf16m1x7(vbfloat16m1x7_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x8_t __riscv_vset_v_bf16m1_bf16m1x8(vbfloat16m1x8_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m2x2_t __riscv_vset_v_bf16m2_bf16m2x2(vbfloat16m2x2_t dest, + size_t index, + vbfloat16m2_t value); +vbfloat16m2x3_t __riscv_vset_v_bf16m2_bf16m2x3(vbfloat16m2x3_t dest, + size_t index, + vbfloat16m2_t value); +vbfloat16m2x4_t __riscv_vset_v_bf16m2_bf16m2x4(vbfloat16m2x4_t dest, + size_t index, + vbfloat16m2_t value); +vbfloat16m4x2_t __riscv_vset_v_bf16m4_bf16m4x2(vbfloat16m4x2_t dest, + size_t index, + vbfloat16m4_t value); +---- + +[[vector-extraction]] +==== Vector Extraction Intrinsics + +[,c] +---- +vbfloat16m1_t __riscv_vget_v_bf16m2_bf16m1(vbfloat16m2_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m4_bf16m1(vbfloat16m4_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m8_bf16m1(vbfloat16m8_t src, size_t index); +vbfloat16m2_t __riscv_vget_v_bf16m4_bf16m2(vbfloat16m4_t src, size_t index); +vbfloat16m2_t __riscv_vget_v_bf16m8_bf16m2(vbfloat16m8_t src, size_t index); +vbfloat16m4_t __riscv_vget_v_bf16m8_bf16m4(vbfloat16m8_t src, size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x2_bf16mf4(vbfloat16mf4x2_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x3_bf16mf4(vbfloat16mf4x3_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x4_bf16mf4(vbfloat16mf4x4_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x5_bf16mf4(vbfloat16mf4x5_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x6_bf16mf4(vbfloat16mf4x6_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x7_bf16mf4(vbfloat16mf4x7_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x8_bf16mf4(vbfloat16mf4x8_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x2_bf16mf2(vbfloat16mf2x2_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x3_bf16mf2(vbfloat16mf2x3_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x4_bf16mf2(vbfloat16mf2x4_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x5_bf16mf2(vbfloat16mf2x5_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x6_bf16mf2(vbfloat16mf2x6_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x7_bf16mf2(vbfloat16mf2x7_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x8_bf16mf2(vbfloat16mf2x8_t src, + size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x2_bf16m1(vbfloat16m1x2_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x3_bf16m1(vbfloat16m1x3_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x4_bf16m1(vbfloat16m1x4_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x5_bf16m1(vbfloat16m1x5_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x6_bf16m1(vbfloat16m1x6_t src, size_t index); +vbfloat16m1_t 
__riscv_vget_v_bf16m1x7_bf16m1(vbfloat16m1x7_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x8_bf16m1(vbfloat16m1x8_t src, size_t index); +vbfloat16m2_t __riscv_vget_v_bf16m2x2_bf16m2(vbfloat16m2x2_t src, size_t index); +vbfloat16m2_t __riscv_vget_v_bf16m2x3_bf16m2(vbfloat16m2x3_t src, size_t index); +vbfloat16m2_t __riscv_vget_v_bf16m2x4_bf16m2(vbfloat16m2x4_t src, size_t index); +vbfloat16m4_t __riscv_vget_v_bf16m4x2_bf16m4(vbfloat16m4x2_t src, size_t index); +---- + +[[vector-creation]] +==== Vector Creation Intrinsics + +[,c] +---- +vbfloat16m2_t __riscv_vcreate_v_bf16m1_bf16m2(vbfloat16m1_t v0, + vbfloat16m1_t v1); +vbfloat16m4_t __riscv_vcreate_v_bf16m1_bf16m4(vbfloat16m1_t v0, + vbfloat16m1_t v1, + vbfloat16m1_t v2, + vbfloat16m1_t v3); +vbfloat16m8_t __riscv_vcreate_v_bf16m1_bf16m8( + vbfloat16m1_t v0, vbfloat16m1_t v1, vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5, vbfloat16m1_t v6, vbfloat16m1_t v7); +vbfloat16m4_t __riscv_vcreate_v_bf16m2_bf16m4(vbfloat16m2_t v0, + vbfloat16m2_t v1); +vbfloat16m8_t __riscv_vcreate_v_bf16m2_bf16m8(vbfloat16m2_t v0, + vbfloat16m2_t v1, + vbfloat16m2_t v2, + vbfloat16m2_t v3); +vbfloat16m8_t __riscv_vcreate_v_bf16m4_bf16m8(vbfloat16m4_t v0, + vbfloat16m4_t v1); +vbfloat16mf4x2_t __riscv_vcreate_v_bf16mf4x2(vbfloat16mf4_t v0, + vbfloat16mf4_t v1); +vbfloat16mf4x3_t __riscv_vcreate_v_bf16mf4x3(vbfloat16mf4_t v0, + vbfloat16mf4_t v1, + vbfloat16mf4_t v2); +vbfloat16mf4x4_t __riscv_vcreate_v_bf16mf4x4(vbfloat16mf4_t v0, + vbfloat16mf4_t v1, + vbfloat16mf4_t v2, + vbfloat16mf4_t v3); +vbfloat16mf4x5_t __riscv_vcreate_v_bf16mf4x5(vbfloat16mf4_t v0, + vbfloat16mf4_t v1, + vbfloat16mf4_t v2, + vbfloat16mf4_t v3, + vbfloat16mf4_t v4); +vbfloat16mf4x6_t +__riscv_vcreate_v_bf16mf4x6(vbfloat16mf4_t v0, vbfloat16mf4_t v1, + vbfloat16mf4_t v2, vbfloat16mf4_t v3, + vbfloat16mf4_t v4, vbfloat16mf4_t v5); +vbfloat16mf4x7_t __riscv_vcreate_v_bf16mf4x7( + vbfloat16mf4_t v0, vbfloat16mf4_t v1, vbfloat16mf4_t v2, vbfloat16mf4_t v3, + vbfloat16mf4_t v4, vbfloat16mf4_t v5, vbfloat16mf4_t v6); +vbfloat16mf4x8_t __riscv_vcreate_v_bf16mf4x8( + vbfloat16mf4_t v0, vbfloat16mf4_t v1, vbfloat16mf4_t v2, vbfloat16mf4_t v3, + vbfloat16mf4_t v4, vbfloat16mf4_t v5, vbfloat16mf4_t v6, vbfloat16mf4_t v7); +vbfloat16mf2x2_t __riscv_vcreate_v_bf16mf2x2(vbfloat16mf2_t v0, + vbfloat16mf2_t v1); +vbfloat16mf2x3_t __riscv_vcreate_v_bf16mf2x3(vbfloat16mf2_t v0, + vbfloat16mf2_t v1, + vbfloat16mf2_t v2); +vbfloat16mf2x4_t __riscv_vcreate_v_bf16mf2x4(vbfloat16mf2_t v0, + vbfloat16mf2_t v1, + vbfloat16mf2_t v2, + vbfloat16mf2_t v3); +vbfloat16mf2x5_t __riscv_vcreate_v_bf16mf2x5(vbfloat16mf2_t v0, + vbfloat16mf2_t v1, + vbfloat16mf2_t v2, + vbfloat16mf2_t v3, + vbfloat16mf2_t v4); +vbfloat16mf2x6_t +__riscv_vcreate_v_bf16mf2x6(vbfloat16mf2_t v0, vbfloat16mf2_t v1, + vbfloat16mf2_t v2, vbfloat16mf2_t v3, + vbfloat16mf2_t v4, vbfloat16mf2_t v5); +vbfloat16mf2x7_t __riscv_vcreate_v_bf16mf2x7( + vbfloat16mf2_t v0, vbfloat16mf2_t v1, vbfloat16mf2_t v2, vbfloat16mf2_t v3, + vbfloat16mf2_t v4, vbfloat16mf2_t v5, vbfloat16mf2_t v6); +vbfloat16mf2x8_t __riscv_vcreate_v_bf16mf2x8( + vbfloat16mf2_t v0, vbfloat16mf2_t v1, vbfloat16mf2_t v2, vbfloat16mf2_t v3, + vbfloat16mf2_t v4, vbfloat16mf2_t v5, vbfloat16mf2_t v6, vbfloat16mf2_t v7); +vbfloat16m1x2_t __riscv_vcreate_v_bf16m1x2(vbfloat16m1_t v0, vbfloat16m1_t v1); +vbfloat16m1x3_t __riscv_vcreate_v_bf16m1x3(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2); +vbfloat16m1x4_t 
__riscv_vcreate_v_bf16m1x4(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3); +vbfloat16m1x5_t __riscv_vcreate_v_bf16m1x5(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4); +vbfloat16m1x6_t __riscv_vcreate_v_bf16m1x6(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5); +vbfloat16m1x7_t __riscv_vcreate_v_bf16m1x7(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5, + vbfloat16m1_t v6); +vbfloat16m1x8_t __riscv_vcreate_v_bf16m1x8(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5, + vbfloat16m1_t v6, vbfloat16m1_t v7); +vbfloat16m2x2_t __riscv_vcreate_v_bf16m2x2(vbfloat16m2_t v0, vbfloat16m2_t v1); +vbfloat16m2x3_t __riscv_vcreate_v_bf16m2x3(vbfloat16m2_t v0, vbfloat16m2_t v1, + vbfloat16m2_t v2); +vbfloat16m2x4_t __riscv_vcreate_v_bf16m2x4(vbfloat16m2_t v0, vbfloat16m2_t v1, + vbfloat16m2_t v2, vbfloat16m2_t v3); +vbfloat16m4x2_t __riscv_vcreate_v_bf16m4x2(vbfloat16m4_t v0, vbfloat16m4_t v1); +---- diff --git a/auto-generated/bfloat16/intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc b/auto-generated/bfloat16/intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc new file mode 100644 index 000000000..db9f6077c --- /dev/null +++ b/auto-generated/bfloat16/intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc @@ -0,0 +1,262 @@ + +=== BFloat16 Vector Loads and Stores Intrinsics + +[[bf16-vector-unit-stride-load]] +==== Vector Unit-Stride Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vle16_v_bf16mf4(const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2(const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1(const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2(const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4(const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8(const __bf16 *rs1, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + size_t vl); +---- + +[[bf16-vector-unit-stride-store]] +==== Vector Unit-Stride Store Intrinsics + +[,c] +---- +void __riscv_vse16_v_bf16mf4(__bf16 *rs1, vbfloat16mf4_t vs3, size_t vl); +void __riscv_vse16_v_bf16mf2(__bf16 *rs1, vbfloat16mf2_t vs3, size_t vl); +void __riscv_vse16_v_bf16m1(__bf16 *rs1, vbfloat16m1_t vs3, size_t vl); +void __riscv_vse16_v_bf16m2(__bf16 *rs1, vbfloat16m2_t vs3, size_t vl); +void __riscv_vse16_v_bf16m4(__bf16 *rs1, vbfloat16m4_t vs3, size_t vl); +void __riscv_vse16_v_bf16m8(__bf16 *rs1, vbfloat16m8_t vs3, size_t vl); +// masked functions +void __riscv_vse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vbfloat16mf4_t vs3, + size_t vl); +void __riscv_vse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vbfloat16mf2_t vs3, + size_t vl); +void __riscv_vse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2_t vs3, 
+ size_t vl); +void __riscv_vse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vbfloat16m8_t vs3, + size_t vl); +---- + +[[vector-strided-load]] +==== Vector Strided Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vlse16_v_bf16m2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +---- + +[[vector-strided-store]] +==== Vector Strided Store Intrinsics + +[,c] +---- +void __riscv_vsse16_v_bf16mf4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4_t vs3, + size_t vl); +void __riscv_vsse16_v_bf16mf2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2_t vs3, + size_t vl); +void __riscv_vsse16_v_bf16m1(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vsse16_v_bf16m2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsse16_v_bf16m4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsse16_v_bf16m8(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m8_t vs3, + size_t vl); +// masked functions +void __riscv_vsse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1_t vs3, size_t vl); +void __riscv_vsse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2_t vs3, size_t vl); +void __riscv_vsse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m4_t vs3, size_t vl); +void __riscv_vsse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m8_t vs3, size_t vl); +---- + +[[vector-indexed-load]] +==== Vector Indexed Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2, 
+ size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +---- + +[[vector-indexed-store]] +==== Vector Indexed Store Intrinsics + +[,c] +---- +void __riscv_vsoxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vsoxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsoxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsoxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3, + size_t vl); +void __riscv_vsuxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vsuxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsuxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsuxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3, + size_t vl); +// masked functions +void __riscv_vsoxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2, + vbfloat16m1_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2, + vbfloat16m2_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2, + vbfloat16m4_t vs3, size_t vl); +void __riscv_vsoxei16_v_bf16m8_m(vbool2_t vm, 
__bf16 *rs1, vuint16m8_t rs2, + vbfloat16m8_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2, + vbfloat16m1_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2, + vbfloat16m2_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2, + vbfloat16m4_t vs3, size_t vl); +void __riscv_vsuxei16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2, + vbfloat16m8_t vs3, size_t vl); +---- + +[[unit-stride-fault-only-first-loads]] +==== Unit-stride Fault-Only-First Loads Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vle16ff_v_bf16mf4(const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2(const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1(const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2(const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4(const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8(const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +---- diff --git a/auto-generated/bfloat16/intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc b/auto-generated/bfloat16/intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc new file mode 100644 index 000000000..48e19775a --- /dev/null +++ b/auto-generated/bfloat16/intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc @@ -0,0 +1,1077 @@ + +=== BFloat16 Vector Loads and Stores Segment Intrinsics + +[[vector-unit-stride-segment-load]] +==== Vector Unit-Stride Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2(const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3(const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4(const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5(const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6(const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7(const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8(const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2(const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3(const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4(const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5(const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6(const __bf16 *rs1, size_t vl); 
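+// Illustrative sketch (not part of the generated listing): a unit-stride
+// segment load de-interleaves consecutive fields into the members of a
+// tuple type. Assuming a hypothetical packed 3-field (e.g. RGB) bfloat16
+// buffer `rgb` and a hypothetical vector length `vl`:
+//   vbfloat16m1x3_t v = __riscv_vlseg3e16_v_bf16m1x3(rgb, vl);
+//   vbfloat16m1_t r = __riscv_vget_v_bf16m1x3_bf16m1(v, 0);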
+vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7(const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8(const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2(const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3(const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4(const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5(const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6(const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7(const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8(const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_v_bf16m2x2(const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3(const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4(const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2(const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_v_bf16mf4x7(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_v_bf16mf2x3(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_v_bf16mf2x4(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_v_bf16mf2x5(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_v_bf16mf2x6(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_v_bf16m1x2(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_v_bf16m2x3(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4(const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2(const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2_m(vbool64_t vm, + const 
__bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t 
__riscv_vlseg5e16ff_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_v_bf16m1x2_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8_m(vbool16_t vm, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +---- + +[[vector-unit-stride-segment-store]] +==== Vector Unit-Stride Segment Store Intrinsics + +[,c] +---- +void __riscv_vsseg2e16_v_bf16mf4x2(__bf16 *rs1, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vsseg3e16_v_bf16mf4x3(__bf16 *rs1, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vsseg4e16_v_bf16mf4x4(__bf16 *rs1, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vsseg5e16_v_bf16mf4x5(__bf16 *rs1, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vsseg6e16_v_bf16mf4x6(__bf16 *rs1, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vsseg7e16_v_bf16mf4x7(__bf16 *rs1, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vsseg8e16_v_bf16mf4x8(__bf16 *rs1, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vsseg2e16_v_bf16mf2x2(__bf16 *rs1, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vsseg3e16_v_bf16mf2x3(__bf16 *rs1, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vsseg4e16_v_bf16mf2x4(__bf16 *rs1, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vsseg5e16_v_bf16mf2x5(__bf16 *rs1, vbfloat16mf2x5_t vs3, + size_t vl); +void __riscv_vsseg6e16_v_bf16mf2x6(__bf16 *rs1, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vsseg7e16_v_bf16mf2x7(__bf16 *rs1, vbfloat16mf2x7_t vs3, + size_t vl); +void __riscv_vsseg8e16_v_bf16mf2x8(__bf16 *rs1, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vsseg2e16_v_bf16m1x2(__bf16 *rs1, vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16m1x3(__bf16 *rs1, vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16m1x4(__bf16 *rs1, vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsseg5e16_v_bf16m1x5(__bf16 *rs1, vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsseg6e16_v_bf16m1x6(__bf16 *rs1, vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsseg7e16_v_bf16m1x7(__bf16 *rs1, vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsseg8e16_v_bf16m1x8(__bf16 *rs1, vbfloat16m1x8_t
vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16m2x2(__bf16 *rs1, vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16m2x3(__bf16 *rs1, vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16m2x4(__bf16 *rs1, vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16m4x2(__bf16 *rs1, vbfloat16m4x2_t vs3, size_t vl); +// masked functions +void __riscv_vsseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vsseg5e16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vsseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vsseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vsseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vsseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vsseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vsseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vsseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, + vbfloat16m4x2_t vs3, size_t vl); +---- + +[[vector-strided-segment-load]] +==== Vector Strided Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlsseg2e16_v_bf16mf4x2(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); 
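+// Editorial usage sketch (not part of the generated listing; `base`,
+// `stride`, and `vl` are assumed): the byte stride rs2 lets vlsseg2e16
+// gather two adjacent bf16 fields from each element of an array of
+// structs, e.g. the {x, y} members of a record that is `stride` bytes wide:
+//   vbfloat16m1x2_t xy =
+//       __riscv_vlsseg2e16_v_bf16m1x2(base, (ptrdiff_t)stride, vl);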
+vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8(const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_v_bf16m1x7(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_v_bf16m1x8(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_v_bf16m2x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_v_bf16m4x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, 
size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +---- + +[[vector-strided-segment-store]] +==== Vector Strided Segment Store Intrinsics + +[,c] +---- +void __riscv_vssseg2e16_v_bf16mf4x2(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16mf4x3(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16mf4x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16mf4x5(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16mf4x6(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16mf4x7(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vssseg8e16_v_bf16mf4x8(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16mf2x2(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16mf2x3(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16mf2x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16mf2x5(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16mf2x6(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16mf2x7(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vssseg8e16_v_bf16mf2x8(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16m1x2(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16m1x3(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16m1x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16m1x5(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16m1x6(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16m1x7(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vssseg8e16_v_bf16m1x8(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16m2x2(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16m2x3(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x3_t vs3, size_t vl); +void 
__riscv_vssseg4e16_v_bf16m2x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16m4x2(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16m4x2_t vs3, size_t vl); +// masked functions +void __riscv_vssseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vssseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vssseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vssseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vssseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vssseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vssseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vssseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vssseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vssseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m4x2_t vs3, size_t vl); +---- + +[[vector-indexed-segment-load]] +==== Vector Indexed Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_v_bf16mf4x4(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t 
__riscv_vloxseg5ei16_v_bf16mf4x5(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2(const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3(const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4(const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_v_bf16m4x2(const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_v_bf16mf2x5(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t 
__riscv_vluxseg8ei16_v_bf16mf2x8(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8(const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2(const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_v_bf16m2x3(const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4(const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2(const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7_m(vbool16_t vm, + 
const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_v_bf16m4x2_m(vbool4_t vm, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8_m(vbool16_t vm, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_v_bf16m2x3_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4_m(vbool8_t vm, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t 
__riscv_vluxseg2ei16_v_bf16m4x2_m(vbool4_t vm, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +---- + +[[vector-indexed-segment-store]] +==== Vector Indexed Segment Store Intrinsics + +[,c] +---- +void __riscv_vsoxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl); +void __riscv_vsuxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x8_t vs3, size_t 
vl); +void __riscv_vsuxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl); +// masked functions +void __riscv_vsoxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vsoxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, + size_t vl); +void 
__riscv_vsoxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x7_t vs3, + size_t vl); +void __riscv_vsoxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, + vuint16m2_t vs2, vbfloat16m2x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, + vuint16m2_t vs2, vbfloat16m2x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, + vuint16m2_t vs2, vbfloat16m2x4_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, + vuint16m4_t vs2, vbfloat16m4x2_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vsuxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vsuxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vsuxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vsuxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vsuxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x5_t vs3, + size_t vl); +void __riscv_vsuxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vsuxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, + size_t vl); +void __riscv_vsuxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x4_t vs3, + size_t vl); +void 
__riscv_vsuxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x5_t vs3, + size_t vl); +void __riscv_vsuxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x6_t vs3, + size_t vl); +void __riscv_vsuxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x7_t vs3, + size_t vl); +void __riscv_vsuxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, + vuint16m1_t vs2, vbfloat16m1x8_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, + vuint16m2_t vs2, vbfloat16m2x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, + vuint16m2_t vs2, vbfloat16m2x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, + vuint16m2_t vs2, vbfloat16m2x4_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, + vuint16m4_t vs2, vbfloat16m4x2_t vs3, + size_t vl); +---- diff --git a/auto-generated/bfloat16/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc new file mode 100644 index 000000000..5c8c2a665 --- /dev/null +++ b/auto-generated/bfloat16/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc @@ -0,0 +1,359 @@ + +=== BFloat16 Miscellaneous Vector Utility Intrinsics + +[[reinterpret-cast-conversion]] +==== Reinterpret Cast Conversion Intrinsics + +[,c] +---- +// Reinterpret between different types under the same SEW/LMUL +vbfloat16mf4_t __riscv_vreinterpret_v_i16mf4_bf16mf4(vint16mf4_t src); +vbfloat16mf2_t __riscv_vreinterpret_v_i16mf2_bf16mf2(vint16mf2_t src); +vbfloat16m1_t __riscv_vreinterpret_v_i16m1_bf16m1(vint16m1_t src); +vbfloat16m2_t __riscv_vreinterpret_v_i16m2_bf16m2(vint16m2_t src); +vbfloat16m4_t __riscv_vreinterpret_v_i16m4_bf16m4(vint16m4_t src); +vbfloat16m8_t __riscv_vreinterpret_v_i16m8_bf16m8(vint16m8_t src); +vbfloat16mf4_t __riscv_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src); +vbfloat16mf2_t __riscv_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src); +vbfloat16m1_t __riscv_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src); +vbfloat16m2_t __riscv_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src); +vbfloat16m4_t __riscv_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src); +vbfloat16m8_t __riscv_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src); +vint16mf4_t __riscv_vreinterpret_v_bf16mf4_i16mf4(vbfloat16mf4_t src); +vint16mf2_t __riscv_vreinterpret_v_bf16mf2_i16mf2(vbfloat16mf2_t src); +vint16m1_t __riscv_vreinterpret_v_bf16m1_i16m1(vbfloat16m1_t src); +vint16m2_t __riscv_vreinterpret_v_bf16m2_i16m2(vbfloat16m2_t src); +vint16m4_t __riscv_vreinterpret_v_bf16m4_i16m4(vbfloat16m4_t src); +vint16m8_t __riscv_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src); +vuint16mf4_t __riscv_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src); +vuint16mf2_t __riscv_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src); +vuint16m1_t __riscv_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src); +vuint16m2_t __riscv_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src); +vuint16m4_t __riscv_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src); +vuint16m8_t __riscv_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src); +---- + +[[vector-lmul-extension]] +==== Vector LMUL Extension Intrinsics + +[,c] +---- +vbfloat16mf2_t __riscv_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t
value); +vbfloat16m4_t __riscv_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value); +---- + +[[vector-lmul-truncation]] +==== Vector LMUL Truncation Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value); +vbfloat16m2_t __riscv_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value); +vbfloat16m2_t __riscv_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value); +vbfloat16m4_t __riscv_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value); +---- + +[[vector-initialization]] +==== Vector Initialization Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vundefined_bf16mf4(); +vbfloat16mf2_t __riscv_vundefined_bf16mf2(); +vbfloat16m1_t __riscv_vundefined_bf16m1(); +vbfloat16m2_t __riscv_vundefined_bf16m2(); +vbfloat16m4_t __riscv_vundefined_bf16m4(); +vbfloat16m8_t __riscv_vundefined_bf16m8(); +vbfloat16mf4x2_t __riscv_vundefined_bf16mf4x2(); +vbfloat16mf4x3_t __riscv_vundefined_bf16mf4x3(); +vbfloat16mf4x4_t __riscv_vundefined_bf16mf4x4(); +vbfloat16mf4x5_t __riscv_vundefined_bf16mf4x5(); +vbfloat16mf4x6_t __riscv_vundefined_bf16mf4x6(); +vbfloat16mf4x7_t __riscv_vundefined_bf16mf4x7(); +vbfloat16mf4x8_t __riscv_vundefined_bf16mf4x8(); +vbfloat16mf2x2_t __riscv_vundefined_bf16mf2x2(); +vbfloat16mf2x3_t __riscv_vundefined_bf16mf2x3(); +vbfloat16mf2x4_t __riscv_vundefined_bf16mf2x4(); +vbfloat16mf2x5_t __riscv_vundefined_bf16mf2x5(); +vbfloat16mf2x6_t __riscv_vundefined_bf16mf2x6(); +vbfloat16mf2x7_t __riscv_vundefined_bf16mf2x7(); +vbfloat16mf2x8_t __riscv_vundefined_bf16mf2x8(); +vbfloat16m1x2_t __riscv_vundefined_bf16m1x2(); +vbfloat16m1x3_t __riscv_vundefined_bf16m1x3(); +vbfloat16m1x4_t __riscv_vundefined_bf16m1x4(); +vbfloat16m1x5_t __riscv_vundefined_bf16m1x5(); +vbfloat16m1x6_t __riscv_vundefined_bf16m1x6(); +vbfloat16m1x7_t __riscv_vundefined_bf16m1x7(); +vbfloat16m1x8_t __riscv_vundefined_bf16m1x8(); +vbfloat16m2x2_t __riscv_vundefined_bf16m2x2(); +vbfloat16m2x3_t __riscv_vundefined_bf16m2x3(); +vbfloat16m2x4_t 
__riscv_vundefined_bf16m2x4(); +vbfloat16m4x2_t __riscv_vundefined_bf16m4x2(); +---- + +[[vector-insertion]] +==== Vector Insertion Intrinsics + +[,c] +---- +vbfloat16m2_t __riscv_vset_v_bf16m1_bf16m2(vbfloat16m2_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m4_t __riscv_vset_v_bf16m1_bf16m4(vbfloat16m4_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m4_t __riscv_vset_v_bf16m2_bf16m4(vbfloat16m4_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m8_t __riscv_vset_v_bf16m1_bf16m8(vbfloat16m8_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m8_t __riscv_vset_v_bf16m2_bf16m8(vbfloat16m8_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m8_t __riscv_vset_v_bf16m4_bf16m8(vbfloat16m8_t dest, size_t index, + vbfloat16m4_t value); +vbfloat16mf4x2_t __riscv_vset_v_bf16mf4_bf16mf4x2(vbfloat16mf4x2_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x3_t __riscv_vset_v_bf16mf4_bf16mf4x3(vbfloat16mf4x3_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x4_t __riscv_vset_v_bf16mf4_bf16mf4x4(vbfloat16mf4x4_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x5_t __riscv_vset_v_bf16mf4_bf16mf4x5(vbfloat16mf4x5_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x6_t __riscv_vset_v_bf16mf4_bf16mf4x6(vbfloat16mf4x6_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x7_t __riscv_vset_v_bf16mf4_bf16mf4x7(vbfloat16mf4x7_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x8_t __riscv_vset_v_bf16mf4_bf16mf4x8(vbfloat16mf4x8_t dest, + size_t index, + vbfloat16mf4_t value); +vbfloat16mf2x2_t __riscv_vset_v_bf16mf2_bf16mf2x2(vbfloat16mf2x2_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x3_t __riscv_vset_v_bf16mf2_bf16mf2x3(vbfloat16mf2x3_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x4_t __riscv_vset_v_bf16mf2_bf16mf2x4(vbfloat16mf2x4_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x5_t __riscv_vset_v_bf16mf2_bf16mf2x5(vbfloat16mf2x5_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x6_t __riscv_vset_v_bf16mf2_bf16mf2x6(vbfloat16mf2x6_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x7_t __riscv_vset_v_bf16mf2_bf16mf2x7(vbfloat16mf2x7_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x8_t __riscv_vset_v_bf16mf2_bf16mf2x8(vbfloat16mf2x8_t dest, + size_t index, + vbfloat16mf2_t value); +vbfloat16m1x2_t __riscv_vset_v_bf16m1_bf16m1x2(vbfloat16m1x2_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x3_t __riscv_vset_v_bf16m1_bf16m1x3(vbfloat16m1x3_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x4_t __riscv_vset_v_bf16m1_bf16m1x4(vbfloat16m1x4_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x5_t __riscv_vset_v_bf16m1_bf16m1x5(vbfloat16m1x5_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x6_t __riscv_vset_v_bf16m1_bf16m1x6(vbfloat16m1x6_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x7_t __riscv_vset_v_bf16m1_bf16m1x7(vbfloat16m1x7_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m1x8_t __riscv_vset_v_bf16m1_bf16m1x8(vbfloat16m1x8_t dest, + size_t index, + vbfloat16m1_t value); +vbfloat16m2x2_t __riscv_vset_v_bf16m2_bf16m2x2(vbfloat16m2x2_t dest, + size_t index, + vbfloat16m2_t value); +vbfloat16m2x3_t __riscv_vset_v_bf16m2_bf16m2x3(vbfloat16m2x3_t dest, + size_t index, + vbfloat16m2_t value); +vbfloat16m2x4_t __riscv_vset_v_bf16m2_bf16m2x4(vbfloat16m2x4_t dest, + size_t index, + vbfloat16m2_t value); +vbfloat16m4x2_t __riscv_vset_v_bf16m4_bf16m4x2(vbfloat16m4x2_t dest, + 
size_t index, + vbfloat16m4_t value); +---- + +[[vector-extraction]] +==== Vector Extraction Intrinsics + +[,c] +---- +vbfloat16m1_t __riscv_vget_v_bf16m2_bf16m1(vbfloat16m2_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m4_bf16m1(vbfloat16m4_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m8_bf16m1(vbfloat16m8_t src, size_t index); +vbfloat16m2_t __riscv_vget_v_bf16m4_bf16m2(vbfloat16m4_t src, size_t index); +vbfloat16m2_t __riscv_vget_v_bf16m8_bf16m2(vbfloat16m8_t src, size_t index); +vbfloat16m4_t __riscv_vget_v_bf16m8_bf16m4(vbfloat16m8_t src, size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x2_bf16mf4(vbfloat16mf4x2_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x3_bf16mf4(vbfloat16mf4x3_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x4_bf16mf4(vbfloat16mf4x4_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x5_bf16mf4(vbfloat16mf4x5_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x6_bf16mf4(vbfloat16mf4x6_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x7_bf16mf4(vbfloat16mf4x7_t src, + size_t index); +vbfloat16mf4_t __riscv_vget_v_bf16mf4x8_bf16mf4(vbfloat16mf4x8_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x2_bf16mf2(vbfloat16mf2x2_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x3_bf16mf2(vbfloat16mf2x3_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x4_bf16mf2(vbfloat16mf2x4_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x5_bf16mf2(vbfloat16mf2x5_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x6_bf16mf2(vbfloat16mf2x6_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x7_bf16mf2(vbfloat16mf2x7_t src, + size_t index); +vbfloat16mf2_t __riscv_vget_v_bf16mf2x8_bf16mf2(vbfloat16mf2x8_t src, + size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x2_bf16m1(vbfloat16m1x2_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x3_bf16m1(vbfloat16m1x3_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x4_bf16m1(vbfloat16m1x4_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x5_bf16m1(vbfloat16m1x5_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x6_bf16m1(vbfloat16m1x6_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x7_bf16m1(vbfloat16m1x7_t src, size_t index); +vbfloat16m1_t __riscv_vget_v_bf16m1x8_bf16m1(vbfloat16m1x8_t src, size_t index); +vbfloat16m2_t __riscv_vget_v_bf16m2x2_bf16m2(vbfloat16m2x2_t src, size_t index); +vbfloat16m2_t __riscv_vget_v_bf16m2x3_bf16m2(vbfloat16m2x3_t src, size_t index); +vbfloat16m2_t __riscv_vget_v_bf16m2x4_bf16m2(vbfloat16m2x4_t src, size_t index); +vbfloat16m4_t __riscv_vget_v_bf16m4x2_bf16m4(vbfloat16m4x2_t src, size_t index); +---- + +[[vector-creation]] +==== Vector Creation Intrinsics + +[,c] +---- +vbfloat16m2_t __riscv_vcreate_v_bf16m1_bf16m2(vbfloat16m1_t v0, + vbfloat16m1_t v1); +vbfloat16m4_t __riscv_vcreate_v_bf16m1_bf16m4(vbfloat16m1_t v0, + vbfloat16m1_t v1, + vbfloat16m1_t v2, + vbfloat16m1_t v3); +vbfloat16m8_t __riscv_vcreate_v_bf16m1_bf16m8( + vbfloat16m1_t v0, vbfloat16m1_t v1, vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5, vbfloat16m1_t v6, vbfloat16m1_t v7); +vbfloat16m4_t __riscv_vcreate_v_bf16m2_bf16m4(vbfloat16m2_t v0, + vbfloat16m2_t v1); +vbfloat16m8_t __riscv_vcreate_v_bf16m2_bf16m8(vbfloat16m2_t v0, + vbfloat16m2_t v1, + vbfloat16m2_t v2, + vbfloat16m2_t v3); +vbfloat16m8_t __riscv_vcreate_v_bf16m4_bf16m8(vbfloat16m4_t v0, + vbfloat16m4_t v1); +vbfloat16mf4x2_t 
__riscv_vcreate_v_bf16mf4x2(vbfloat16mf4_t v0, + vbfloat16mf4_t v1); +vbfloat16mf4x3_t __riscv_vcreate_v_bf16mf4x3(vbfloat16mf4_t v0, + vbfloat16mf4_t v1, + vbfloat16mf4_t v2); +vbfloat16mf4x4_t __riscv_vcreate_v_bf16mf4x4(vbfloat16mf4_t v0, + vbfloat16mf4_t v1, + vbfloat16mf4_t v2, + vbfloat16mf4_t v3); +vbfloat16mf4x5_t __riscv_vcreate_v_bf16mf4x5(vbfloat16mf4_t v0, + vbfloat16mf4_t v1, + vbfloat16mf4_t v2, + vbfloat16mf4_t v3, + vbfloat16mf4_t v4); +vbfloat16mf4x6_t +__riscv_vcreate_v_bf16mf4x6(vbfloat16mf4_t v0, vbfloat16mf4_t v1, + vbfloat16mf4_t v2, vbfloat16mf4_t v3, + vbfloat16mf4_t v4, vbfloat16mf4_t v5); +vbfloat16mf4x7_t __riscv_vcreate_v_bf16mf4x7( + vbfloat16mf4_t v0, vbfloat16mf4_t v1, vbfloat16mf4_t v2, vbfloat16mf4_t v3, + vbfloat16mf4_t v4, vbfloat16mf4_t v5, vbfloat16mf4_t v6); +vbfloat16mf4x8_t __riscv_vcreate_v_bf16mf4x8( + vbfloat16mf4_t v0, vbfloat16mf4_t v1, vbfloat16mf4_t v2, vbfloat16mf4_t v3, + vbfloat16mf4_t v4, vbfloat16mf4_t v5, vbfloat16mf4_t v6, vbfloat16mf4_t v7); +vbfloat16mf2x2_t __riscv_vcreate_v_bf16mf2x2(vbfloat16mf2_t v0, + vbfloat16mf2_t v1); +vbfloat16mf2x3_t __riscv_vcreate_v_bf16mf2x3(vbfloat16mf2_t v0, + vbfloat16mf2_t v1, + vbfloat16mf2_t v2); +vbfloat16mf2x4_t __riscv_vcreate_v_bf16mf2x4(vbfloat16mf2_t v0, + vbfloat16mf2_t v1, + vbfloat16mf2_t v2, + vbfloat16mf2_t v3); +vbfloat16mf2x5_t __riscv_vcreate_v_bf16mf2x5(vbfloat16mf2_t v0, + vbfloat16mf2_t v1, + vbfloat16mf2_t v2, + vbfloat16mf2_t v3, + vbfloat16mf2_t v4); +vbfloat16mf2x6_t +__riscv_vcreate_v_bf16mf2x6(vbfloat16mf2_t v0, vbfloat16mf2_t v1, + vbfloat16mf2_t v2, vbfloat16mf2_t v3, + vbfloat16mf2_t v4, vbfloat16mf2_t v5); +vbfloat16mf2x7_t __riscv_vcreate_v_bf16mf2x7( + vbfloat16mf2_t v0, vbfloat16mf2_t v1, vbfloat16mf2_t v2, vbfloat16mf2_t v3, + vbfloat16mf2_t v4, vbfloat16mf2_t v5, vbfloat16mf2_t v6); +vbfloat16mf2x8_t __riscv_vcreate_v_bf16mf2x8( + vbfloat16mf2_t v0, vbfloat16mf2_t v1, vbfloat16mf2_t v2, vbfloat16mf2_t v3, + vbfloat16mf2_t v4, vbfloat16mf2_t v5, vbfloat16mf2_t v6, vbfloat16mf2_t v7); +vbfloat16m1x2_t __riscv_vcreate_v_bf16m1x2(vbfloat16m1_t v0, vbfloat16m1_t v1); +vbfloat16m1x3_t __riscv_vcreate_v_bf16m1x3(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2); +vbfloat16m1x4_t __riscv_vcreate_v_bf16m1x4(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3); +vbfloat16m1x5_t __riscv_vcreate_v_bf16m1x5(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4); +vbfloat16m1x6_t __riscv_vcreate_v_bf16m1x6(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5); +vbfloat16m1x7_t __riscv_vcreate_v_bf16m1x7(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5, + vbfloat16m1_t v6); +vbfloat16m1x8_t __riscv_vcreate_v_bf16m1x8(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5, + vbfloat16m1_t v6, vbfloat16m1_t v7); +vbfloat16m2x2_t __riscv_vcreate_v_bf16m2x2(vbfloat16m2_t v0, vbfloat16m2_t v1); +vbfloat16m2x3_t __riscv_vcreate_v_bf16m2x3(vbfloat16m2_t v0, vbfloat16m2_t v1, + vbfloat16m2_t v2); +vbfloat16m2x4_t __riscv_vcreate_v_bf16m2x4(vbfloat16m2_t v0, vbfloat16m2_t v1, + vbfloat16m2_t v2, vbfloat16m2_t v3); +vbfloat16m4x2_t __riscv_vcreate_v_bf16m4x2(vbfloat16m4_t v0, vbfloat16m4_t v1); +---- diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc new file mode 100644 index 
000000000..c00a11ebb --- /dev/null +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc @@ -0,0 +1,1145 @@ + +=== BFloat16 Vector Loads and Stores Intrinsics + +[[overloaded-bf16-vector-unit-stride-load]] +==== Vector Unit-Stride Load Intrinsics + +[,c] +---- +// masked functions +vbfloat16mf4_t __riscv_vle16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16(vbool8_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16(vbool4_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16(vbool2_t vm, const __bf16 *rs1, size_t vl); +---- + +[[overloaded-bf16-vector-unit-stride-store]] +==== Vector Unit-Stride Store Intrinsics + +[,c] +---- +void __riscv_vse16(__bf16 *rs1, vbfloat16mf4_t vs3, size_t vl); +void __riscv_vse16(__bf16 *rs1, vbfloat16mf2_t vs3, size_t vl); +void __riscv_vse16(__bf16 *rs1, vbfloat16m1_t vs3, size_t vl); +void __riscv_vse16(__bf16 *rs1, vbfloat16m2_t vs3, size_t vl); +void __riscv_vse16(__bf16 *rs1, vbfloat16m4_t vs3, size_t vl); +void __riscv_vse16(__bf16 *rs1, vbfloat16m8_t vs3, size_t vl); +// masked functions +void __riscv_vse16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4_t vs3, size_t vl); +void __riscv_vse16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2_t vs3, size_t vl); +void __riscv_vse16(vbool16_t vm, __bf16 *rs1, vbfloat16m1_t vs3, size_t vl); +void __riscv_vse16(vbool8_t vm, __bf16 *rs1, vbfloat16m2_t vs3, size_t vl); +void __riscv_vse16(vbool4_t vm, __bf16 *rs1, vbfloat16m4_t vs3, size_t vl); +void __riscv_vse16(vbool2_t vm, __bf16 *rs1, vbfloat16m8_t vs3, size_t vl); +---- + +[[overloaded-vector-strided-load]] +==== Vector Strided Load Intrinsics + +[,c] +---- +// masked functions +vbfloat16mf4_t __riscv_vlse16(vbool64_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vlse16(vbool32_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vlse16(vbool16_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vlse16(vbool8_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vlse16(vbool4_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vlse16(vbool2_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +---- + +[[overloaded-vector-strided-store]] +==== Vector Strided Store Intrinsics + +[,c] +---- +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1_t vs3, size_t vl); +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2_t vs3, size_t vl); +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4_t vs3, size_t vl); +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m8_t vs3, size_t vl); +// masked functions +void __riscv_vsse16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsse16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsse16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vsse16(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsse16(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsse16(vbool2_t vm, __bf16 *rs1, ptrdiff_t rs2, vbfloat16m8_t vs3, + size_t 
vl); +---- + +[[overloaded-vector-indexed-load]] +==== Vector Indexed Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vloxei16(const __bf16 *rs1, vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16(const __bf16 *rs1, vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16(const __bf16 *rs1, vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vloxei16(const __bf16 *rs1, vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vloxei16(const __bf16 *rs1, vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vloxei16(const __bf16 *rs1, vuint16m8_t rs2, size_t vl); +vbfloat16mf4_t __riscv_vluxei16(const __bf16 *rs1, vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16(const __bf16 *rs1, vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16(const __bf16 *rs1, vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vluxei16(const __bf16 *rs1, vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vluxei16(const __bf16 *rs1, vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vluxei16(const __bf16 *rs1, vuint16m8_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16(vbool16_t vm, const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16(vbool8_t vm, const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16(vbool4_t vm, const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16(vbool2_t vm, const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16(vbool16_t vm, const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16(vbool8_t vm, const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16(vbool4_t vm, const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16(vbool2_t vm, const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +---- + +[[overloaded-vector-indexed-store]] +==== Vector Indexed Store Intrinsics + +[,c] +---- +void __riscv_vsoxei16(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3, + size_t vl); +void __riscv_vsoxei16(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3, + size_t vl); +void __riscv_vsoxei16(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vsoxei16(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsoxei16(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsoxei16(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3, + size_t vl); +// masked functions +void __riscv_vsoxei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsoxei16(vbool32_t vm, __bf16 *rs1, 
vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsoxei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2, + vbfloat16m1_t vs3, size_t vl); +void __riscv_vsoxei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2, + vbfloat16m2_t vs3, size_t vl); +void __riscv_vsoxei16(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2, + vbfloat16m4_t vs3, size_t vl); +void __riscv_vsoxei16(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2, + vbfloat16m8_t vs3, size_t vl); +void __riscv_vsuxei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsuxei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsuxei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2, + vbfloat16m1_t vs3, size_t vl); +void __riscv_vsuxei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2, + vbfloat16m2_t vs3, size_t vl); +void __riscv_vsuxei16(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2, + vbfloat16m4_t vs3, size_t vl); +void __riscv_vsuxei16(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2, + vbfloat16m8_t vs3, size_t vl); +---- + +[[overloaded-unit-stride-fault-only-first-loads]] +==== Unit-stride Fault-Only-First Loads Intrinsics + +[,c] +---- +// masked functions +vbfloat16mf4_t __riscv_vle16ff(vbool64_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff(vbool32_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff(vbool16_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff(vbool8_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff(vbool4_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff(vbool2_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +---- + +=== BFloat16 Vector Loads and Stores Segment Intrinsics + +[[overloaded-vector-unit-stride-segment-load]] +==== Vector Unit-Stride Segment Load Intrinsics + +[,c] +---- +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16(vbool16_t vm, const __bf16 
*rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16(vbool8_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16(vbool8_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16(vbool8_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16(vbool4_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff(vbool4_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +---- + +[[overloaded-vector-unit-stride-segment-store]] +==== Vector Unit-Stride Segment Store Intrinsics + +[,c] +---- +void __riscv_vsseg2e16(__bf16 *rs1, vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vsseg3e16(__bf16 *rs1, vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vsseg4e16(__bf16 *rs1, vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vsseg5e16(__bf16 *rs1, vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vsseg6e16(__bf16 *rs1, vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vsseg7e16(__bf16 *rs1, vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vsseg8e16(__bf16 *rs1, vbfloat16mf4x8_t vs3, size_t vl); +void 
__riscv_vsseg2e16(__bf16 *rs1, vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vsseg3e16(__bf16 *rs1, vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vsseg4e16(__bf16 *rs1, vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vsseg5e16(__bf16 *rs1, vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vsseg6e16(__bf16 *rs1, vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vsseg7e16(__bf16 *rs1, vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vsseg8e16(__bf16 *rs1, vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vsseg2e16(__bf16 *rs1, vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsseg3e16(__bf16 *rs1, vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsseg4e16(__bf16 *rs1, vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsseg5e16(__bf16 *rs1, vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsseg6e16(__bf16 *rs1, vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsseg7e16(__bf16 *rs1, vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsseg8e16(__bf16 *rs1, vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsseg2e16(__bf16 *rs1, vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsseg3e16(__bf16 *rs1, vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsseg4e16(__bf16 *rs1, vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsseg2e16(__bf16 *rs1, vbfloat16m4x2_t vs3, size_t vl); +// masked functions +void __riscv_vsseg2e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vsseg3e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vsseg4e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vsseg5e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vsseg6e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vsseg7e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vsseg8e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vsseg2e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vsseg3e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vsseg4e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vsseg5e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x5_t vs3, + size_t vl); +void __riscv_vsseg6e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vsseg7e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x7_t vs3, + size_t vl); +void __riscv_vsseg8e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vsseg2e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x2_t vs3, + size_t vl); +void __riscv_vsseg3e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x3_t vs3, + size_t vl); +void __riscv_vsseg4e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x4_t vs3, + size_t vl); +void __riscv_vsseg5e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x5_t vs3, + size_t vl); +void __riscv_vsseg6e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x6_t vs3, + size_t vl); +void __riscv_vsseg7e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x7_t vs3, + size_t vl); +void __riscv_vsseg8e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x8_t vs3, + size_t vl); +void __riscv_vsseg2e16(vbool8_t vm, __bf16 *rs1, vbfloat16m2x2_t vs3, + size_t vl); +void __riscv_vsseg3e16(vbool8_t vm, __bf16 *rs1, vbfloat16m2x3_t vs3, + size_t vl); +void __riscv_vsseg4e16(vbool8_t vm, __bf16 *rs1, vbfloat16m2x4_t vs3, + size_t vl); +void __riscv_vsseg2e16(vbool4_t vm, __bf16 *rs1, vbfloat16m4x2_t vs3, + size_t vl); +---- + +[[overloaded-vector-strided-segment-load]] +==== Vector Strided Segment Load Intrinsics + +[,c] +---- 
+// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16(vbool4_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +---- + +[[overloaded-vector-strided-segment-store]] +==== Vector Strided Segment Store Intrinsics + +[,c] +---- +void __riscv_vssseg2e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vssseg3e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vssseg4e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vssseg5e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vssseg6e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vssseg7e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vssseg8e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vssseg2e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vssseg3e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vssseg4e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vssseg5e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x5_t 
vs3, + size_t vl); +void __riscv_vssseg6e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vssseg7e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x7_t vs3, + size_t vl); +void __riscv_vssseg8e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vssseg2e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x2_t vs3, + size_t vl); +void __riscv_vssseg3e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x3_t vs3, + size_t vl); +void __riscv_vssseg4e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x4_t vs3, + size_t vl); +void __riscv_vssseg5e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x5_t vs3, + size_t vl); +void __riscv_vssseg6e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x6_t vs3, + size_t vl); +void __riscv_vssseg7e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x7_t vs3, + size_t vl); +void __riscv_vssseg8e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x8_t vs3, + size_t vl); +void __riscv_vssseg2e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x2_t vs3, + size_t vl); +void __riscv_vssseg3e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x3_t vs3, + size_t vl); +void __riscv_vssseg4e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x4_t vs3, + size_t vl); +void __riscv_vssseg2e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4x2_t vs3, + size_t vl); +// masked functions +void __riscv_vssseg2e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vssseg3e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vssseg4e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vssseg5e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vssseg6e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vssseg7e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vssseg8e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vssseg2e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vssseg3e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vssseg4e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vssseg5e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vssseg6e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vssseg7e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vssseg8e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vssseg2e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vssseg3e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vssseg4e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vssseg5e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vssseg6e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vssseg7e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vssseg8e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vssseg2e16(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x2_t vs3, size_t vl); +void 
__riscv_vssseg3e16(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vssseg4e16(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vssseg2e16(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m4x2_t vs3, size_t vl); +---- + +[[overloaded-vector-indexed-segment-load]] +==== Vector Indexed Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vloxseg2ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t 
__riscv_vluxseg5ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16(vbool8_t vm, 
const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +---- + +[[overloaded-vector-indexed-segment-store]] +==== Vector Indexed Segment Store Intrinsics + +[,c] +---- +void __riscv_vsoxseg2ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16(__bf16 *rs1, 
vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vsoxseg8ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, + size_t vl); +void __riscv_vsoxseg8ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x7_t vs3, + size_t vl); +void __riscv_vsoxseg8ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x4_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16(__bf16 *rs1, vuint16m4_t vs2, vbfloat16m4x2_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vsuxseg5ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vsuxseg6ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vsuxseg7ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vsuxseg8ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vsuxseg5ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x5_t vs3, + size_t vl); +void __riscv_vsuxseg6ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vsuxseg7ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, + size_t vl); +void __riscv_vsuxseg8ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x4_t vs3, + size_t vl); +void __riscv_vsuxseg5ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x5_t vs3, + size_t vl); +void __riscv_vsuxseg6ei16(__bf16 *rs1, vuint16m1_t 
vs2, vbfloat16m1x6_t vs3, + size_t vl); +void __riscv_vsuxseg7ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x7_t vs3, + size_t vl); +void __riscv_vsuxseg8ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x8_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x4_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16(__bf16 *rs1, vuint16m4_t vs2, vbfloat16m4x2_t vs3, + size_t vl); +// masked functions +void __riscv_vsoxseg2ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsoxseg2ei16(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl); +void __riscv_vsuxseg2ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16(vbool64_t vm, __bf16 
*rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vsuxseg2ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vsuxseg2ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsuxseg2ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsuxseg2ei16(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl); +---- + +=== BFloat16 Miscellaneous Vector Utility Intrinsics + +[[overloaded-reinterpret-cast-conversion]] +==== Reinterpret Cast Conversion Intrinsics + +[,c] +---- +// Reinterpret between different types under the same SEW/LMUL +vbfloat16mf4_t __riscv_vreinterpret_bf16mf4(vint16mf4_t src); +vbfloat16mf2_t __riscv_vreinterpret_bf16mf2(vint16mf2_t src); +vbfloat16m1_t __riscv_vreinterpret_bf16m1(vint16m1_t src); +vbfloat16m2_t __riscv_vreinterpret_bf16m2(vint16m2_t src); +vbfloat16m4_t __riscv_vreinterpret_bf16m4(vint16m4_t src); +vbfloat16m8_t __riscv_vreinterpret_bf16m8(vint16m8_t src); +vbfloat16mf4_t __riscv_vreinterpret_bf16mf4(vuint16mf4_t src); +vbfloat16mf2_t __riscv_vreinterpret_bf16mf2(vuint16mf2_t src); +vbfloat16m1_t __riscv_vreinterpret_bf16m1(vuint16m1_t src); +vbfloat16m2_t __riscv_vreinterpret_bf16m2(vuint16m2_t src); +vbfloat16m4_t __riscv_vreinterpret_bf16m4(vuint16m4_t src); +vbfloat16m8_t __riscv_vreinterpret_bf16m8(vuint16m8_t src); +vint16mf4_t __riscv_vreinterpret_i16mf4(vbfloat16mf4_t src); +vint16mf2_t __riscv_vreinterpret_i16mf2(vbfloat16mf2_t src); +vint16m1_t __riscv_vreinterpret_i16m1(vbfloat16m1_t 
src); +vint16m2_t __riscv_vreinterpret_i16m2(vbfloat16m2_t src); +vint16m4_t __riscv_vreinterpret_i16m4(vbfloat16m4_t src); +vint16m8_t __riscv_vreinterpret_i16m8(vbfloat16m8_t src); +vuint16mf4_t __riscv_vreinterpret_ui16mf4(vbfloat16mf4_t src); +vuint16mf2_t __riscv_vreinterpret_ui16mf2(vbfloat16mf2_t src); +vuint16m1_t __riscv_vreinterpret_ui16m1(vbfloat16m1_t src); +vuint16m2_t __riscv_vreinterpret_ui16m2(vbfloat16m2_t src); +vuint16m4_t __riscv_vreinterpret_ui16m4(vbfloat16m4_t src); +vuint16m8_t __riscv_vreinterpret_ui16m8(vbfloat16m8_t src); +---- + +[[overloaded-vector-lmul-extension]] +==== Vector LMUL Extension Intrinsics + +[,c] +---- +vbfloat16mf2_t __riscv_vlmul_ext_b16mf2(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_b16m1(vbfloat16mf4_t value); +vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16mf4_t value); +vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16mf4_t value); +vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_b16m1(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16mf2_t value); +vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16mf2_t value); +vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16m1_t value); +vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m4_t value); +---- + +[[overloaded-vector-lmul-truncation]] +==== Vector LMUL Truncation Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16mf2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m1_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m1_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m2_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m2_t value); +vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m4_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m4_t value); +vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m4_t value); +vbfloat16m2_t __riscv_vlmul_trunc_b16m2(vbfloat16m4_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m8_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m8_t value); +vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m8_t value); +vbfloat16m2_t __riscv_vlmul_trunc_b16m2(vbfloat16m8_t value); +vbfloat16m4_t __riscv_vlmul_trunc_b16m4(vbfloat16m8_t value); +---- + +[[overloaded-vector-initialization]] +==== Vector Initialization Intrinsics +Intrinsics here don't have an overloaded variant.
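+This is because the initialization intrinsics (the `__riscv_vundefined_*` family of the non-overloaded listing) take no operand from which the result type could be deduced, so overload resolution would have nothing to dispatch on. A minimal sketch of how a named initialization intrinsic combines with the overloaded `__riscv_vset` of the next subsection (the `__riscv_vundefined_bf16m1x2` form is assumed here from the corresponding non-overloaded listing):
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Build a two-field bfloat16 tuple holding the same vector twice.
+// __riscv_vundefined_bf16m1x2 is the named (non-overloaded) form,
+// assumed from the corresponding initialization listing; __riscv_vset
+// is the overloaded insertion intrinsic.
+static inline vbfloat16m1x2_t splat_pair(vbfloat16m1_t v) {
+  vbfloat16m1x2_t t = __riscv_vundefined_bf16m1x2();
+  t = __riscv_vset(t, 0, v);
+  t = __riscv_vset(t, 1, v);
+  return t;
+}
+----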
+ +[[overloaded-vector-insertion]] +==== Vector Insertion Intrinsics + +[,c] +---- +vbfloat16m2_t __riscv_vset(vbfloat16m2_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m4_t __riscv_vset(vbfloat16m4_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m4_t __riscv_vset(vbfloat16m4_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m8_t __riscv_vset(vbfloat16m8_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m8_t __riscv_vset(vbfloat16m8_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m8_t __riscv_vset(vbfloat16m8_t dest, size_t index, + vbfloat16m4_t value); +vbfloat16mf4x2_t __riscv_vset(vbfloat16mf4x2_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x3_t __riscv_vset(vbfloat16mf4x3_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x4_t __riscv_vset(vbfloat16mf4x4_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x5_t __riscv_vset(vbfloat16mf4x5_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x6_t __riscv_vset(vbfloat16mf4x6_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x7_t __riscv_vset(vbfloat16mf4x7_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x8_t __riscv_vset(vbfloat16mf4x8_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf2x2_t __riscv_vset(vbfloat16mf2x2_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x3_t __riscv_vset(vbfloat16mf2x3_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x4_t __riscv_vset(vbfloat16mf2x4_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x5_t __riscv_vset(vbfloat16mf2x5_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x6_t __riscv_vset(vbfloat16mf2x6_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x7_t __riscv_vset(vbfloat16mf2x7_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x8_t __riscv_vset(vbfloat16mf2x8_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16m1x2_t __riscv_vset(vbfloat16m1x2_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x3_t __riscv_vset(vbfloat16m1x3_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x4_t __riscv_vset(vbfloat16m1x4_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x5_t __riscv_vset(vbfloat16m1x5_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x6_t __riscv_vset(vbfloat16m1x6_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x7_t __riscv_vset(vbfloat16m1x7_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x8_t __riscv_vset(vbfloat16m1x8_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m2x2_t __riscv_vset(vbfloat16m2x2_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m2x3_t __riscv_vset(vbfloat16m2x3_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m2x4_t __riscv_vset(vbfloat16m2x4_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m4x2_t __riscv_vset(vbfloat16m4x2_t dest, size_t index, + vbfloat16m4_t value); +---- + +[[overloaded-vector-extraction]] +==== Vector Extraction Intrinsics + +[,c] +---- +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m2_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m4_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m8_t src, size_t index); +vbfloat16m2_t __riscv_vget_bf16m2(vbfloat16m4_t src, size_t index); +vbfloat16m2_t __riscv_vget_bf16m2(vbfloat16m8_t src, size_t index); +vbfloat16m4_t __riscv_vget_bf16m4(vbfloat16m8_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x2_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x3_t src, size_t index); 
+vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x4_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x5_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x6_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x7_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x8_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x2_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x3_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x4_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x5_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x6_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x7_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x8_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x2_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x3_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x4_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x5_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x6_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x7_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x8_t src, size_t index); +vbfloat16m2_t __riscv_vget_bf16m2(vbfloat16m2x2_t src, size_t index); +vbfloat16m2_t __riscv_vget_bf16m2(vbfloat16m2x3_t src, size_t index); +vbfloat16m2_t __riscv_vget_bf16m2(vbfloat16m2x4_t src, size_t index); +vbfloat16m4_t __riscv_vget_bf16m4(vbfloat16m4x2_t src, size_t index); +---- + +[[overloaded-vector-creation]] +==== Vector Creation Intrinsics +Intrinsics here don't have an overloaded variant. 
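+Here the reason is ambiguity rather than a missing operand: an overloaded `__riscv_vcreate` could not be resolved because, for example, `__riscv_vcreate_v_bf16m1_bf16m2` and `__riscv_vcreate_v_bf16m1x2` take identical argument lists and differ only in return type, which C overload resolution cannot dispatch on. A minimal sketch pairing a named vcreate with the overloaded `__riscv_vget` above:
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Pack two m1 vectors into a two-field tuple with the named
+// (non-overloaded) vcreate, then read the second field back with the
+// overloaded vget listed above.
+static inline vbfloat16m1_t second_of_pair(vbfloat16m1_t v0,
+                                           vbfloat16m1_t v1) {
+  vbfloat16m1x2_t pair = __riscv_vcreate_v_bf16m1x2(v0, v1);
+  return __riscv_vget_bf16m1(pair, 1);
+}
+----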
diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc new file mode 100644 index 000000000..67cac3d50 --- /dev/null +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc @@ -0,0 +1,202 @@ + +=== BFloat16 Vector Loads and Stores Intrinsics + +[[overloaded-bf16-vector-unit-stride-load]] +==== Vector Unit-Stride Load Intrinsics + +[,c] +---- +// masked functions +vbfloat16mf4_t __riscv_vle16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16(vbool8_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16(vbool4_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16(vbool2_t vm, const __bf16 *rs1, size_t vl); +---- + +[[overloaded-bf16-vector-unit-stride-store]] +==== Vector Unit-Stride Store Intrinsics + +[,c] +---- +void __riscv_vse16(__bf16 *rs1, vbfloat16mf4_t vs3, size_t vl); +void __riscv_vse16(__bf16 *rs1, vbfloat16mf2_t vs3, size_t vl); +void __riscv_vse16(__bf16 *rs1, vbfloat16m1_t vs3, size_t vl); +void __riscv_vse16(__bf16 *rs1, vbfloat16m2_t vs3, size_t vl); +void __riscv_vse16(__bf16 *rs1, vbfloat16m4_t vs3, size_t vl); +void __riscv_vse16(__bf16 *rs1, vbfloat16m8_t vs3, size_t vl); +// masked functions +void __riscv_vse16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4_t vs3, size_t vl); +void __riscv_vse16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2_t vs3, size_t vl); +void __riscv_vse16(vbool16_t vm, __bf16 *rs1, vbfloat16m1_t vs3, size_t vl); +void __riscv_vse16(vbool8_t vm, __bf16 *rs1, vbfloat16m2_t vs3, size_t vl); +void __riscv_vse16(vbool4_t vm, __bf16 *rs1, vbfloat16m4_t vs3, size_t vl); +void __riscv_vse16(vbool2_t vm, __bf16 *rs1, vbfloat16m8_t vs3, size_t vl); +---- + +[[overloaded-vector-strided-load]] +==== Vector Strided Load Intrinsics + +[,c] +---- +// masked functions +vbfloat16mf4_t __riscv_vlse16(vbool64_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vlse16(vbool32_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vlse16(vbool16_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vlse16(vbool8_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vlse16(vbool4_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vlse16(vbool2_t vm, const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +---- + +[[overloaded-vector-strided-store]] +==== Vector Strided Store Intrinsics + +[,c] +---- +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1_t vs3, size_t vl); +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2_t vs3, size_t vl); +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4_t vs3, size_t vl); +void __riscv_vsse16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m8_t vs3, size_t vl); +// masked functions +void __riscv_vsse16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsse16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsse16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, vbfloat16m1_t vs3, 
+ size_t vl); +void __riscv_vsse16(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsse16(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsse16(vbool2_t vm, __bf16 *rs1, ptrdiff_t rs2, vbfloat16m8_t vs3, + size_t vl); +---- + +[[overloaded-vector-indexed-load]] +==== Vector Indexed Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vloxei16(const __bf16 *rs1, vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16(const __bf16 *rs1, vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16(const __bf16 *rs1, vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vloxei16(const __bf16 *rs1, vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vloxei16(const __bf16 *rs1, vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vloxei16(const __bf16 *rs1, vuint16m8_t rs2, size_t vl); +vbfloat16mf4_t __riscv_vluxei16(const __bf16 *rs1, vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16(const __bf16 *rs1, vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16(const __bf16 *rs1, vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vluxei16(const __bf16 *rs1, vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vluxei16(const __bf16 *rs1, vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vluxei16(const __bf16 *rs1, vuint16m8_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16(vbool16_t vm, const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16(vbool8_t vm, const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16(vbool4_t vm, const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16(vbool2_t vm, const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16(vbool16_t vm, const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16(vbool8_t vm, const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16(vbool4_t vm, const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16(vbool2_t vm, const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +---- + +[[overloaded-vector-indexed-store]] +==== Vector Indexed Store Intrinsics + +[,c] +---- +void __riscv_vsoxei16(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3, + size_t vl); +void __riscv_vsoxei16(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3, + size_t vl); +void __riscv_vsoxei16(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vsoxei16(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsoxei16(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsoxei16(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16m4_t rs2, 
vbfloat16m4_t vs3, + size_t vl); +void __riscv_vsuxei16(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3, + size_t vl); +// masked functions +void __riscv_vsoxei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsoxei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsoxei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2, + vbfloat16m1_t vs3, size_t vl); +void __riscv_vsoxei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2, + vbfloat16m2_t vs3, size_t vl); +void __riscv_vsoxei16(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2, + vbfloat16m4_t vs3, size_t vl); +void __riscv_vsoxei16(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2, + vbfloat16m8_t vs3, size_t vl); +void __riscv_vsuxei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl); +void __riscv_vsuxei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl); +void __riscv_vsuxei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2, + vbfloat16m1_t vs3, size_t vl); +void __riscv_vsuxei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2, + vbfloat16m2_t vs3, size_t vl); +void __riscv_vsuxei16(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2, + vbfloat16m4_t vs3, size_t vl); +void __riscv_vsuxei16(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2, + vbfloat16m8_t vs3, size_t vl); +---- + +[[overloaded-unit-stride-fault-only-first-loads]] +==== Unit-stride Fault-Only-First Loads Intrinsics + +[,c] +---- +// masked functions +vbfloat16mf4_t __riscv_vle16ff(vbool64_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff(vbool32_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff(vbool16_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff(vbool8_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff(vbool4_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff(vbool2_t vm, const __bf16 *rs1, size_t *new_vl, + size_t vl); +---- diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc new file mode 100644 index 000000000..06d4d0a39 --- /dev/null +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc @@ -0,0 +1,750 @@ + +=== BFloat16 Vector Loads and Stores Segment Intrinsics + +[[overloaded-vector-unit-stride-segment-load]] +==== Vector Unit-Stride Segment Load Intrinsics + +[,c] +---- +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16(vbool64_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16(vbool32_t vm, const 
__bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16(vbool32_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16(vbool16_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16(vbool8_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16(vbool8_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16(vbool8_t vm, const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16(vbool4_t vm, const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl); 
+vbfloat16m4x2_t __riscv_vlseg2e16ff(vbool4_t vm, const __bf16 *rs1,
+                                    size_t *new_vl, size_t vl);
+----
+
+[[overloaded-vector-unit-stride-segment-store]]
+==== Vector Unit-Stride Segment Store Intrinsics
+
+[,c]
+----
+void __riscv_vsseg2e16(__bf16 *rs1, vbfloat16mf4x2_t vs3, size_t vl);
+void __riscv_vsseg3e16(__bf16 *rs1, vbfloat16mf4x3_t vs3, size_t vl);
+void __riscv_vsseg4e16(__bf16 *rs1, vbfloat16mf4x4_t vs3, size_t vl);
+void __riscv_vsseg5e16(__bf16 *rs1, vbfloat16mf4x5_t vs3, size_t vl);
+void __riscv_vsseg6e16(__bf16 *rs1, vbfloat16mf4x6_t vs3, size_t vl);
+void __riscv_vsseg7e16(__bf16 *rs1, vbfloat16mf4x7_t vs3, size_t vl);
+void __riscv_vsseg8e16(__bf16 *rs1, vbfloat16mf4x8_t vs3, size_t vl);
+void __riscv_vsseg2e16(__bf16 *rs1, vbfloat16mf2x2_t vs3, size_t vl);
+void __riscv_vsseg3e16(__bf16 *rs1, vbfloat16mf2x3_t vs3, size_t vl);
+void __riscv_vsseg4e16(__bf16 *rs1, vbfloat16mf2x4_t vs3, size_t vl);
+void __riscv_vsseg5e16(__bf16 *rs1, vbfloat16mf2x5_t vs3, size_t vl);
+void __riscv_vsseg6e16(__bf16 *rs1, vbfloat16mf2x6_t vs3, size_t vl);
+void __riscv_vsseg7e16(__bf16 *rs1, vbfloat16mf2x7_t vs3, size_t vl);
+void __riscv_vsseg8e16(__bf16 *rs1, vbfloat16mf2x8_t vs3, size_t vl);
+void __riscv_vsseg2e16(__bf16 *rs1, vbfloat16m1x2_t vs3, size_t vl);
+void __riscv_vsseg3e16(__bf16 *rs1, vbfloat16m1x3_t vs3, size_t vl);
+void __riscv_vsseg4e16(__bf16 *rs1, vbfloat16m1x4_t vs3, size_t vl);
+void __riscv_vsseg5e16(__bf16 *rs1, vbfloat16m1x5_t vs3, size_t vl);
+void __riscv_vsseg6e16(__bf16 *rs1, vbfloat16m1x6_t vs3, size_t vl);
+void __riscv_vsseg7e16(__bf16 *rs1, vbfloat16m1x7_t vs3, size_t vl);
+void __riscv_vsseg8e16(__bf16 *rs1, vbfloat16m1x8_t vs3, size_t vl);
+void __riscv_vsseg2e16(__bf16 *rs1, vbfloat16m2x2_t vs3, size_t vl);
+void __riscv_vsseg3e16(__bf16 *rs1, vbfloat16m2x3_t vs3, size_t vl);
+void __riscv_vsseg4e16(__bf16 *rs1, vbfloat16m2x4_t vs3, size_t vl);
+void __riscv_vsseg2e16(__bf16 *rs1, vbfloat16m4x2_t vs3, size_t vl);
+// masked functions
+void __riscv_vsseg2e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x2_t vs3,
+                       size_t vl);
+void __riscv_vsseg3e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x3_t vs3,
+                       size_t vl);
+void __riscv_vsseg4e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x4_t vs3,
+                       size_t vl);
+void __riscv_vsseg5e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x5_t vs3,
+                       size_t vl);
+void __riscv_vsseg6e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x6_t vs3,
+                       size_t vl);
+void __riscv_vsseg7e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x7_t vs3,
+                       size_t vl);
+void __riscv_vsseg8e16(vbool64_t vm, __bf16 *rs1, vbfloat16mf4x8_t vs3,
+                       size_t vl);
+void __riscv_vsseg2e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x2_t vs3,
+                       size_t vl);
+void __riscv_vsseg3e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x3_t vs3,
+                       size_t vl);
+void __riscv_vsseg4e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x4_t vs3,
+                       size_t vl);
+void __riscv_vsseg5e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x5_t vs3,
+                       size_t vl);
+void __riscv_vsseg6e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x6_t vs3,
+                       size_t vl);
+void __riscv_vsseg7e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x7_t vs3,
+                       size_t vl);
+void __riscv_vsseg8e16(vbool32_t vm, __bf16 *rs1, vbfloat16mf2x8_t vs3,
+                       size_t vl);
+void __riscv_vsseg2e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x2_t vs3,
+                       size_t vl);
+void __riscv_vsseg3e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x3_t vs3,
+                       size_t vl);
+void __riscv_vsseg4e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x4_t vs3,
+                       size_t vl);
+void __riscv_vsseg5e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x5_t vs3,
+ size_t vl); +void __riscv_vsseg6e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x6_t vs3, + size_t vl); +void __riscv_vsseg7e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x7_t vs3, + size_t vl); +void __riscv_vsseg8e16(vbool16_t vm, __bf16 *rs1, vbfloat16m1x8_t vs3, + size_t vl); +void __riscv_vsseg2e16(vbool8_t vm, __bf16 *rs1, vbfloat16m2x2_t vs3, + size_t vl); +void __riscv_vsseg3e16(vbool8_t vm, __bf16 *rs1, vbfloat16m2x3_t vs3, + size_t vl); +void __riscv_vsseg4e16(vbool8_t vm, __bf16 *rs1, vbfloat16m2x4_t vs3, + size_t vl); +void __riscv_vsseg2e16(vbool4_t vm, __bf16 *rs1, vbfloat16m4x2_t vs3, + size_t vl); +---- + +[[overloaded-vector-strided-segment-load]] +==== Vector Strided Segment Load Intrinsics + +[,c] +---- +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16(vbool4_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +---- + +[[overloaded-vector-strided-segment-store]] +==== Vector Strided Segment Store Intrinsics + +[,c] +---- +void __riscv_vssseg2e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vssseg3e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vssseg4e16(__bf16 *rs1, ptrdiff_t rs2, 
vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vssseg5e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vssseg6e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vssseg7e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vssseg8e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vssseg2e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vssseg3e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vssseg4e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vssseg5e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x5_t vs3, + size_t vl); +void __riscv_vssseg6e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vssseg7e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x7_t vs3, + size_t vl); +void __riscv_vssseg8e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vssseg2e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x2_t vs3, + size_t vl); +void __riscv_vssseg3e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x3_t vs3, + size_t vl); +void __riscv_vssseg4e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x4_t vs3, + size_t vl); +void __riscv_vssseg5e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x5_t vs3, + size_t vl); +void __riscv_vssseg6e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x6_t vs3, + size_t vl); +void __riscv_vssseg7e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x7_t vs3, + size_t vl); +void __riscv_vssseg8e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x8_t vs3, + size_t vl); +void __riscv_vssseg2e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x2_t vs3, + size_t vl); +void __riscv_vssseg3e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x3_t vs3, + size_t vl); +void __riscv_vssseg4e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x4_t vs3, + size_t vl); +void __riscv_vssseg2e16(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4x2_t vs3, + size_t vl); +// masked functions +void __riscv_vssseg2e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vssseg3e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vssseg4e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vssseg5e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vssseg6e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vssseg7e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vssseg8e16(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vssseg2e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vssseg3e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vssseg4e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vssseg5e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vssseg6e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vssseg7e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vssseg8e16(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vssseg2e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x2_t vs3, size_t vl); +void 
__riscv_vssseg3e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vssseg4e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vssseg5e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vssseg6e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vssseg7e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vssseg8e16(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vssseg2e16(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vssseg3e16(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vssseg4e16(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vssseg2e16(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m4x2_t vs3, size_t vl); +---- + +[[overloaded-vector-indexed-segment-load]] +==== Vector Indexed Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vloxseg2ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t 
__riscv_vluxseg4ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16(vbool16_t 
vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); 
+vbfloat16m4x2_t __riscv_vluxseg2ei16(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +---- + +[[overloaded-vector-indexed-segment-store]] +==== Vector Indexed Segment Store Intrinsics + +[,c] +---- +void __riscv_vsoxseg2ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vsoxseg8ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, + size_t vl); +void __riscv_vsoxseg8ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x4_t vs3, + size_t vl); +void __riscv_vsoxseg5ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x5_t vs3, + size_t vl); +void __riscv_vsoxseg6ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x6_t vs3, + size_t vl); +void __riscv_vsoxseg7ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x7_t vs3, + size_t vl); +void __riscv_vsoxseg8ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x8_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x2_t vs3, + size_t vl); +void __riscv_vsoxseg3ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x3_t vs3, + size_t vl); +void __riscv_vsoxseg4ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x4_t vs3, + size_t vl); +void __riscv_vsoxseg2ei16(__bf16 *rs1, vuint16m4_t vs2, vbfloat16m4x2_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl); +void __riscv_vsuxseg5ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, + size_t vl); +void __riscv_vsuxseg6ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, + size_t vl); +void __riscv_vsuxseg7ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, + size_t vl); +void __riscv_vsuxseg8ei16(__bf16 *rs1, vuint16mf4_t vs2, vbfloat16mf4x8_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, + size_t vl); +void __riscv_vsuxseg5ei16(__bf16 *rs1, vuint16mf2_t vs2, 
vbfloat16mf2x5_t vs3, + size_t vl); +void __riscv_vsuxseg6ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, + size_t vl); +void __riscv_vsuxseg7ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, + size_t vl); +void __riscv_vsuxseg8ei16(__bf16 *rs1, vuint16mf2_t vs2, vbfloat16mf2x8_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x4_t vs3, + size_t vl); +void __riscv_vsuxseg5ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x5_t vs3, + size_t vl); +void __riscv_vsuxseg6ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x6_t vs3, + size_t vl); +void __riscv_vsuxseg7ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x7_t vs3, + size_t vl); +void __riscv_vsuxseg8ei16(__bf16 *rs1, vuint16m1_t vs2, vbfloat16m1x8_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x2_t vs3, + size_t vl); +void __riscv_vsuxseg3ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x3_t vs3, + size_t vl); +void __riscv_vsuxseg4ei16(__bf16 *rs1, vuint16m2_t vs2, vbfloat16m2x4_t vs3, + size_t vl); +void __riscv_vsuxseg2ei16(__bf16 *rs1, vuint16m4_t vs2, vbfloat16m4x2_t vs3, + size_t vl); +// masked functions +void __riscv_vsoxseg2ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsoxseg5ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsoxseg6ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsoxseg7ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsoxseg8ei16(vbool16_t vm, __bf16 *rs1, 
vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsoxseg2ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsoxseg3ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsoxseg4ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsoxseg2ei16(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl); +void __riscv_vsuxseg2ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16(vbool64_t vm, __bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x8_t vs3, size_t vl); +void __riscv_vsuxseg2ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16(vbool32_t vm, __bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x8_t vs3, size_t vl); +void __riscv_vsuxseg2ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl); +void __riscv_vsuxseg5ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl); +void __riscv_vsuxseg6ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl); +void __riscv_vsuxseg7ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl); +void __riscv_vsuxseg8ei16(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl); +void __riscv_vsuxseg2ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl); +void __riscv_vsuxseg3ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl); +void __riscv_vsuxseg4ei16(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl); +void __riscv_vsuxseg2ei16(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl); +---- diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc new file mode 100644 index 000000000..e0557f220 --- /dev/null +++ 
b/auto-generated/bfloat16/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc
@@ -0,0 +1,193 @@
+
+=== BFloat16 Miscellaneous Vector Utility Intrinsics
+
+[[overloaded-reinterpret-cast-conversion]]
+==== Reinterpret Cast Conversion Intrinsics
+
+[,c]
+----
+// Reinterpret between different types under the same SEW/LMUL
+vbfloat16mf4_t __riscv_vreinterpret_bf16mf4(vint16mf4_t src);
+vbfloat16mf2_t __riscv_vreinterpret_bf16mf2(vint16mf2_t src);
+vbfloat16m1_t __riscv_vreinterpret_bf16m1(vint16m1_t src);
+vbfloat16m2_t __riscv_vreinterpret_bf16m2(vint16m2_t src);
+vbfloat16m4_t __riscv_vreinterpret_bf16m4(vint16m4_t src);
+vbfloat16m8_t __riscv_vreinterpret_bf16m8(vint16m8_t src);
+vbfloat16mf4_t __riscv_vreinterpret_bf16mf4(vuint16mf4_t src);
+vbfloat16mf2_t __riscv_vreinterpret_bf16mf2(vuint16mf2_t src);
+vbfloat16m1_t __riscv_vreinterpret_bf16m1(vuint16m1_t src);
+vbfloat16m2_t __riscv_vreinterpret_bf16m2(vuint16m2_t src);
+vbfloat16m4_t __riscv_vreinterpret_bf16m4(vuint16m4_t src);
+vbfloat16m8_t __riscv_vreinterpret_bf16m8(vuint16m8_t src);
+vint16mf4_t __riscv_vreinterpret_i16mf4(vbfloat16mf4_t src);
+vint16mf2_t __riscv_vreinterpret_i16mf2(vbfloat16mf2_t src);
+vint16m1_t __riscv_vreinterpret_i16m1(vbfloat16m1_t src);
+vint16m2_t __riscv_vreinterpret_i16m2(vbfloat16m2_t src);
+vint16m4_t __riscv_vreinterpret_i16m4(vbfloat16m4_t src);
+vint16m8_t __riscv_vreinterpret_i16m8(vbfloat16m8_t src);
+vuint16mf4_t __riscv_vreinterpret_ui16mf4(vbfloat16mf4_t src);
+vuint16mf2_t __riscv_vreinterpret_ui16mf2(vbfloat16mf2_t src);
+vuint16m1_t __riscv_vreinterpret_ui16m1(vbfloat16m1_t src);
+vuint16m2_t __riscv_vreinterpret_ui16m2(vbfloat16m2_t src);
+vuint16m4_t __riscv_vreinterpret_ui16m4(vbfloat16m4_t src);
+vuint16m8_t __riscv_vreinterpret_ui16m8(vbfloat16m8_t src);
+----
+
+[[overloaded-vector-lmul-extension]]
+==== Vector LMUL Extension Intrinsics
+
+[,c]
+----
+vbfloat16mf2_t __riscv_vlmul_ext_b16mf2(vbfloat16mf4_t value);
+vbfloat16m1_t __riscv_vlmul_ext_b16m1(vbfloat16mf4_t value);
+vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16mf4_t value);
+vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16mf4_t value);
+vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16mf4_t value);
+vbfloat16m1_t __riscv_vlmul_ext_b16m1(vbfloat16mf2_t value);
+vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16mf2_t value);
+vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16mf2_t value);
+vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16mf2_t value);
+vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16m1_t value);
+vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16m1_t value);
+vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m1_t value);
+vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16m2_t value);
+vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m2_t value);
+vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m4_t value);
+----
+
+[[overloaded-vector-lmul-truncation]]
+==== Vector LMUL Truncation Intrinsics
+
+[,c]
+----
+vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16mf2_t value);
+vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m1_t value);
+vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m1_t value);
+vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m2_t value);
+vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m2_t value);
+vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m2_t value);
+vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m4_t value);
+vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m4_t value);
+vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m4_t value);
+vbfloat16m2_t
__riscv_vlmul_trunc_b16m2(vbfloat16m4_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m8_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m8_t value); +vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m8_t value); +vbfloat16m2_t __riscv_vlmul_trunc_b16m2(vbfloat16m8_t value); +vbfloat16m4_t __riscv_vlmul_trunc_b16m4(vbfloat16m8_t value); +---- + +[[overloaded-vector-initialization]] +==== Vector Initialization Intrinsics +Intrinsics here don't have an overloaded variant. + +[[overloaded-vector-insertion]] +==== Vector Insertion Intrinsics + +[,c] +---- +vbfloat16m2_t __riscv_vset(vbfloat16m2_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m4_t __riscv_vset(vbfloat16m4_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m4_t __riscv_vset(vbfloat16m4_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m8_t __riscv_vset(vbfloat16m8_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m8_t __riscv_vset(vbfloat16m8_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m8_t __riscv_vset(vbfloat16m8_t dest, size_t index, + vbfloat16m4_t value); +vbfloat16mf4x2_t __riscv_vset(vbfloat16mf4x2_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x3_t __riscv_vset(vbfloat16mf4x3_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x4_t __riscv_vset(vbfloat16mf4x4_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x5_t __riscv_vset(vbfloat16mf4x5_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x6_t __riscv_vset(vbfloat16mf4x6_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x7_t __riscv_vset(vbfloat16mf4x7_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf4x8_t __riscv_vset(vbfloat16mf4x8_t dest, size_t index, + vbfloat16mf4_t value); +vbfloat16mf2x2_t __riscv_vset(vbfloat16mf2x2_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x3_t __riscv_vset(vbfloat16mf2x3_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x4_t __riscv_vset(vbfloat16mf2x4_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x5_t __riscv_vset(vbfloat16mf2x5_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x6_t __riscv_vset(vbfloat16mf2x6_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x7_t __riscv_vset(vbfloat16mf2x7_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16mf2x8_t __riscv_vset(vbfloat16mf2x8_t dest, size_t index, + vbfloat16mf2_t value); +vbfloat16m1x2_t __riscv_vset(vbfloat16m1x2_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x3_t __riscv_vset(vbfloat16m1x3_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x4_t __riscv_vset(vbfloat16m1x4_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x5_t __riscv_vset(vbfloat16m1x5_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x6_t __riscv_vset(vbfloat16m1x6_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x7_t __riscv_vset(vbfloat16m1x7_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m1x8_t __riscv_vset(vbfloat16m1x8_t dest, size_t index, + vbfloat16m1_t value); +vbfloat16m2x2_t __riscv_vset(vbfloat16m2x2_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m2x3_t __riscv_vset(vbfloat16m2x3_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m2x4_t __riscv_vset(vbfloat16m2x4_t dest, size_t index, + vbfloat16m2_t value); +vbfloat16m4x2_t __riscv_vset(vbfloat16m4x2_t dest, size_t index, + vbfloat16m4_t value); +---- + +[[overloaded-vector-extraction]] +==== Vector Extraction Intrinsics + +[,c] +---- +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m2_t src, size_t index); 
+vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m4_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m8_t src, size_t index); +vbfloat16m2_t __riscv_vget_bf16m2(vbfloat16m4_t src, size_t index); +vbfloat16m2_t __riscv_vget_bf16m2(vbfloat16m8_t src, size_t index); +vbfloat16m4_t __riscv_vget_bf16m4(vbfloat16m8_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x2_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x3_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x4_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x5_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x6_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x7_t src, size_t index); +vbfloat16mf4_t __riscv_vget_bf16mf4(vbfloat16mf4x8_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x2_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x3_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x4_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x5_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x6_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x7_t src, size_t index); +vbfloat16mf2_t __riscv_vget_bf16mf2(vbfloat16mf2x8_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x2_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x3_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x4_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x5_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x6_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x7_t src, size_t index); +vbfloat16m1_t __riscv_vget_bf16m1(vbfloat16m1x8_t src, size_t index); +vbfloat16m2_t __riscv_vget_bf16m2(vbfloat16m2x2_t src, size_t index); +vbfloat16m2_t __riscv_vget_bf16m2(vbfloat16m2x3_t src, size_t index); +vbfloat16m2_t __riscv_vget_bf16m2(vbfloat16m2x4_t src, size_t index); +vbfloat16m4_t __riscv_vget_bf16m4(vbfloat16m4x2_t src, size_t index); +---- + +[[overloaded-vector-creation]] +==== Vector Creation Intrinsics +Intrinsics here don't have an overloaded variant. 
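+
+The following snippet is an illustrative sketch rather than part of the
+generated listing. It assumes a compiler where `<riscv_vector.h>` declares
+the intrinsics above and the BFloat16 vector extensions are enabled, and it
+chains an LMUL extension with a reinterpret cast to obtain an unsigned view
+of the raw BFloat16 bits:
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Grow the register group of a bf16 vector (LMUL 1 -> 2; the upper part of
+// the wider group is unspecified), then view the same bits as unsigned
+// 16-bit elements. The reinterpret cast changes no bit patterns.
+vuint16m2_t bf16_bits_m2(vbfloat16m1_t v) {
+  vbfloat16m2_t grown = __riscv_vlmul_ext_b16m2(v);
+  return __riscv_vreinterpret_ui16m2(grown);
+}
+----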
diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc new file mode 100644 index 000000000..25c99db86 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc @@ -0,0 +1,2393 @@ + +=== BFloat16 Vector Loads and Stores Intrinsics + +[[policy-variant-bf16-vector-unit-stride-load]] +==== Vector Unit-Stride Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vle16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl); +---- + +[[policy-variant-bf16-vector-unit-stride-store]] +==== Vector Unit-Stride Store Intrinsics +Intrinsics here don't have a policy variant. 
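+
+As an illustrative sketch only (the `__riscv_vsetvl_e16m1` helper comes
+from the base vector intrinsics, not from the listing above), a
+tail-undisturbed load leaves the elements of `vd` at positions greater than
+or equal to `vl` unchanged:
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Load up to n bf16 elements from src; elements of vd at positions >= vl
+// are left undisturbed by the _tu policy variant.
+vbfloat16m1_t load_prefix_tu(vbfloat16m1_t vd, const __bf16 *src, size_t n) {
+  size_t vl = __riscv_vsetvl_e16m1(n);
+  return __riscv_vle16_v_bf16m1_tu(vd, src, vl);
+}
+----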
+ +[[policy-variant-vector-strided-load]] +==== Vector Strided Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vlse16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vlse16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vlse16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +---- + +[[policy-variant-vector-strided-store]] +==== Vector Strided Store Intrinsics +Intrinsics here don't have a policy variant. 
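+
+For illustration only, assuming the declarations above: a masked,
+tail-undisturbed strided load can gather every other element, with `rs2`
+given as a byte stride:
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Gather every second bf16 element starting at src under the _tum policy:
+// tail elements keep the values already in vd, while masked-off elements
+// follow the mask-agnostic behavior.
+vbfloat16m1_t gather_even_tum(vbool16_t vm, vbfloat16m1_t vd,
+                              const __bf16 *src, size_t vl) {
+  return __riscv_vlse16_v_bf16m1_tum(vm, vd, src,
+                                     (ptrdiff_t)(2 * sizeof(__bf16)), vl);
+}
+----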
+ +[[policy-variant-vector-indexed-load]] +==== Vector Indexed Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 
*rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +---- + +[[policy-variant-vector-indexed-store]] +==== Vector Indexed Store Intrinsics +Intrinsics here don't have a policy variant. 
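+
+The sketch below illustrates the indexed load intrinsics above (the
+helper name is hypothetical). The index operand holds byte offsets from
+the base pointer, not element indices; `vloxei16` performs the accesses
+in element order while `vluxei16` leaves the ordering unspecified.
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Unordered gather of __bf16 elements located at the given byte
+// offsets from base; elements past vl keep their previous value in
+// dst (_tu policy).
+vbfloat16m1_t gather_tu(vbfloat16m1_t dst, const __bf16 *base,
+                        vuint16m1_t byte_offsets, size_t vl) {
+  return __riscv_vluxei16_v_bf16m1_tu(dst, base, byte_offsets, vl);
+}
+----
+
+Prefer the ordered `vloxei16` variants when the order of the memory
+accesses is observable, e.g. when the addressed region is memory-mapped
+I/O.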
+ +[[policy-variant-unit-stride-fault-only-first-loads]] +==== Unit-stride Fault-Only-First Loads Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vle16ff_v_bf16mf4_tu(vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2_tu(vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +---- + +=== BFloat16 Vector Loads and Stores Segment Intrinsics + +[[policy-variant-vector-unit-stride-segment-load]] +==== Vector Unit-Stride Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl); 
+vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t 
__riscv_vlseg4e16ff_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const 
__bf16 *rs1, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + 
size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); 
+vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); 
+vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t 
__riscv_vlseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2_mu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t 
__riscv_vlseg3e16ff_v_bf16m2x3_mu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4_mu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2_mu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +---- + +[[policy-variant-vector-unit-stride-segment-store]] +==== Vector Unit-Stride Segment Store Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-vector-strided-segment-load]] +==== Vector Strided Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlsseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1,
+ ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t 
__riscv_vlsseg2e16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_v_bf16m4x2_tumu(vbool4_t 
vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_v_bf16m2x2_mu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3_mu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4_mu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_v_bf16m4x2_mu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +---- + 
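+As an illustrative sketch of the strided segment load intrinsics above
+(the record layout and helper name are assumptions, and the bf16 tuple
+form of the `vget` pseudo-intrinsic is assumed to be available), the
+two-field load below reads the leading `{x, y}` pair of every record:
+
+[,c]
+----
+#include <riscv_vector.h>
+#include <stddef.h>
+
+// Load the leading {x, y} __bf16 pair of every record; rec_bytes is
+// the byte distance between consecutive records. Tail elements of the
+// destination tuple stay undisturbed (_tu policy).
+void load_xy_tu(vbfloat16m1x2_t vd, const __bf16 *recs,
+                ptrdiff_t rec_bytes, vbfloat16m1_t *x, vbfloat16m1_t *y,
+                size_t vl) {
+  vbfloat16m1x2_t v =
+      __riscv_vlsseg2e16_v_bf16m1x2_tu(vd, recs, rec_bytes, vl);
+  *x = __riscv_vget_v_bf16m1x2_bf16m1(v, 0);
+  *y = __riscv_vget_v_bf16m1x2_bf16m1(v, 1);
+}
+----
+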
+[[policy-variant-vector-strided-segment-store]] +==== Vector Strided Segment Store Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-vector-indexed-segment-load]] +==== Vector Indexed Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + 
size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t 
__riscv_vloxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + 
vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t 
__riscv_vloxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, + size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3_tumu(vbool64_t 
vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + 
const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, 
+ vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +---- + 
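As a usage sketch only (not part of the generated listing above): in the `_tum` variants, `vm` selects the active elements and `vd` supplies the values kept in tail elements, while masked-off elements are agnostic. The helper below is hypothetical and assumes `<riscv_vector.h>` with the `Zvfbfmin` extension enabled; it gathers two-field bf16 segments whose byte offsets are given in an index vector.

[,c]
----
#include <riscv_vector.h>

// Hypothetical helper: gather interleaved {x, y} bf16 pairs located at
// the byte offsets in `offs`, merging into `prev` under the _tum policy
// (tail elements keep their old values from `prev`; masked-off elements
// are agnostic).
static inline vbfloat16m1x2_t
gather_xy_pairs_tum(vbool16_t vm, vbfloat16m1x2_t prev, const __bf16 *base,
                    vuint16m1_t offs, size_t vl) {
  return __riscv_vloxseg2ei16_v_bf16m1x2_tum(vm, prev, base, offs, vl);
}
----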
+[[policy-variant-vector-indexed-segment-store]]
+==== Vector Indexed Segment Store Intrinsics
+Intrinsics here don't have a policy variant.
+
+=== BFloat16 Miscellaneous Vector Utility Intrinsics
+
+[[policy-variant-reinterpret-cast-conversion]]
+==== Reinterpret Cast Conversion Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-vector-lmul-extension]]
+==== Vector LMUL Extension Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-vector-lmul-truncation]]
+==== Vector LMUL Truncation Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-vector-initialization]]
+==== Vector Initialization Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-vector-insertion]]
+==== Vector Insertion Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-vector-extraction]]
+==== Vector Extraction Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-vector-creation]]
+==== Vector Creation Intrinsics
+Intrinsics here don't have a policy variant.
diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc
new file mode 100644
index 000000000..7d99fcc30
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc
@@ -0,0 +1,372 @@
+
+=== BFloat16 Vector Loads and Stores Intrinsics
+
+[[policy-variant-bf16-vector-unit-stride-load]]
+==== Vector Unit-Stride Load Intrinsics
+
+[,c]
+----
+vbfloat16mf4_t __riscv_vle16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl);
+// masked functions +vbfloat16mf4_t __riscv_vle16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl); +---- + +[[policy-variant-bf16-vector-unit-stride-store]] +==== Vector Unit-Stride Store Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-vector-strided-load]] +==== Vector Strided Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vlse16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vlse16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vlse16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vlse16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2_t 
__riscv_vlse16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vlse16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vlse16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +---- + +[[policy-variant-vector-strided-store]] +==== Vector Strided Store Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-vector-indexed-load]] +==== Vector Indexed Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +// masked functions 
+vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +---- + +[[policy-variant-vector-indexed-store]] +==== Vector Indexed Store Intrinsics +Intrinsics here don't have a policy variant. 
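A usage sketch for the indexed loads above (again hypothetical, assuming `<riscv_vector.h>` and the `Zvfbfmin` extension): the `_tumu` variants leave both tail elements and masked-off elements undisturbed, which makes them convenient for accumulating a sparse gather into an existing register.

[,c]
----
#include <riscv_vector.h>

// Hypothetical helper: unordered indexed gather of bf16 elements at the
// byte offsets in `offs`. Under the _tumu policy, tail elements and
// masked-off elements both keep their previous values from `prev`.
static inline vbfloat16m1_t
gather_bf16_tumu(vbool16_t vm, vbfloat16m1_t prev, const __bf16 *base,
                 vuint16m1_t offs, size_t vl) {
  return __riscv_vluxei16_v_bf16m1_tumu(vm, prev, base, offs, vl);
}
----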
+ +[[policy-variant-unit-stride-fault-only-first-loads]] +==== Unit-stride Fault-Only-First Loads Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vle16ff_v_bf16mf4_tu(vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2_tu(vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +---- diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc new file mode 100644 index 000000000..f67848b46 --- /dev/null +++ 
b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc @@ -0,0 +1,1991 @@ + +=== BFloat16 Vector Loads and Stores Segment Intrinsics + +[[policy-variant-vector-unit-stride-segment-load]] +==== Vector Unit-Stride Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t 
__riscv_vlseg7e16ff_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + 
const __bf16 *rs1, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t 
__riscv_vlseg6e16ff_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const 
__bf16 *rs1, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, + size_t vl); +vbfloat16m1x2_t 
__riscv_vlseg2e16ff_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, 
+ const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_v_bf16m1x6_mu(vbool16_t 
vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_v_bf16m2x2_mu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_v_bf16m2x3_mu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_v_bf16m2x4_mu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_v_bf16m4x2_mu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl); +---- + +[[policy-variant-vector-unit-stride-segment-store]] +==== Vector Unit-Stride Segment Store Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-vector-strided-segment-load]] +==== Vector Strided Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlsseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t
__riscv_vlsseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t 
__riscv_vlsseg8e16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_v_bf16m1x8_tumu(vbool16_t vm, + 
vbfloat16m1x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t 
__riscv_vlsseg2e16_v_bf16m2x2_mu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_v_bf16m2x3_mu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_v_bf16m2x4_mu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_v_bf16m4x2_mu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +---- + +[[policy-variant-vector-strided-segment-store]] +==== Vector Strided Segment Store Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-vector-indexed-segment-load]] +==== Vector Indexed Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const 
__bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + 
const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t 
__riscv_vloxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 
*rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + 
size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, + size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t 
__riscv_vluxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, + vbfloat16m2x2_t vd, + const 
__bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, + 
vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +---- + +[[policy-variant-vector-indexed-segment-store]] +==== Vector Indexed Segment Store Intrinsics +Intrinsics here don't have a policy variant. diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc new file mode 100644 index 000000000..363b02828 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc @@ -0,0 +1,30 @@ + +=== BFloat16 Miscellaneous Vector Utility Intrinsics + +[[policy-variant-reinterpret-cast-conversion]] +==== Reinterpret Cast Conversion Intrinsics +Intrinsics here don't have a policy variant; see the sketch at the end of this section. + +[[policy-variant-vector-lmul-extension]] +==== Vector LMUL Extension Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-vector-lmul-truncation]] +==== Vector LMUL Truncation Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-vector-initialization]] +==== Vector Initialization Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-vector-insertion]] +==== Vector Insertion Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-vector-extraction]] +==== Vector Extraction Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-vector-creation]] +==== Vector Creation Intrinsics +Intrinsics here don't have a policy variant.
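+
+These utility intrinsics take no policy suffix because they only relabel or
+regroup register contents rather than compute new elements under a mask. A
+minimal sketch of the reinterpret casts, assuming the names follow the same
+`__riscv_vreinterpret_v_<src>_<dst>` pattern as the other element types and
+borrowing `__riscv_vxor_vx_i16m1` from the main specification:
+
+[,c]
+----
+#include <riscv_vector.h>
+#include <stddef.h>
+#include <stdint.h>
+
+// Negate a bf16 vector by toggling the sign bit through an i16 view. The
+// reinterpret casts below only change the type label on the register
+// contents, which is why no tail/mask policy applies to them.
+vbfloat16m1_t negate_bf16(vbfloat16m1_t v, size_t vl) {
+  vint16m1_t bits = __riscv_vreinterpret_v_bf16m1_i16m1(v);  // assumed name
+  bits = __riscv_vxor_vx_i16m1(bits, (int16_t)0x8000, vl);   // flip sign bit
+  return __riscv_vreinterpret_v_i16m1_bf16m1(bits);          // assumed name
+}
+----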
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc new file mode 100644 index 000000000..99bd83e3b --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc @@ -0,0 +1,1708 @@ + +=== BFloat16 Vector Loads and Stores Intrinsics + +[[policy-variant-overloadedbf16-vector-unit-stride-load]] +==== Vector Unit-Stride Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vle16_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2_t __riscv_vle16_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1_t __riscv_vle16_tu(vbfloat16m1_t vd, const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_tu(vbfloat16m2_t vd, const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_tu(vbfloat16m4_t vd, const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_tu(vbfloat16m8_t vd, const __bf16 *rs1, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m4_t __riscv_vle16_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m8_t __riscv_vle16_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, + size_t vl); +---- + +[[policy-variant-overloadedbf16-vector-unit-stride-store]] +==== Vector Unit-Stride Store Intrinsics +Intrinsics here don't have a policy variant. 
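+
+A minimal strip-mining sketch built on the tail-undisturbed load above. The
+`__riscv_vsetvl_e16m1` helper and the plain unit-stride store `__riscv_vse16`
+are assumed from the main specification rather than taken from this listing:
+
+[,c]
+----
+#include <riscv_vector.h>
+#include <stddef.h>
+
+// Copy n bf16 values; on the final, shorter iteration the tail-undisturbed
+// policy keeps the tail elements of vd instead of clobbering them.
+void copy_bf16(__bf16 *dst, const __bf16 *src, size_t n, vbfloat16m1_t vd) {
+  for (size_t vl; n > 0; n -= vl, src += vl, dst += vl) {
+    vl = __riscv_vsetvl_e16m1(n);        // elements handled in this pass
+    vd = __riscv_vle16_tu(vd, src, vl);  // tail-undisturbed unit-stride load
+    __riscv_vse16(dst, vd, vl);          // plain store (assumed overload)
+  }
+}
+----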
+ +[[policy-variant-overloadedvector-strided-load]] +==== Vector Strided Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vlse16_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_tu(vbfloat16m1_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_tu(vbfloat16m2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_tu(vbfloat16m4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_tu(vbfloat16m8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +---- + +[[policy-variant-overloadedvector-strided-store]] +==== Vector Strided Store Intrinsics +Intrinsics here don't have a policy variant. 
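+
+As a usage sketch, the strided load above can deinterleave one channel of
+packed {left, right} bf16 pairs; the `rs2` operand is a byte stride, so the
+element index is scaled by the two-byte size of `__bf16`:
+
+[,c]
+----
+#include <riscv_vector.h>
+#include <stddef.h>
+
+// Gather the left channel of interleaved {L,R} bf16 samples. Under the _tum
+// policy the tail keeps the values already in vd, while elements masked off
+// by vm are handled mask-agnostically.
+vbfloat16m1_t left_channel(vbool16_t vm, vbfloat16m1_t vd,
+                           const __bf16 *samples, size_t vl) {
+  return __riscv_vlse16_tum(vm, vd, samples, 2 * sizeof(__bf16), vl);
+}
+----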
+ +[[policy-variant-overloadedvector-indexed-load]] +==== Vector Indexed Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vloxei16_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vloxei16_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vloxei16_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vloxei16_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +vbfloat16mf4_t __riscv_vluxei16_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vluxei16_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vluxei16_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vluxei16_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vloxei16_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vloxei16_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vluxei16_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vluxei16_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vloxei16_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vloxei16_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, 
+ size_t vl); +vbfloat16mf4_t __riscv_vluxei16_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vluxei16_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vluxei16_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vloxei16_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vloxei16_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vluxei16_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vluxei16_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +---- + +[[policy-variant-overloadedvector-indexed-store]] +==== Vector Indexed Store Intrinsics +Intrinsics here don't have a policy variant. 
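+
+A short gather sketch built on the ordered indexed load above. The index
+vector holds byte offsets, so the element index is scaled by the two-byte
+size of `__bf16`; `__riscv_vid_v_u16m1` and `__riscv_vsll_vx_u16m1` are
+assumed from the main specification:
+
+[,c]
+----
+#include <riscv_vector.h>
+#include <stddef.h>
+
+// Ordered indexed gather of vl bf16 values from base + 0, 2, 4, ... bytes.
+// Under the _mu policy, elements where vm is 0 keep the values already in vd.
+vbfloat16m1_t gather_bf16(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *base,
+                          size_t vl) {
+  vuint16m1_t idx = __riscv_vid_v_u16m1(vl);            // 0, 1, 2, ...
+  vuint16m1_t off = __riscv_vsll_vx_u16m1(idx, 1, vl);  // byte offsets
+  return __riscv_vloxei16_mu(vm, vd, base, off, vl);
+}
+----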
+ +[[policy-variant-overloadedunit-stride-fault-only-first-loads]] +==== Unit-stride Fault-Only-First Loads Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vle16ff_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2_t __riscv_vle16ff_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1_t __riscv_vle16ff_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2_t __riscv_vle16ff_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4_t __riscv_vle16ff_tu(vbfloat16m4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m8_t __riscv_vle16ff_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m2_t __riscv_vle16ff_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m4_t __riscv_vle16ff_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m8_t __riscv_vle16ff_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16mf2_t __riscv_vle16ff_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m1_t __riscv_vle16ff_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m2_t __riscv_vle16ff_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m4_t __riscv_vle16ff_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m8_t __riscv_vle16ff_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +---- + +=== BFloat16 Vector Loads and Stores Segment Intrinsics + +[[policy-variant-overloadedvector-unit-stride-segment-load]] +==== Vector Unit-Stride Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlseg2e16_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_tu(vbfloat16mf4x6_t vd, const __bf16 
*rs1, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t 
__riscv_vlseg3e16ff_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_tum(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_tum(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_tum(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_tum(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_tum(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_tum(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_tum(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_tum(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_tum(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_tum(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_tum(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_tum(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_tum(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_tum(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t 
vl); +vbfloat16m2x4_t __riscv_vlseg4e16_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_tum(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_tum(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_tum(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_tum(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_tum(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_tum(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_tum(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_tum(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_tum(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_tum(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_tum(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_tum(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_tum(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_tum(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t 
__riscv_vlseg3e16_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, + 
const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_mu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_mu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_mu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_mu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_mu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_mu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_mu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_mu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_mu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_mu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_mu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_mu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t 
__riscv_vlseg7e16_mu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_mu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_mu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_mu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_mu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_mu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_mu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_mu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_mu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_mu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_mu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_mu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_mu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_mu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_mu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_mu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_mu(vbool16_t vm, 
vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +---- + +[[policy-variant-overloadedvector-unit-stride-segment-store]] +==== Vector Unit-Stride Segment Store Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-overloadedvector-strided-segment-load]] +==== Vector Strided Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlsseg2e16_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t
__riscv_vlsseg2e16_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_tum(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_tum(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_tum(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_tum(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_tum(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_tum(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_tum(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_tum(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_tum(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_tum(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_tum(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_tum(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_tum(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_tum(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t 
vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_mu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_mu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_mu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, ptrdiff_t 
rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_mu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_mu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_mu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_mu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_mu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_mu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_mu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_mu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_mu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_mu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_mu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +---- + +[[policy-variant-overloadedvector-strided-segment-store]] +==== Vector Strided Segment Store Intrinsics +Intrinsics here don't have a policy variant. 
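+
+As a usage sketch for the strided segment-load intrinsics listed above, the
+following minimal, non-normative helper loads interleaved bf16 pairs. It
+assumes a toolchain whose `<riscv_vector.h>` provides these intrinsics; the
+helper name `load_every_other_pair` is illustrative only. Note that `rs2` is
+the byte distance between consecutive segments.
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// From an array of {a, b} bf16 pairs, load every other pair into a
+// two-field tuple. One pair occupies 4 bytes, so skipping alternate
+// pairs means a byte stride of 8. Tail elements keep their previous
+// values from vd (tu policy).
+vbfloat16m1x2_t load_every_other_pair(vbfloat16m1x2_t vd, const __bf16 *rs1,
+                                      size_t n) {
+  size_t vl = __riscv_vsetvl_e16m1(n);
+  return __riscv_vlsseg2e16_tu(vd, rs1, 8, vl);
+}
+----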
+ +[[policy-variant-overloadedvector-indexed-segment-load]] +==== Vector Indexed Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vloxseg2ei16_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t 
__riscv_vluxseg7ei16_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_tum(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_tum(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_tum(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_tum(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_tum(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_tum(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_tum(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_tum(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_tum(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_tum(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t 
vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_tum(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_tum(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_tum(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_tum(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_tum(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_tum(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_tum(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_tum(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_tum(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_tum(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_tum(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_tum(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_tum(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_tum(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_tum(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_tum(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t 
__riscv_vluxseg7ei16_tum(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_tum(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t 
__riscv_vloxseg2ei16_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t 
__riscv_vluxseg4ei16_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_mu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_mu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_mu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_mu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_mu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_mu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_mu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_mu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_mu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_mu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_mu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_mu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_mu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_mu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_mu(vbool16_t vm, 
vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_mu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_mu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_mu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_mu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_mu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_mu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_mu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_mu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_mu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_mu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_mu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_mu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_mu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_mu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); 
+vbfloat16m2x2_t __riscv_vluxseg2ei16_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +---- + +[[policy-variant-overloadedvector-indexed-segment-store]] +==== Vector Indexed Segment Store Intrinsics +Intrinsics here don't have a policy variant. + +=== BFloat16 Miscellaneous Vector Utility Intrinsics + +[[policy-variant-overloadedreinterpret-cast-conversion]] +==== Reinterpret Cast Conversion Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-overloadedvector-lmul-extensionn]] +==== Vector LMUL Extension Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-overloadedvector-lmul-truncation]] +==== Vector LMUL Truncation Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-overloadedvector-initialization]] +==== Vector Initialization Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-overloadedvector-insertion]] +==== Vector Insertion Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-overloadedvector-extraction]] +==== Vector Extraction Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-overloadedvector-creation]] +==== Vector Creation Intrinsics +Intrinsics here don't have a policy variant. diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc new file mode 100644 index 000000000..17fec1b34 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/00_bfloat16_vector_loads_and_stores_intrinsics.adoc @@ -0,0 +1,334 @@ + +=== BFloat16 Vector Loads and Stores Intrinsics + +[[policy-variant-overloadedbf16-vector-unit-stride-load]] +==== Vector Unit-Stride Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vle16_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2_t __riscv_vle16_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1_t __riscv_vle16_tu(vbfloat16m1_t vd, const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_tu(vbfloat16m2_t vd, const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_tu(vbfloat16m4_t vd, const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_tu(vbfloat16m8_t vd, const __bf16 *rs1, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t 
__riscv_vle16_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4_t __riscv_vle16_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m8_t __riscv_vle16_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2_t __riscv_vle16_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1_t __riscv_vle16_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2_t __riscv_vle16_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m4_t __riscv_vle16_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m8_t __riscv_vle16_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, + size_t vl); +---- + +[[policy-variant-overloadedbf16-vector-unit-stride-store]] +==== Vector Unit-Stride Store Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-overloadedvector-strided-load]] +==== Vector Strided Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vlse16_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_tu(vbfloat16m1_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_tu(vbfloat16m2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_tu(vbfloat16m4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_tu(vbfloat16m8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vlse16_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vlse16_mu(vbool32_t vm, 
vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m1_t __riscv_vlse16_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m2_t __riscv_vlse16_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m4_t __riscv_vlse16_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +vbfloat16m8_t __riscv_vlse16_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, size_t vl); +---- + +[[policy-variant-overloadedvector-strided-store]] +==== Vector Strided Store Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-overloadedvector-indexed-load]] +==== Vector Indexed Load Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vloxei16_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vloxei16_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vloxei16_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vloxei16_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vloxei16_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vloxei16_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +vbfloat16mf4_t __riscv_vluxei16_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2_t __riscv_vluxei16_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1_t __riscv_vluxei16_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2_t __riscv_vluxei16_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4_t __riscv_vluxei16_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16m8_t __riscv_vluxei16_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vloxei16_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vloxei16_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vluxei16_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vluxei16_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_tumu(vbool64_t vm, 
vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vloxei16_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vloxei16_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vluxei16_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vluxei16_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vloxei16_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vloxei16_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vloxei16_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vloxei16_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vloxei16_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vloxei16_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +vbfloat16mf4_t __riscv_vluxei16_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2_t __riscv_vluxei16_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1_t __riscv_vluxei16_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2_t __riscv_vluxei16_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4_t __riscv_vluxei16_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16m8_t __riscv_vluxei16_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl); +---- + +[[policy-variant-overloadedvector-indexed-store]] +==== Vector Indexed Store Intrinsics +Intrinsics here don't have a policy variant. 
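+
+For orientation, here is a minimal usage sketch (not part of the generated
+listing) that combines the overloaded tail-undisturbed indexed load above with
+a byte-offset index vector. The helper name `gather_bf16` and the suggested
+`-march` string are illustrative assumptions; note that `vloxei16`/`vluxei16`
+take byte offsets in `rs2`, so the indices must already be scaled by
+`sizeof(__bf16)`.
+
+[,c]
+----
+// Illustrative sketch, assuming a toolchain with the vector and zvfbfmin
+// extensions enabled (e.g. -march=rv64gcv_zvfbfmin).
+#include <riscv_vector.h>
+#include <stddef.h>
+#include <stdint.h>
+
+vbfloat16m1_t gather_bf16(vbfloat16m1_t vd, const __bf16 *base,
+                          const uint16_t *byte_offsets, size_t avl) {
+  size_t vl = __riscv_vsetvl_e16m1(avl);
+  // rs2 holds byte offsets into base, not element indices.
+  vuint16m1_t rs2 = __riscv_vle16_v_u16m1(byte_offsets, vl);
+  // Ordered indexed load; tail elements of vd beyond vl are undisturbed.
+  return __riscv_vloxei16_tu(vd, base, rs2, vl);
+}
+----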
+ +[[policy-variant-overloadedunit-stride-fault-only-first-loads]] +==== Unit-stride Fault-Only-First Loads Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vle16ff_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2_t __riscv_vle16ff_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1_t __riscv_vle16ff_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2_t __riscv_vle16ff_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4_t __riscv_vle16ff_tu(vbfloat16m4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m8_t __riscv_vle16ff_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m2_t __riscv_vle16ff_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m4_t __riscv_vle16ff_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m8_t __riscv_vle16ff_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2_t __riscv_vle16ff_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1_t __riscv_vle16ff_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2_t __riscv_vle16ff_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4_t __riscv_vle16ff_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m8_t __riscv_vle16ff_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vle16ff_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16mf2_t __riscv_vle16ff_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m1_t __riscv_vle16ff_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m2_t __riscv_vle16ff_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m4_t __riscv_vle16ff_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +vbfloat16m8_t __riscv_vle16ff_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, size_t vl); +---- diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc new file mode 100644 index 000000000..507b4155e --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/01_bfloat16_vector_loads_and_stores_segment_intrinsics.adoc @@ -0,0 +1,1344 @@ + +=== BFloat16 Vector Loads and Stores Segment Intrinsics + +[[policy-variant-overloadedvector-unit-stride-segment-load]] +==== Vector Unit-Stride Segment Load 
Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlseg2e16_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, + size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); 
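+// Note: for the fault-only-first (vlseg<nf>e16ff) variants, *new_vl receives
+// the number of elements actually processed, which may be smaller than vl if
+// an element after the first raises an exception.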
+vbfloat16mf2x6_t __riscv_vlseg6e16ff_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_tum(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_tum(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_tum(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_tum(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_tum(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_tum(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_tum(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_tum(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_tum(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_tum(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_tum(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_tum(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_tum(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_tum(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_tum(vbool16_t vm, 
vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_tum(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_tum(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_tum(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_tum(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_tum(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_tum(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_tum(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_tum(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_tum(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_tum(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_tum(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_tum(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_tum(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_tum(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_tum(vbool8_t vm, vbfloat16m2x3_t 
vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); 
+vbfloat16mf4x6_t __riscv_vlseg6e16ff_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlseg2e16_mu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16_mu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16_mu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16_mu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16_mu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16_mu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16_mu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16_mu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t 
vl); +vbfloat16mf2x3_t __riscv_vlseg3e16_mu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16_mu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16_mu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16_mu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16_mu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16_mu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl); +vbfloat16mf4x2_t __riscv_vlseg2e16ff_mu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x3_t __riscv_vlseg3e16ff_mu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x4_t __riscv_vlseg4e16ff_mu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x5_t __riscv_vlseg5e16ff_mu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x6_t __riscv_vlseg6e16ff_mu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x7_t __riscv_vlseg7e16ff_mu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf4x8_t __riscv_vlseg8e16ff_mu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x2_t __riscv_vlseg2e16ff_mu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x3_t __riscv_vlseg3e16ff_mu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x4_t __riscv_vlseg4e16ff_mu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x5_t __riscv_vlseg5e16ff_mu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x6_t __riscv_vlseg6e16ff_mu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x7_t __riscv_vlseg7e16ff_mu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16mf2x8_t __riscv_vlseg8e16ff_mu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t 
*new_vl, + size_t vl); +vbfloat16m1x2_t __riscv_vlseg2e16ff_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x3_t __riscv_vlseg3e16ff_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x4_t __riscv_vlseg4e16ff_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x5_t __riscv_vlseg5e16ff_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x6_t __riscv_vlseg6e16ff_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x7_t __riscv_vlseg7e16ff_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m1x8_t __riscv_vlseg8e16ff_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x2_t __riscv_vlseg2e16ff_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x3_t __riscv_vlseg3e16ff_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m2x4_t __riscv_vlseg4e16ff_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +vbfloat16m4x2_t __riscv_vlseg2e16ff_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl); +---- + +[[policy-variant-overloadedvecrtor-unit-stride-segment-store]] +==== Vector Unit-Stride Segment Store Intrinsics +Intrinsics here don't have a policy variant. + +[[policy-variant-overloadedvector-strided-segment-load]] +==== Vector Strided Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vlsseg2e16_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x5_t 
__riscv_vlsseg5e16_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_tum(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_tum(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_tum(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_tum(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_tum(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_tum(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_tum(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_tum(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_tum(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_tum(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_tum(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_tum(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_tum(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_tum(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_tum(vbool8_t vm, 
vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_tumu(vbool4_t vm, vbfloat16m4x2_t 
vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vlsseg2e16_mu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vlsseg3e16_mu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vlsseg4e16_mu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vlsseg5e16_mu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vlsseg6e16_mu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vlsseg7e16_mu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vlsseg8e16_mu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vlsseg2e16_mu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vlsseg3e16_mu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vlsseg4e16_mu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vlsseg5e16_mu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vlsseg6e16_mu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vlsseg7e16_mu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vlsseg8e16_mu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vlsseg2e16_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vlsseg3e16_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vlsseg4e16_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vlsseg5e16_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vlsseg6e16_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vlsseg7e16_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vlsseg8e16_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vlsseg2e16_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vlsseg3e16_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vlsseg4e16_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vlsseg2e16_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl); +---- + +[[policy-variant-overloadedvector-strided-segment-store]] +==== Vector Strided Segment Store Intrinsics +Intrinsics here don't have a policy variant. 
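+
+As with the indexed loads earlier, a short sketch may help place the strided
+segment loads above: the snippet below de-interleaves tightly packed
+real/imaginary bf16 pairs with the tail-undisturbed variant. The helper name
+`load_pairs` is an assumption, and `rs2` is the byte stride between
+consecutive segments.
+
+[,c]
+----
+// Illustrative sketch, assuming the same toolchain setup as the earlier
+// gather_bf16 example.
+#include <riscv_vector.h>
+#include <stddef.h>
+
+vbfloat16m1x2_t load_pairs(vbfloat16m1x2_t vd, const __bf16 *interleaved,
+                           size_t avl) {
+  size_t vl = __riscv_vsetvl_e16m1(avl);
+  // Two fields per segment, so consecutive segments are 2 * sizeof(__bf16)
+  // bytes apart; field 0 fills the first tuple member, field 1 the second.
+  ptrdiff_t rs2 = 2 * sizeof(__bf16);
+  return __riscv_vlsseg2e16_tu(vd, interleaved, rs2, vl);
+}
+----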
+ +[[policy-variant-overloadedvector-indexed-segment-load]] +==== Vector Indexed Segment Load Intrinsics + +[,c] +---- +vbfloat16mf4x2_t __riscv_vloxseg2ei16_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x7_t 
__riscv_vluxseg7ei16_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_tum(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_tum(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_tum(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_tum(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_tum(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_tum(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_tum(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_tum(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_tum(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_tum(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t 
vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_tum(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_tum(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_tum(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_tum(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_tum(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_tum(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_tum(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_tum(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_tum(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_tum(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_tum(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_tum(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_tum(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_tum(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_tum(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_tum(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t 
__riscv_vluxseg7ei16_tum(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_tum(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t 
__riscv_vloxseg2ei16_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t 
__riscv_vluxseg4ei16_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vluxseg2ei16_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vluxseg3ei16_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vluxseg4ei16_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vluxseg2ei16_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +// masked functions +vbfloat16mf4x2_t __riscv_vloxseg2ei16_mu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vloxseg3ei16_mu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vloxseg4ei16_mu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vloxseg5ei16_mu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vloxseg6ei16_mu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vloxseg7ei16_mu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vloxseg8ei16_mu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vloxseg2ei16_mu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vloxseg3ei16_mu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vloxseg4ei16_mu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vloxseg5ei16_mu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vloxseg6ei16_mu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vloxseg7ei16_mu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vloxseg8ei16_mu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vloxseg2ei16_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vloxseg3ei16_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vloxseg4ei16_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vloxseg5ei16_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vloxseg6ei16_mu(vbool16_t vm, 
vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vloxseg7ei16_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vloxseg8ei16_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m2x2_t __riscv_vloxseg2ei16_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x3_t __riscv_vloxseg3ei16_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m2x4_t __riscv_vloxseg4ei16_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl); +vbfloat16m4x2_t __riscv_vloxseg2ei16_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl); +vbfloat16mf4x2_t __riscv_vluxseg2ei16_mu(vbool64_t vm, vbfloat16mf4x2_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x3_t __riscv_vluxseg3ei16_mu(vbool64_t vm, vbfloat16mf4x3_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x4_t __riscv_vluxseg4ei16_mu(vbool64_t vm, vbfloat16mf4x4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x5_t __riscv_vluxseg5ei16_mu(vbool64_t vm, vbfloat16mf4x5_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x6_t __riscv_vluxseg6ei16_mu(vbool64_t vm, vbfloat16mf4x6_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x7_t __riscv_vluxseg7ei16_mu(vbool64_t vm, vbfloat16mf4x7_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf4x8_t __riscv_vluxseg8ei16_mu(vbool64_t vm, vbfloat16mf4x8_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl); +vbfloat16mf2x2_t __riscv_vluxseg2ei16_mu(vbool32_t vm, vbfloat16mf2x2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x3_t __riscv_vluxseg3ei16_mu(vbool32_t vm, vbfloat16mf2x3_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x4_t __riscv_vluxseg4ei16_mu(vbool32_t vm, vbfloat16mf2x4_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x5_t __riscv_vluxseg5ei16_mu(vbool32_t vm, vbfloat16mf2x5_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x6_t __riscv_vluxseg6ei16_mu(vbool32_t vm, vbfloat16mf2x6_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x7_t __riscv_vluxseg7ei16_mu(vbool32_t vm, vbfloat16mf2x7_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16mf2x8_t __riscv_vluxseg8ei16_mu(vbool32_t vm, vbfloat16mf2x8_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl); +vbfloat16m1x2_t __riscv_vluxseg2ei16_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x3_t __riscv_vluxseg3ei16_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x4_t __riscv_vluxseg4ei16_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x5_t __riscv_vluxseg5ei16_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x6_t __riscv_vluxseg6ei16_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x7_t __riscv_vluxseg7ei16_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); +vbfloat16m1x8_t __riscv_vluxseg8ei16_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl); 
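// Editorial sketch (not part of the generated listing): one way the masked,
// tail-undisturbed indexed segment loads above might be used. Assumes
// <riscv_vector.h>, a target with the V and Zvfbfmin extensions, and
// hypothetical variables: `n` (element count), `vm` (mask), `vd` (destination
// operand supplying the undisturbed elements), `src` (bfloat16 base pointer),
// `idx` (16-bit byte offsets), and `dst` (output pointer).
//
//   size_t vl = __riscv_vsetvl_e16m1(n);
//   vbfloat16m1x2_t seg = __riscv_vluxseg2ei16_tum(vm, vd, src, idx, vl);
//   __riscv_vsoxseg2ei16(dst, idx, seg, vl); // non-policy overloaded store
//
// The call shape matches the vbfloat16m1x2_t prototype listed above; with
// `_tum`, tail elements of `seg` are taken from `vd`, while the `_tumu` and
// `_mu` variants preserve the masked-off elements as well.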
+vbfloat16m2x2_t __riscv_vluxseg2ei16_mu(vbool8_t vm, vbfloat16m2x2_t vd,
+                                        const __bf16 *rs1, vuint16m2_t rs2,
+                                        size_t vl);
+vbfloat16m2x3_t __riscv_vluxseg3ei16_mu(vbool8_t vm, vbfloat16m2x3_t vd,
+                                        const __bf16 *rs1, vuint16m2_t rs2,
+                                        size_t vl);
+vbfloat16m2x4_t __riscv_vluxseg4ei16_mu(vbool8_t vm, vbfloat16m2x4_t vd,
+                                        const __bf16 *rs1, vuint16m2_t rs2,
+                                        size_t vl);
+vbfloat16m4x2_t __riscv_vluxseg2ei16_mu(vbool4_t vm, vbfloat16m4x2_t vd,
+                                        const __bf16 *rs1, vuint16m4_t rs2,
+                                        size_t vl);
+----
+
+[[policy-variant-overloadedvector-indexed-segment-store]]
+==== Vector Indexed Segment Store Intrinsics
+Intrinsics here don't have a policy variant.
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc
new file mode 100644
index 000000000..db730fe08
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc
@@ -0,0 +1,30 @@
+
+=== BFloat16 Miscellaneous Vector Utility Intrinsics
+
+[[policy-variant-overloadedreinterpret-cast-conversion]]
+==== Reinterpret Cast Conversion Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-overloadedvector-lmul-extensionn]]
+==== Vector LMUL Extension Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-overloadedvector-lmul-truncation]]
+==== Vector LMUL Truncation Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-overloadedvector-initialization]]
+==== Vector Initialization Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-overloadedvector-insertion]]
+==== Vector Insertion Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-overloadedvector-extraction]]
+==== Vector Extraction Intrinsics
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-overloadedvector-creation]]
+==== Vector Creation Intrinsics
+Intrinsics here don't have a policy variant.

From 48060409f967d49052db5ef3fbd1a97fa8526c6e Mon Sep 17 00:00:00 2001
From: eopXD
Date: Sat, 4 Nov 2023 08:10:28 -0700
Subject: [PATCH 005/151] [Auto-gen] Update bfloat16 tests under ../auto-generated.

(make git-commit-autogen-bf16-test) --- auto-generated/bfloat16/api-testing/vcreate.c | 177 +++++++++++++++++ auto-generated/bfloat16/api-testing/vget.c | 140 ++++++++++++++ auto-generated/bfloat16/api-testing/vle16.c | 53 +++++ auto-generated/bfloat16/api-testing/vle16ff.c | 62 ++++++ .../bfloat16/api-testing/vlmul_ext_v.c | 62 ++++++ .../bfloat16/api-testing/vlmul_trunc_v.c | 62 ++++++ .../bfloat16/api-testing/vloxei16.c | 62 ++++++ .../bfloat16/api-testing/vloxseg2ei16.c | 54 ++++++ .../bfloat16/api-testing/vloxseg3ei16.c | 44 +++++ .../bfloat16/api-testing/vloxseg4ei16.c | 44 +++++ .../bfloat16/api-testing/vloxseg5ei16.c | 34 ++++ .../bfloat16/api-testing/vloxseg6ei16.c | 34 ++++ .../bfloat16/api-testing/vloxseg7ei16.c | 34 ++++ .../bfloat16/api-testing/vloxseg8ei16.c | 34 ++++ auto-generated/bfloat16/api-testing/vlse16.c | 62 ++++++ .../bfloat16/api-testing/vlseg2e16.c | 47 +++++ .../bfloat16/api-testing/vlseg2e16ff.c | 52 +++++ .../bfloat16/api-testing/vlseg3e16.c | 38 ++++ .../bfloat16/api-testing/vlseg3e16ff.c | 42 ++++ .../bfloat16/api-testing/vlseg4e16.c | 38 ++++ .../bfloat16/api-testing/vlseg4e16ff.c | 42 ++++ .../bfloat16/api-testing/vlseg5e16.c | 29 +++ .../bfloat16/api-testing/vlseg5e16ff.c | 32 ++++ .../bfloat16/api-testing/vlseg6e16.c | 29 +++ .../bfloat16/api-testing/vlseg6e16ff.c | 32 ++++ .../bfloat16/api-testing/vlseg7e16.c | 29 +++ .../bfloat16/api-testing/vlseg7e16ff.c | 32 ++++ .../bfloat16/api-testing/vlseg8e16.c | 29 +++ .../bfloat16/api-testing/vlseg8e16ff.c | 32 ++++ .../bfloat16/api-testing/vlsseg2e16.c | 52 +++++ .../bfloat16/api-testing/vlsseg3e16.c | 42 ++++ .../bfloat16/api-testing/vlsseg4e16.c | 42 ++++ .../bfloat16/api-testing/vlsseg5e16.c | 32 ++++ .../bfloat16/api-testing/vlsseg6e16.c | 32 ++++ .../bfloat16/api-testing/vlsseg7e16.c | 32 ++++ .../bfloat16/api-testing/vlsseg8e16.c | 32 ++++ .../bfloat16/api-testing/vluxei16.c | 62 ++++++ .../bfloat16/api-testing/vluxseg2ei16.c | 54 ++++++ .../bfloat16/api-testing/vluxseg3ei16.c | 44 +++++ .../bfloat16/api-testing/vluxseg4ei16.c | 44 +++++ .../bfloat16/api-testing/vluxseg5ei16.c | 34 ++++ .../bfloat16/api-testing/vluxseg6ei16.c | 34 ++++ .../bfloat16/api-testing/vluxseg7ei16.c | 34 ++++ .../bfloat16/api-testing/vluxseg8ei16.c | 34 ++++ .../bfloat16/api-testing/vreinterpret.c | 98 ++++++++++ auto-generated/bfloat16/api-testing/vse16.c | 56 ++++++ auto-generated/bfloat16/api-testing/vset.c | 171 +++++++++++++++++ .../bfloat16/api-testing/vsoxei16.c | 62 ++++++ .../bfloat16/api-testing/vsoxseg2ei16.c | 54 ++++++ .../bfloat16/api-testing/vsoxseg3ei16.c | 44 +++++ .../bfloat16/api-testing/vsoxseg4ei16.c | 44 +++++ .../bfloat16/api-testing/vsoxseg5ei16.c | 34 ++++ .../bfloat16/api-testing/vsoxseg6ei16.c | 34 ++++ .../bfloat16/api-testing/vsoxseg7ei16.c | 34 ++++ .../bfloat16/api-testing/vsoxseg8ei16.c | 34 ++++ auto-generated/bfloat16/api-testing/vsse16.c | 62 ++++++ .../bfloat16/api-testing/vsseg2e16.c | 47 +++++ .../bfloat16/api-testing/vsseg3e16.c | 38 ++++ .../bfloat16/api-testing/vsseg4e16.c | 38 ++++ .../bfloat16/api-testing/vsseg5e16.c | 29 +++ .../bfloat16/api-testing/vsseg6e16.c | 29 +++ .../bfloat16/api-testing/vsseg7e16.c | 29 +++ .../bfloat16/api-testing/vsseg8e16.c | 29 +++ .../bfloat16/api-testing/vssseg2e16.c | 52 +++++ .../bfloat16/api-testing/vssseg3e16.c | 42 ++++ .../bfloat16/api-testing/vssseg4e16.c | 42 ++++ .../bfloat16/api-testing/vssseg5e16.c | 32 ++++ .../bfloat16/api-testing/vssseg6e16.c | 32 ++++ .../bfloat16/api-testing/vssseg7e16.c | 32 ++++ .../bfloat16/api-testing/vssseg8e16.c | 32 
++++ .../bfloat16/api-testing/vsuxei16.c | 62 ++++++ .../bfloat16/api-testing/vsuxseg2ei16.c | 54 ++++++ .../bfloat16/api-testing/vsuxseg3ei16.c | 44 +++++ .../bfloat16/api-testing/vsuxseg4ei16.c | 44 +++++ .../bfloat16/api-testing/vsuxseg5ei16.c | 34 ++++ .../bfloat16/api-testing/vsuxseg6ei16.c | 34 ++++ .../bfloat16/api-testing/vsuxseg7ei16.c | 34 ++++ .../bfloat16/api-testing/vsuxseg8ei16.c | 34 ++++ .../bfloat16/api-testing/vundefined.c | 118 ++++++++++++ .../bfloat16/llvm-api-tests/vcreate.c | 181 ++++++++++++++++++ auto-generated/bfloat16/llvm-api-tests/vget.c | 145 ++++++++++++++ .../bfloat16/llvm-api-tests/vle16.c | 58 ++++++ .../bfloat16/llvm-api-tests/vle16ff.c | 67 +++++++ .../bfloat16/llvm-api-tests/vlmul_ext_v.c | 66 +++++++ .../bfloat16/llvm-api-tests/vlmul_trunc_v.c | 66 +++++++ .../bfloat16/llvm-api-tests/vloxei16.c | 66 +++++++ .../bfloat16/llvm-api-tests/vloxseg2ei16.c | 58 ++++++ .../bfloat16/llvm-api-tests/vloxseg3ei16.c | 48 +++++ .../bfloat16/llvm-api-tests/vloxseg4ei16.c | 48 +++++ .../bfloat16/llvm-api-tests/vloxseg5ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vloxseg6ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vloxseg7ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vloxseg8ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vlse16.c | 66 +++++++ .../bfloat16/llvm-api-tests/vlseg2e16.c | 51 +++++ .../bfloat16/llvm-api-tests/vlseg2e16ff.c | 57 ++++++ .../bfloat16/llvm-api-tests/vlseg3e16.c | 42 ++++ .../bfloat16/llvm-api-tests/vlseg3e16ff.c | 47 +++++ .../bfloat16/llvm-api-tests/vlseg4e16.c | 42 ++++ .../bfloat16/llvm-api-tests/vlseg4e16ff.c | 47 +++++ .../bfloat16/llvm-api-tests/vlseg5e16.c | 33 ++++ .../bfloat16/llvm-api-tests/vlseg5e16ff.c | 37 ++++ .../bfloat16/llvm-api-tests/vlseg6e16.c | 33 ++++ .../bfloat16/llvm-api-tests/vlseg6e16ff.c | 37 ++++ .../bfloat16/llvm-api-tests/vlseg7e16.c | 33 ++++ .../bfloat16/llvm-api-tests/vlseg7e16ff.c | 37 ++++ .../bfloat16/llvm-api-tests/vlseg8e16.c | 33 ++++ .../bfloat16/llvm-api-tests/vlseg8e16ff.c | 37 ++++ .../bfloat16/llvm-api-tests/vlsseg2e16.c | 56 ++++++ .../bfloat16/llvm-api-tests/vlsseg3e16.c | 46 +++++ .../bfloat16/llvm-api-tests/vlsseg4e16.c | 46 +++++ .../bfloat16/llvm-api-tests/vlsseg5e16.c | 36 ++++ .../bfloat16/llvm-api-tests/vlsseg6e16.c | 36 ++++ .../bfloat16/llvm-api-tests/vlsseg7e16.c | 36 ++++ .../bfloat16/llvm-api-tests/vlsseg8e16.c | 36 ++++ .../bfloat16/llvm-api-tests/vluxei16.c | 66 +++++++ .../bfloat16/llvm-api-tests/vluxseg2ei16.c | 58 ++++++ .../bfloat16/llvm-api-tests/vluxseg3ei16.c | 48 +++++ .../bfloat16/llvm-api-tests/vluxseg4ei16.c | 48 +++++ .../bfloat16/llvm-api-tests/vluxseg5ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vluxseg6ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vluxseg7ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vluxseg8ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vreinterpret.c | 103 ++++++++++ .../bfloat16/llvm-api-tests/vse16.c | 60 ++++++ auto-generated/bfloat16/llvm-api-tests/vset.c | 175 +++++++++++++++++ .../bfloat16/llvm-api-tests/vsoxei16.c | 66 +++++++ .../bfloat16/llvm-api-tests/vsoxseg2ei16.c | 58 ++++++ .../bfloat16/llvm-api-tests/vsoxseg3ei16.c | 48 +++++ .../bfloat16/llvm-api-tests/vsoxseg4ei16.c | 48 +++++ .../bfloat16/llvm-api-tests/vsoxseg5ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vsoxseg6ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vsoxseg7ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vsoxseg8ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vsse16.c | 66 +++++++ .../bfloat16/llvm-api-tests/vsseg2e16.c | 51 +++++ .../bfloat16/llvm-api-tests/vsseg3e16.c | 42 ++++ 
.../bfloat16/llvm-api-tests/vsseg4e16.c | 42 ++++ .../bfloat16/llvm-api-tests/vsseg5e16.c | 33 ++++ .../bfloat16/llvm-api-tests/vsseg6e16.c | 33 ++++ .../bfloat16/llvm-api-tests/vsseg7e16.c | 33 ++++ .../bfloat16/llvm-api-tests/vsseg8e16.c | 33 ++++ .../bfloat16/llvm-api-tests/vssseg2e16.c | 56 ++++++ .../bfloat16/llvm-api-tests/vssseg3e16.c | 46 +++++ .../bfloat16/llvm-api-tests/vssseg4e16.c | 46 +++++ .../bfloat16/llvm-api-tests/vssseg5e16.c | 36 ++++ .../bfloat16/llvm-api-tests/vssseg6e16.c | 36 ++++ .../bfloat16/llvm-api-tests/vssseg7e16.c | 36 ++++ .../bfloat16/llvm-api-tests/vssseg8e16.c | 36 ++++ .../bfloat16/llvm-api-tests/vsuxei16.c | 66 +++++++ .../bfloat16/llvm-api-tests/vsuxseg2ei16.c | 58 ++++++ .../bfloat16/llvm-api-tests/vsuxseg3ei16.c | 48 +++++ .../bfloat16/llvm-api-tests/vsuxseg4ei16.c | 48 +++++ .../bfloat16/llvm-api-tests/vsuxseg5ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vsuxseg6ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vsuxseg7ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vsuxseg8ei16.c | 38 ++++ .../bfloat16/llvm-api-tests/vundefined.c | 123 ++++++++++++ .../bfloat16/llvm-overloaded-tests/vget.c | 145 ++++++++++++++ .../bfloat16/llvm-overloaded-tests/vle16.c | 34 ++++ .../bfloat16/llvm-overloaded-tests/vle16ff.c | 37 ++++ .../llvm-overloaded-tests/vlmul_ext_v.c | 66 +++++++ .../llvm-overloaded-tests/vlmul_trunc_v.c | 66 +++++++ .../bfloat16/llvm-overloaded-tests/vloxei16.c | 66 +++++++ .../llvm-overloaded-tests/vloxseg2ei16.c | 58 ++++++ .../llvm-overloaded-tests/vloxseg3ei16.c | 48 +++++ .../llvm-overloaded-tests/vloxseg4ei16.c | 48 +++++ .../llvm-overloaded-tests/vloxseg5ei16.c | 38 ++++ .../llvm-overloaded-tests/vloxseg6ei16.c | 38 ++++ .../llvm-overloaded-tests/vloxseg7ei16.c | 38 ++++ .../llvm-overloaded-tests/vloxseg8ei16.c | 38 ++++ .../bfloat16/llvm-overloaded-tests/vlse16.c | 36 ++++ .../llvm-overloaded-tests/vlseg2e16.c | 31 +++ .../llvm-overloaded-tests/vlseg2e16ff.c | 32 ++++ .../llvm-overloaded-tests/vlseg3e16.c | 26 +++ .../llvm-overloaded-tests/vlseg3e16ff.c | 27 +++ .../llvm-overloaded-tests/vlseg4e16.c | 26 +++ .../llvm-overloaded-tests/vlseg4e16ff.c | 27 +++ .../llvm-overloaded-tests/vlseg5e16.c | 21 ++ .../llvm-overloaded-tests/vlseg5e16ff.c | 22 +++ .../llvm-overloaded-tests/vlseg6e16.c | 21 ++ .../llvm-overloaded-tests/vlseg6e16ff.c | 22 +++ .../llvm-overloaded-tests/vlseg7e16.c | 21 ++ .../llvm-overloaded-tests/vlseg7e16ff.c | 22 +++ .../llvm-overloaded-tests/vlseg8e16.c | 21 ++ .../llvm-overloaded-tests/vlseg8e16ff.c | 22 +++ .../llvm-overloaded-tests/vlsseg2e16.c | 31 +++ .../llvm-overloaded-tests/vlsseg3e16.c | 26 +++ .../llvm-overloaded-tests/vlsseg4e16.c | 26 +++ .../llvm-overloaded-tests/vlsseg5e16.c | 21 ++ .../llvm-overloaded-tests/vlsseg6e16.c | 21 ++ .../llvm-overloaded-tests/vlsseg7e16.c | 21 ++ .../llvm-overloaded-tests/vlsseg8e16.c | 21 ++ .../bfloat16/llvm-overloaded-tests/vluxei16.c | 66 +++++++ .../llvm-overloaded-tests/vluxseg2ei16.c | 58 ++++++ .../llvm-overloaded-tests/vluxseg3ei16.c | 48 +++++ .../llvm-overloaded-tests/vluxseg4ei16.c | 48 +++++ .../llvm-overloaded-tests/vluxseg5ei16.c | 38 ++++ .../llvm-overloaded-tests/vluxseg6ei16.c | 38 ++++ .../llvm-overloaded-tests/vluxseg7ei16.c | 38 ++++ .../llvm-overloaded-tests/vluxseg8ei16.c | 38 ++++ .../llvm-overloaded-tests/vreinterpret.c | 103 ++++++++++ .../bfloat16/llvm-overloaded-tests/vse16.c | 60 ++++++ .../bfloat16/llvm-overloaded-tests/vset.c | 175 +++++++++++++++++ .../bfloat16/llvm-overloaded-tests/vsoxei16.c | 66 +++++++ .../llvm-overloaded-tests/vsoxseg2ei16.c | 
58 ++++++ .../llvm-overloaded-tests/vsoxseg3ei16.c | 48 +++++ .../llvm-overloaded-tests/vsoxseg4ei16.c | 48 +++++ .../llvm-overloaded-tests/vsoxseg5ei16.c | 38 ++++ .../llvm-overloaded-tests/vsoxseg6ei16.c | 38 ++++ .../llvm-overloaded-tests/vsoxseg7ei16.c | 38 ++++ .../llvm-overloaded-tests/vsoxseg8ei16.c | 38 ++++ .../bfloat16/llvm-overloaded-tests/vsse16.c | 66 +++++++ .../llvm-overloaded-tests/vsseg2e16.c | 51 +++++ .../llvm-overloaded-tests/vsseg3e16.c | 42 ++++ .../llvm-overloaded-tests/vsseg4e16.c | 42 ++++ .../llvm-overloaded-tests/vsseg5e16.c | 33 ++++ .../llvm-overloaded-tests/vsseg6e16.c | 33 ++++ .../llvm-overloaded-tests/vsseg7e16.c | 33 ++++ .../llvm-overloaded-tests/vsseg8e16.c | 33 ++++ .../llvm-overloaded-tests/vssseg2e16.c | 56 ++++++ .../llvm-overloaded-tests/vssseg3e16.c | 46 +++++ .../llvm-overloaded-tests/vssseg4e16.c | 46 +++++ .../llvm-overloaded-tests/vssseg5e16.c | 36 ++++ .../llvm-overloaded-tests/vssseg6e16.c | 36 ++++ .../llvm-overloaded-tests/vssseg7e16.c | 36 ++++ .../llvm-overloaded-tests/vssseg8e16.c | 36 ++++ .../bfloat16/llvm-overloaded-tests/vsuxei16.c | 66 +++++++ .../llvm-overloaded-tests/vsuxseg2ei16.c | 58 ++++++ .../llvm-overloaded-tests/vsuxseg3ei16.c | 48 +++++ .../llvm-overloaded-tests/vsuxseg4ei16.c | 48 +++++ .../llvm-overloaded-tests/vsuxseg5ei16.c | 38 ++++ .../llvm-overloaded-tests/vsuxseg6ei16.c | 38 ++++ .../llvm-overloaded-tests/vsuxseg7ei16.c | 38 ++++ .../llvm-overloaded-tests/vsuxseg8ei16.c | 38 ++++ .../bfloat16/overloaded-api-testing/vget.c | 140 ++++++++++++++ .../bfloat16/overloaded-api-testing/vle16.c | 29 +++ .../bfloat16/overloaded-api-testing/vle16ff.c | 32 ++++ .../overloaded-api-testing/vlmul_ext_v.c | 62 ++++++ .../overloaded-api-testing/vlmul_trunc_v.c | 62 ++++++ .../overloaded-api-testing/vloxei16.c | 62 ++++++ .../overloaded-api-testing/vloxseg2ei16.c | 54 ++++++ .../overloaded-api-testing/vloxseg3ei16.c | 44 +++++ .../overloaded-api-testing/vloxseg4ei16.c | 44 +++++ .../overloaded-api-testing/vloxseg5ei16.c | 34 ++++ .../overloaded-api-testing/vloxseg6ei16.c | 34 ++++ .../overloaded-api-testing/vloxseg7ei16.c | 34 ++++ .../overloaded-api-testing/vloxseg8ei16.c | 34 ++++ .../bfloat16/overloaded-api-testing/vlse16.c | 32 ++++ .../overloaded-api-testing/vlseg2e16.c | 27 +++ .../overloaded-api-testing/vlseg2e16ff.c | 27 +++ .../overloaded-api-testing/vlseg3e16.c | 22 +++ .../overloaded-api-testing/vlseg3e16ff.c | 22 +++ .../overloaded-api-testing/vlseg4e16.c | 22 +++ .../overloaded-api-testing/vlseg4e16ff.c | 22 +++ .../overloaded-api-testing/vlseg5e16.c | 17 ++ .../overloaded-api-testing/vlseg5e16ff.c | 17 ++ .../overloaded-api-testing/vlseg6e16.c | 17 ++ .../overloaded-api-testing/vlseg6e16ff.c | 17 ++ .../overloaded-api-testing/vlseg7e16.c | 17 ++ .../overloaded-api-testing/vlseg7e16ff.c | 17 ++ .../overloaded-api-testing/vlseg8e16.c | 17 ++ .../overloaded-api-testing/vlseg8e16ff.c | 17 ++ .../overloaded-api-testing/vlsseg2e16.c | 27 +++ .../overloaded-api-testing/vlsseg3e16.c | 22 +++ .../overloaded-api-testing/vlsseg4e16.c | 22 +++ .../overloaded-api-testing/vlsseg5e16.c | 17 ++ .../overloaded-api-testing/vlsseg6e16.c | 17 ++ .../overloaded-api-testing/vlsseg7e16.c | 17 ++ .../overloaded-api-testing/vlsseg8e16.c | 17 ++ .../overloaded-api-testing/vluxei16.c | 62 ++++++ .../overloaded-api-testing/vluxseg2ei16.c | 54 ++++++ .../overloaded-api-testing/vluxseg3ei16.c | 44 +++++ .../overloaded-api-testing/vluxseg4ei16.c | 44 +++++ .../overloaded-api-testing/vluxseg5ei16.c | 34 ++++ .../overloaded-api-testing/vluxseg6ei16.c 
| 34 ++++ .../overloaded-api-testing/vluxseg7ei16.c | 34 ++++ .../overloaded-api-testing/vluxseg8ei16.c | 34 ++++ .../overloaded-api-testing/vreinterpret.c | 98 ++++++++++ .../bfloat16/overloaded-api-testing/vse16.c | 56 ++++++ .../bfloat16/overloaded-api-testing/vset.c | 171 +++++++++++++++++ .../overloaded-api-testing/vsoxei16.c | 62 ++++++ .../overloaded-api-testing/vsoxseg2ei16.c | 54 ++++++ .../overloaded-api-testing/vsoxseg3ei16.c | 44 +++++ .../overloaded-api-testing/vsoxseg4ei16.c | 44 +++++ .../overloaded-api-testing/vsoxseg5ei16.c | 34 ++++ .../overloaded-api-testing/vsoxseg6ei16.c | 34 ++++ .../overloaded-api-testing/vsoxseg7ei16.c | 34 ++++ .../overloaded-api-testing/vsoxseg8ei16.c | 34 ++++ .../bfloat16/overloaded-api-testing/vsse16.c | 62 ++++++ .../overloaded-api-testing/vsseg2e16.c | 47 +++++ .../overloaded-api-testing/vsseg3e16.c | 38 ++++ .../overloaded-api-testing/vsseg4e16.c | 38 ++++ .../overloaded-api-testing/vsseg5e16.c | 29 +++ .../overloaded-api-testing/vsseg6e16.c | 29 +++ .../overloaded-api-testing/vsseg7e16.c | 29 +++ .../overloaded-api-testing/vsseg8e16.c | 29 +++ .../overloaded-api-testing/vssseg2e16.c | 52 +++++ .../overloaded-api-testing/vssseg3e16.c | 42 ++++ .../overloaded-api-testing/vssseg4e16.c | 42 ++++ .../overloaded-api-testing/vssseg5e16.c | 32 ++++ .../overloaded-api-testing/vssseg6e16.c | 32 ++++ .../overloaded-api-testing/vssseg7e16.c | 32 ++++ .../overloaded-api-testing/vssseg8e16.c | 32 ++++ .../overloaded-api-testing/vsuxei16.c | 62 ++++++ .../overloaded-api-testing/vsuxseg2ei16.c | 54 ++++++ .../overloaded-api-testing/vsuxseg3ei16.c | 44 +++++ .../overloaded-api-testing/vsuxseg4ei16.c | 44 +++++ .../overloaded-api-testing/vsuxseg5ei16.c | 34 ++++ .../overloaded-api-testing/vsuxseg6ei16.c | 34 ++++ .../overloaded-api-testing/vsuxseg7ei16.c | 34 ++++ .../overloaded-api-testing/vsuxseg8ei16.c | 34 ++++ .../bfloat16/policy_funcs/api-testing/vle16.c | 122 ++++++++++++ .../policy_funcs/api-testing/vle16ff.c | 140 ++++++++++++++ .../policy_funcs/api-testing/vloxei16.c | 140 ++++++++++++++ .../policy_funcs/api-testing/vloxseg2ei16.c | 139 ++++++++++++++ .../policy_funcs/api-testing/vloxseg3ei16.c | 113 +++++++++++ .../policy_funcs/api-testing/vloxseg4ei16.c | 113 +++++++++++ .../policy_funcs/api-testing/vloxseg5ei16.c | 87 +++++++++ .../policy_funcs/api-testing/vloxseg6ei16.c | 87 +++++++++ .../policy_funcs/api-testing/vloxseg7ei16.c | 87 +++++++++ .../policy_funcs/api-testing/vloxseg8ei16.c | 87 +++++++++ .../policy_funcs/api-testing/vlse16.c | 140 ++++++++++++++ .../policy_funcs/api-testing/vlseg2e16.c | 108 +++++++++++ .../policy_funcs/api-testing/vlseg2e16ff.c | 132 +++++++++++++ .../policy_funcs/api-testing/vlseg3e16.c | 88 +++++++++ .../policy_funcs/api-testing/vlseg3e16ff.c | 107 +++++++++++ .../policy_funcs/api-testing/vlseg4e16.c | 88 +++++++++ .../policy_funcs/api-testing/vlseg4e16ff.c | 107 +++++++++++ .../policy_funcs/api-testing/vlseg5e16.c | 68 +++++++ .../policy_funcs/api-testing/vlseg5e16ff.c | 82 ++++++++ .../policy_funcs/api-testing/vlseg6e16.c | 68 +++++++ .../policy_funcs/api-testing/vlseg6e16ff.c | 82 ++++++++ .../policy_funcs/api-testing/vlseg7e16.c | 68 +++++++ .../policy_funcs/api-testing/vlseg7e16ff.c | 82 ++++++++ .../policy_funcs/api-testing/vlseg8e16.c | 68 +++++++ .../policy_funcs/api-testing/vlseg8e16ff.c | 82 ++++++++ .../policy_funcs/api-testing/vlsseg2e16.c | 129 +++++++++++++ .../policy_funcs/api-testing/vlsseg3e16.c | 105 ++++++++++ .../policy_funcs/api-testing/vlsseg4e16.c | 105 ++++++++++ 
.../policy_funcs/api-testing/vlsseg5e16.c | 81 ++++++++ .../policy_funcs/api-testing/vlsseg6e16.c | 81 ++++++++ .../policy_funcs/api-testing/vlsseg7e16.c | 81 ++++++++ .../policy_funcs/api-testing/vlsseg8e16.c | 81 ++++++++ .../policy_funcs/api-testing/vluxei16.c | 140 ++++++++++++++ .../policy_funcs/api-testing/vluxseg2ei16.c | 139 ++++++++++++++ .../policy_funcs/api-testing/vluxseg3ei16.c | 113 +++++++++++ .../policy_funcs/api-testing/vluxseg4ei16.c | 113 +++++++++++ .../policy_funcs/api-testing/vluxseg5ei16.c | 87 +++++++++ .../policy_funcs/api-testing/vluxseg6ei16.c | 87 +++++++++ .../policy_funcs/api-testing/vluxseg7ei16.c | 87 +++++++++ .../policy_funcs/api-testing/vluxseg8ei16.c | 87 +++++++++ .../policy_funcs/llvm-api-tests/vle16.c | 103 ++++++++++ .../policy_funcs/llvm-api-tests/vle16ff.c | 103 ++++++++++ .../policy_funcs/llvm-api-tests/vloxei16.c | 102 ++++++++++ .../llvm-api-tests/vloxseg2ei16.c | 86 +++++++++ .../llvm-api-tests/vloxseg3ei16.c | 70 +++++++ .../llvm-api-tests/vloxseg4ei16.c | 70 +++++++ .../llvm-api-tests/vloxseg5ei16.c | 54 ++++++ .../llvm-api-tests/vloxseg6ei16.c | 54 ++++++ .../llvm-api-tests/vloxseg7ei16.c | 54 ++++++ .../llvm-api-tests/vloxseg8ei16.c | 54 ++++++ .../policy_funcs/llvm-api-tests/vlse16.c | 102 ++++++++++ .../policy_funcs/llvm-api-tests/vlseg2e16.c | 86 +++++++++ .../policy_funcs/llvm-api-tests/vlseg2e16ff.c | 87 +++++++++ .../policy_funcs/llvm-api-tests/vlseg3e16.c | 70 +++++++ .../policy_funcs/llvm-api-tests/vlseg3e16ff.c | 71 +++++++ .../policy_funcs/llvm-api-tests/vlseg4e16.c | 70 +++++++ .../policy_funcs/llvm-api-tests/vlseg4e16ff.c | 71 +++++++ .../policy_funcs/llvm-api-tests/vlseg5e16.c | 54 ++++++ .../policy_funcs/llvm-api-tests/vlseg5e16ff.c | 55 ++++++ .../policy_funcs/llvm-api-tests/vlseg6e16.c | 54 ++++++ .../policy_funcs/llvm-api-tests/vlseg6e16ff.c | 55 ++++++ .../policy_funcs/llvm-api-tests/vlseg7e16.c | 54 ++++++ .../policy_funcs/llvm-api-tests/vlseg7e16ff.c | 55 ++++++ .../policy_funcs/llvm-api-tests/vlseg8e16.c | 54 ++++++ .../policy_funcs/llvm-api-tests/vlseg8e16ff.c | 55 ++++++ .../policy_funcs/llvm-api-tests/vlsseg2e16.c | 86 +++++++++ .../policy_funcs/llvm-api-tests/vlsseg3e16.c | 70 +++++++ .../policy_funcs/llvm-api-tests/vlsseg4e16.c | 70 +++++++ .../policy_funcs/llvm-api-tests/vlsseg5e16.c | 54 ++++++ .../policy_funcs/llvm-api-tests/vlsseg6e16.c | 54 ++++++ .../policy_funcs/llvm-api-tests/vlsseg7e16.c | 54 ++++++ .../policy_funcs/llvm-api-tests/vlsseg8e16.c | 54 ++++++ .../policy_funcs/llvm-api-tests/vluxei16.c | 102 ++++++++++ .../llvm-api-tests/vluxseg2ei16.c | 86 +++++++++ .../llvm-api-tests/vluxseg3ei16.c | 70 +++++++ .../llvm-api-tests/vluxseg4ei16.c | 70 +++++++ .../llvm-api-tests/vluxseg5ei16.c | 54 ++++++ .../llvm-api-tests/vluxseg6ei16.c | 54 ++++++ .../llvm-api-tests/vluxseg7ei16.c | 54 ++++++ .../llvm-api-tests/vluxseg8ei16.c | 54 ++++++ .../llvm-overloaded-tests/vle16.c | 127 ++++++++++++ .../llvm-overloaded-tests/vle16ff.c | 145 ++++++++++++++ .../llvm-overloaded-tests/vloxei16.c | 144 ++++++++++++++ .../llvm-overloaded-tests/vloxseg2ei16.c | 143 ++++++++++++++ .../llvm-overloaded-tests/vloxseg3ei16.c | 117 +++++++++++ .../llvm-overloaded-tests/vloxseg4ei16.c | 117 +++++++++++ .../llvm-overloaded-tests/vloxseg5ei16.c | 91 +++++++++ .../llvm-overloaded-tests/vloxseg6ei16.c | 91 +++++++++ .../llvm-overloaded-tests/vloxseg7ei16.c | 91 +++++++++ .../llvm-overloaded-tests/vloxseg8ei16.c | 91 +++++++++ .../llvm-overloaded-tests/vlse16.c | 144 ++++++++++++++ .../llvm-overloaded-tests/vlseg2e16.c | 112 +++++++++++ 
.../llvm-overloaded-tests/vlseg2e16ff.c | 137 +++++++++++++ .../llvm-overloaded-tests/vlseg3e16.c | 92 +++++++++ .../llvm-overloaded-tests/vlseg3e16ff.c | 112 +++++++++++ .../llvm-overloaded-tests/vlseg4e16.c | 92 +++++++++ .../llvm-overloaded-tests/vlseg4e16ff.c | 112 +++++++++++ .../llvm-overloaded-tests/vlseg5e16.c | 72 +++++++ .../llvm-overloaded-tests/vlseg5e16ff.c | 87 +++++++++ .../llvm-overloaded-tests/vlseg6e16.c | 72 +++++++ .../llvm-overloaded-tests/vlseg6e16ff.c | 87 +++++++++ .../llvm-overloaded-tests/vlseg7e16.c | 72 +++++++ .../llvm-overloaded-tests/vlseg7e16ff.c | 87 +++++++++ .../llvm-overloaded-tests/vlseg8e16.c | 72 +++++++ .../llvm-overloaded-tests/vlseg8e16ff.c | 87 +++++++++ .../llvm-overloaded-tests/vlsseg2e16.c | 133 +++++++++++++ .../llvm-overloaded-tests/vlsseg3e16.c | 109 +++++++++++ .../llvm-overloaded-tests/vlsseg4e16.c | 109 +++++++++++ .../llvm-overloaded-tests/vlsseg5e16.c | 85 ++++++++ .../llvm-overloaded-tests/vlsseg6e16.c | 85 ++++++++ .../llvm-overloaded-tests/vlsseg7e16.c | 85 ++++++++ .../llvm-overloaded-tests/vlsseg8e16.c | 85 ++++++++ .../llvm-overloaded-tests/vluxei16.c | 144 ++++++++++++++ .../llvm-overloaded-tests/vluxseg2ei16.c | 143 ++++++++++++++ .../llvm-overloaded-tests/vluxseg3ei16.c | 117 +++++++++++ .../llvm-overloaded-tests/vluxseg4ei16.c | 117 +++++++++++ .../llvm-overloaded-tests/vluxseg5ei16.c | 91 +++++++++ .../llvm-overloaded-tests/vluxseg6ei16.c | 91 +++++++++ .../llvm-overloaded-tests/vluxseg7ei16.c | 91 +++++++++ .../llvm-overloaded-tests/vluxseg8ei16.c | 91 +++++++++ .../overloaded-api-testing/vle16.c | 122 ++++++++++++ .../overloaded-api-testing/vle16ff.c | 140 ++++++++++++++ .../overloaded-api-testing/vloxei16.c | 140 ++++++++++++++ .../overloaded-api-testing/vloxseg2ei16.c | 139 ++++++++++++++ .../overloaded-api-testing/vloxseg3ei16.c | 113 +++++++++++ .../overloaded-api-testing/vloxseg4ei16.c | 113 +++++++++++ .../overloaded-api-testing/vloxseg5ei16.c | 87 +++++++++ .../overloaded-api-testing/vloxseg6ei16.c | 87 +++++++++ .../overloaded-api-testing/vloxseg7ei16.c | 87 +++++++++ .../overloaded-api-testing/vloxseg8ei16.c | 87 +++++++++ .../overloaded-api-testing/vlse16.c | 140 ++++++++++++++ .../overloaded-api-testing/vlseg2e16.c | 108 +++++++++++ .../overloaded-api-testing/vlseg2e16ff.c | 132 +++++++++++++ .../overloaded-api-testing/vlseg3e16.c | 88 +++++++++ .../overloaded-api-testing/vlseg3e16ff.c | 107 +++++++++++ .../overloaded-api-testing/vlseg4e16.c | 88 +++++++++ .../overloaded-api-testing/vlseg4e16ff.c | 107 +++++++++++ .../overloaded-api-testing/vlseg5e16.c | 68 +++++++ .../overloaded-api-testing/vlseg5e16ff.c | 82 ++++++++ .../overloaded-api-testing/vlseg6e16.c | 68 +++++++ .../overloaded-api-testing/vlseg6e16ff.c | 82 ++++++++ .../overloaded-api-testing/vlseg7e16.c | 68 +++++++ .../overloaded-api-testing/vlseg7e16ff.c | 82 ++++++++ .../overloaded-api-testing/vlseg8e16.c | 68 +++++++ .../overloaded-api-testing/vlseg8e16ff.c | 82 ++++++++ .../overloaded-api-testing/vlsseg2e16.c | 129 +++++++++++++ .../overloaded-api-testing/vlsseg3e16.c | 105 ++++++++++ .../overloaded-api-testing/vlsseg4e16.c | 105 ++++++++++ .../overloaded-api-testing/vlsseg5e16.c | 81 ++++++++ .../overloaded-api-testing/vlsseg6e16.c | 81 ++++++++ .../overloaded-api-testing/vlsseg7e16.c | 81 ++++++++ .../overloaded-api-testing/vlsseg8e16.c | 81 ++++++++ .../overloaded-api-testing/vluxei16.c | 140 ++++++++++++++ .../overloaded-api-testing/vluxseg2ei16.c | 139 ++++++++++++++ .../overloaded-api-testing/vluxseg3ei16.c | 113 +++++++++++ 
.../overloaded-api-testing/vluxseg4ei16.c | 113 +++++++++++ .../overloaded-api-testing/vluxseg5ei16.c | 87 +++++++++ .../overloaded-api-testing/vluxseg6ei16.c | 87 +++++++++ .../overloaded-api-testing/vluxseg7ei16.c | 87 +++++++++ .../overloaded-api-testing/vluxseg8ei16.c | 87 +++++++++ 472 files changed, 29102 insertions(+) create mode 100644 auto-generated/bfloat16/api-testing/vcreate.c create mode 100644 auto-generated/bfloat16/api-testing/vget.c create mode 100644 auto-generated/bfloat16/api-testing/vle16.c create mode 100644 auto-generated/bfloat16/api-testing/vle16ff.c create mode 100644 auto-generated/bfloat16/api-testing/vlmul_ext_v.c create mode 100644 auto-generated/bfloat16/api-testing/vlmul_trunc_v.c create mode 100644 auto-generated/bfloat16/api-testing/vloxei16.c create mode 100644 auto-generated/bfloat16/api-testing/vloxseg2ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vloxseg3ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vloxseg4ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vloxseg5ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vloxseg6ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vloxseg7ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vloxseg8ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vlse16.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg2e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg2e16ff.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg3e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg3e16ff.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg4e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg4e16ff.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg5e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg5e16ff.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg6e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg6e16ff.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg7e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg7e16ff.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg8e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlseg8e16ff.c create mode 100644 auto-generated/bfloat16/api-testing/vlsseg2e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlsseg3e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlsseg4e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlsseg5e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlsseg6e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlsseg7e16.c create mode 100644 auto-generated/bfloat16/api-testing/vlsseg8e16.c create mode 100644 auto-generated/bfloat16/api-testing/vluxei16.c create mode 100644 auto-generated/bfloat16/api-testing/vluxseg2ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vluxseg3ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vluxseg4ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vluxseg5ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vluxseg6ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vluxseg7ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vluxseg8ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vreinterpret.c create mode 100644 auto-generated/bfloat16/api-testing/vse16.c create mode 100644 auto-generated/bfloat16/api-testing/vset.c create mode 100644 
auto-generated/bfloat16/api-testing/vsoxei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsoxseg2ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsoxseg3ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsoxseg4ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsoxseg5ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsoxseg6ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsoxseg7ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsoxseg8ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsse16.c create mode 100644 auto-generated/bfloat16/api-testing/vsseg2e16.c create mode 100644 auto-generated/bfloat16/api-testing/vsseg3e16.c create mode 100644 auto-generated/bfloat16/api-testing/vsseg4e16.c create mode 100644 auto-generated/bfloat16/api-testing/vsseg5e16.c create mode 100644 auto-generated/bfloat16/api-testing/vsseg6e16.c create mode 100644 auto-generated/bfloat16/api-testing/vsseg7e16.c create mode 100644 auto-generated/bfloat16/api-testing/vsseg8e16.c create mode 100644 auto-generated/bfloat16/api-testing/vssseg2e16.c create mode 100644 auto-generated/bfloat16/api-testing/vssseg3e16.c create mode 100644 auto-generated/bfloat16/api-testing/vssseg4e16.c create mode 100644 auto-generated/bfloat16/api-testing/vssseg5e16.c create mode 100644 auto-generated/bfloat16/api-testing/vssseg6e16.c create mode 100644 auto-generated/bfloat16/api-testing/vssseg7e16.c create mode 100644 auto-generated/bfloat16/api-testing/vssseg8e16.c create mode 100644 auto-generated/bfloat16/api-testing/vsuxei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsuxseg2ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsuxseg3ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsuxseg4ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsuxseg5ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsuxseg6ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsuxseg7ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vsuxseg8ei16.c create mode 100644 auto-generated/bfloat16/api-testing/vundefined.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vcreate.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vget.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vle16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vle16ff.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vloxei16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlse16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c create mode 100644 
 auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vluxei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vreinterpret.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vse16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vset.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsoxei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsse16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsuxei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vundefined.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vget.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vse16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vset.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vget.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vle16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vle16ff.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlmul_ext_v.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlmul_trunc_v.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vloxei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vloxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vloxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vloxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vloxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vloxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vloxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vloxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlse16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg2e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg2e16ff.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg3e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg3e16ff.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg4e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg4e16ff.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg5e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg5e16ff.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg6e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg6e16ff.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg7e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg7e16ff.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg8e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlseg8e16ff.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlsseg2e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlsseg3e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlsseg4e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlsseg5e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlsseg6e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlsseg7e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vlsseg8e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vluxei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vluxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vluxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vluxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vluxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vluxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vluxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vluxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vreinterpret.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vse16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vset.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsoxei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsoxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsoxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsoxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsoxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsoxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsoxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsoxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsse16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsseg2e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsseg3e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsseg4e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsseg5e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsseg6e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsseg7e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsseg8e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vssseg2e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vssseg3e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vssseg4e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vssseg5e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vssseg6e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vssseg7e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vssseg8e16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsuxei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsuxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsuxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsuxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsuxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsuxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsuxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vsuxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vle16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vle16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vloxei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vloxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vloxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vloxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vloxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vloxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vloxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vloxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlse16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg2e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg2e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg3e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg3e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg4e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg4e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg5e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg5e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg6e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg6e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg7e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg7e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg8e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlseg8e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlsseg2e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlsseg3e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlsseg4e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlsseg5e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlsseg6e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlsseg7e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vlsseg8e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vluxei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vluxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vluxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vluxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vluxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vluxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vluxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vluxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vle16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vle16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg8ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlse16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg2e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg2e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg3e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg3e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg4e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg4e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg5e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg5e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg6e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg6e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg7e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg7e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg8e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg8e16ff.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg2e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg3e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg4e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg5e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg6e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg7e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg8e16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg2ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg3ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg4ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg5ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg6ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg7ei16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg8ei16.c
diff --git a/auto-generated/bfloat16/api-testing/vcreate.c b/auto-generated/bfloat16/api-testing/vcreate.c
new file mode 100644
index 000000000..6f3316ad1
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vcreate.c
@@ -0,0 +1,177 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16m2_t test_vcreate_v_bf16m1_bf16m2(vbfloat16m1_t v0, vbfloat16m1_t v1) {
+  return __riscv_vcreate_v_bf16m1_bf16m2(v0, v1);
+}
+
+vbfloat16m4_t test_vcreate_v_bf16m1_bf16m4(vbfloat16m1_t v0, vbfloat16m1_t v1,
+                                           vbfloat16m1_t v2, vbfloat16m1_t v3) {
+  return __riscv_vcreate_v_bf16m1_bf16m4(v0, v1, v2, v3);
+}
+
+vbfloat16m8_t test_vcreate_v_bf16m1_bf16m8(vbfloat16m1_t v0, vbfloat16m1_t v1,
+                                           vbfloat16m1_t v2, vbfloat16m1_t v3,
+                                           vbfloat16m1_t v4, vbfloat16m1_t v5,
+                                           vbfloat16m1_t v6, vbfloat16m1_t v7) {
+  return __riscv_vcreate_v_bf16m1_bf16m8(v0, v1, v2, v3, v4, v5, v6, v7);
+}
+
+vbfloat16m4_t test_vcreate_v_bf16m2_bf16m4(vbfloat16m2_t v0, vbfloat16m2_t v1) {
+  return __riscv_vcreate_v_bf16m2_bf16m4(v0, v1);
+}
+
+vbfloat16m8_t test_vcreate_v_bf16m2_bf16m8(vbfloat16m2_t v0, vbfloat16m2_t v1,
+                                           vbfloat16m2_t v2, vbfloat16m2_t v3) {
+  return __riscv_vcreate_v_bf16m2_bf16m8(v0, v1, v2, v3);
+}
+
+vbfloat16m8_t test_vcreate_v_bf16m4_bf16m8(vbfloat16m4_t v0, vbfloat16m4_t v1) {
+  return __riscv_vcreate_v_bf16m4_bf16m8(v0, v1);
+}
+
+vbfloat16mf4x2_t test_vcreate_v_bf16mf4x2(vbfloat16mf4_t v0,
+                                          vbfloat16mf4_t v1) {
+  return __riscv_vcreate_v_bf16mf4x2(v0, v1);
+}
+
+vbfloat16mf4x3_t test_vcreate_v_bf16mf4x3(vbfloat16mf4_t v0, vbfloat16mf4_t v1,
+                                          vbfloat16mf4_t v2) {
+  return __riscv_vcreate_v_bf16mf4x3(v0, v1, v2);
+}
+
+vbfloat16mf4x4_t test_vcreate_v_bf16mf4x4(vbfloat16mf4_t v0, vbfloat16mf4_t v1,
+                                          vbfloat16mf4_t v2,
+                                          vbfloat16mf4_t v3) {
+  return __riscv_vcreate_v_bf16mf4x4(v0, v1, v2, v3);
+}
+
+vbfloat16mf4x5_t test_vcreate_v_bf16mf4x5(vbfloat16mf4_t v0, vbfloat16mf4_t v1,
+                                          vbfloat16mf4_t v2, vbfloat16mf4_t v3,
+                                          vbfloat16mf4_t v4) {
+  return __riscv_vcreate_v_bf16mf4x5(v0, v1, v2, v3, v4);
+}
+
+vbfloat16mf4x6_t test_vcreate_v_bf16mf4x6(vbfloat16mf4_t v0, vbfloat16mf4_t v1,
+                                          vbfloat16mf4_t v2, vbfloat16mf4_t v3,
+                                          vbfloat16mf4_t v4,
+                                          vbfloat16mf4_t v5) {
+  return __riscv_vcreate_v_bf16mf4x6(v0, v1, v2, v3, v4, v5);
+}
+
+vbfloat16mf4x7_t test_vcreate_v_bf16mf4x7(vbfloat16mf4_t v0, vbfloat16mf4_t v1,
+                                          vbfloat16mf4_t v2, vbfloat16mf4_t v3,
+                                          vbfloat16mf4_t v4, vbfloat16mf4_t v5,
+                                          vbfloat16mf4_t v6) {
+  return __riscv_vcreate_v_bf16mf4x7(v0, v1, v2, v3, v4, v5, v6);
+}
+
+vbfloat16mf4x8_t test_vcreate_v_bf16mf4x8(vbfloat16mf4_t v0, vbfloat16mf4_t v1,
+                                          vbfloat16mf4_t v2, vbfloat16mf4_t v3,
+                                          vbfloat16mf4_t v4, vbfloat16mf4_t v5,
+                                          vbfloat16mf4_t v6,
+                                          vbfloat16mf4_t v7) {
+  return __riscv_vcreate_v_bf16mf4x8(v0, v1, v2, v3, v4, v5, v6, v7);
+}
+
+vbfloat16mf2x2_t test_vcreate_v_bf16mf2x2(vbfloat16mf2_t v0,
+                                          vbfloat16mf2_t v1) {
+  return __riscv_vcreate_v_bf16mf2x2(v0, v1);
+}
+
+vbfloat16mf2x3_t test_vcreate_v_bf16mf2x3(vbfloat16mf2_t v0, vbfloat16mf2_t v1,
+                                          vbfloat16mf2_t v2) {
+  return __riscv_vcreate_v_bf16mf2x3(v0, v1, v2);
+}
+
+vbfloat16mf2x4_t test_vcreate_v_bf16mf2x4(vbfloat16mf2_t v0, vbfloat16mf2_t v1,
+                                          vbfloat16mf2_t v2,
+                                          vbfloat16mf2_t v3) {
+  return __riscv_vcreate_v_bf16mf2x4(v0, v1, v2, v3);
+}
+
+vbfloat16mf2x5_t test_vcreate_v_bf16mf2x5(vbfloat16mf2_t v0, vbfloat16mf2_t v1,
+                                          vbfloat16mf2_t v2, vbfloat16mf2_t v3,
+                                          vbfloat16mf2_t v4) {
+  return __riscv_vcreate_v_bf16mf2x5(v0, v1, v2, v3, v4);
+}
+
+vbfloat16mf2x6_t test_vcreate_v_bf16mf2x6(vbfloat16mf2_t v0, vbfloat16mf2_t v1,
+                                          vbfloat16mf2_t v2, vbfloat16mf2_t v3,
+                                          vbfloat16mf2_t v4,
+                                          vbfloat16mf2_t v5) {
+  return __riscv_vcreate_v_bf16mf2x6(v0, v1, v2, v3, v4, v5);
+}
+
+vbfloat16mf2x7_t test_vcreate_v_bf16mf2x7(vbfloat16mf2_t v0, vbfloat16mf2_t v1,
+                                          vbfloat16mf2_t v2, vbfloat16mf2_t v3,
+                                          vbfloat16mf2_t v4, vbfloat16mf2_t v5,
+                                          vbfloat16mf2_t v6) {
+  return __riscv_vcreate_v_bf16mf2x7(v0, v1, v2, v3, v4, v5, v6);
+}
+
+vbfloat16mf2x8_t test_vcreate_v_bf16mf2x8(vbfloat16mf2_t v0, vbfloat16mf2_t v1,
+                                          vbfloat16mf2_t v2, vbfloat16mf2_t v3,
+                                          vbfloat16mf2_t v4, vbfloat16mf2_t v5,
+                                          vbfloat16mf2_t v6,
+                                          vbfloat16mf2_t v7) {
+  return __riscv_vcreate_v_bf16mf2x8(v0, v1, v2, v3, v4, v5, v6, v7);
+}
+
+vbfloat16m1x2_t test_vcreate_v_bf16m1x2(vbfloat16m1_t v0, vbfloat16m1_t v1) {
+  return __riscv_vcreate_v_bf16m1x2(v0, v1);
+}
+
+vbfloat16m1x3_t test_vcreate_v_bf16m1x3(vbfloat16m1_t v0, vbfloat16m1_t v1,
+                                        vbfloat16m1_t v2) {
+  return __riscv_vcreate_v_bf16m1x3(v0, v1, v2);
+}
+
+vbfloat16m1x4_t test_vcreate_v_bf16m1x4(vbfloat16m1_t v0, vbfloat16m1_t v1,
+                                        vbfloat16m1_t v2, vbfloat16m1_t v3) {
+  return __riscv_vcreate_v_bf16m1x4(v0, v1, v2, v3);
+}
+
+vbfloat16m1x5_t test_vcreate_v_bf16m1x5(vbfloat16m1_t v0, vbfloat16m1_t v1,
+                                        vbfloat16m1_t v2, vbfloat16m1_t v3,
+                                        vbfloat16m1_t v4) {
+  return __riscv_vcreate_v_bf16m1x5(v0, v1, v2, v3, v4);
+}
+
+vbfloat16m1x6_t test_vcreate_v_bf16m1x6(vbfloat16m1_t v0, vbfloat16m1_t v1,
+                                        vbfloat16m1_t v2, vbfloat16m1_t v3,
+                                        vbfloat16m1_t v4, vbfloat16m1_t v5) {
+  return __riscv_vcreate_v_bf16m1x6(v0, v1, v2, v3, v4, v5);
+}
+
+vbfloat16m1x7_t test_vcreate_v_bf16m1x7(vbfloat16m1_t v0, vbfloat16m1_t v1,
+                                        vbfloat16m1_t v2, vbfloat16m1_t v3,
+                                        vbfloat16m1_t v4, vbfloat16m1_t v5,
+                                        vbfloat16m1_t v6) {
+  return __riscv_vcreate_v_bf16m1x7(v0, v1, v2, v3, v4, v5, v6);
+}
+
+vbfloat16m1x8_t test_vcreate_v_bf16m1x8(vbfloat16m1_t v0, vbfloat16m1_t v1,
+                                        vbfloat16m1_t v2, vbfloat16m1_t v3,
+                                        vbfloat16m1_t v4, vbfloat16m1_t v5,
+                                        vbfloat16m1_t v6, vbfloat16m1_t v7) {
+  return __riscv_vcreate_v_bf16m1x8(v0, v1, v2, v3, v4, v5, v6, v7);
+}
+
+vbfloat16m2x2_t test_vcreate_v_bf16m2x2(vbfloat16m2_t v0, vbfloat16m2_t v1) {
+  return __riscv_vcreate_v_bf16m2x2(v0, v1);
+}
+
+vbfloat16m2x3_t test_vcreate_v_bf16m2x3(vbfloat16m2_t v0, vbfloat16m2_t v1,
+                                        vbfloat16m2_t v2) {
+  return __riscv_vcreate_v_bf16m2x3(v0, v1, v2);
+}
+
+vbfloat16m2x4_t test_vcreate_v_bf16m2x4(vbfloat16m2_t v0, vbfloat16m2_t v1,
+                                        vbfloat16m2_t v2, vbfloat16m2_t v3) {
+  return __riscv_vcreate_v_bf16m2x4(v0, v1, v2, v3);
+}
+
+vbfloat16m4x2_t test_vcreate_v_bf16m4x2(vbfloat16m4_t v0, vbfloat16m4_t v1) {
+  return __riscv_vcreate_v_bf16m4x2(v0, v1);
+}
diff --git a/auto-generated/bfloat16/api-testing/vget.c b/auto-generated/bfloat16/api-testing/vget.c
new file mode 100644
index 000000000..0eafb6875
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vget.c
@@ -0,0 +1,140 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16m1_t test_vget_v_bf16m2_bf16m1(vbfloat16m2_t src, size_t index) {
+  return __riscv_vget_v_bf16m2_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m4_bf16m1(vbfloat16m4_t src, size_t index) {
+  return __riscv_vget_v_bf16m4_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m8_bf16m1(vbfloat16m8_t src, size_t index) {
+  return __riscv_vget_v_bf16m8_bf16m1(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m4_bf16m2(vbfloat16m4_t src, size_t index) {
+  return __riscv_vget_v_bf16m4_bf16m2(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m8_bf16m2(vbfloat16m8_t src, size_t index) {
+  return __riscv_vget_v_bf16m8_bf16m2(src, 0);
+}
+
+vbfloat16m4_t test_vget_v_bf16m8_bf16m4(vbfloat16m8_t src, size_t index) {
+  return __riscv_vget_v_bf16m8_bf16m4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x2_bf16mf4(vbfloat16mf4x2_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf4x2_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x3_bf16mf4(vbfloat16mf4x3_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf4x3_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x4_bf16mf4(vbfloat16mf4x4_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf4x4_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x5_bf16mf4(vbfloat16mf4x5_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf4x5_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x6_bf16mf4(vbfloat16mf4x6_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf4x6_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x7_bf16mf4(vbfloat16mf4x7_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf4x7_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x8_bf16mf4(vbfloat16mf4x8_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf4x8_bf16mf4(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x2_bf16mf2(vbfloat16mf2x2_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf2x2_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x3_bf16mf2(vbfloat16mf2x3_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf2x3_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x4_bf16mf2(vbfloat16mf2x4_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf2x4_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x5_bf16mf2(vbfloat16mf2x5_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf2x5_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x6_bf16mf2(vbfloat16mf2x6_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf2x6_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x7_bf16mf2(vbfloat16mf2x7_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf2x7_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x8_bf16mf2(vbfloat16mf2x8_t src,
+                                             size_t index) {
+  return __riscv_vget_v_bf16mf2x8_bf16mf2(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x2_bf16m1(vbfloat16m1x2_t src, size_t index) {
+  return __riscv_vget_v_bf16m1x2_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x3_bf16m1(vbfloat16m1x3_t src, size_t index) {
+  return __riscv_vget_v_bf16m1x3_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x4_bf16m1(vbfloat16m1x4_t src, size_t index) {
+  return __riscv_vget_v_bf16m1x4_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x5_bf16m1(vbfloat16m1x5_t src, size_t index) {
+  return __riscv_vget_v_bf16m1x5_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x6_bf16m1(vbfloat16m1x6_t src, size_t index) {
+  return __riscv_vget_v_bf16m1x6_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x7_bf16m1(vbfloat16m1x7_t src, size_t index) {
+  return __riscv_vget_v_bf16m1x7_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x8_bf16m1(vbfloat16m1x8_t src, size_t index) {
+  return __riscv_vget_v_bf16m1x8_bf16m1(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m2x2_bf16m2(vbfloat16m2x2_t src, size_t index) {
+  return __riscv_vget_v_bf16m2x2_bf16m2(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m2x3_bf16m2(vbfloat16m2x3_t src, size_t index) {
+  return __riscv_vget_v_bf16m2x3_bf16m2(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m2x4_bf16m2(vbfloat16m2x4_t src, size_t index) {
+  return __riscv_vget_v_bf16m2x4_bf16m2(src, 0);
+}
+
+vbfloat16m4_t test_vget_v_bf16m4x2_bf16m4(vbfloat16m4x2_t src, size_t index) {
+  return __riscv_vget_v_bf16m4x2_bf16m4(src, 0);
+}
diff --git a/auto-generated/bfloat16/api-testing/vle16.c b/auto-generated/bfloat16/api-testing/vle16.c
new file mode 100644
index 000000000..ba320a6cb
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vle16.c
@@ -0,0 +1,53 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vle16_v_bf16mf4(const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16mf4(rs1, vl);
+}
+
+vbfloat16mf2_t test_vle16_v_bf16mf2(const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16mf2(rs1, vl);
+}
+
+vbfloat16m1_t test_vle16_v_bf16m1(const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m1(rs1, vl);
+}
+
+vbfloat16m2_t test_vle16_v_bf16m2(const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m2(rs1, vl);
+}
+
+vbfloat16m4_t test_vle16_v_bf16m4(const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m4(rs1, vl);
+}
+
+vbfloat16m8_t test_vle16_v_bf16m8(const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m8(rs1, vl);
+}
+
+vbfloat16mf4_t test_vle16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                      size_t vl) {
+  return __riscv_vle16_v_bf16mf4_m(vm, rs1, vl);
+}
+
+vbfloat16mf2_t test_vle16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                      size_t vl) {
+  return __riscv_vle16_v_bf16mf2_m(vm, rs1, vl);
+}
+
+vbfloat16m1_t test_vle16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                    size_t vl) {
+  return __riscv_vle16_v_bf16m1_m(vm, rs1, vl);
+}
+
+vbfloat16m2_t test_vle16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m2_m(vm, rs1, vl);
+}
+
+vbfloat16m4_t test_vle16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m4_m(vm, rs1, vl);
+}
+
+vbfloat16m8_t test_vle16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m8_m(vm, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vle16ff.c b/auto-generated/bfloat16/api-testing/vle16ff.c
new file mode 100644
index 000000000..f8d37c7dd
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vle16ff.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vle16ff_v_bf16mf4(const __bf16 *rs1, size_t *new_vl,
+                                      size_t vl) {
+  return __riscv_vle16ff_v_bf16mf4(rs1, new_vl, vl);
+}
+
+vbfloat16mf2_t test_vle16ff_v_bf16mf2(const __bf16 *rs1, size_t *new_vl,
+                                      size_t vl) {
+  return __riscv_vle16ff_v_bf16mf2(rs1, new_vl, vl);
+}
+
+vbfloat16m1_t test_vle16ff_v_bf16m1(const __bf16 *rs1, size_t *new_vl,
+                                    size_t vl) {
+  return __riscv_vle16ff_v_bf16m1(rs1, new_vl, vl);
+}
+
+vbfloat16m2_t test_vle16ff_v_bf16m2(const __bf16 *rs1, size_t *new_vl,
+                                    size_t vl) {
+  return __riscv_vle16ff_v_bf16m2(rs1, new_vl, vl);
+}
+
+vbfloat16m4_t test_vle16ff_v_bf16m4(const __bf16 *rs1, size_t *new_vl,
+                                    size_t vl) {
+  return __riscv_vle16ff_v_bf16m4(rs1, new_vl, vl);
+}
+
+vbfloat16m8_t test_vle16ff_v_bf16m8(const __bf16 *rs1, size_t *new_vl,
+                                    size_t vl) {
+  return __riscv_vle16ff_v_bf16m8(rs1, new_vl, vl);
+}
+
+vbfloat16mf4_t test_vle16ff_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                        size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16mf4_m(vm, rs1, new_vl, vl);
+}
+
+vbfloat16mf2_t test_vle16ff_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                        size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16mf2_m(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m1_t test_vle16ff_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16m1_m(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m2_t test_vle16ff_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16m2_m(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m4_t test_vle16ff_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16m4_m(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m8_t test_vle16ff_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16m8_m(vm, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vlmul_ext_v.c b/auto-generated/bfloat16/api-testing/vlmul_ext_v.c
new file mode 100644
index 000000000..1b9fdf349
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vlmul_ext_v.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf2_t test_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_v_b16mf4_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_v_b16mf4_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_v_b16mf4_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_v_b16mf4_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_v_b16mf4_b16m8(value);
+}
+
+vbfloat16m1_t test_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_v_b16mf2_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_v_b16mf2_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_v_b16mf2_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_v_b16mf2_b16m8(value);
+}
+
+vbfloat16m2_t test_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value) {
+  return __riscv_vlmul_ext_v_b16m1_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value) {
+  return __riscv_vlmul_ext_v_b16m1_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value) {
+  return __riscv_vlmul_ext_v_b16m1_b16m8(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value) {
+  return __riscv_vlmul_ext_v_b16m2_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value) {
+  return __riscv_vlmul_ext_v_b16m2_b16m8(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value) {
+  return __riscv_vlmul_ext_v_b16m4_b16m8(value);
+}
diff --git a/auto-generated/bfloat16/api-testing/vlmul_trunc_v.c b/auto-generated/bfloat16/api-testing/vlmul_trunc_v.c
new file mode 100644
index 000000000..62c0d056a
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vlmul_trunc_v.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value) {
+  return __riscv_vlmul_trunc_v_b16mf2_b16mf4(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value) {
+  return __riscv_vlmul_trunc_v_b16m1_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value) {
+  return __riscv_vlmul_trunc_v_b16m1_b16mf2(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value) {
+  return __riscv_vlmul_trunc_v_b16m2_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value) {
+  return __riscv_vlmul_trunc_v_b16m2_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value) {
+  return __riscv_vlmul_trunc_v_b16m2_b16m1(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_v_b16m4_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_v_b16m4_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_v_b16m4_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_v_b16m4_b16m2(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_v_b16m8_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_v_b16m8_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_v_b16m8_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_v_b16m8_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_v_b16m8_b16m4(value);
+}
diff --git a/auto-generated/bfloat16/api-testing/vloxei16.c b/auto-generated/bfloat16/api-testing/vloxei16.c
new file mode 100644
index 000000000..86b076156
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vloxei16.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vloxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2,
+                                       size_t vl) {
+  return __riscv_vloxei16_v_bf16mf4(rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vloxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2,
+                                       size_t vl) {
+  return __riscv_vloxei16_v_bf16mf2(rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vloxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16_v_bf16m1(rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vloxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16_v_bf16m2(rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vloxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16_v_bf16m4(rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vloxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16_v_bf16m8(rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vloxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                         vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16mf4_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vloxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                         vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16mf2_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vloxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                       vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16m1_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vloxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1,
+                                       vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16m2_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vloxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1,
+                                       vuint16m4_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16m4_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vloxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1,
+                                       vuint16m8_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16m8_m(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vloxseg2ei16.c b/auto-generated/bfloat16/api-testing/vloxseg2ei16.c
new file mode 100644
index 000000000..1ee5de8c9
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vloxseg2ei16.c
@@ -0,0 +1,54 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf4x2(rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf2x2(rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m1x2(rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m2x2(rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2(const __bf16 *rs1, vuint16m4_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m4x2(rs1, rs2, vl);
+}
+
+vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf4x2_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf2x2_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m1x2_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m2x2_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1,
+                                               vuint16m4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m4x2_m(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vloxseg3ei16.c b/auto-generated/bfloat16/api-testing/vloxseg3ei16.c
new file mode 100644
index 000000000..0f8f21676
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vloxseg3ei16.c
@@ -0,0 +1,44 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf4x3(rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf2x3(rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m1x3(rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m2x3(rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf4x3_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf2x3_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m1x3_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m2x3_m(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vloxseg4ei16.c b/auto-generated/bfloat16/api-testing/vloxseg4ei16.c
new file mode 100644
index 000000000..535f74024
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vloxseg4ei16.c
@@ -0,0 +1,44 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf4x4(rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf2x4(rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m1x4(rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m2x4(rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf4x4_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf2x4_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m1x4_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m2x4_m(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vloxseg5ei16.c b/auto-generated/bfloat16/api-testing/vloxseg5ei16.c
new file mode 100644
index 000000000..294b40dee
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vloxseg5ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf4x5(rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf2x5(rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16m1x5(rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf4x5_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf2x5_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16m1x5_m(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vloxseg6ei16.c b/auto-generated/bfloat16/api-testing/vloxseg6ei16.c
new file mode 100644
index 000000000..17c579abf
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vloxseg6ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf4x6(rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf2x6(rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16m1x6(rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf4x6_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf2x6_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16m1x6_m(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vloxseg7ei16.c b/auto-generated/bfloat16/api-testing/vloxseg7ei16.c
new file mode 100644
index 000000000..f0e04f0f8
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vloxseg7ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf4x7(rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf2x7(rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16m1x7(rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf4x7_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf2x7_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16m1x7_m(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vloxseg8ei16.c b/auto-generated/bfloat16/api-testing/vloxseg8ei16.c
new file mode 100644
index 000000000..19a53eadd
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vloxseg8ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf4x8(rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf2x8(rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16m1x8(rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf4x8_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf2x8_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16m1x8_m(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vlse16.c b/auto-generated/bfloat16/api-testing/vlse16.c
new file mode 100644
index 000000000..6ec4a53a8
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vlse16.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vlse16_v_bf16mf4(const __bf16 *rs1, ptrdiff_t rs2,
+                                     size_t vl) {
+  return __riscv_vlse16_v_bf16mf4(rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vlse16_v_bf16mf2(const __bf16 *rs1, ptrdiff_t rs2,
+                                     size_t vl) {
+  return __riscv_vlse16_v_bf16mf2(rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vlse16_v_bf16m1(const __bf16 *rs1, ptrdiff_t rs2,
+                                   size_t vl) {
+  return __riscv_vlse16_v_bf16m1(rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vlse16_v_bf16m2(const __bf16 *rs1, ptrdiff_t rs2,
+                                   size_t vl) {
+  return __riscv_vlse16_v_bf16m2(rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vlse16_v_bf16m4(const __bf16 *rs1, ptrdiff_t rs2,
+                                   size_t vl) {
+  return __riscv_vlse16_v_bf16m4(rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vlse16_v_bf16m8(const __bf16 *rs1, ptrdiff_t rs2,
+                                   size_t vl) {
+  return __riscv_vlse16_v_bf16m8(rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vlse16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                       ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16mf4_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vlse16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                       ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16mf2_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vlse16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                     ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16m1_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vlse16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1,
+                                     ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16m2_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vlse16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1,
+                                     ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16m4_m(vm, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vlse16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1,
+                                     ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16m8_m(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vlseg2e16.c b/auto-generated/bfloat16/api-testing/vlseg2e16.c
new file mode 100644
index 000000000..1db59e83f
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vlseg2e16.c
@@ -0,0 +1,47 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2(const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg2e16_v_bf16mf4x2(rs1, vl);
+}
+
+vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2(const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg2e16_v_bf16mf2x2(rs1, vl);
+}
+
+vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2(const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg2e16_v_bf16m1x2(rs1, vl);
+}
+
+vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2(const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg2e16_v_bf16m2x2(rs1, vl);
+}
+
+vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2(const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg2e16_v_bf16m4x2(rs1, vl);
+}
+
+vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_m(vbool64_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg2e16_v_bf16mf4x2_m(vm, rs1, vl);
+}
+
+vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_m(vbool32_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg2e16_v_bf16mf2x2_m(vm, rs1, vl);
+}
+
+vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg2e16_v_bf16m1x2_m(vm, rs1, vl);
+}
+
+vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg2e16_v_bf16m2x2_m(vm, rs1, vl);
+}
+
+vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg2e16_v_bf16m4x2_m(vm, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vlseg2e16ff.c b/auto-generated/bfloat16/api-testing/vlseg2e16ff.c
new file mode 100644
index 000000000..cd0e7e381
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vlseg2e16ff.c
@@ -0,0 +1,52 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2(const __bf16 *rs1, size_t *new_vl,
+                                              size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16mf4x2(rs1, new_vl, vl);
+}
+
+vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2(const __bf16 *rs1, size_t *new_vl,
+                                              size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16mf2x2(rs1, new_vl, vl);
+}
+
+vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2(const __bf16 *rs1, size_t *new_vl,
+                                            size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m1x2(rs1, new_vl, vl);
+}
+
+vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2(const __bf16 *rs1, size_t *new_vl,
+                                            size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m2x2(rs1, new_vl, vl);
+}
+
+vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2(const __bf16 *rs1, size_t *new_vl,
+                                            size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m4x2(rs1, new_vl, vl);
+}
+
+vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_m(vbool64_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16mf4x2_m(vm, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_m(vbool32_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16mf2x2_m(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1,
+                                              size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m1x2_m(vm, rs1, new_vl, vl);
+}
+
const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m2x2_m(vm, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m4x2_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg3e16.c b/auto-generated/bfloat16/api-testing/vlseg3e16.c new file mode 100644 index 000000000..52e98dcc0 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg3e16.c @@ -0,0 +1,38 @@ +#include +#include + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16mf4x3(rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16mf2x3(rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m1x3(rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m2x3(rs1, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg3e16_v_bf16mf4x3_m(vm, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg3e16_v_bf16mf2x3_m(vm, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg3e16_v_bf16m1x3_m(vm, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg3e16_v_bf16m2x3_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg3e16ff.c b/auto-generated/bfloat16/api-testing/vlseg3e16ff.c new file mode 100644 index 000000000..623bb8f18 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg3e16ff.c @@ -0,0 +1,42 @@ +#include +#include + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf4x3(rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf2x3(rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m1x3(rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m2x3(rs1, new_vl, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf4x3_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf2x3_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m1x3_m(vm, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m2x3_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg4e16.c b/auto-generated/bfloat16/api-testing/vlseg4e16.c new file mode 100644 index 000000000..b0d4a9411 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg4e16.c @@ 
-0,0 +1,38 @@ +#include +#include + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf4x4(rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf2x4(rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m1x4(rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m2x4(rs1, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg4e16_v_bf16mf4x4_m(vm, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg4e16_v_bf16mf2x4_m(vm, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg4e16_v_bf16m1x4_m(vm, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg4e16_v_bf16m2x4_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg4e16ff.c b/auto-generated/bfloat16/api-testing/vlseg4e16ff.c new file mode 100644 index 000000000..7e76bc96a --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg4e16ff.c @@ -0,0 +1,42 @@ +#include +#include + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf4x4(rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf2x4(rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m1x4(rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m2x4(rs1, new_vl, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf4x4_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf2x4_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m1x4_m(vm, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m2x4_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg5e16.c b/auto-generated/bfloat16/api-testing/vlseg5e16.c new file mode 100644 index 000000000..a36ca8401 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg5e16.c @@ -0,0 +1,29 @@ +#include +#include + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf4x5(rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf2x5(rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16m1x5(rs1, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + 
return __riscv_vlseg5e16_v_bf16mf4x5_m(vm, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg5e16_v_bf16mf2x5_m(vm, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg5e16_v_bf16m1x5_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg5e16ff.c b/auto-generated/bfloat16/api-testing/vlseg5e16ff.c new file mode 100644 index 000000000..ae2f49900 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg5e16ff.c @@ -0,0 +1,32 @@ +#include +#include + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf4x5(rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf2x5(rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg5e16ff_v_bf16m1x5(rs1, new_vl, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf4x5_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf2x5_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16m1x5_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg6e16.c b/auto-generated/bfloat16/api-testing/vlseg6e16.c new file mode 100644 index 000000000..fc96aabaf --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg6e16.c @@ -0,0 +1,29 @@ +#include +#include + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf4x6(rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf2x6(rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16m1x6(rs1, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg6e16_v_bf16mf4x6_m(vm, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg6e16_v_bf16mf2x6_m(vm, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg6e16_v_bf16m1x6_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg6e16ff.c b/auto-generated/bfloat16/api-testing/vlseg6e16ff.c new file mode 100644 index 000000000..600f39ed0 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg6e16ff.c @@ -0,0 +1,32 @@ +#include +#include + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf4x6(rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf2x6(rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg6e16ff_v_bf16m1x6(rs1, new_vl, vl); +} + +vbfloat16mf4x6_t 
test_vlseg6e16ff_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf4x6_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf2x6_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16m1x6_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg7e16.c b/auto-generated/bfloat16/api-testing/vlseg7e16.c new file mode 100644 index 000000000..530d67b29 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg7e16.c @@ -0,0 +1,29 @@ +#include +#include + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16mf4x7(rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16mf2x7(rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16m1x7(rs1, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg7e16_v_bf16mf4x7_m(vm, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg7e16_v_bf16mf2x7_m(vm, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg7e16_v_bf16m1x7_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg7e16ff.c b/auto-generated/bfloat16/api-testing/vlseg7e16ff.c new file mode 100644 index 000000000..918c59ae5 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg7e16ff.c @@ -0,0 +1,32 @@ +#include +#include + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf4x7(rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf2x7(rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg7e16ff_v_bf16m1x7(rs1, new_vl, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf4x7_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf2x7_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16m1x7_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg8e16.c b/auto-generated/bfloat16/api-testing/vlseg8e16.c new file mode 100644 index 000000000..4a3576db4 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg8e16.c @@ -0,0 +1,29 @@ +#include +#include + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf4x8(rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf2x8(rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8(const __bf16 *rs1, size_t vl) { + return 
__riscv_vlseg8e16_v_bf16m1x8(rs1, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16_v_bf16mf4x8_m(vm, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16_v_bf16mf2x8_m(vm, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16_v_bf16m1x8_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlseg8e16ff.c b/auto-generated/bfloat16/api-testing/vlseg8e16ff.c new file mode 100644 index 000000000..16d539e22 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlseg8e16ff.c @@ -0,0 +1,32 @@ +#include +#include + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf4x8(rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf2x8(rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg8e16ff_v_bf16m1x8(rs1, new_vl, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf4x8_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf2x8_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16m1x8_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlsseg2e16.c b/auto-generated/bfloat16/api-testing/vlsseg2e16.c new file mode 100644 index 000000000..444299755 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlsseg2e16.c @@ -0,0 +1,52 @@ +#include +#include + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf4x2(rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf2x2(rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_v_bf16m1x2(rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_v_bf16m2x2(rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_v_bf16m4x2(rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf4x2_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf2x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m1x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m2x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m4x2_t 
test_vlsseg2e16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m4x2_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlsseg3e16.c b/auto-generated/bfloat16/api-testing/vlsseg3e16.c new file mode 100644 index 000000000..02b38c6ea --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlsseg3e16.c @@ -0,0 +1,42 @@ +#include +#include + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_v_bf16mf4x3(rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_v_bf16mf2x3(rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_v_bf16m1x3(rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_v_bf16m2x3(rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_v_bf16mf4x3_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_v_bf16mf2x3_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_v_bf16m1x3_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_v_bf16m2x3_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlsseg4e16.c b/auto-generated/bfloat16/api-testing/vlsseg4e16.c new file mode 100644 index 000000000..629326dc1 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlsseg4e16.c @@ -0,0 +1,42 @@ +#include +#include + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf4x4(rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf2x4(rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_v_bf16m1x4(rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_v_bf16m2x4(rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf4x4_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf2x4_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16m1x4_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16m2x4_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlsseg5e16.c b/auto-generated/bfloat16/api-testing/vlsseg5e16.c new file mode 100644 index 000000000..82f62d786 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlsseg5e16.c @@ 
-0,0 +1,32 @@ +#include +#include + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf4x5(rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf2x5(rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg5e16_v_bf16m1x5(rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf4x5_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf2x5_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16m1x5_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlsseg6e16.c b/auto-generated/bfloat16/api-testing/vlsseg6e16.c new file mode 100644 index 000000000..aa9e7083a --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlsseg6e16.c @@ -0,0 +1,32 @@ +#include +#include + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg6e16_v_bf16mf4x6(rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg6e16_v_bf16mf2x6(rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg6e16_v_bf16m1x6(rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_v_bf16mf4x6_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_v_bf16mf2x6_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_v_bf16m1x6_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vlsseg7e16.c b/auto-generated/bfloat16/api-testing/vlsseg7e16.c new file mode 100644 index 000000000..01b6fd2d8 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlsseg7e16.c @@ -0,0 +1,32 @@ +#include +#include + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf4x7(rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf2x7(rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_v_bf16m1x7(rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf4x7_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf2x7_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16m1x7_m(vm, rs1, rs2, vl); +} diff --git 
a/auto-generated/bfloat16/api-testing/vlsseg8e16.c b/auto-generated/bfloat16/api-testing/vlsseg8e16.c new file mode 100644 index 000000000..65b6e157e --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vlsseg8e16.c @@ -0,0 +1,32 @@ +#include +#include + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf4x8(rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf2x8(rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_v_bf16m1x8(rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf4x8_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf2x8_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16m1x8_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vluxei16.c b/auto-generated/bfloat16/api-testing/vluxei16.c new file mode 100644 index 000000000..47f978c37 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vluxei16.c @@ -0,0 +1,62 @@ +#include +#include + +vbfloat16mf4_t test_vluxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16mf4(rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16mf2(rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m1(rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m2(rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m4(rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m8(rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf4_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf2_m(vm, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m1_m(vm, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m2_m(vm, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m4_m(vm, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m8_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vluxseg2ei16.c b/auto-generated/bfloat16/api-testing/vluxseg2ei16.c new file mode 100644 index 000000000..67ab0184d --- 
/dev/null +++ b/auto-generated/bfloat16/api-testing/vluxseg2ei16.c @@ -0,0 +1,54 @@ +#include +#include + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2(rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2(rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m1x2(rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2(rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2(rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m1x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vluxseg3ei16.c b/auto-generated/bfloat16/api-testing/vluxseg3ei16.c new file mode 100644 index 000000000..3f43a614d --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vluxseg3ei16.c @@ -0,0 +1,44 @@ +#include +#include + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3(rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3(rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3(rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3(rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vluxseg4ei16.c 
b/auto-generated/bfloat16/api-testing/vluxseg4ei16.c new file mode 100644 index 000000000..942ccef90 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vluxseg4ei16.c @@ -0,0 +1,44 @@ +#include +#include + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4(rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4(rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4(rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m2x4(rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m2x4_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vluxseg5ei16.c b/auto-generated/bfloat16/api-testing/vluxseg5ei16.c new file mode 100644 index 000000000..81f396ba6 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vluxseg5ei16.c @@ -0,0 +1,34 @@ +#include +#include + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf4x5(rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5(rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5(rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf4x5_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vluxseg6ei16.c b/auto-generated/bfloat16/api-testing/vluxseg6ei16.c new file mode 100644 index 000000000..6f0aaa56b --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vluxseg6ei16.c @@ -0,0 +1,34 @@ +#include +#include + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6(rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6(rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6(const __bf16 *rs1, 
vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6(rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vluxseg7ei16.c b/auto-generated/bfloat16/api-testing/vluxseg7ei16.c new file mode 100644 index 000000000..dd1c46108 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vluxseg7ei16.c @@ -0,0 +1,34 @@ +#include +#include + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf4x7(rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf2x7(rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg7ei16_v_bf16m1x7(rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf4x7_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf2x7_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16m1x7_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vluxseg8ei16.c b/auto-generated/bfloat16/api-testing/vluxseg8ei16.c new file mode 100644 index 000000000..ea3d2be1e --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vluxseg8ei16.c @@ -0,0 +1,34 @@ +#include +#include + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8(rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8(rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8(rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vreinterpret.c b/auto-generated/bfloat16/api-testing/vreinterpret.c new file mode 100644 index 000000000..64576fffa --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vreinterpret.c @@ -0,0 +1,98 @@ +#include +#include + +vbfloat16mf4_t 
test_vreinterpret_v_i16mf4_bf16mf4(vint16mf4_t src) {
+  return __riscv_vreinterpret_v_i16mf4_bf16mf4(src);
+}
+
+vbfloat16mf2_t test_vreinterpret_v_i16mf2_bf16mf2(vint16mf2_t src) {
+  return __riscv_vreinterpret_v_i16mf2_bf16mf2(src);
+}
+
+vbfloat16m1_t test_vreinterpret_v_i16m1_bf16m1(vint16m1_t src) {
+  return __riscv_vreinterpret_v_i16m1_bf16m1(src);
+}
+
+vbfloat16m2_t test_vreinterpret_v_i16m2_bf16m2(vint16m2_t src) {
+  return __riscv_vreinterpret_v_i16m2_bf16m2(src);
+}
+
+vbfloat16m4_t test_vreinterpret_v_i16m4_bf16m4(vint16m4_t src) {
+  return __riscv_vreinterpret_v_i16m4_bf16m4(src);
+}
+
+vbfloat16m8_t test_vreinterpret_v_i16m8_bf16m8(vint16m8_t src) {
+  return __riscv_vreinterpret_v_i16m8_bf16m8(src);
+}
+
+vbfloat16mf4_t test_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src) {
+  return __riscv_vreinterpret_v_ui16mf4_bf16mf4(src);
+}
+
+vbfloat16mf2_t test_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src) {
+  return __riscv_vreinterpret_v_ui16mf2_bf16mf2(src);
+}
+
+vbfloat16m1_t test_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src) {
+  return __riscv_vreinterpret_v_ui16m1_bf16m1(src);
+}
+
+vbfloat16m2_t test_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src) {
+  return __riscv_vreinterpret_v_ui16m2_bf16m2(src);
+}
+
+vbfloat16m4_t test_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src) {
+  return __riscv_vreinterpret_v_ui16m4_bf16m4(src);
+}
+
+vbfloat16m8_t test_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src) {
+  return __riscv_vreinterpret_v_ui16m8_bf16m8(src);
+}
+
+vint16mf4_t test_vreinterpret_v_bf16mf4_i16mf4(vbfloat16mf4_t src) {
+  return __riscv_vreinterpret_v_bf16mf4_i16mf4(src);
+}
+
+vint16mf2_t test_vreinterpret_v_bf16mf2_i16mf2(vbfloat16mf2_t src) {
+  return __riscv_vreinterpret_v_bf16mf2_i16mf2(src);
+}
+
+vint16m1_t test_vreinterpret_v_bf16m1_i16m1(vbfloat16m1_t src) {
+  return __riscv_vreinterpret_v_bf16m1_i16m1(src);
+}
+
+vint16m2_t test_vreinterpret_v_bf16m2_i16m2(vbfloat16m2_t src) {
+  return __riscv_vreinterpret_v_bf16m2_i16m2(src);
+}
+
+vint16m4_t test_vreinterpret_v_bf16m4_i16m4(vbfloat16m4_t src) {
+  return __riscv_vreinterpret_v_bf16m4_i16m4(src);
+}
+
+vint16m8_t test_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src) {
+  return __riscv_vreinterpret_v_bf16m8_i16m8(src);
+}
+
+vuint16mf4_t test_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src) {
+  return __riscv_vreinterpret_v_bf16mf4_ui16mf4(src);
+}
+
+vuint16mf2_t test_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src) {
+  return __riscv_vreinterpret_v_bf16mf2_ui16mf2(src);
+}
+
+vuint16m1_t test_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src) {
+  return __riscv_vreinterpret_v_bf16m1_ui16m1(src);
+}
+
+vuint16m2_t test_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src) {
+  return __riscv_vreinterpret_v_bf16m2_ui16m2(src);
+}
+
+vuint16m4_t test_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src) {
+  return __riscv_vreinterpret_v_bf16m4_ui16m4(src);
+}
+
+vuint16m8_t test_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src) {
+  return __riscv_vreinterpret_v_bf16m8_ui16m8(src);
+}
diff --git a/auto-generated/bfloat16/api-testing/vse16.c b/auto-generated/bfloat16/api-testing/vse16.c
new file mode 100644
index 000000000..fa8c4d20f
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vse16.c
@@ -0,0 +1,56 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vse16_v_bf16mf4(__bf16 *rs1, vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vse16_v_bf16mf4(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16mf2(__bf16 *rs1, vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vse16_v_bf16mf2(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m1(__bf16 *rs1, vbfloat16m1_t
vs3, size_t vl) { + return __riscv_vse16_v_bf16m1(rs1, vs3, vl); +} + +void test_vse16_v_bf16m2(__bf16 *rs1, vbfloat16m2_t vs3, size_t vl) { + return __riscv_vse16_v_bf16m2(rs1, vs3, vl); +} + +void test_vse16_v_bf16m4(__bf16 *rs1, vbfloat16m4_t vs3, size_t vl) { + return __riscv_vse16_v_bf16m4(rs1, vs3, vl); +} + +void test_vse16_v_bf16m8(__bf16 *rs1, vbfloat16m8_t vs3, size_t vl) { + return __riscv_vse16_v_bf16m8(rs1, vs3, vl); +} + +void test_vse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vbfloat16mf4_t vs3, + size_t vl) { + return __riscv_vse16_v_bf16mf4_m(vm, rs1, vs3, vl); +} + +void test_vse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vbfloat16mf2_t vs3, + size_t vl) { + return __riscv_vse16_v_bf16mf2_m(vm, rs1, vs3, vl); +} + +void test_vse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1_t vs3, + size_t vl) { + return __riscv_vse16_v_bf16m1_m(vm, rs1, vs3, vl); +} + +void test_vse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2_t vs3, + size_t vl) { + return __riscv_vse16_v_bf16m2_m(vm, rs1, vs3, vl); +} + +void test_vse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vbfloat16m4_t vs3, + size_t vl) { + return __riscv_vse16_v_bf16m4_m(vm, rs1, vs3, vl); +} + +void test_vse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vbfloat16m8_t vs3, + size_t vl) { + return __riscv_vse16_v_bf16m8_m(vm, rs1, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vset.c b/auto-generated/bfloat16/api-testing/vset.c new file mode 100644 index 000000000..df82323a4 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vset.c @@ -0,0 +1,171 @@ +#include +#include + +vbfloat16m2_t test_vset_v_bf16m1_bf16m2(vbfloat16m2_t dest, size_t index, + vbfloat16m1_t value) { + return __riscv_vset_v_bf16m1_bf16m2(dest, 0, value); +} + +vbfloat16m4_t test_vset_v_bf16m1_bf16m4(vbfloat16m4_t dest, size_t index, + vbfloat16m1_t value) { + return __riscv_vset_v_bf16m1_bf16m4(dest, 0, value); +} + +vbfloat16m4_t test_vset_v_bf16m2_bf16m4(vbfloat16m4_t dest, size_t index, + vbfloat16m2_t value) { + return __riscv_vset_v_bf16m2_bf16m4(dest, 0, value); +} + +vbfloat16m8_t test_vset_v_bf16m1_bf16m8(vbfloat16m8_t dest, size_t index, + vbfloat16m1_t value) { + return __riscv_vset_v_bf16m1_bf16m8(dest, 0, value); +} + +vbfloat16m8_t test_vset_v_bf16m2_bf16m8(vbfloat16m8_t dest, size_t index, + vbfloat16m2_t value) { + return __riscv_vset_v_bf16m2_bf16m8(dest, 0, value); +} + +vbfloat16m8_t test_vset_v_bf16m4_bf16m8(vbfloat16m8_t dest, size_t index, + vbfloat16m4_t value) { + return __riscv_vset_v_bf16m4_bf16m8(dest, 0, value); +} + +vbfloat16mf4x2_t test_vset_v_bf16mf4_bf16mf4x2(vbfloat16mf4x2_t dest, + size_t index, + vbfloat16mf4_t value) { + return __riscv_vset_v_bf16mf4_bf16mf4x2(dest, 0, value); +} + +vbfloat16mf4x3_t test_vset_v_bf16mf4_bf16mf4x3(vbfloat16mf4x3_t dest, + size_t index, + vbfloat16mf4_t value) { + return __riscv_vset_v_bf16mf4_bf16mf4x3(dest, 0, value); +} + +vbfloat16mf4x4_t test_vset_v_bf16mf4_bf16mf4x4(vbfloat16mf4x4_t dest, + size_t index, + vbfloat16mf4_t value) { + return __riscv_vset_v_bf16mf4_bf16mf4x4(dest, 0, value); +} + +vbfloat16mf4x5_t test_vset_v_bf16mf4_bf16mf4x5(vbfloat16mf4x5_t dest, + size_t index, + vbfloat16mf4_t value) { + return __riscv_vset_v_bf16mf4_bf16mf4x5(dest, 0, value); +} + +vbfloat16mf4x6_t test_vset_v_bf16mf4_bf16mf4x6(vbfloat16mf4x6_t dest, + size_t index, + vbfloat16mf4_t value) { + return __riscv_vset_v_bf16mf4_bf16mf4x6(dest, 0, value); +} + +vbfloat16mf4x7_t test_vset_v_bf16mf4_bf16mf4x7(vbfloat16mf4x7_t dest, + size_t index, + vbfloat16mf4_t value) { + return 
__riscv_vset_v_bf16mf4_bf16mf4x7(dest, 0, value);
+}
+
+vbfloat16mf4x8_t test_vset_v_bf16mf4_bf16mf4x8(vbfloat16mf4x8_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset_v_bf16mf4_bf16mf4x8(dest, 0, value);
+}
+
+vbfloat16mf2x2_t test_vset_v_bf16mf2_bf16mf2x2(vbfloat16mf2x2_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x2(dest, 0, value);
+}
+
+vbfloat16mf2x3_t test_vset_v_bf16mf2_bf16mf2x3(vbfloat16mf2x3_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x3(dest, 0, value);
+}
+
+vbfloat16mf2x4_t test_vset_v_bf16mf2_bf16mf2x4(vbfloat16mf2x4_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x4(dest, 0, value);
+}
+
+vbfloat16mf2x5_t test_vset_v_bf16mf2_bf16mf2x5(vbfloat16mf2x5_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x5(dest, 0, value);
+}
+
+vbfloat16mf2x6_t test_vset_v_bf16mf2_bf16mf2x6(vbfloat16mf2x6_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x6(dest, 0, value);
+}
+
+vbfloat16mf2x7_t test_vset_v_bf16mf2_bf16mf2x7(vbfloat16mf2x7_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x7(dest, 0, value);
+}
+
+vbfloat16mf2x8_t test_vset_v_bf16mf2_bf16mf2x8(vbfloat16mf2x8_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x8(dest, 0, value);
+}
+
+vbfloat16m1x2_t test_vset_v_bf16m1_bf16m1x2(vbfloat16m1x2_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x2(dest, 0, value);
+}
+
+vbfloat16m1x3_t test_vset_v_bf16m1_bf16m1x3(vbfloat16m1x3_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x3(dest, 0, value);
+}
+
+vbfloat16m1x4_t test_vset_v_bf16m1_bf16m1x4(vbfloat16m1x4_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x4(dest, 0, value);
+}
+
+vbfloat16m1x5_t test_vset_v_bf16m1_bf16m1x5(vbfloat16m1x5_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x5(dest, 0, value);
+}
+
+vbfloat16m1x6_t test_vset_v_bf16m1_bf16m1x6(vbfloat16m1x6_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x6(dest, 0, value);
+}
+
+vbfloat16m1x7_t test_vset_v_bf16m1_bf16m1x7(vbfloat16m1x7_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x7(dest, 0, value);
+}
+
+vbfloat16m1x8_t test_vset_v_bf16m1_bf16m1x8(vbfloat16m1x8_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x8(dest, 0, value);
+}
+
+vbfloat16m2x2_t test_vset_v_bf16m2_bf16m2x2(vbfloat16m2x2_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset_v_bf16m2_bf16m2x2(dest, 0, value);
+}
+
+vbfloat16m2x3_t test_vset_v_bf16m2_bf16m2x3(vbfloat16m2x3_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset_v_bf16m2_bf16m2x3(dest, 0, value);
+}
+
+vbfloat16m2x4_t test_vset_v_bf16m2_bf16m2x4(vbfloat16m2x4_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset_v_bf16m2_bf16m2x4(dest, 0, value);
+}
+
+vbfloat16m4x2_t test_vset_v_bf16m4_bf16m4x2(vbfloat16m4x2_t dest, size_t index,
+                                            vbfloat16m4_t value) {
+  return __riscv_vset_v_bf16m4_bf16m4x2(dest, 0, value);
+}
diff --git a/auto-generated/bfloat16/api-testing/vsoxei16.c b/auto-generated/bfloat16/api-testing/vsoxei16.c
new file mode 100644
index 000000000..730d0d479
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vsoxei16.c
@@ -0,0 +1,62
@@ +#include +#include + +void test_vsoxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3, + size_t vl) { + return __riscv_vsoxei16_v_bf16mf4(rs1, rs2, vs3, vl); +} + +void test_vsoxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3, + size_t vl) { + return __riscv_vsoxei16_v_bf16mf2(rs1, rs2, vs3, vl); +} + +void test_vsoxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3, + size_t vl) { + return __riscv_vsoxei16_v_bf16m1(rs1, rs2, vs3, vl); +} + +void test_vsoxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3, + size_t vl) { + return __riscv_vsoxei16_v_bf16m2(rs1, rs2, vs3, vl); +} + +void test_vsoxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3, + size_t vl) { + return __riscv_vsoxei16_v_bf16m4(rs1, rs2, vs3, vl); +} + +void test_vsoxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3, + size_t vl) { + return __riscv_vsoxei16_v_bf16m8(rs1, rs2, vs3, vl); +} + +void test_vsoxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl) { + return __riscv_vsoxei16_v_bf16mf4_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsoxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl) { + return __riscv_vsoxei16_v_bf16mf2_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsoxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2, + vbfloat16m1_t vs3, size_t vl) { + return __riscv_vsoxei16_v_bf16m1_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsoxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2, + vbfloat16m2_t vs3, size_t vl) { + return __riscv_vsoxei16_v_bf16m2_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsoxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2, + vbfloat16m4_t vs3, size_t vl) { + return __riscv_vsoxei16_v_bf16m4_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsoxei16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2, + vbfloat16m8_t vs3, size_t vl) { + return __riscv_vsoxei16_v_bf16m8_m(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsoxseg2ei16.c b/auto-generated/bfloat16/api-testing/vsoxseg2ei16.c new file mode 100644 index 000000000..4a8bf8606 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsoxseg2ei16.c @@ -0,0 +1,54 @@ +#include +#include + +void test_vsoxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x2_t vs3, size_t vl) { + return __riscv_vsoxseg2ei16_v_bf16mf4x2(rs1, vs2, vs3, vl); +} + +void test_vsoxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x2_t vs3, size_t vl) { + return __riscv_vsoxseg2ei16_v_bf16mf2x2(rs1, vs2, vs3, vl); +} + +void test_vsoxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl) { + return __riscv_vsoxseg2ei16_v_bf16m1x2(rs1, vs2, vs3, vl); +} + +void test_vsoxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl) { + return __riscv_vsoxseg2ei16_v_bf16m2x2(rs1, vs2, vs3, vl); +} + +void test_vsoxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl) { + return __riscv_vsoxseg2ei16_v_bf16m4x2(rs1, vs2, vs3, vl); +} + +void test_vsoxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x2_t vs3, + size_t vl) { + return __riscv_vsoxseg2ei16_v_bf16mf4x2_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x2_t vs3, + size_t vl) { + return __riscv_vsoxseg2ei16_v_bf16mf2x2_m(vm, rs1, vs2, vs3, vl); +} + +void 
test_vsoxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl) { + return __riscv_vsoxseg2ei16_v_bf16m1x2_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl) { + return __riscv_vsoxseg2ei16_v_bf16m2x2_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl) { + return __riscv_vsoxseg2ei16_v_bf16m4x2_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsoxseg3ei16.c b/auto-generated/bfloat16/api-testing/vsoxseg3ei16.c new file mode 100644 index 000000000..8eef4e9a2 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsoxseg3ei16.c @@ -0,0 +1,44 @@ +#include +#include + +void test_vsoxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x3_t vs3, size_t vl) { + return __riscv_vsoxseg3ei16_v_bf16mf4x3(rs1, vs2, vs3, vl); +} + +void test_vsoxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x3_t vs3, size_t vl) { + return __riscv_vsoxseg3ei16_v_bf16mf2x3(rs1, vs2, vs3, vl); +} + +void test_vsoxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl) { + return __riscv_vsoxseg3ei16_v_bf16m1x3(rs1, vs2, vs3, vl); +} + +void test_vsoxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl) { + return __riscv_vsoxseg3ei16_v_bf16m2x3(rs1, vs2, vs3, vl); +} + +void test_vsoxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl) { + return __riscv_vsoxseg3ei16_v_bf16mf4x3_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, + size_t vl) { + return __riscv_vsoxseg3ei16_v_bf16mf2x3_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl) { + return __riscv_vsoxseg3ei16_v_bf16m1x3_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl) { + return __riscv_vsoxseg3ei16_v_bf16m2x3_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsoxseg4ei16.c b/auto-generated/bfloat16/api-testing/vsoxseg4ei16.c new file mode 100644 index 000000000..f06ecf271 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsoxseg4ei16.c @@ -0,0 +1,44 @@ +#include +#include + +void test_vsoxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl) { + return __riscv_vsoxseg4ei16_v_bf16mf4x4(rs1, vs2, vs3, vl); +} + +void test_vsoxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl) { + return __riscv_vsoxseg4ei16_v_bf16mf2x4(rs1, vs2, vs3, vl); +} + +void test_vsoxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl) { + return __riscv_vsoxseg4ei16_v_bf16m1x4(rs1, vs2, vs3, vl); +} + +void test_vsoxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl) { + return __riscv_vsoxseg4ei16_v_bf16m2x4(rs1, vs2, vs3, vl); +} + +void test_vsoxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl) { + return __riscv_vsoxseg4ei16_v_bf16mf4x4_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, + size_t vl) { + return 
__riscv_vsoxseg4ei16_v_bf16mf2x4_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl) { + return __riscv_vsoxseg4ei16_v_bf16m1x4_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl) { + return __riscv_vsoxseg4ei16_v_bf16m2x4_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsoxseg5ei16.c b/auto-generated/bfloat16/api-testing/vsoxseg5ei16.c new file mode 100644 index 000000000..6f1d6f6ec --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsoxseg5ei16.c @@ -0,0 +1,34 @@ +#include +#include + +void test_vsoxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl) { + return __riscv_vsoxseg5ei16_v_bf16mf4x5(rs1, vs2, vs3, vl); +} + +void test_vsoxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x5_t vs3, size_t vl) { + return __riscv_vsoxseg5ei16_v_bf16mf2x5(rs1, vs2, vs3, vl); +} + +void test_vsoxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl) { + return __riscv_vsoxseg5ei16_v_bf16m1x5(rs1, vs2, vs3, vl); +} + +void test_vsoxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, + size_t vl) { + return __riscv_vsoxseg5ei16_v_bf16mf4x5_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x5_t vs3, + size_t vl) { + return __riscv_vsoxseg5ei16_v_bf16mf2x5_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl) { + return __riscv_vsoxseg5ei16_v_bf16m1x5_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsoxseg6ei16.c b/auto-generated/bfloat16/api-testing/vsoxseg6ei16.c new file mode 100644 index 000000000..50fca1660 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsoxseg6ei16.c @@ -0,0 +1,34 @@ +#include +#include + +void test_vsoxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl) { + return __riscv_vsoxseg6ei16_v_bf16mf4x6(rs1, vs2, vs3, vl); +} + +void test_vsoxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl) { + return __riscv_vsoxseg6ei16_v_bf16mf2x6(rs1, vs2, vs3, vl); +} + +void test_vsoxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl) { + return __riscv_vsoxseg6ei16_v_bf16m1x6(rs1, vs2, vs3, vl); +} + +void test_vsoxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, + size_t vl) { + return __riscv_vsoxseg6ei16_v_bf16mf4x6_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, + size_t vl) { + return __riscv_vsoxseg6ei16_v_bf16mf2x6_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl) { + return __riscv_vsoxseg6ei16_v_bf16m1x6_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsoxseg7ei16.c b/auto-generated/bfloat16/api-testing/vsoxseg7ei16.c new file mode 100644 index 000000000..cff1eb034 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsoxseg7ei16.c @@ -0,0 +1,34 @@ +#include +#include + +void test_vsoxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl) { + return 
__riscv_vsoxseg7ei16_v_bf16mf4x7(rs1, vs2, vs3, vl); +} + +void test_vsoxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl) { + return __riscv_vsoxseg7ei16_v_bf16mf2x7(rs1, vs2, vs3, vl); +} + +void test_vsoxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl) { + return __riscv_vsoxseg7ei16_v_bf16m1x7(rs1, vs2, vs3, vl); +} + +void test_vsoxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, + size_t vl) { + return __riscv_vsoxseg7ei16_v_bf16mf4x7_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, + size_t vl) { + return __riscv_vsoxseg7ei16_v_bf16mf2x7_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl) { + return __riscv_vsoxseg7ei16_v_bf16m1x7_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsoxseg8ei16.c b/auto-generated/bfloat16/api-testing/vsoxseg8ei16.c new file mode 100644 index 000000000..3dd02854a --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsoxseg8ei16.c @@ -0,0 +1,34 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsoxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x8_t vs3, size_t vl) { + return __riscv_vsoxseg8ei16_v_bf16mf4x8(rs1, vs2, vs3, vl); +} + +void test_vsoxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x8_t vs3, size_t vl) { + return __riscv_vsoxseg8ei16_v_bf16mf2x8(rs1, vs2, vs3, vl); +} + +void test_vsoxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl) { + return __riscv_vsoxseg8ei16_v_bf16m1x8(rs1, vs2, vs3, vl); +} + +void test_vsoxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x8_t vs3, + size_t vl) { + return __riscv_vsoxseg8ei16_v_bf16mf4x8_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x8_t vs3, + size_t vl) { + return __riscv_vsoxseg8ei16_v_bf16mf2x8_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsoxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl) { + return __riscv_vsoxseg8ei16_v_bf16m1x8_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsse16.c b/auto-generated/bfloat16/api-testing/vsse16.c new file mode 100644 index 000000000..0ad6a14bf --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsse16.c @@ -0,0 +1,62 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsse16_v_bf16mf4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4_t vs3, + size_t vl) { + return __riscv_vsse16_v_bf16mf4(rs1, rs2, vs3, vl); +} + +void test_vsse16_v_bf16mf2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2_t vs3, + size_t vl) { + return __riscv_vsse16_v_bf16mf2(rs1, rs2, vs3, vl); +} + +void test_vsse16_v_bf16m1(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1_t vs3, + size_t vl) { + return __riscv_vsse16_v_bf16m1(rs1, rs2, vs3, vl); +} + +void test_vsse16_v_bf16m2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2_t vs3, + size_t vl) { + return __riscv_vsse16_v_bf16m2(rs1, rs2, vs3, vl); +} + +void test_vsse16_v_bf16m4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4_t vs3, + size_t vl) { + return __riscv_vsse16_v_bf16m4(rs1, rs2, vs3, vl); +} + +void test_vsse16_v_bf16m8(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m8_t vs3, + size_t vl) { + return __riscv_vsse16_v_bf16m8(rs1, rs2, vs3, vl); +} + +void test_vsse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, +
vbfloat16mf4_t vs3, size_t vl) { + return __riscv_vsse16_v_bf16mf4_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2_t vs3, size_t vl) { + return __riscv_vsse16_v_bf16mf2_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1_t vs3, size_t vl) { + return __riscv_vsse16_v_bf16m1_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2_t vs3, size_t vl) { + return __riscv_vsse16_v_bf16m2_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m4_t vs3, size_t vl) { + return __riscv_vsse16_v_bf16m4_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m8_t vs3, size_t vl) { + return __riscv_vsse16_v_bf16m8_m(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsseg2e16.c b/auto-generated/bfloat16/api-testing/vsseg2e16.c new file mode 100644 index 000000000..868ca48a1 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsseg2e16.c @@ -0,0 +1,47 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsseg2e16_v_bf16mf4x2(__bf16 *rs1, vbfloat16mf4x2_t vs3, size_t vl) { + return __riscv_vsseg2e16_v_bf16mf4x2(rs1, vs3, vl); +} + +void test_vsseg2e16_v_bf16mf2x2(__bf16 *rs1, vbfloat16mf2x2_t vs3, size_t vl) { + return __riscv_vsseg2e16_v_bf16mf2x2(rs1, vs3, vl); +} + +void test_vsseg2e16_v_bf16m1x2(__bf16 *rs1, vbfloat16m1x2_t vs3, size_t vl) { + return __riscv_vsseg2e16_v_bf16m1x2(rs1, vs3, vl); +} + +void test_vsseg2e16_v_bf16m2x2(__bf16 *rs1, vbfloat16m2x2_t vs3, size_t vl) { + return __riscv_vsseg2e16_v_bf16m2x2(rs1, vs3, vl); +} + +void test_vsseg2e16_v_bf16m4x2(__bf16 *rs1, vbfloat16m4x2_t vs3, size_t vl) { + return __riscv_vsseg2e16_v_bf16m4x2(rs1, vs3, vl); +} + +void test_vsseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x2_t vs3, size_t vl) { + return __riscv_vsseg2e16_v_bf16mf4x2_m(vm, rs1, vs3, vl); +} + +void test_vsseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x2_t vs3, size_t vl) { + return __riscv_vsseg2e16_v_bf16mf2x2_m(vm, rs1, vs3, vl); +} + +void test_vsseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x2_t vs3, + size_t vl) { + return __riscv_vsseg2e16_v_bf16m1x2_m(vm, rs1, vs3, vl); +} + +void test_vsseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x2_t vs3, + size_t vl) { + return __riscv_vsseg2e16_v_bf16m2x2_m(vm, rs1, vs3, vl); +} + +void test_vsseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vbfloat16m4x2_t vs3, + size_t vl) { + return __riscv_vsseg2e16_v_bf16m4x2_m(vm, rs1, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsseg3e16.c b/auto-generated/bfloat16/api-testing/vsseg3e16.c new file mode 100644 index 000000000..2859813e2 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsseg3e16.c @@ -0,0 +1,38 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsseg3e16_v_bf16mf4x3(__bf16 *rs1, vbfloat16mf4x3_t vs3, size_t vl) { + return __riscv_vsseg3e16_v_bf16mf4x3(rs1, vs3, vl); +} + +void test_vsseg3e16_v_bf16mf2x3(__bf16 *rs1, vbfloat16mf2x3_t vs3, size_t vl) { + return __riscv_vsseg3e16_v_bf16mf2x3(rs1, vs3, vl); +} + +void test_vsseg3e16_v_bf16m1x3(__bf16 *rs1, vbfloat16m1x3_t vs3, size_t vl) { + return __riscv_vsseg3e16_v_bf16m1x3(rs1, vs3, vl); +} + +void test_vsseg3e16_v_bf16m2x3(__bf16 *rs1, vbfloat16m2x3_t vs3, size_t vl) { + return __riscv_vsseg3e16_v_bf16m2x3(rs1, vs3, vl); +} + +void
test_vsseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x3_t vs3, size_t vl) { + return __riscv_vsseg3e16_v_bf16mf4x3_m(vm, rs1, vs3, vl); +} + +void test_vsseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x3_t vs3, size_t vl) { + return __riscv_vsseg3e16_v_bf16mf2x3_m(vm, rs1, vs3, vl); +} + +void test_vsseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x3_t vs3, + size_t vl) { + return __riscv_vsseg3e16_v_bf16m1x3_m(vm, rs1, vs3, vl); +} + +void test_vsseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x3_t vs3, + size_t vl) { + return __riscv_vsseg3e16_v_bf16m2x3_m(vm, rs1, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsseg4e16.c b/auto-generated/bfloat16/api-testing/vsseg4e16.c new file mode 100644 index 000000000..41132b932 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsseg4e16.c @@ -0,0 +1,38 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsseg4e16_v_bf16mf4x4(__bf16 *rs1, vbfloat16mf4x4_t vs3, size_t vl) { + return __riscv_vsseg4e16_v_bf16mf4x4(rs1, vs3, vl); +} + +void test_vsseg4e16_v_bf16mf2x4(__bf16 *rs1, vbfloat16mf2x4_t vs3, size_t vl) { + return __riscv_vsseg4e16_v_bf16mf2x4(rs1, vs3, vl); +} + +void test_vsseg4e16_v_bf16m1x4(__bf16 *rs1, vbfloat16m1x4_t vs3, size_t vl) { + return __riscv_vsseg4e16_v_bf16m1x4(rs1, vs3, vl); +} + +void test_vsseg4e16_v_bf16m2x4(__bf16 *rs1, vbfloat16m2x4_t vs3, size_t vl) { + return __riscv_vsseg4e16_v_bf16m2x4(rs1, vs3, vl); +} + +void test_vsseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x4_t vs3, size_t vl) { + return __riscv_vsseg4e16_v_bf16mf4x4_m(vm, rs1, vs3, vl); +} + +void test_vsseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x4_t vs3, size_t vl) { + return __riscv_vsseg4e16_v_bf16mf2x4_m(vm, rs1, vs3, vl); +} + +void test_vsseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x4_t vs3, + size_t vl) { + return __riscv_vsseg4e16_v_bf16m1x4_m(vm, rs1, vs3, vl); +} + +void test_vsseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x4_t vs3, + size_t vl) { + return __riscv_vsseg4e16_v_bf16m2x4_m(vm, rs1, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsseg5e16.c b/auto-generated/bfloat16/api-testing/vsseg5e16.c new file mode 100644 index 000000000..e09575ab0 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsseg5e16.c @@ -0,0 +1,29 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsseg5e16_v_bf16mf4x5(__bf16 *rs1, vbfloat16mf4x5_t vs3, size_t vl) { + return __riscv_vsseg5e16_v_bf16mf4x5(rs1, vs3, vl); +} + +void test_vsseg5e16_v_bf16mf2x5(__bf16 *rs1, vbfloat16mf2x5_t vs3, size_t vl) { + return __riscv_vsseg5e16_v_bf16mf2x5(rs1, vs3, vl); +} + +void test_vsseg5e16_v_bf16m1x5(__bf16 *rs1, vbfloat16m1x5_t vs3, size_t vl) { + return __riscv_vsseg5e16_v_bf16m1x5(rs1, vs3, vl); +} + +void test_vsseg5e16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x5_t vs3, size_t vl) { + return __riscv_vsseg5e16_v_bf16mf4x5_m(vm, rs1, vs3, vl); +} + +void test_vsseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x5_t vs3, size_t vl) { + return __riscv_vsseg5e16_v_bf16mf2x5_m(vm, rs1, vs3, vl); +} + +void test_vsseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x5_t vs3, + size_t vl) { + return __riscv_vsseg5e16_v_bf16m1x5_m(vm, rs1, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsseg6e16.c b/auto-generated/bfloat16/api-testing/vsseg6e16.c new file mode 100644 index 000000000..5da413ae0 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsseg6e16.c @@ -0,0 +1,29 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void
test_vsseg6e16_v_bf16mf4x6(__bf16 *rs1, vbfloat16mf4x6_t vs3, size_t vl) { + return __riscv_vsseg6e16_v_bf16mf4x6(rs1, vs3, vl); +} + +void test_vsseg6e16_v_bf16mf2x6(__bf16 *rs1, vbfloat16mf2x6_t vs3, size_t vl) { + return __riscv_vsseg6e16_v_bf16mf2x6(rs1, vs3, vl); +} + +void test_vsseg6e16_v_bf16m1x6(__bf16 *rs1, vbfloat16m1x6_t vs3, size_t vl) { + return __riscv_vsseg6e16_v_bf16m1x6(rs1, vs3, vl); +} + +void test_vsseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x6_t vs3, size_t vl) { + return __riscv_vsseg6e16_v_bf16mf4x6_m(vm, rs1, vs3, vl); +} + +void test_vsseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x6_t vs3, size_t vl) { + return __riscv_vsseg6e16_v_bf16mf2x6_m(vm, rs1, vs3, vl); +} + +void test_vsseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x6_t vs3, + size_t vl) { + return __riscv_vsseg6e16_v_bf16m1x6_m(vm, rs1, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsseg7e16.c b/auto-generated/bfloat16/api-testing/vsseg7e16.c new file mode 100644 index 000000000..c0674806e --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsseg7e16.c @@ -0,0 +1,29 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsseg7e16_v_bf16mf4x7(__bf16 *rs1, vbfloat16mf4x7_t vs3, size_t vl) { + return __riscv_vsseg7e16_v_bf16mf4x7(rs1, vs3, vl); +} + +void test_vsseg7e16_v_bf16mf2x7(__bf16 *rs1, vbfloat16mf2x7_t vs3, size_t vl) { + return __riscv_vsseg7e16_v_bf16mf2x7(rs1, vs3, vl); +} + +void test_vsseg7e16_v_bf16m1x7(__bf16 *rs1, vbfloat16m1x7_t vs3, size_t vl) { + return __riscv_vsseg7e16_v_bf16m1x7(rs1, vs3, vl); +} + +void test_vsseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x7_t vs3, size_t vl) { + return __riscv_vsseg7e16_v_bf16mf4x7_m(vm, rs1, vs3, vl); +} + +void test_vsseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x7_t vs3, size_t vl) { + return __riscv_vsseg7e16_v_bf16mf2x7_m(vm, rs1, vs3, vl); +} + +void test_vsseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x7_t vs3, + size_t vl) { + return __riscv_vsseg7e16_v_bf16m1x7_m(vm, rs1, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsseg8e16.c b/auto-generated/bfloat16/api-testing/vsseg8e16.c new file mode 100644 index 000000000..b508667c5 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsseg8e16.c @@ -0,0 +1,29 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsseg8e16_v_bf16mf4x8(__bf16 *rs1, vbfloat16mf4x8_t vs3, size_t vl) { + return __riscv_vsseg8e16_v_bf16mf4x8(rs1, vs3, vl); +} + +void test_vsseg8e16_v_bf16mf2x8(__bf16 *rs1, vbfloat16mf2x8_t vs3, size_t vl) { + return __riscv_vsseg8e16_v_bf16mf2x8(rs1, vs3, vl); +} + +void test_vsseg8e16_v_bf16m1x8(__bf16 *rs1, vbfloat16m1x8_t vs3, size_t vl) { + return __riscv_vsseg8e16_v_bf16m1x8(rs1, vs3, vl); +} + +void test_vsseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, + vbfloat16mf4x8_t vs3, size_t vl) { + return __riscv_vsseg8e16_v_bf16mf4x8_m(vm, rs1, vs3, vl); +} + +void test_vsseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, + vbfloat16mf2x8_t vs3, size_t vl) { + return __riscv_vsseg8e16_v_bf16mf2x8_m(vm, rs1, vs3, vl); +} + +void test_vsseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x8_t vs3, + size_t vl) { + return __riscv_vsseg8e16_v_bf16m1x8_m(vm, rs1, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vssseg2e16.c b/auto-generated/bfloat16/api-testing/vssseg2e16.c new file mode 100644 index 000000000..4befac177 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vssseg2e16.c @@ -0,0 +1,52 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vssseg2e16_v_bf16mf4x2(__bf16 *rs1,
ptrdiff_t rs2, + vbfloat16mf4x2_t vs3, size_t vl) { + return __riscv_vssseg2e16_v_bf16mf4x2(rs1, rs2, vs3, vl); +} + +void test_vssseg2e16_v_bf16mf2x2(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x2_t vs3, size_t vl) { + return __riscv_vssseg2e16_v_bf16mf2x2(rs1, rs2, vs3, vl); +} + +void test_vssseg2e16_v_bf16m1x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x2_t vs3, + size_t vl) { + return __riscv_vssseg2e16_v_bf16m1x2(rs1, rs2, vs3, vl); +} + +void test_vssseg2e16_v_bf16m2x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x2_t vs3, + size_t vl) { + return __riscv_vssseg2e16_v_bf16m2x2(rs1, rs2, vs3, vl); +} + +void test_vssseg2e16_v_bf16m4x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4x2_t vs3, + size_t vl) { + return __riscv_vssseg2e16_v_bf16m4x2(rs1, rs2, vs3, vl); +} + +void test_vssseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x2_t vs3, size_t vl) { + return __riscv_vssseg2e16_v_bf16mf4x2_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x2_t vs3, size_t vl) { + return __riscv_vssseg2e16_v_bf16mf2x2_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x2_t vs3, size_t vl) { + return __riscv_vssseg2e16_v_bf16m1x2_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x2_t vs3, size_t vl) { + return __riscv_vssseg2e16_v_bf16m2x2_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m4x2_t vs3, size_t vl) { + return __riscv_vssseg2e16_v_bf16m4x2_m(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vssseg3e16.c b/auto-generated/bfloat16/api-testing/vssseg3e16.c new file mode 100644 index 000000000..329ef56ea --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vssseg3e16.c @@ -0,0 +1,42 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vssseg3e16_v_bf16mf4x3(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x3_t vs3, size_t vl) { + return __riscv_vssseg3e16_v_bf16mf4x3(rs1, rs2, vs3, vl); +} + +void test_vssseg3e16_v_bf16mf2x3(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x3_t vs3, size_t vl) { + return __riscv_vssseg3e16_v_bf16mf2x3(rs1, rs2, vs3, vl); +} + +void test_vssseg3e16_v_bf16m1x3(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x3_t vs3, + size_t vl) { + return __riscv_vssseg3e16_v_bf16m1x3(rs1, rs2, vs3, vl); +} + +void test_vssseg3e16_v_bf16m2x3(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x3_t vs3, + size_t vl) { + return __riscv_vssseg3e16_v_bf16m2x3(rs1, rs2, vs3, vl); +} + +void test_vssseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x3_t vs3, size_t vl) { + return __riscv_vssseg3e16_v_bf16mf4x3_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x3_t vs3, size_t vl) { + return __riscv_vssseg3e16_v_bf16mf2x3_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x3_t vs3, size_t vl) { + return __riscv_vssseg3e16_v_bf16m1x3_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x3_t vs3, size_t vl) { + return __riscv_vssseg3e16_v_bf16m2x3_m(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vssseg4e16.c b/auto-generated/bfloat16/api-testing/vssseg4e16.c new file mode 100644 index 000000000..91646e642 --- /dev/null +++
b/auto-generated/bfloat16/api-testing/vssseg4e16.c @@ -0,0 +1,42 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vssseg4e16_v_bf16mf4x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x4_t vs3, size_t vl) { + return __riscv_vssseg4e16_v_bf16mf4x4(rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16mf2x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x4_t vs3, size_t vl) { + return __riscv_vssseg4e16_v_bf16mf2x4(rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16m1x4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x4_t vs3, + size_t vl) { + return __riscv_vssseg4e16_v_bf16m1x4(rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16m2x4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x4_t vs3, + size_t vl) { + return __riscv_vssseg4e16_v_bf16m2x4(rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x4_t vs3, size_t vl) { + return __riscv_vssseg4e16_v_bf16mf4x4_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x4_t vs3, size_t vl) { + return __riscv_vssseg4e16_v_bf16mf2x4_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x4_t vs3, size_t vl) { + return __riscv_vssseg4e16_v_bf16m1x4_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x4_t vs3, size_t vl) { + return __riscv_vssseg4e16_v_bf16m2x4_m(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vssseg5e16.c b/auto-generated/bfloat16/api-testing/vssseg5e16.c new file mode 100644 index 000000000..a1e4430d3 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vssseg5e16.c @@ -0,0 +1,32 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vssseg5e16_v_bf16mf4x5(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x5_t vs3, size_t vl) { + return __riscv_vssseg5e16_v_bf16mf4x5(rs1, rs2, vs3, vl); +} + +void test_vssseg5e16_v_bf16mf2x5(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x5_t vs3, size_t vl) { + return __riscv_vssseg5e16_v_bf16mf2x5(rs1, rs2, vs3, vl); +} + +void test_vssseg5e16_v_bf16m1x5(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x5_t vs3, + size_t vl) { + return __riscv_vssseg5e16_v_bf16m1x5(rs1, rs2, vs3, vl); +} + +void test_vssseg5e16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x5_t vs3, size_t vl) { + return __riscv_vssseg5e16_v_bf16mf4x5_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x5_t vs3, size_t vl) { + return __riscv_vssseg5e16_v_bf16mf2x5_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x5_t vs3, size_t vl) { + return __riscv_vssseg5e16_v_bf16m1x5_m(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vssseg6e16.c b/auto-generated/bfloat16/api-testing/vssseg6e16.c new file mode 100644 index 000000000..1f807f889 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vssseg6e16.c @@ -0,0 +1,32 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vssseg6e16_v_bf16mf4x6(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x6_t vs3, size_t vl) { + return __riscv_vssseg6e16_v_bf16mf4x6(rs1, rs2, vs3, vl); +} + +void test_vssseg6e16_v_bf16mf2x6(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x6_t vs3, size_t vl) { + return __riscv_vssseg6e16_v_bf16mf2x6(rs1, rs2, vs3, vl); +} + +void test_vssseg6e16_v_bf16m1x6(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x6_t vs3, + size_t vl) { + return __riscv_vssseg6e16_v_bf16m1x6(rs1, rs2, vs3, vl); +} + +void
test_vssseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x6_t vs3, size_t vl) { + return __riscv_vssseg6e16_v_bf16mf4x6_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x6_t vs3, size_t vl) { + return __riscv_vssseg6e16_v_bf16mf2x6_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x6_t vs3, size_t vl) { + return __riscv_vssseg6e16_v_bf16m1x6_m(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vssseg7e16.c b/auto-generated/bfloat16/api-testing/vssseg7e16.c new file mode 100644 index 000000000..0ac2db471 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vssseg7e16.c @@ -0,0 +1,32 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vssseg7e16_v_bf16mf4x7(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x7_t vs3, size_t vl) { + return __riscv_vssseg7e16_v_bf16mf4x7(rs1, rs2, vs3, vl); +} + +void test_vssseg7e16_v_bf16mf2x7(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x7_t vs3, size_t vl) { + return __riscv_vssseg7e16_v_bf16mf2x7(rs1, rs2, vs3, vl); +} + +void test_vssseg7e16_v_bf16m1x7(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x7_t vs3, + size_t vl) { + return __riscv_vssseg7e16_v_bf16m1x7(rs1, rs2, vs3, vl); +} + +void test_vssseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x7_t vs3, size_t vl) { + return __riscv_vssseg7e16_v_bf16mf4x7_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x7_t vs3, size_t vl) { + return __riscv_vssseg7e16_v_bf16mf2x7_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x7_t vs3, size_t vl) { + return __riscv_vssseg7e16_v_bf16m1x7_m(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vssseg8e16.c b/auto-generated/bfloat16/api-testing/vssseg8e16.c new file mode 100644 index 000000000..864344540 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vssseg8e16.c @@ -0,0 +1,32 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vssseg8e16_v_bf16mf4x8(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x8_t vs3, size_t vl) { + return __riscv_vssseg8e16_v_bf16mf4x8(rs1, rs2, vs3, vl); +} + +void test_vssseg8e16_v_bf16mf2x8(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x8_t vs3, size_t vl) { + return __riscv_vssseg8e16_v_bf16mf2x8(rs1, rs2, vs3, vl); +} + +void test_vssseg8e16_v_bf16m1x8(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x8_t vs3, + size_t vl) { + return __riscv_vssseg8e16_v_bf16m1x8(rs1, rs2, vs3, vl); +} + +void test_vssseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x8_t vs3, size_t vl) { + return __riscv_vssseg8e16_v_bf16mf4x8_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x8_t vs3, size_t vl) { + return __riscv_vssseg8e16_v_bf16mf2x8_m(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x8_t vs3, size_t vl) { + return __riscv_vssseg8e16_v_bf16m1x8_m(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsuxei16.c b/auto-generated/bfloat16/api-testing/vsuxei16.c new file mode 100644 index 000000000..440ee93fe --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsuxei16.c @@ -0,0 +1,62 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsuxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3, + size_t vl) { + return
__riscv_vsuxei16_v_bf16mf4(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3, + size_t vl) { + return __riscv_vsuxei16_v_bf16mf2(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3, + size_t vl) { + return __riscv_vsuxei16_v_bf16m1(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3, + size_t vl) { + return __riscv_vsuxei16_v_bf16m2(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3, + size_t vl) { + return __riscv_vsuxei16_v_bf16m4(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3, + size_t vl) { + return __riscv_vsuxei16_v_bf16m8(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl) { + return __riscv_vsuxei16_v_bf16mf4_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl) { + return __riscv_vsuxei16_v_bf16mf2_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2, + vbfloat16m1_t vs3, size_t vl) { + return __riscv_vsuxei16_v_bf16m1_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2, + vbfloat16m2_t vs3, size_t vl) { + return __riscv_vsuxei16_v_bf16m2_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2, + vbfloat16m4_t vs3, size_t vl) { + return __riscv_vsuxei16_v_bf16m4_m(vm, rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2, + vbfloat16m8_t vs3, size_t vl) { + return __riscv_vsuxei16_v_bf16m8_m(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsuxseg2ei16.c b/auto-generated/bfloat16/api-testing/vsuxseg2ei16.c new file mode 100644 index 000000000..03827f92a --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsuxseg2ei16.c @@ -0,0 +1,54 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsuxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16_v_bf16mf4x2(rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16_v_bf16mf2x2(rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16_v_bf16m1x2(rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16_v_bf16m2x2(rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16_v_bf16m4x2(rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x2_t vs3, + size_t vl) { + return __riscv_vsuxseg2ei16_v_bf16mf4x2_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x2_t vs3, + size_t vl) { + return __riscv_vsuxseg2ei16_v_bf16mf2x2_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16_v_bf16m1x2_m(vm, rs1,
vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16_v_bf16m2x2_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16_v_bf16m4x2_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsuxseg3ei16.c b/auto-generated/bfloat16/api-testing/vsuxseg3ei16.c new file mode 100644 index 000000000..4e3698506 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsuxseg3ei16.c @@ -0,0 +1,44 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsuxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16mf4x3(rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16mf2x3(rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16m1x3(rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16m2x3(rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16mf4x3_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, + size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16mf2x3_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16m1x3_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16m2x3_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsuxseg4ei16.c b/auto-generated/bfloat16/api-testing/vsuxseg4ei16.c new file mode 100644 index 000000000..fda4e5e7e --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsuxseg4ei16.c @@ -0,0 +1,44 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsuxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16mf4x4(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16mf2x4(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16m1x4(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16m2x4(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16mf4x4_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, + size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16mf2x4_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3,
size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16m1x4_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16m2x4_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsuxseg5ei16.c b/auto-generated/bfloat16/api-testing/vsuxseg5ei16.c new file mode 100644 index 000000000..07689a012 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsuxseg5ei16.c @@ -0,0 +1,34 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsuxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16mf4x5(rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16mf2x5(rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16m1x5(rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, + size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16mf4x5_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x5_t vs3, + size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16mf2x5_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16m1x5_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsuxseg6ei16.c b/auto-generated/bfloat16/api-testing/vsuxseg6ei16.c new file mode 100644 index 000000000..8df400e67 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsuxseg6ei16.c @@ -0,0 +1,34 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsuxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16mf4x6(rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16mf2x6(rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16m1x6(rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, + size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16mf4x6_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, + size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16mf2x6_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16m1x6_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsuxseg7ei16.c b/auto-generated/bfloat16/api-testing/vsuxseg7ei16.c new file mode 100644 index 000000000..b2408d17e --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsuxseg7ei16.c @@ -0,0 +1,34 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsuxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl) { + return __riscv_vsuxseg7ei16_v_bf16mf4x7(rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl) { + return
__riscv_vsuxseg7ei16_v_bf16mf2x7(rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl) { + return __riscv_vsuxseg7ei16_v_bf16m1x7(rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, + size_t vl) { + return __riscv_vsuxseg7ei16_v_bf16mf4x7_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, + size_t vl) { + return __riscv_vsuxseg7ei16_v_bf16mf2x7_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl) { + return __riscv_vsuxseg7ei16_v_bf16m1x7_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vsuxseg8ei16.c b/auto-generated/bfloat16/api-testing/vsuxseg8ei16.c new file mode 100644 index 000000000..195aa60b8 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vsuxseg8ei16.c @@ -0,0 +1,34 @@ +#include <stdint.h> +#include <riscv_vector.h> + +void test_vsuxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x8_t vs3, size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16mf4x8(rs1, vs2, vs3, vl); +} + +void test_vsuxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x8_t vs3, size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16mf2x8(rs1, vs2, vs3, vl); +} + +void test_vsuxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16m1x8(rs1, vs2, vs3, vl); +} + +void test_vsuxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x8_t vs3, + size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16mf4x8_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x8_t vs3, + size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16mf2x8_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16m1x8_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vundefined.c b/auto-generated/bfloat16/api-testing/vundefined.c new file mode 100644 index 000000000..13a91ae9d --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vundefined.c @@ -0,0 +1,118 @@ +#include <stdint.h> +#include <riscv_vector.h> + +vbfloat16mf4_t test_vundefined_bf16mf4() { + return __riscv_vundefined_bf16mf4(); +} + +vbfloat16mf2_t test_vundefined_bf16mf2() { + return __riscv_vundefined_bf16mf2(); +} + +vbfloat16m1_t test_vundefined_bf16m1() { return __riscv_vundefined_bf16m1(); } + +vbfloat16m2_t test_vundefined_bf16m2() { return __riscv_vundefined_bf16m2(); } + +vbfloat16m4_t test_vundefined_bf16m4() { return __riscv_vundefined_bf16m4(); } + +vbfloat16m8_t test_vundefined_bf16m8() { return __riscv_vundefined_bf16m8(); } + +vbfloat16mf4x2_t test_vundefined_bf16mf4x2() { + return __riscv_vundefined_bf16mf4x2(); +} + +vbfloat16mf4x3_t test_vundefined_bf16mf4x3() { + return __riscv_vundefined_bf16mf4x3(); +} + +vbfloat16mf4x4_t test_vundefined_bf16mf4x4() { + return __riscv_vundefined_bf16mf4x4(); +} + +vbfloat16mf4x5_t test_vundefined_bf16mf4x5() { + return __riscv_vundefined_bf16mf4x5(); +} + +vbfloat16mf4x6_t test_vundefined_bf16mf4x6() { + return __riscv_vundefined_bf16mf4x6(); +} + +vbfloat16mf4x7_t test_vundefined_bf16mf4x7() { + return __riscv_vundefined_bf16mf4x7(); +} + +vbfloat16mf4x8_t test_vundefined_bf16mf4x8() {
return __riscv_vundefined_bf16mf4x8(); +} + +vbfloat16mf2x2_t test_vundefined_bf16mf2x2() { + return __riscv_vundefined_bf16mf2x2(); +} + +vbfloat16mf2x3_t test_vundefined_bf16mf2x3() { + return __riscv_vundefined_bf16mf2x3(); +} + +vbfloat16mf2x4_t test_vundefined_bf16mf2x4() { + return __riscv_vundefined_bf16mf2x4(); +} + +vbfloat16mf2x5_t test_vundefined_bf16mf2x5() { + return __riscv_vundefined_bf16mf2x5(); +} + +vbfloat16mf2x6_t test_vundefined_bf16mf2x6() { + return __riscv_vundefined_bf16mf2x6(); +} + +vbfloat16mf2x7_t test_vundefined_bf16mf2x7() { + return __riscv_vundefined_bf16mf2x7(); +} + +vbfloat16mf2x8_t test_vundefined_bf16mf2x8() { + return __riscv_vundefined_bf16mf2x8(); +} + +vbfloat16m1x2_t test_vundefined_bf16m1x2() { + return __riscv_vundefined_bf16m1x2(); +} + +vbfloat16m1x3_t test_vundefined_bf16m1x3() { + return __riscv_vundefined_bf16m1x3(); +} + +vbfloat16m1x4_t test_vundefined_bf16m1x4() { + return __riscv_vundefined_bf16m1x4(); +} + +vbfloat16m1x5_t test_vundefined_bf16m1x5() { + return __riscv_vundefined_bf16m1x5(); +} + +vbfloat16m1x6_t test_vundefined_bf16m1x6() { + return __riscv_vundefined_bf16m1x6(); +} + +vbfloat16m1x7_t test_vundefined_bf16m1x7() { + return __riscv_vundefined_bf16m1x7(); +} + +vbfloat16m1x8_t test_vundefined_bf16m1x8() { + return __riscv_vundefined_bf16m1x8(); +} + +vbfloat16m2x2_t test_vundefined_bf16m2x2() { + return __riscv_vundefined_bf16m2x2(); +} + +vbfloat16m2x3_t test_vundefined_bf16m2x3() { + return __riscv_vundefined_bf16m2x3(); +} + +vbfloat16m2x4_t test_vundefined_bf16m2x4() { + return __riscv_vundefined_bf16m2x4(); +} + +vbfloat16m4x2_t test_vundefined_bf16m4x2() { + return __riscv_vundefined_bf16m4x2(); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vcreate.c b/auto-generated/bfloat16/llvm-api-tests/vcreate.c new file mode 100644 index 000000000..7e58be2af --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vcreate.c @@ -0,0 +1,181 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16m2_t test_vcreate_v_bf16m1_bf16m2(vbfloat16m1_t v0, vbfloat16m1_t v1) { + return __riscv_vcreate_v_bf16m1_bf16m2(v0, v1); +} + +vbfloat16m4_t test_vcreate_v_bf16m1_bf16m4(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3) { + return __riscv_vcreate_v_bf16m1_bf16m4(v0, v1, v2, v3); +} + +vbfloat16m8_t test_vcreate_v_bf16m1_bf16m8(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5, + vbfloat16m1_t v6, vbfloat16m1_t v7) { + return __riscv_vcreate_v_bf16m1_bf16m8(v0, v1, v2, v3, v4, v5, v6, v7); +} + +vbfloat16m4_t test_vcreate_v_bf16m2_bf16m4(vbfloat16m2_t v0, vbfloat16m2_t v1) { + return __riscv_vcreate_v_bf16m2_bf16m4(v0, v1); +} + +vbfloat16m8_t test_vcreate_v_bf16m2_bf16m8(vbfloat16m2_t v0, vbfloat16m2_t v1, + vbfloat16m2_t v2, vbfloat16m2_t v3) { + return __riscv_vcreate_v_bf16m2_bf16m8(v0, v1, v2, v3); +} + +vbfloat16m8_t test_vcreate_v_bf16m4_bf16m8(vbfloat16m4_t v0, vbfloat16m4_t v1) { + return __riscv_vcreate_v_bf16m4_bf16m8(v0, v1); +} + +vbfloat16mf4x2_t test_vcreate_v_bf16mf4x2(vbfloat16mf4_t v0, + vbfloat16mf4_t v1) { + return __riscv_vcreate_v_bf16mf4x2(v0, v1); +} + +vbfloat16mf4x3_t test_vcreate_v_bf16mf4x3(vbfloat16mf4_t v0, vbfloat16mf4_t v1, + vbfloat16mf4_t v2) { + return __riscv_vcreate_v_bf16mf4x3(v0, v1, v2); +} + +vbfloat16mf4x4_t
test_vcreate_v_bf16mf4x4(vbfloat16mf4_t v0, vbfloat16mf4_t v1, + vbfloat16mf4_t v2, + vbfloat16mf4_t v3) { + return __riscv_vcreate_v_bf16mf4x4(v0, v1, v2, v3); +} + +vbfloat16mf4x5_t test_vcreate_v_bf16mf4x5(vbfloat16mf4_t v0, vbfloat16mf4_t v1, + vbfloat16mf4_t v2, vbfloat16mf4_t v3, + vbfloat16mf4_t v4) { + return __riscv_vcreate_v_bf16mf4x5(v0, v1, v2, v3, v4); +} + +vbfloat16mf4x6_t test_vcreate_v_bf16mf4x6(vbfloat16mf4_t v0, vbfloat16mf4_t v1, + vbfloat16mf4_t v2, vbfloat16mf4_t v3, + vbfloat16mf4_t v4, + vbfloat16mf4_t v5) { + return __riscv_vcreate_v_bf16mf4x6(v0, v1, v2, v3, v4, v5); +} + +vbfloat16mf4x7_t test_vcreate_v_bf16mf4x7(vbfloat16mf4_t v0, vbfloat16mf4_t v1, + vbfloat16mf4_t v2, vbfloat16mf4_t v3, + vbfloat16mf4_t v4, vbfloat16mf4_t v5, + vbfloat16mf4_t v6) { + return __riscv_vcreate_v_bf16mf4x7(v0, v1, v2, v3, v4, v5, v6); +} + +vbfloat16mf4x8_t test_vcreate_v_bf16mf4x8(vbfloat16mf4_t v0, vbfloat16mf4_t v1, + vbfloat16mf4_t v2, vbfloat16mf4_t v3, + vbfloat16mf4_t v4, vbfloat16mf4_t v5, + vbfloat16mf4_t v6, + vbfloat16mf4_t v7) { + return __riscv_vcreate_v_bf16mf4x8(v0, v1, v2, v3, v4, v5, v6, v7); +} + +vbfloat16mf2x2_t test_vcreate_v_bf16mf2x2(vbfloat16mf2_t v0, + vbfloat16mf2_t v1) { + return __riscv_vcreate_v_bf16mf2x2(v0, v1); +} + +vbfloat16mf2x3_t test_vcreate_v_bf16mf2x3(vbfloat16mf2_t v0, vbfloat16mf2_t v1, + vbfloat16mf2_t v2) { + return __riscv_vcreate_v_bf16mf2x3(v0, v1, v2); +} + +vbfloat16mf2x4_t test_vcreate_v_bf16mf2x4(vbfloat16mf2_t v0, vbfloat16mf2_t v1, + vbfloat16mf2_t v2, + vbfloat16mf2_t v3) { + return __riscv_vcreate_v_bf16mf2x4(v0, v1, v2, v3); +} + +vbfloat16mf2x5_t test_vcreate_v_bf16mf2x5(vbfloat16mf2_t v0, vbfloat16mf2_t v1, + vbfloat16mf2_t v2, vbfloat16mf2_t v3, + vbfloat16mf2_t v4) { + return __riscv_vcreate_v_bf16mf2x5(v0, v1, v2, v3, v4); +} + +vbfloat16mf2x6_t test_vcreate_v_bf16mf2x6(vbfloat16mf2_t v0, vbfloat16mf2_t v1, + vbfloat16mf2_t v2, vbfloat16mf2_t v3, + vbfloat16mf2_t v4, + vbfloat16mf2_t v5) { + return __riscv_vcreate_v_bf16mf2x6(v0, v1, v2, v3, v4, v5); +} + +vbfloat16mf2x7_t test_vcreate_v_bf16mf2x7(vbfloat16mf2_t v0, vbfloat16mf2_t v1, + vbfloat16mf2_t v2, vbfloat16mf2_t v3, + vbfloat16mf2_t v4, vbfloat16mf2_t v5, + vbfloat16mf2_t v6) { + return __riscv_vcreate_v_bf16mf2x7(v0, v1, v2, v3, v4, v5, v6); +} + +vbfloat16mf2x8_t test_vcreate_v_bf16mf2x8(vbfloat16mf2_t v0, vbfloat16mf2_t v1, + vbfloat16mf2_t v2, vbfloat16mf2_t v3, + vbfloat16mf2_t v4, vbfloat16mf2_t v5, + vbfloat16mf2_t v6, + vbfloat16mf2_t v7) { + return __riscv_vcreate_v_bf16mf2x8(v0, v1, v2, v3, v4, v5, v6, v7); +} + +vbfloat16m1x2_t test_vcreate_v_bf16m1x2(vbfloat16m1_t v0, vbfloat16m1_t v1) { + return __riscv_vcreate_v_bf16m1x2(v0, v1); +} + +vbfloat16m1x3_t test_vcreate_v_bf16m1x3(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2) { + return __riscv_vcreate_v_bf16m1x3(v0, v1, v2); +} + +vbfloat16m1x4_t test_vcreate_v_bf16m1x4(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3) { + return __riscv_vcreate_v_bf16m1x4(v0, v1, v2, v3); +} + +vbfloat16m1x5_t test_vcreate_v_bf16m1x5(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4) { + return __riscv_vcreate_v_bf16m1x5(v0, v1, v2, v3, v4); +} + +vbfloat16m1x6_t test_vcreate_v_bf16m1x6(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5) { + return __riscv_vcreate_v_bf16m1x6(v0, v1, v2, v3, v4, v5); +} + +vbfloat16m1x7_t test_vcreate_v_bf16m1x7(vbfloat16m1_t v0, vbfloat16m1_t v1, + 
vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5, + vbfloat16m1_t v6) { + return __riscv_vcreate_v_bf16m1x7(v0, v1, v2, v3, v4, v5, v6); +} + +vbfloat16m1x8_t test_vcreate_v_bf16m1x8(vbfloat16m1_t v0, vbfloat16m1_t v1, + vbfloat16m1_t v2, vbfloat16m1_t v3, + vbfloat16m1_t v4, vbfloat16m1_t v5, + vbfloat16m1_t v6, vbfloat16m1_t v7) { + return __riscv_vcreate_v_bf16m1x8(v0, v1, v2, v3, v4, v5, v6, v7); +} + +vbfloat16m2x2_t test_vcreate_v_bf16m2x2(vbfloat16m2_t v0, vbfloat16m2_t v1) { + return __riscv_vcreate_v_bf16m2x2(v0, v1); +} + +vbfloat16m2x3_t test_vcreate_v_bf16m2x3(vbfloat16m2_t v0, vbfloat16m2_t v1, + vbfloat16m2_t v2) { + return __riscv_vcreate_v_bf16m2x3(v0, v1, v2); +} + +vbfloat16m2x4_t test_vcreate_v_bf16m2x4(vbfloat16m2_t v0, vbfloat16m2_t v1, + vbfloat16m2_t v2, vbfloat16m2_t v3) { + return __riscv_vcreate_v_bf16m2x4(v0, v1, v2, v3); +} + +vbfloat16m4x2_t test_vcreate_v_bf16m4x2(vbfloat16m4_t v0, vbfloat16m4_t v1) { + return __riscv_vcreate_v_bf16m4x2(v0, v1); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vget.c b/auto-generated/bfloat16/llvm-api-tests/vget.c new file mode 100644 index 000000000..e2ff800e2 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vget.c @@ -0,0 +1,145 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16m1_t test_vget_v_bf16m2_bf16m1(vbfloat16m2_t src, size_t index) { + return __riscv_vget_v_bf16m2_bf16m1(src, 0); +} + +vbfloat16m1_t test_vget_v_bf16m4_bf16m1(vbfloat16m4_t src, size_t index) { + return __riscv_vget_v_bf16m4_bf16m1(src, 0); +} + +vbfloat16m1_t test_vget_v_bf16m8_bf16m1(vbfloat16m8_t src, size_t index) { + return __riscv_vget_v_bf16m8_bf16m1(src, 0); +} + +vbfloat16m2_t test_vget_v_bf16m4_bf16m2(vbfloat16m4_t src, size_t index) { + return __riscv_vget_v_bf16m4_bf16m2(src, 0); +} + +vbfloat16m2_t test_vget_v_bf16m8_bf16m2(vbfloat16m8_t src, size_t index) { + return __riscv_vget_v_bf16m8_bf16m2(src, 0); +} + +vbfloat16m4_t test_vget_v_bf16m8_bf16m4(vbfloat16m8_t src, size_t index) { + return __riscv_vget_v_bf16m8_bf16m4(src, 0); +} + +vbfloat16mf4_t test_vget_v_bf16mf4x2_bf16mf4(vbfloat16mf4x2_t src, + size_t index) { + return __riscv_vget_v_bf16mf4x2_bf16mf4(src, 0); +} + +vbfloat16mf4_t test_vget_v_bf16mf4x3_bf16mf4(vbfloat16mf4x3_t src, + size_t index) { + return __riscv_vget_v_bf16mf4x3_bf16mf4(src, 0); +} + +vbfloat16mf4_t test_vget_v_bf16mf4x4_bf16mf4(vbfloat16mf4x4_t src, + size_t index) { + return __riscv_vget_v_bf16mf4x4_bf16mf4(src, 0); +} + +vbfloat16mf4_t test_vget_v_bf16mf4x5_bf16mf4(vbfloat16mf4x5_t src, + size_t index) { + return __riscv_vget_v_bf16mf4x5_bf16mf4(src, 0); +} + +vbfloat16mf4_t test_vget_v_bf16mf4x6_bf16mf4(vbfloat16mf4x6_t src, + size_t index) { + return __riscv_vget_v_bf16mf4x6_bf16mf4(src, 0); +} + +vbfloat16mf4_t test_vget_v_bf16mf4x7_bf16mf4(vbfloat16mf4x7_t src, + size_t index) { + return __riscv_vget_v_bf16mf4x7_bf16mf4(src, 0); +} + +vbfloat16mf4_t test_vget_v_bf16mf4x8_bf16mf4(vbfloat16mf4x8_t src, + size_t index) { + return __riscv_vget_v_bf16mf4x8_bf16mf4(src, 0); +} + +vbfloat16mf2_t test_vget_v_bf16mf2x2_bf16mf2(vbfloat16mf2x2_t src, + size_t index) { + return __riscv_vget_v_bf16mf2x2_bf16mf2(src, 0); +} + +vbfloat16mf2_t test_vget_v_bf16mf2x3_bf16mf2(vbfloat16mf2x3_t src, + size_t index) {
return __riscv_vget_v_bf16mf2x3_bf16mf2(src, 0); +} + +vbfloat16mf2_t test_vget_v_bf16mf2x4_bf16mf2(vbfloat16mf2x4_t src, + size_t index) { + return __riscv_vget_v_bf16mf2x4_bf16mf2(src, 0); +} + +vbfloat16mf2_t test_vget_v_bf16mf2x5_bf16mf2(vbfloat16mf2x5_t src, + size_t index) { + return __riscv_vget_v_bf16mf2x5_bf16mf2(src, 0); +} + +vbfloat16mf2_t test_vget_v_bf16mf2x6_bf16mf2(vbfloat16mf2x6_t src, + size_t index) { + return __riscv_vget_v_bf16mf2x6_bf16mf2(src, 0); +} + +vbfloat16mf2_t test_vget_v_bf16mf2x7_bf16mf2(vbfloat16mf2x7_t src, + size_t index) { + return __riscv_vget_v_bf16mf2x7_bf16mf2(src, 0); +} + +vbfloat16mf2_t test_vget_v_bf16mf2x8_bf16mf2(vbfloat16mf2x8_t src, + size_t index) { + return __riscv_vget_v_bf16mf2x8_bf16mf2(src, 0); +} + +vbfloat16m1_t test_vget_v_bf16m1x2_bf16m1(vbfloat16m1x2_t src, size_t index) { + return __riscv_vget_v_bf16m1x2_bf16m1(src, 0); +} + +vbfloat16m1_t test_vget_v_bf16m1x3_bf16m1(vbfloat16m1x3_t src, size_t index) { + return __riscv_vget_v_bf16m1x3_bf16m1(src, 0); +} + +vbfloat16m1_t test_vget_v_bf16m1x4_bf16m1(vbfloat16m1x4_t src, size_t index) { + return __riscv_vget_v_bf16m1x4_bf16m1(src, 0); +} + +vbfloat16m1_t test_vget_v_bf16m1x5_bf16m1(vbfloat16m1x5_t src, size_t index) { + return __riscv_vget_v_bf16m1x5_bf16m1(src, 0); +} + +vbfloat16m1_t test_vget_v_bf16m1x6_bf16m1(vbfloat16m1x6_t src, size_t index) { + return __riscv_vget_v_bf16m1x6_bf16m1(src, 0); +} + +vbfloat16m1_t test_vget_v_bf16m1x7_bf16m1(vbfloat16m1x7_t src, size_t index) { + return __riscv_vget_v_bf16m1x7_bf16m1(src, 0); +} + +vbfloat16m1_t test_vget_v_bf16m1x8_bf16m1(vbfloat16m1x8_t src, size_t index) { + return __riscv_vget_v_bf16m1x8_bf16m1(src, 0); +} + +vbfloat16m2_t test_vget_v_bf16m2x2_bf16m2(vbfloat16m2x2_t src, size_t index) { + return __riscv_vget_v_bf16m2x2_bf16m2(src, 0); +} + +vbfloat16m2_t test_vget_v_bf16m2x3_bf16m2(vbfloat16m2x3_t src, size_t index) { + return __riscv_vget_v_bf16m2x3_bf16m2(src, 0); +} + +vbfloat16m2_t test_vget_v_bf16m2x4_bf16m2(vbfloat16m2x4_t src, size_t index) { + return __riscv_vget_v_bf16m2x4_bf16m2(src, 0); +} + +vbfloat16m4_t test_vget_v_bf16m4x2_bf16m4(vbfloat16m4x2_t src, size_t index) { + return __riscv_vget_v_bf16m4x2_bf16m4(src, 0); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vle16.c b/auto-generated/bfloat16/llvm-api-tests/vle16.c new file mode 100644 index 000000000..706e5a3d2 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vle16.c @@ -0,0 +1,58 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4_t test_vle16_v_bf16mf4(const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16mf4(rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2(const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16mf2(rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1(const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m1(rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2(const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m2(rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4(const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m4(rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8(const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m8(rs1, vl); +} + +vbfloat16mf4_t test_vle16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) {
+ return __riscv_vle16_v_bf16mf4_m(vm, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_v_bf16mf2_m(vm, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_v_bf16m1_m(vm, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m2_m(vm, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m4_m(vm, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m8_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vle16ff.c b/auto-generated/bfloat16/llvm-api-tests/vle16ff.c new file mode 100644 index 000000000..d11f38c52 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vle16ff.c @@ -0,0 +1,67 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4_t test_vle16ff_v_bf16mf4(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_v_bf16mf4(rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_v_bf16mf2(rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_v_bf16m1(rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_v_bf16m2(rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_v_bf16m4(rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_v_bf16m8(rs1, new_vl, vl); +} + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16mf4_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16mf2_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m1_m(vm, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m2_m(vm, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m4_m(vm, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m8_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c b/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c new file mode 100644 index 000000000..11c86330d --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c @@ -0,0 +1,66 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - |
opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf2_t test_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_b16mf4_b16mf2(value); +} + +vbfloat16m1_t test_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_b16mf4_b16m1(value); +} + +vbfloat16m2_t test_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_b16mf4_b16m2(value); +} + +vbfloat16m4_t test_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_b16mf4_b16m4(value); +} + +vbfloat16m8_t test_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_b16mf4_b16m8(value); +} + +vbfloat16m1_t test_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_b16mf2_b16m1(value); +} + +vbfloat16m2_t test_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_b16mf2_b16m2(value); +} + +vbfloat16m4_t test_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_b16mf2_b16m4(value); +} + +vbfloat16m8_t test_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_b16mf2_b16m8(value); +} + +vbfloat16m2_t test_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value) { + return __riscv_vlmul_ext_v_b16m1_b16m2(value); +} + +vbfloat16m4_t test_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value) { + return __riscv_vlmul_ext_v_b16m1_b16m4(value); +} + +vbfloat16m8_t test_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value) { + return __riscv_vlmul_ext_v_b16m1_b16m8(value); +} + +vbfloat16m4_t test_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value) { + return __riscv_vlmul_ext_v_b16m2_b16m4(value); +} + +vbfloat16m8_t test_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value) { + return __riscv_vlmul_ext_v_b16m2_b16m8(value); +} + +vbfloat16m8_t test_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value) { + return __riscv_vlmul_ext_v_b16m4_b16m8(value); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c b/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c new file mode 100644 index 000000000..dcb7ffdad --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c @@ -0,0 +1,66 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4_t test_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value) { + return __riscv_vlmul_trunc_v_b16mf2_b16mf4(value); +} + +vbfloat16mf4_t test_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value) { + return __riscv_vlmul_trunc_v_b16m1_b16mf4(value); +} + +vbfloat16mf2_t test_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value) { + return __riscv_vlmul_trunc_v_b16m1_b16mf2(value); +} + +vbfloat16mf4_t test_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_v_b16m2_b16mf4(value); +} + +vbfloat16mf2_t test_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_v_b16m2_b16mf2(value); +} + +vbfloat16m1_t test_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_v_b16m2_b16m1(value); +} + +vbfloat16mf4_t test_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_v_b16m4_b16mf4(value); +} + +vbfloat16mf2_t test_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_v_b16m4_b16mf2(value); +} + +vbfloat16m1_t test_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value) { + return
__riscv_vlmul_trunc_v_b16m4_b16m1(value); +} + +vbfloat16m2_t test_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_v_b16m4_b16m2(value); +} + +vbfloat16mf4_t test_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_b16m8_b16mf4(value); +} + +vbfloat16mf2_t test_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_b16m8_b16mf2(value); +} + +vbfloat16m1_t test_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_b16m8_b16m1(value); +} + +vbfloat16m2_t test_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_b16m8_b16m2(value); +} + +vbfloat16m4_t test_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_b16m8_b16m4(value); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxei16.c new file mode 100644 index 000000000..b6f66f876 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vloxei16.c @@ -0,0 +1,66 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4_t test_vloxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxei16_v_bf16mf4(rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxei16_v_bf16mf2(rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxei16_v_bf16m1(rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vloxei16_v_bf16m2(rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vloxei16_v_bf16m4(rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vloxei16_v_bf16m8(rs1, rs2, vl); +} + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16mf4_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16mf2_m(vm, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m1_m(vm, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m2_m(vm, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m4_m(vm, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m8_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c new file mode 100644 index 000000000..0f665b784 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c @@ -0,0 +1,58 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm
%s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf4x2(rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf2x2(rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m1x2(rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m2x2(rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m4x2(rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf4x2_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf2x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m1x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m2x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m4x2_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c new file mode 100644 index 000000000..e7230dbfb --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c @@ -0,0 +1,48 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf4x3(rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf2x3(rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m1x3(rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m2x3(rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf4x3_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf2x3_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m1x3_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_m(vbool8_t vm,
const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m2x3_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c new file mode 100644 index 000000000..c6cd684be --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c @@ -0,0 +1,48 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf4x4(rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf2x4(rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m1x4(rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m2x4(rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf4x4_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf2x4_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m1x4_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m2x4_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c new file mode 100644 index 000000000..d182402b2 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf4x5(rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf2x5(rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxseg5ei16_v_bf16m1x5(rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf4x5_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf2x5_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16m1x5_m(vm, rs1, rs2, vl); +} diff --git
a/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c new file mode 100644 index 000000000..331b62970 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf4x6(rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf2x6(rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxseg6ei16_v_bf16m1x6(rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf4x6_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf2x6_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16m1x6_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c new file mode 100644 index 000000000..82512cb83 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf4x7(rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf2x7(rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxseg7ei16_v_bf16m1x7(rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf4x7_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf2x7_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16m1x7_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c new file mode 100644 index 000000000..c58f38d51 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> +
+vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf4x8(rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf2x8(rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxseg8ei16_v_bf16m1x8(rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf4x8_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf2x8_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16m1x8_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlse16.c b/auto-generated/bfloat16/llvm-api-tests/vlse16.c new file mode 100644 index 000000000..6022f983b --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlse16.c @@ -0,0 +1,66 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4_t test_vlse16_v_bf16mf4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16mf4(rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16mf2(rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16m1(rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16m2(rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16m4(rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16m8(rs1, rs2, vl); +} + +vbfloat16mf4_t test_vlse16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16mf4_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16mf2_m(vm, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m1_m(vm, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m2_m(vm, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m4_m(vm, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m8_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c new file mode 100644 index 000000000..04cbe00c1 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c @@ -0,0 +1,51
@@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf4x2(rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf2x2(rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m1x2(rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m2x2(rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m4x2(rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg2e16_v_bf16mf4x2_m(vm, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg2e16_v_bf16mf2x2_m(vm, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg2e16_v_bf16m1x2_m(vm, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg2e16_v_bf16m2x2_m(vm, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg2e16_v_bf16m4x2_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c new file mode 100644 index 000000000..a4d658aaa --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c @@ -0,0 +1,57 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf4x2(rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf2x2(rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m1x2(rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m2x2(rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m4x2(rs1, new_vl, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf4x2_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf2x2_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m1x2_m(vm, rs1, new_vl, vl); +} +
+vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m2x2_m(vm, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m4x2_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c new file mode 100644 index 000000000..6d369cd3b --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c @@ -0,0 +1,42 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16mf4x3(rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16mf2x3(rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m1x3(rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m2x3(rs1, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg3e16_v_bf16mf4x3_m(vm, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg3e16_v_bf16mf2x3_m(vm, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg3e16_v_bf16m1x3_m(vm, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg3e16_v_bf16m2x3_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c new file mode 100644 index 000000000..255f184fc --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c @@ -0,0 +1,47 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf4x3(rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf2x3(rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m1x3(rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m2x3(rs1, new_vl, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf4x3_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf2x3_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x3_t
test_vlseg3e16ff_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m1x3_m(vm, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m2x3_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c new file mode 100644 index 000000000..438025115 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c @@ -0,0 +1,42 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf4x4(rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf2x4(rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m1x4(rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m2x4(rs1, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg4e16_v_bf16mf4x4_m(vm, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg4e16_v_bf16mf2x4_m(vm, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg4e16_v_bf16m1x4_m(vm, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg4e16_v_bf16m2x4_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c new file mode 100644 index 000000000..cb31af531 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c @@ -0,0 +1,47 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf4x4(rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf2x4(rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m1x4(rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m2x4(rs1, new_vl, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf4x4_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf2x4_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x4_t
test_vlseg4e16ff_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m1x4_m(vm, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m2x4_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c new file mode 100644 index 000000000..df0aa5c75 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c @@ -0,0 +1,33 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf4x5(rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf2x5(rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16m1x5(rs1, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg5e16_v_bf16mf4x5_m(vm, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg5e16_v_bf16mf2x5_m(vm, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg5e16_v_bf16m1x5_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c new file mode 100644 index 000000000..d8266918a --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c @@ -0,0 +1,37 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf4x5(rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf2x5(rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg5e16ff_v_bf16m1x5(rs1, new_vl, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf4x5_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf2x5_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16m1x5_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c new file mode 100644 index 000000000..a491aed92 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c @@ -0,0 +1,33 @@ +// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf4x6(rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf2x6(rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16m1x6(rs1, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg6e16_v_bf16mf4x6_m(vm, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg6e16_v_bf16mf2x6_m(vm, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg6e16_v_bf16m1x6_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c new file mode 100644 index 000000000..23045a077 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c @@ -0,0 +1,37 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf4x6(rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf2x6(rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg6e16ff_v_bf16m1x6(rs1, new_vl, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf4x6_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf2x6_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16m1x6_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c new file mode 100644 index 000000000..db9b1d308 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c @@ -0,0 +1,33 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16mf4x7(rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16mf2x7(rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16m1x7(rs1, vl); +} + +vbfloat16mf4x7_t
test_vlseg7e16_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg7e16_v_bf16mf4x7_m(vm, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg7e16_v_bf16mf2x7_m(vm, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg7e16_v_bf16m1x7_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c new file mode 100644 index 000000000..55c892349 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c @@ -0,0 +1,37 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf4x7(rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf2x7(rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg7e16ff_v_bf16m1x7(rs1, new_vl, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf4x7_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf2x7_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16m1x7_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c new file mode 100644 index 000000000..573492dd1 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c @@ -0,0 +1,33 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf4x8(rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf2x8(rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8(const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16m1x8(rs1, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16_v_bf16mf4x8_m(vm, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16_v_bf16mf2x8_m(vm, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16_v_bf16m1x8_m(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c new file mode 100644 index 000000000..ff2c20890 --- /dev/null +++
b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c @@ -0,0 +1,37 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf4x8(rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf2x8(rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8(const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vlseg8e16ff_v_bf16m1x8(rs1, new_vl, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf4x8_m(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf2x8_m(vm, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16m1x8_m(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c new file mode 100644 index 000000000..638e86ea2 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c @@ -0,0 +1,56 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf4x2(rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf2x2(rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_v_bf16m1x2(rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_v_bf16m2x2(rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_v_bf16m4x2(rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf4x2_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf2x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m1x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m2x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m4x2_m(vm, rs1, rs2, vl); +} diff --git
a/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c new file mode 100644 index 000000000..6a4d657ba --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c @@ -0,0 +1,46 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_v_bf16mf4x3(rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_v_bf16mf2x3(rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_v_bf16m1x3(rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_v_bf16m2x3(rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_v_bf16mf4x3_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_v_bf16mf2x3_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_v_bf16m1x3_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_v_bf16m2x3_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c new file mode 100644 index 000000000..482158e65 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c @@ -0,0 +1,46 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf4x4(rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf2x4(rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_v_bf16m1x4(rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_v_bf16m2x4(rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf4x4_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf2x4_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16m1x4_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return
__riscv_vlsseg4e16_v_bf16m2x4_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c new file mode 100644 index 000000000..39ef6a491 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c @@ -0,0 +1,36 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf4x5(rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf2x5(rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg5e16_v_bf16m1x5(rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf4x5_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf2x5_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16m1x5_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c new file mode 100644 index 000000000..df164cc46 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c @@ -0,0 +1,36 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg6e16_v_bf16mf4x6(rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg6e16_v_bf16mf2x6(rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg6e16_v_bf16m1x6(rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_v_bf16mf4x6_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_v_bf16mf2x6_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_v_bf16m1x6_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c new file mode 100644 index 000000000..cbb3b4ba2 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c @@ -0,0 +1,36 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7(const
__bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf4x7(rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf2x7(rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_v_bf16m1x7(rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf4x7_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf2x7_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16m1x7_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c new file mode 100644 index 000000000..47d2c6b78 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c @@ -0,0 +1,36 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf4x8(rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf2x8(rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8(const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_v_bf16m1x8(rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf4x8_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf2x8_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16m1x8_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxei16.c new file mode 100644 index 000000000..ae522bdb5 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vluxei16.c @@ -0,0 +1,66 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4_t test_vluxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16mf4(rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16mf2(rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m1(rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m2(rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t
rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m4(rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m8(rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf4_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf2_m(vm, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m1_m(vm, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m2_m(vm, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m4_m(vm, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m8_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c new file mode 100644 index 000000000..f8aaf16c9 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c @@ -0,0 +1,58 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2(rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2(rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m1x2(rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2(rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2(const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2(rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m1x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2_m(vm, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c new file
mode 100644 index 000000000..879699ef3 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c @@ -0,0 +1,48 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3(rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3(rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3(rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3(rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c new file mode 100644 index 000000000..f0a39b2f6 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c @@ -0,0 +1,48 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4(rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4(rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4(rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4(const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m2x4(rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4_m(vm, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return
__riscv_vluxseg4ei16_v_bf16m2x4_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c new file mode 100644 index 000000000..f5204e631 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf4x5(rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5(rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5(rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf4x5_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c new file mode 100644 index 000000000..e45755a1f --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6(rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6(rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6(rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c new file mode 100644 index 000000000..c65fe2725 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: 
FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf4x7(rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf2x7(rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg7ei16_v_bf16m1x7(rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf4x7_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf2x7_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16m1x7_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c new file mode 100644 index 000000000..c2c40bd07 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8(const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8(rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8(const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8(rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8(const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8(rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8_m(vm, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8_m(vm, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8_m(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c b/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c new file mode 100644 index 000000000..fbd501fa3 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c @@ -0,0 +1,103 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vreinterpret_v_i16mf4_bf16mf4(vint16mf4_t src) { + return __riscv_vreinterpret_v_i16mf4_bf16mf4(src); +} + +vbfloat16mf2_t test_vreinterpret_v_i16mf2_bf16mf2(vint16mf2_t src) { + return __riscv_vreinterpret_v_i16mf2_bf16mf2(src); +} + +vbfloat16m1_t test_vreinterpret_v_i16m1_bf16m1(vint16m1_t src) { + return __riscv_vreinterpret_v_i16m1_bf16m1(src); +} + 
+vbfloat16m2_t test_vreinterpret_v_i16m2_bf16m2(vint16m2_t src) {
+  return __riscv_vreinterpret_v_i16m2_bf16m2(src);
+}
+
+vbfloat16m4_t test_vreinterpret_v_i16m4_bf16m4(vint16m4_t src) {
+  return __riscv_vreinterpret_v_i16m4_bf16m4(src);
+}
+
+vbfloat16m8_t test_vreinterpret_v_i16m8_bf16m8(vint16m8_t src) {
+  return __riscv_vreinterpret_v_i16m8_bf16m8(src);
+}
+
+vbfloat16mf4_t test_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src) {
+  return __riscv_vreinterpret_v_ui16mf4_bf16mf4(src);
+}
+
+vbfloat16mf2_t test_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src) {
+  return __riscv_vreinterpret_v_ui16mf2_bf16mf2(src);
+}
+
+vbfloat16m1_t test_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src) {
+  return __riscv_vreinterpret_v_ui16m1_bf16m1(src);
+}
+
+vbfloat16m2_t test_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src) {
+  return __riscv_vreinterpret_v_ui16m2_bf16m2(src);
+}
+
+vbfloat16m4_t test_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src) {
+  return __riscv_vreinterpret_v_ui16m4_bf16m4(src);
+}
+
+vbfloat16m8_t test_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src) {
+  return __riscv_vreinterpret_v_ui16m8_bf16m8(src);
+}
+
+vint16mf4_t test_vreinterpret_v_bf16mf4_i16mf4(vbfloat16mf4_t src) {
+  return __riscv_vreinterpret_v_bf16mf4_i16mf4(src);
+}
+
+vint16mf2_t test_vreinterpret_v_bf16mf2_i16mf2(vbfloat16mf2_t src) {
+  return __riscv_vreinterpret_v_bf16mf2_i16mf2(src);
+}
+
+vint16m1_t test_vreinterpret_v_bf16m1_i16m1(vbfloat16m1_t src) {
+  return __riscv_vreinterpret_v_bf16m1_i16m1(src);
+}
+
+vint16m2_t test_vreinterpret_v_bf16m2_i16m2(vbfloat16m2_t src) {
+  return __riscv_vreinterpret_v_bf16m2_i16m2(src);
+}
+
+vint16m4_t test_vreinterpret_v_bf16m4_i16m4(vbfloat16m4_t src) {
+  return __riscv_vreinterpret_v_bf16m4_i16m4(src);
+}
+
+vint16m8_t test_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src) {
+  return __riscv_vreinterpret_v_bf16m8_i16m8(src);
+}
+
+vuint16mf4_t test_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src) {
+  return __riscv_vreinterpret_v_bf16mf4_ui16mf4(src);
+}
+
+vuint16mf2_t test_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src) {
+  return __riscv_vreinterpret_v_bf16mf2_ui16mf2(src);
+}
+
+vuint16m1_t test_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src) {
+  return __riscv_vreinterpret_v_bf16m1_ui16m1(src);
+}
+
+vuint16m2_t test_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src) {
+  return __riscv_vreinterpret_v_bf16m2_ui16m2(src);
+}
+
+vuint16m4_t test_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src) {
+  return __riscv_vreinterpret_v_bf16m4_ui16m4(src);
+}
+
+vuint16m8_t test_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src) {
+  return __riscv_vreinterpret_v_bf16m8_ui16m8(src);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vse16.c b/auto-generated/bfloat16/llvm-api-tests/vse16.c
new file mode 100644
index 000000000..c08e753e3
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vse16.c
@@ -0,0 +1,60 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vse16_v_bf16mf4(__bf16 *rs1, vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vse16_v_bf16mf4(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16mf2(__bf16 *rs1, vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vse16_v_bf16mf2(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m1(__bf16 *rs1, vbfloat16m1_t vs3, size_t vl) {
+  return __riscv_vse16_v_bf16m1(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m2(__bf16 *rs1, vbfloat16m2_t vs3, size_t vl) {
+  return __riscv_vse16_v_bf16m2(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m4(__bf16 *rs1, vbfloat16m4_t vs3, size_t vl) {
+  return __riscv_vse16_v_bf16m4(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m8(__bf16 *rs1, vbfloat16m8_t vs3, size_t vl) {
+  return __riscv_vse16_v_bf16m8(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vbfloat16mf4_t vs3,
+                            size_t vl) {
+  return __riscv_vse16_v_bf16mf4_m(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vbfloat16mf2_t vs3,
+                            size_t vl) {
+  return __riscv_vse16_v_bf16mf2_m(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1_t vs3,
+                           size_t vl) {
+  return __riscv_vse16_v_bf16m1_m(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2_t vs3,
+                           size_t vl) {
+  return __riscv_vse16_v_bf16m2_m(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vbfloat16m4_t vs3,
+                           size_t vl) {
+  return __riscv_vse16_v_bf16m4_m(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vbfloat16m8_t vs3,
+                           size_t vl) {
+  return __riscv_vse16_v_bf16m8_m(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vset.c b/auto-generated/bfloat16/llvm-api-tests/vset.c
new file mode 100644
index 000000000..684944f27
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vset.c
@@ -0,0 +1,175 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16m2_t test_vset_v_bf16m1_bf16m2(vbfloat16m2_t dest, size_t index,
+                                        vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m2(dest, 0, value);
+}
+
+vbfloat16m4_t test_vset_v_bf16m1_bf16m4(vbfloat16m4_t dest, size_t index,
+                                        vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m4(dest, 0, value);
+}
+
+vbfloat16m4_t test_vset_v_bf16m2_bf16m4(vbfloat16m4_t dest, size_t index,
+                                        vbfloat16m2_t value) {
+  return __riscv_vset_v_bf16m2_bf16m4(dest, 0, value);
+}
+
+vbfloat16m8_t test_vset_v_bf16m1_bf16m8(vbfloat16m8_t dest, size_t index,
+                                        vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m8(dest, 0, value);
+}
+
+vbfloat16m8_t test_vset_v_bf16m2_bf16m8(vbfloat16m8_t dest, size_t index,
+                                        vbfloat16m2_t value) {
+  return __riscv_vset_v_bf16m2_bf16m8(dest, 0, value);
+}
+
+vbfloat16m8_t test_vset_v_bf16m4_bf16m8(vbfloat16m8_t dest, size_t index,
+                                        vbfloat16m4_t value) {
+  return __riscv_vset_v_bf16m4_bf16m8(dest, 0, value);
+}
+
+vbfloat16mf4x2_t test_vset_v_bf16mf4_bf16mf4x2(vbfloat16mf4x2_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset_v_bf16mf4_bf16mf4x2(dest, 0, value);
+}
+
+vbfloat16mf4x3_t test_vset_v_bf16mf4_bf16mf4x3(vbfloat16mf4x3_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset_v_bf16mf4_bf16mf4x3(dest, 0, value);
+}
+
+vbfloat16mf4x4_t test_vset_v_bf16mf4_bf16mf4x4(vbfloat16mf4x4_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset_v_bf16mf4_bf16mf4x4(dest, 0, value);
+}
+
+vbfloat16mf4x5_t test_vset_v_bf16mf4_bf16mf4x5(vbfloat16mf4x5_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset_v_bf16mf4_bf16mf4x5(dest, 0, value);
+}
+
+vbfloat16mf4x6_t test_vset_v_bf16mf4_bf16mf4x6(vbfloat16mf4x6_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset_v_bf16mf4_bf16mf4x6(dest, 0, value);
+}
+
+vbfloat16mf4x7_t test_vset_v_bf16mf4_bf16mf4x7(vbfloat16mf4x7_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset_v_bf16mf4_bf16mf4x7(dest, 0, value);
+}
+
+vbfloat16mf4x8_t test_vset_v_bf16mf4_bf16mf4x8(vbfloat16mf4x8_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset_v_bf16mf4_bf16mf4x8(dest, 0, value);
+}
+
+vbfloat16mf2x2_t test_vset_v_bf16mf2_bf16mf2x2(vbfloat16mf2x2_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x2(dest, 0, value);
+}
+
+vbfloat16mf2x3_t test_vset_v_bf16mf2_bf16mf2x3(vbfloat16mf2x3_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x3(dest, 0, value);
+}
+
+vbfloat16mf2x4_t test_vset_v_bf16mf2_bf16mf2x4(vbfloat16mf2x4_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x4(dest, 0, value);
+}
+
+vbfloat16mf2x5_t test_vset_v_bf16mf2_bf16mf2x5(vbfloat16mf2x5_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x5(dest, 0, value);
+}
+
+vbfloat16mf2x6_t test_vset_v_bf16mf2_bf16mf2x6(vbfloat16mf2x6_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x6(dest, 0, value);
+}
+
+vbfloat16mf2x7_t test_vset_v_bf16mf2_bf16mf2x7(vbfloat16mf2x7_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x7(dest, 0, value);
+}
+
+vbfloat16mf2x8_t test_vset_v_bf16mf2_bf16mf2x8(vbfloat16mf2x8_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset_v_bf16mf2_bf16mf2x8(dest, 0, value);
+}
+
+vbfloat16m1x2_t test_vset_v_bf16m1_bf16m1x2(vbfloat16m1x2_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x2(dest, 0, value);
+}
+
+vbfloat16m1x3_t test_vset_v_bf16m1_bf16m1x3(vbfloat16m1x3_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x3(dest, 0, value);
+}
+
+vbfloat16m1x4_t test_vset_v_bf16m1_bf16m1x4(vbfloat16m1x4_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x4(dest, 0, value);
+}
+
+vbfloat16m1x5_t test_vset_v_bf16m1_bf16m1x5(vbfloat16m1x5_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x5(dest, 0, value);
+}
+
+vbfloat16m1x6_t test_vset_v_bf16m1_bf16m1x6(vbfloat16m1x6_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x6(dest, 0, value);
+}
+
+vbfloat16m1x7_t test_vset_v_bf16m1_bf16m1x7(vbfloat16m1x7_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x7(dest, 0, value);
+}
+
+vbfloat16m1x8_t test_vset_v_bf16m1_bf16m1x8(vbfloat16m1x8_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset_v_bf16m1_bf16m1x8(dest, 0, value);
+}
+
+vbfloat16m2x2_t test_vset_v_bf16m2_bf16m2x2(vbfloat16m2x2_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset_v_bf16m2_bf16m2x2(dest, 0, value);
+}
+
+vbfloat16m2x3_t test_vset_v_bf16m2_bf16m2x3(vbfloat16m2x3_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset_v_bf16m2_bf16m2x3(dest, 0, value);
+}
+
+vbfloat16m2x4_t test_vset_v_bf16m2_bf16m2x4(vbfloat16m2x4_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset_v_bf16m2_bf16m2x4(dest, 0, value);
+}
+
+vbfloat16m4x2_t test_vset_v_bf16m4_bf16m4x2(vbfloat16m4x2_t dest, size_t index,
+                                            vbfloat16m4_t value) {
+  return __riscv_vset_v_bf16m4_bf16m4x2(dest, 0, value);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c
new file mode 100644
index 000000000..687d1ca3e
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c
@@ -0,0 +1,66 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3,
+                             size_t vl) {
+  return __riscv_vsoxei16_v_bf16mf4(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3,
+                             size_t vl) {
+  return __riscv_vsoxei16_v_bf16mf2(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16_v_bf16m1(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16_v_bf16m2(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16_v_bf16m4(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16_v_bf16m8(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2,
+                               vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vsoxei16_v_bf16mf4_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2,
+                               vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vsoxei16_v_bf16mf2_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2,
+                              vbfloat16m1_t vs3, size_t vl) {
+  return __riscv_vsoxei16_v_bf16m1_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2,
+                              vbfloat16m2_t vs3, size_t vl) {
+  return __riscv_vsoxei16_v_bf16m2_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2,
+                              vbfloat16m4_t vs3, size_t vl) {
+  return __riscv_vsoxei16_v_bf16m4_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2,
+                              vbfloat16m8_t vs3, size_t vl) {
+  return __riscv_vsoxei16_v_bf16m8_m(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c
new file mode 100644
index 000000000..bb7579a9e
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c
@@ -0,0 +1,58 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16_v_bf16mf4x2(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16_v_bf16mf2x2(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16_v_bf16m1x2(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16_v_bf16m2x2(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2,
+                                  vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16_v_bf16m4x2(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x2_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg2ei16_v_bf16mf4x2_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x2_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg2ei16_v_bf16mf2x2_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16_v_bf16m1x2_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16_v_bf16m2x2_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2,
+                                    vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16_v_bf16m4x2_m(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c
new file mode 100644
index 000000000..bc1ccabdc
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c
@@ -0,0 +1,48 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16_v_bf16mf4x3(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16_v_bf16mf2x3(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16_v_bf16m1x3(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16_v_bf16m2x3(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x3_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg3ei16_v_bf16mf4x3_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x3_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg3ei16_v_bf16mf2x3_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16_v_bf16m1x3_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16_v_bf16m2x3_m(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c
new file mode 100644
index 000000000..72343e757
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c
@@ -0,0 +1,48 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16_v_bf16mf4x4(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16_v_bf16mf2x4(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16_v_bf16m1x4(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16_v_bf16m2x4(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x4_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg4ei16_v_bf16mf4x4_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x4_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg4ei16_v_bf16mf2x4_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16_v_bf16m1x4_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16_v_bf16m2x4_m(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c
new file mode 100644
index 000000000..418bf6b76
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16_v_bf16mf4x5(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16_v_bf16mf2x5(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16_v_bf16m1x5(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x5_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg5ei16_v_bf16mf4x5_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x5_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg5ei16_v_bf16mf2x5_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16_v_bf16m1x5_m(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c
new file mode 100644
index 000000000..d8b35331f
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16_v_bf16mf4x6(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16_v_bf16mf2x6(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16_v_bf16m1x6(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x6_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg6ei16_v_bf16mf4x6_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x6_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg6ei16_v_bf16mf2x6_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16_v_bf16m1x6_m(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c
new file mode 100644
index 000000000..b4a0b7ad9
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16_v_bf16mf4x7(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16_v_bf16mf2x7(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16_v_bf16m1x7(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x7_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg7ei16_v_bf16mf4x7_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x7_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg7ei16_v_bf16mf2x7_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16_v_bf16m1x7_m(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c
new file mode 100644
index 000000000..2ae10d065
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16_v_bf16mf4x8(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16_v_bf16mf2x8(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16_v_bf16m1x8(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x8_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg8ei16_v_bf16mf4x8_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x8_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg8ei16_v_bf16mf2x8_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16_v_bf16m1x8_m(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsse16.c b/auto-generated/bfloat16/llvm-api-tests/vsse16.c
new file mode 100644
index 000000000..b14ca1790
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsse16.c
@@ -0,0 +1,66 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsse16_v_bf16mf4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4_t vs3,
+                           size_t vl) {
+  return __riscv_vsse16_v_bf16mf4(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16mf2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2_t vs3,
+                           size_t vl) {
+  return __riscv_vsse16_v_bf16mf2(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m1(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16_v_bf16m1(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16_v_bf16m2(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16_v_bf16m4(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m8(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m8_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16_v_bf16m8(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                             vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vsse16_v_bf16mf4_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                             vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vsse16_v_bf16mf2_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m1_t vs3, size_t vl) {
+  return __riscv_vsse16_v_bf16m1_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m2_t vs3, size_t vl) {
+  return __riscv_vsse16_v_bf16m2_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m4_t vs3, size_t vl) {
+  return __riscv_vsse16_v_bf16m4_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m8_t vs3, size_t vl) {
+  return __riscv_vsse16_v_bf16m8_m(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c
new file mode 100644
index 000000000..149887fd7
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c
@@ -0,0 +1,51 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg2e16_v_bf16mf4x2(__bf16 *rs1, vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16_v_bf16mf4x2(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16mf2x2(__bf16 *rs1, vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16_v_bf16mf2x2(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m1x2(__bf16 *rs1, vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16_v_bf16m1x2(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m2x2(__bf16 *rs1, vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16_v_bf16m2x2(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m4x2(__bf16 *rs1, vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16_v_bf16m4x2(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16_v_bf16mf4x2_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16_v_bf16mf2x2_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x2_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg2e16_v_bf16m1x2_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x2_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg2e16_v_bf16m2x2_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vbfloat16m4x2_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg2e16_v_bf16m4x2_m(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c
new file mode 100644
index 000000000..a9627a0d3
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c
@@ -0,0 +1,42 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg3e16_v_bf16mf4x3(__bf16 *rs1, vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16_v_bf16mf4x3(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16mf2x3(__bf16 *rs1, vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16_v_bf16mf2x3(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m1x3(__bf16 *rs1, vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16_v_bf16m1x3(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m2x3(__bf16 *rs1, vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16_v_bf16m2x3(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16_v_bf16mf4x3_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16_v_bf16mf2x3_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x3_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg3e16_v_bf16m1x3_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x3_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg3e16_v_bf16m2x3_m(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c
new file mode 100644
index 000000000..9f808d494
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c
@@ -0,0 +1,42 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg4e16_v_bf16mf4x4(__bf16 *rs1, vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16_v_bf16mf4x4(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16mf2x4(__bf16 *rs1, vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16_v_bf16mf2x4(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m1x4(__bf16 *rs1, vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16_v_bf16m1x4(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m2x4(__bf16 *rs1, vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16_v_bf16m2x4(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16_v_bf16mf4x4_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16_v_bf16mf2x4_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x4_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg4e16_v_bf16m1x4_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x4_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg4e16_v_bf16m2x4_m(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c
new file mode 100644
index 000000000..920af0849
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c
@@ -0,0 +1,33 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg5e16_v_bf16mf4x5(__bf16 *rs1, vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16_v_bf16mf4x5(rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16mf2x5(__bf16 *rs1, vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16_v_bf16mf2x5(rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16m1x5(__bf16 *rs1, vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16_v_bf16m1x5(rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16_v_bf16mf4x5_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16_v_bf16mf2x5_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x5_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg5e16_v_bf16m1x5_m(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c
new file mode 100644
index 000000000..6d1b46b04
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c
@@ -0,0 +1,33 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg6e16_v_bf16mf4x6(__bf16 *rs1, vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16_v_bf16mf4x6(rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16mf2x6(__bf16 *rs1, vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16_v_bf16mf2x6(rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16m1x6(__bf16 *rs1, vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16_v_bf16m1x6(rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16_v_bf16mf4x6_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16_v_bf16mf2x6_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x6_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg6e16_v_bf16m1x6_m(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c
new file mode 100644
index 000000000..6dbc90c56
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c
@@ -0,0 +1,33 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg7e16_v_bf16mf4x7(__bf16 *rs1, vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16_v_bf16mf4x7(rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16mf2x7(__bf16 *rs1, vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16_v_bf16mf2x7(rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16m1x7(__bf16 *rs1, vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16_v_bf16m1x7(rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16_v_bf16mf4x7_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16_v_bf16mf2x7_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x7_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg7e16_v_bf16m1x7_m(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c
new file mode 100644
index 000000000..0169db97f
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c
@@ -0,0 +1,33 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg8e16_v_bf16mf4x8(__bf16 *rs1, vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16_v_bf16mf4x8(rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16mf2x8(__bf16 *rs1, vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16_v_bf16mf2x8(rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16m1x8(__bf16 *rs1, vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16_v_bf16m1x8(rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16_v_bf16mf4x8_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16_v_bf16mf2x8_m(vm, rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x8_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg8e16_v_bf16m1x8_m(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c
new file mode 100644
index 000000000..1af94e9a1
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c
@@ -0,0 +1,56 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vssseg2e16_v_bf16mf4x2(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16_v_bf16mf4x2(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16mf2x2(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16_v_bf16mf2x2(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m1x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x2_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg2e16_v_bf16m1x2(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m2x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x2_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg2e16_v_bf16m2x2(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m4x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4x2_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg2e16_v_bf16m4x2(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16_v_bf16mf4x2_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16_v_bf16mf2x2_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16_v_bf16m1x2_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16_v_bf16m2x2_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16_v_bf16m4x2_m(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c
new file mode 100644
index 000000000..4a3efc6d7
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c
@@ -0,0 +1,46 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vssseg3e16_v_bf16mf4x3(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16_v_bf16mf4x3(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16mf2x3(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16_v_bf16mf2x3(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16m1x3(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x3_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg3e16_v_bf16m1x3(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16m2x3(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x3_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg3e16_v_bf16m2x3(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16_v_bf16mf4x3_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16_v_bf16mf2x3_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16_v_bf16m1x3_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16_v_bf16m2x3_m(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c
new file mode 100644
index 000000000..34b822db7
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c
@@ -0,0 +1,46 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vssseg4e16_v_bf16mf4x4(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16_v_bf16mf4x4(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16mf2x4(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16_v_bf16mf2x4(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16m1x4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x4_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg4e16_v_bf16m1x4(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16m2x4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x4_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg4e16_v_bf16m2x4(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16_v_bf16mf4x4_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16_v_bf16mf2x4_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16_v_bf16m1x4_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16_v_bf16m2x4_m(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c
new file mode 100644
index 000000000..a4b10760f
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c
@@ -0,0 +1,36 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vssseg5e16_v_bf16mf4x5(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vssseg5e16_v_bf16mf4x5(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg5e16_v_bf16mf2x5(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vssseg5e16_v_bf16mf2x5(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg5e16_v_bf16m1x5(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x5_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg5e16_v_bf16m1x5(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg5e16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vssseg5e16_v_bf16mf4x5_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vssseg5e16_v_bf16mf2x5_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vssseg5e16_v_bf16m1x5_m(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c
new file mode 100644
index 000000000..ccb7fd991
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c
@@ -0,0 +1,36 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vssseg6e16_v_bf16mf4x6(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vssseg6e16_v_bf16mf4x6(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg6e16_v_bf16mf2x6(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vssseg6e16_v_bf16mf2x6(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg6e16_v_bf16m1x6(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x6_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg6e16_v_bf16m1x6(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vssseg6e16_v_bf16mf4x6_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vssseg6e16_v_bf16mf2x6_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vssseg6e16_v_bf16m1x6_m(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c
new file mode 100644
index 000000000..e8ca20934
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c
@@ -0,0 +1,36 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vssseg7e16_v_bf16mf4x7(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vssseg7e16_v_bf16mf4x7(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg7e16_v_bf16mf2x7(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vssseg7e16_v_bf16mf2x7(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg7e16_v_bf16m1x7(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x7_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg7e16_v_bf16m1x7(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vssseg7e16_v_bf16mf4x7_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vssseg7e16_v_bf16mf2x7_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vssseg7e16_v_bf16m1x7_m(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c
new file mode 100644
index 000000000..f8b1755b8
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c
@@ -0,0 +1,36 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vssseg8e16_v_bf16mf4x8(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vssseg8e16_v_bf16mf4x8(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg8e16_v_bf16mf2x8(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vssseg8e16_v_bf16mf2x8(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg8e16_v_bf16m1x8(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x8_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg8e16_v_bf16m1x8(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vssseg8e16_v_bf16mf4x8_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vssseg8e16_v_bf16mf2x8_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vssseg8e16_v_bf16m1x8_m(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c
new file mode 100644
index 000000000..eaed69275
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c
@@ -0,0 +1,66 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsuxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3,
+                             size_t vl) {
+  return __riscv_vsuxei16_v_bf16mf4(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3,
+                             size_t vl) {
+  return __riscv_vsuxei16_v_bf16mf2(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3,
+                            size_t vl) {
+  return __riscv_vsuxei16_v_bf16m1(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3,
+                            size_t vl) {
+  return __riscv_vsuxei16_v_bf16m2(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3,
+                            size_t vl) {
+  return __riscv_vsuxei16_v_bf16m4(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3,
+                            size_t vl) {
+  return __riscv_vsuxei16_v_bf16m8(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2,
+                               vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vsuxei16_v_bf16mf4_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2,
+                               vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vsuxei16_v_bf16mf2_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2,
+                              vbfloat16m1_t vs3, size_t vl) {
+  return __riscv_vsuxei16_v_bf16m1_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2,
+                              vbfloat16m2_t vs3, size_t vl) {
+  return __riscv_vsuxei16_v_bf16m2_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2,
+                              vbfloat16m4_t vs3, size_t vl) {
+  return __riscv_vsuxei16_v_bf16m4_m(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2,
+                              vbfloat16m8_t vs3, size_t vl) {
+  return __riscv_vsuxei16_v_bf16m8_m(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c
new file mode 100644
index 000000000..7f251c5a1
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c
@@ -0,0 +1,58 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsuxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16_v_bf16mf4x2(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16_v_bf16mf2x2(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16_v_bf16m1x2(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16_v_bf16m2x2(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2,
+                                  vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16_v_bf16m4x2(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x2_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg2ei16_v_bf16mf4x2_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x2_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg2ei16_v_bf16mf2x2_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16_v_bf16m1x2_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16_v_bf16m2x2_m(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2,
+                                    vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16_v_bf16m4x2_m(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c
new file mode 100644
index 000000000..e18b7ae84
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c
@@ -0,0 +1,48 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsuxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vsuxseg3ei16_v_bf16mf4x3(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vsuxseg3ei16_v_bf16mf2x3(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsuxseg3ei16_v_bf16m1x3(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t
vs2, + vbfloat16m2x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16m2x3(rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16mf4x3_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, + size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16mf2x3_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16m1x3_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16_v_bf16m2x3_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c new file mode 100644 index 000000000..19381a4ab --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c @@ -0,0 +1,48 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16mf4x4(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16mf2x4(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16m1x4(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16m2x4(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16mf4x4_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, + size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16mf2x4_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16m1x4_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16_v_bf16m2x4_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c new file mode 100644 index 000000000..47c57f8cf --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16mf4x5(rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2, + 
vbfloat16mf2x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16mf2x5(rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16m1x5(rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, + size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16mf4x5_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x5_t vs3, + size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16mf2x5_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16_v_bf16m1x5_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c new file mode 100644 index 000000000..4b627226e --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16mf4x6(rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16mf2x6(rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16m1x6(rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, + size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16mf4x6_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, + size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16mf2x6_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16_v_bf16m1x6_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c new file mode 100644 index 000000000..e4c378f52 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl) { + return __riscv_vsuxseg7ei16_v_bf16mf4x7(rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl) { + return __riscv_vsuxseg7ei16_v_bf16mf2x7(rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl) { + return __riscv_vsuxseg7ei16_v_bf16m1x7(rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, + size_t vl) { 
+ return __riscv_vsuxseg7ei16_v_bf16mf4x7_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, + size_t vl) { + return __riscv_vsuxseg7ei16_v_bf16mf2x7_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl) { + return __riscv_vsuxseg7ei16_v_bf16m1x7_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c new file mode 100644 index 000000000..bff55d0bc --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x8_t vs3, size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16mf4x8(rs1, vs2, vs3, vl); +} + +void test_vsuxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x8_t vs3, size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16mf2x8(rs1, vs2, vs3, vl); +} + +void test_vsuxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16m1x8(rs1, vs2, vs3, vl); +} + +void test_vsuxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x8_t vs3, + size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16mf4x8_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x8_t vs3, + size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16mf2x8_m(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x8_t vs3, size_t vl) { + return __riscv_vsuxseg8ei16_v_bf16m1x8_m(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vundefined.c b/auto-generated/bfloat16/llvm-api-tests/vundefined.c new file mode 100644 index 000000000..317226a6f --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vundefined.c @@ -0,0 +1,123 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vundefined_bf16mf4() { + return __riscv_vundefined_bf16mf4(); +} + +vbfloat16mf2_t test_vundefined_bf16mf2() { + return __riscv_vundefined_bf16mf2(); +} + +vbfloat16m1_t test_vundefined_bf16m1() { return __riscv_vundefined_bf16m1(); } + +vbfloat16m2_t test_vundefined_bf16m2() { return __riscv_vundefined_bf16m2(); } + +vbfloat16m4_t test_vundefined_bf16m4() { return __riscv_vundefined_bf16m4(); } + +vbfloat16m8_t test_vundefined_bf16m8() { return __riscv_vundefined_bf16m8(); } + +vbfloat16mf4x2_t test_vundefined_bf16mf4x2() { + return __riscv_vundefined_bf16mf4x2(); +} + +vbfloat16mf4x3_t test_vundefined_bf16mf4x3() { + return __riscv_vundefined_bf16mf4x3(); +} + +vbfloat16mf4x4_t test_vundefined_bf16mf4x4() { + return __riscv_vundefined_bf16mf4x4(); +} + +vbfloat16mf4x5_t test_vundefined_bf16mf4x5() { + return __riscv_vundefined_bf16mf4x5(); +} + +vbfloat16mf4x6_t test_vundefined_bf16mf4x6() { + return 
__riscv_vundefined_bf16mf4x6(); +} + +vbfloat16mf4x7_t test_vundefined_bf16mf4x7() { + return __riscv_vundefined_bf16mf4x7(); +} + +vbfloat16mf4x8_t test_vundefined_bf16mf4x8() { + return __riscv_vundefined_bf16mf4x8(); +} + +vbfloat16mf2x2_t test_vundefined_bf16mf2x2() { + return __riscv_vundefined_bf16mf2x2(); +} + +vbfloat16mf2x3_t test_vundefined_bf16mf2x3() { + return __riscv_vundefined_bf16mf2x3(); +} + +vbfloat16mf2x4_t test_vundefined_bf16mf2x4() { + return __riscv_vundefined_bf16mf2x4(); +} + +vbfloat16mf2x5_t test_vundefined_bf16mf2x5() { + return __riscv_vundefined_bf16mf2x5(); +} + +vbfloat16mf2x6_t test_vundefined_bf16mf2x6() { + return __riscv_vundefined_bf16mf2x6(); +} + +vbfloat16mf2x7_t test_vundefined_bf16mf2x7() { + return __riscv_vundefined_bf16mf2x7(); +} + +vbfloat16mf2x8_t test_vundefined_bf16mf2x8() { + return __riscv_vundefined_bf16mf2x8(); +} + +vbfloat16m1x2_t test_vundefined_bf16m1x2() { + return __riscv_vundefined_bf16m1x2(); +} + +vbfloat16m1x3_t test_vundefined_bf16m1x3() { + return __riscv_vundefined_bf16m1x3(); +} + +vbfloat16m1x4_t test_vundefined_bf16m1x4() { + return __riscv_vundefined_bf16m1x4(); +} + +vbfloat16m1x5_t test_vundefined_bf16m1x5() { + return __riscv_vundefined_bf16m1x5(); +} + +vbfloat16m1x6_t test_vundefined_bf16m1x6() { + return __riscv_vundefined_bf16m1x6(); +} + +vbfloat16m1x7_t test_vundefined_bf16m1x7() { + return __riscv_vundefined_bf16m1x7(); +} + +vbfloat16m1x8_t test_vundefined_bf16m1x8() { + return __riscv_vundefined_bf16m1x8(); +} + +vbfloat16m2x2_t test_vundefined_bf16m2x2() { + return __riscv_vundefined_bf16m2x2(); +} + +vbfloat16m2x3_t test_vundefined_bf16m2x3() { + return __riscv_vundefined_bf16m2x3(); +} + +vbfloat16m2x4_t test_vundefined_bf16m2x4() { + return __riscv_vundefined_bf16m2x4(); +} + +vbfloat16m4x2_t test_vundefined_bf16m4x2() { + return __riscv_vundefined_bf16m4x2(); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vget.c b/auto-generated/bfloat16/llvm-overloaded-tests/vget.c new file mode 100644 index 000000000..7e40b6803 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vget.c @@ -0,0 +1,145 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16m1_t test_vget_v_bf16m2_bf16m1(vbfloat16m2_t src, size_t index) { + return __riscv_vget_bf16m1(src, 0); +} + +vbfloat16m1_t test_vget_v_bf16m4_bf16m1(vbfloat16m4_t src, size_t index) { + return __riscv_vget_bf16m1(src, 0); +} + +vbfloat16m1_t test_vget_v_bf16m8_bf16m1(vbfloat16m8_t src, size_t index) { + return __riscv_vget_bf16m1(src, 0); +} + +vbfloat16m2_t test_vget_v_bf16m4_bf16m2(vbfloat16m4_t src, size_t index) { + return __riscv_vget_bf16m2(src, 0); +} + +vbfloat16m2_t test_vget_v_bf16m8_bf16m2(vbfloat16m8_t src, size_t index) { + return __riscv_vget_bf16m2(src, 0); +} + +vbfloat16m4_t test_vget_v_bf16m8_bf16m4(vbfloat16m8_t src, size_t index) { + return __riscv_vget_bf16m4(src, 0); +} + +vbfloat16mf4_t test_vget_v_bf16mf4x2_bf16mf4(vbfloat16mf4x2_t src, + size_t index) { + return __riscv_vget_bf16mf4(src, 0); +} + +vbfloat16mf4_t test_vget_v_bf16mf4x3_bf16mf4(vbfloat16mf4x3_t src, + size_t index) { + return __riscv_vget_bf16mf4(src, 0); +} + +vbfloat16mf4_t test_vget_v_bf16mf4x4_bf16mf4(vbfloat16mf4x4_t src, + size_t index) { + return __riscv_vget_bf16mf4(src, 0); +} + 
+vbfloat16mf4_t test_vget_v_bf16mf4x5_bf16mf4(vbfloat16mf4x5_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x6_bf16mf4(vbfloat16mf4x6_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x7_bf16mf4(vbfloat16mf4x7_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x8_bf16mf4(vbfloat16mf4x8_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf4(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x2_bf16mf2(vbfloat16mf2x2_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x3_bf16mf2(vbfloat16mf2x3_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x4_bf16mf2(vbfloat16mf2x4_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x5_bf16mf2(vbfloat16mf2x5_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x6_bf16mf2(vbfloat16mf2x6_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x7_bf16mf2(vbfloat16mf2x7_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x8_bf16mf2(vbfloat16mf2x8_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x2_bf16m1(vbfloat16m1x2_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x3_bf16m1(vbfloat16m1x3_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x4_bf16m1(vbfloat16m1x4_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x5_bf16m1(vbfloat16m1x5_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x6_bf16m1(vbfloat16m1x6_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x7_bf16m1(vbfloat16m1x7_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x8_bf16m1(vbfloat16m1x8_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m2x2_bf16m2(vbfloat16m2x2_t src, size_t index) {
+  return __riscv_vget_bf16m2(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m2x3_bf16m2(vbfloat16m2x3_t src, size_t index) {
+  return __riscv_vget_bf16m2(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m2x4_bf16m2(vbfloat16m2x4_t src, size_t index) {
+  return __riscv_vget_bf16m2(src, 0);
+}
+
+vbfloat16m4_t test_vget_v_bf16m4x2_bf16m4(vbfloat16m4x2_t src, size_t index) {
+  return __riscv_vget_bf16m4(src, 0);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
new file mode 100644
index 000000000..44216082e
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
@@ -0,0 +1,34 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vle16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                      size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
+
+vbfloat16mf2_t test_vle16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                      size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
+
+vbfloat16m1_t test_vle16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                    size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
+
+vbfloat16m2_t test_vle16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
+
+vbfloat16m4_t test_vle16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
+
+vbfloat16m8_t test_vle16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
new file mode 100644
index 000000000..2a31a4bdf
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
@@ -0,0 +1,37 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vle16ff_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                        size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16mf2_t test_vle16ff_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                        size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m1_t test_vle16ff_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m2_t test_vle16ff_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m4_t test_vle16ff_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m8_t test_vle16ff_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c
new file mode 100644
index 000000000..311acc90b
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c
@@ -0,0 +1,66 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf2_t test_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_b16m8(value);
+}
+
+vbfloat16m1_t test_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_b16m8(value);
+}
+
+vbfloat16m2_t test_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value) {
+  return __riscv_vlmul_ext_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value) {
+  return __riscv_vlmul_ext_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value) {
+  return __riscv_vlmul_ext_b16m8(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value) {
+  return __riscv_vlmul_ext_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value) {
+  return __riscv_vlmul_ext_b16m8(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value) {
+  return __riscv_vlmul_ext_b16m8(value);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c
new file mode 100644
index 000000000..6965aa520
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c
@@ -0,0 +1,66 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value) {
+  return __riscv_vlmul_trunc_b16mf4(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value) {
+  return __riscv_vlmul_trunc_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value) {
+  return __riscv_vlmul_trunc_b16mf2(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value) {
+  return __riscv_vlmul_trunc_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value) {
+  return __riscv_vlmul_trunc_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value) {
+  return __riscv_vlmul_trunc_b16m1(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_b16m2(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_b16m4(value);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c
new file mode 100644
index 000000000..a45e15a72
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c
@@ -0,0 +1,66 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vloxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2,
+                                       size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vloxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2,
+                                       size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vloxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vloxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vloxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vloxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vloxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                         vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vloxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                         vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vloxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                       vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vloxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1,
+                                       vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vloxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1,
+                                       vuint16m4_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vloxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1,
+                                       vuint16m8_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c
new file mode 100644
index 000000000..a7e9fe153
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c
@@ -0,0 +1,58 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2(const __bf16 *rs1, vuint16m4_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1,
+                                               vuint16m4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c
new file mode 100644
index 000000000..a1ffafeeb
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c
@@ -0,0 +1,48 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c
new file mode 100644
index 000000000..6aa3b8b8b
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c
@@ -0,0 +1,48 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c
new file mode 100644
index 000000000..85f3d7cfb
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c
new file mode 100644
index 000000000..58b6d16de
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c
new file mode 100644
index 000000000..c08ab2e27
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c
new file mode 100644
index 000000000..b0b7671a0
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c
new file mode 100644
index 000000000..81cd36f7c
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c
@@ -0,0 +1,36 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vlse16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                       ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vlse16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                       ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vlse16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c new file mode 100644 index 000000000..633901209 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c @@ -0,0 +1,21 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg5e16(vm, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg5e16(vm, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg5e16(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c new file mode 100644 index 000000000..34c4f9a64 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c @@ -0,0 +1,22 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c new file mode 100644 index 000000000..a9bd7d4d7 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c @@ -0,0 +1,21 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg6e16(vm, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg6e16(vm, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + 
size_t vl) { + return __riscv_vlseg6e16(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c new file mode 100644 index 000000000..8692c352c --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c @@ -0,0 +1,22 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c new file mode 100644 index 000000000..2d530f29c --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c @@ -0,0 +1,21 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg7e16(vm, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg7e16(vm, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg7e16(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c new file mode 100644 index 000000000..eb6f7209e --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c @@ -0,0 +1,22 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c new file mode 100644 index 000000000..eb8cf5abc --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c @@ -0,0 +1,21 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: 
-emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16(vm, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16(vm, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c new file mode 100644 index 000000000..4fb0315c2 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c @@ -0,0 +1,22 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c new file mode 100644 index 000000000..c72674659 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c @@ -0,0 +1,31 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16(vm, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16(vm, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16(vm, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16(vm, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c new file mode 100644 index 000000000..23835bac4 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c @@ -0,0 +1,26 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + 
+  return __riscv_vlsseg3e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_m(vbool32_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c
new file mode 100644
index 000000000..34a27b713
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c
@@ -0,0 +1,26 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_m(vbool64_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_m(vbool32_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c
new file mode 100644
index 000000000..1210a75cb
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c
@@ -0,0 +1,21 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c
new file mode 100644
index 000000000..5a6eab3b3
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c
@@ -0,0 +1,21 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c
new file mode 100644
index 000000000..55bb1f469
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c
@@ -0,0 +1,21 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c
new file mode 100644
index 000000000..8a570e3ba
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c
@@ -0,0 +1,21 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c
new file mode 100644
index 000000000..93b88bfa9
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c
@@ -0,0 +1,66 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vluxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2,
+                                       size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vluxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2,
+                                       size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vluxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2,
+                                     size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vluxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2,
+                                     size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vluxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t rs2,
+                                     size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vluxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2,
+                                     size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vluxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                         vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vluxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                         vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vluxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                       vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vluxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1,
+                                       vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vluxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1,
+                                       vuint16m4_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vluxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1,
+                                       vuint16m8_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c
new file mode 100644
index 000000000..85550720b
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c
@@ -0,0 +1,58 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2(const __bf16 *rs1, vuint16m4_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1,
+                                               vuint16m4_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c
new file mode 100644
index 000000000..817a9422e
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c
@@ -0,0 +1,48 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c
new file mode 100644
index 000000000..2b7f3ec0e
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c
@@ -0,0 +1,48 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c
new file mode 100644
index 000000000..5eb3f2650
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c
new file mode 100644
index 000000000..60faacb65
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c
new file mode 100644
index 000000000..37aaab710
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c
new file mode 100644
index 000000000..3004d79da
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c b/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c
new file mode 100644
index 000000000..1ea482fca
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c
@@ -0,0 +1,103 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vreinterpret_v_i16mf4_bf16mf4(vint16mf4_t src) {
+  return __riscv_vreinterpret_bf16mf4(src);
+}
+
+vbfloat16mf2_t test_vreinterpret_v_i16mf2_bf16mf2(vint16mf2_t src) {
+  return __riscv_vreinterpret_bf16mf2(src);
+}
+
+vbfloat16m1_t test_vreinterpret_v_i16m1_bf16m1(vint16m1_t src) {
+  return __riscv_vreinterpret_bf16m1(src);
+}
+
+vbfloat16m2_t test_vreinterpret_v_i16m2_bf16m2(vint16m2_t src) {
+  return __riscv_vreinterpret_bf16m2(src);
+}
+
+vbfloat16m4_t test_vreinterpret_v_i16m4_bf16m4(vint16m4_t src) {
+  return __riscv_vreinterpret_bf16m4(src);
+}
+
+vbfloat16m8_t test_vreinterpret_v_i16m8_bf16m8(vint16m8_t src) {
+  return __riscv_vreinterpret_bf16m8(src);
+}
+
+vbfloat16mf4_t test_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src) {
+  return __riscv_vreinterpret_bf16mf4(src);
+}
+
+vbfloat16mf2_t test_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src) {
+  return __riscv_vreinterpret_bf16mf2(src);
+}
+
+vbfloat16m1_t test_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src) {
+  return __riscv_vreinterpret_bf16m1(src);
+}
+
+vbfloat16m2_t test_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src) {
+  return __riscv_vreinterpret_bf16m2(src);
+}
+
+vbfloat16m4_t test_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src) {
+  return __riscv_vreinterpret_bf16m4(src);
+}
+
+vbfloat16m8_t test_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src) {
+  return __riscv_vreinterpret_bf16m8(src);
+}
+
+vint16mf4_t test_vreinterpret_v_bf16mf4_i16mf4(vbfloat16mf4_t src) {
+  return __riscv_vreinterpret_i16mf4(src);
+}
+
+vint16mf2_t test_vreinterpret_v_bf16mf2_i16mf2(vbfloat16mf2_t src) {
+  return __riscv_vreinterpret_i16mf2(src);
+}
+
+vint16m1_t test_vreinterpret_v_bf16m1_i16m1(vbfloat16m1_t src) {
+  return __riscv_vreinterpret_i16m1(src);
+}
+
+vint16m2_t test_vreinterpret_v_bf16m2_i16m2(vbfloat16m2_t src) {
+  return __riscv_vreinterpret_i16m2(src);
+}
+
+vint16m4_t test_vreinterpret_v_bf16m4_i16m4(vbfloat16m4_t src) {
+  return __riscv_vreinterpret_i16m4(src);
+}
+
+vint16m8_t test_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src) {
+  return __riscv_vreinterpret_i16m8(src);
+}
+
+vuint16mf4_t test_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src) {
+  return __riscv_vreinterpret_ui16mf4(src);
+}
+
+vuint16mf2_t test_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src) {
+  return __riscv_vreinterpret_ui16mf2(src);
+}
+
+vuint16m1_t test_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src) {
+  return __riscv_vreinterpret_ui16m1(src);
+}
+
+vuint16m2_t test_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src) {
+  return __riscv_vreinterpret_ui16m2(src);
+}
+
+vuint16m4_t test_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src) {
+  return __riscv_vreinterpret_ui16m4(src);
+}
+
+vuint16m8_t test_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src) {
+  return __riscv_vreinterpret_ui16m8(src);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c
new file mode 100644
index 000000000..1a06e8510
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c
@@ -0,0 +1,60 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vse16_v_bf16mf4(__bf16 *rs1, vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16mf2(__bf16 *rs1, vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m1(__bf16 *rs1, vbfloat16m1_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m2(__bf16 *rs1, vbfloat16m2_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m4(__bf16 *rs1, vbfloat16m4_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m8(__bf16 *rs1, vbfloat16m8_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vbfloat16mf4_t vs3,
+                            size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vbfloat16mf2_t vs3,
+                            size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1_t vs3,
+                           size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2_t vs3,
+                           size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vbfloat16m4_t vs3,
+                           size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vbfloat16m8_t vs3,
+                           size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vset.c b/auto-generated/bfloat16/llvm-overloaded-tests/vset.c
new file mode 100644
index 000000000..6bedaa3dc
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vset.c
@@ -0,0 +1,175 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16m2_t test_vset_v_bf16m1_bf16m2(vbfloat16m2_t dest, size_t index,
+                                        vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m4_t test_vset_v_bf16m1_bf16m4(vbfloat16m4_t dest, size_t index,
+                                        vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m4_t test_vset_v_bf16m2_bf16m4(vbfloat16m4_t dest, size_t index,
+                                        vbfloat16m2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m8_t test_vset_v_bf16m1_bf16m8(vbfloat16m8_t dest, size_t index,
+                                        vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m8_t test_vset_v_bf16m2_bf16m8(vbfloat16m8_t dest, size_t index,
+                                        vbfloat16m2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m8_t test_vset_v_bf16m4_bf16m8(vbfloat16m8_t dest, size_t index,
+                                        vbfloat16m4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x2_t test_vset_v_bf16mf4_bf16mf4x2(vbfloat16mf4x2_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x3_t test_vset_v_bf16mf4_bf16mf4x3(vbfloat16mf4x3_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x4_t test_vset_v_bf16mf4_bf16mf4x4(vbfloat16mf4x4_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x5_t test_vset_v_bf16mf4_bf16mf4x5(vbfloat16mf4x5_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x6_t test_vset_v_bf16mf4_bf16mf4x6(vbfloat16mf4x6_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x7_t test_vset_v_bf16mf4_bf16mf4x7(vbfloat16mf4x7_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x8_t test_vset_v_bf16mf4_bf16mf4x8(vbfloat16mf4x8_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x2_t test_vset_v_bf16mf2_bf16mf2x2(vbfloat16mf2x2_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x3_t test_vset_v_bf16mf2_bf16mf2x3(vbfloat16mf2x3_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x4_t test_vset_v_bf16mf2_bf16mf2x4(vbfloat16mf2x4_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x5_t test_vset_v_bf16mf2_bf16mf2x5(vbfloat16mf2x5_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x6_t test_vset_v_bf16mf2_bf16mf2x6(vbfloat16mf2x6_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x7_t test_vset_v_bf16mf2_bf16mf2x7(vbfloat16mf2x7_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x8_t test_vset_v_bf16mf2_bf16mf2x8(vbfloat16mf2x8_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x2_t test_vset_v_bf16m1_bf16m1x2(vbfloat16m1x2_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x3_t test_vset_v_bf16m1_bf16m1x3(vbfloat16m1x3_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x4_t test_vset_v_bf16m1_bf16m1x4(vbfloat16m1x4_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x5_t test_vset_v_bf16m1_bf16m1x5(vbfloat16m1x5_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x6_t test_vset_v_bf16m1_bf16m1x6(vbfloat16m1x6_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x7_t test_vset_v_bf16m1_bf16m1x7(vbfloat16m1x7_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x8_t test_vset_v_bf16m1_bf16m1x8(vbfloat16m1x8_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m2x2_t test_vset_v_bf16m2_bf16m2x2(vbfloat16m2x2_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m2x3_t test_vset_v_bf16m2_bf16m2x3(vbfloat16m2x3_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m2x4_t test_vset_v_bf16m2_bf16m2x4(vbfloat16m2x4_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m4x2_t test_vset_v_bf16m4_bf16m4x2(vbfloat16m4x2_t dest, size_t index,
+                                            vbfloat16m4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c
new file mode 100644
index 000000000..b4acd8965
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c
@@ -0,0 +1,66 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3,
+                             size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3,
+                             size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2,
+                               vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2,
+                               vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2,
+                              vbfloat16m1_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2,
+                              vbfloat16m2_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2,
+                              vbfloat16m4_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2,
+                              vbfloat16m8_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c
new file mode 100644
index 000000000..033cfa2b3
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c
@@ -0,0 +1,58 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2,
+                                  vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x2_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x2_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2,
+                                    vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c
new file mode 100644
index 000000000..7d172c80e
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c
@@ -0,0 +1,48 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x3_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x3_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c
new file mode 100644
index 000000000..4067814b2
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c
@@ -0,0 +1,48 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x4_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x4_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c
new file mode 100644
index 000000000..f8d0e1fe2
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x5_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg5ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x5_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg5ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c
new file mode 100644
index 000000000..e6e8650f8
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x6_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg6ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x6_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg6ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c
new file mode 100644
index 000000000..d79d49e70
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x7_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg7ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x7_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg7ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c
new file mode 100644
index 000000000..4bd5455bb
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsoxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x8_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x8_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c
new file mode 100644
index 000000000..9c9fde087
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c
@@ -0,0 +1,66 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsse16_v_bf16mf4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4_t vs3,
+                           size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16mf2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2_t vs3,
+                           size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m1(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m8(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m8_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                             vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                             vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m1_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m2_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m4_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m8_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c
new file mode 100644
index 000000000..0ddd0c89a
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c
@@ -0,0 +1,51 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg2e16_v_bf16mf4x2(__bf16 *rs1, vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16mf2x2(__bf16 *rs1, vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m1x2(__bf16 *rs1, vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m2x2(__bf16 *rs1, vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m4x2(__bf16 *rs1, vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x2_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg2e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x2_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg2e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vbfloat16m4x2_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg2e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c
new file mode 100644
index 000000000..095aefebc
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c
@@ -0,0 +1,42 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg3e16_v_bf16mf4x3(__bf16 *rs1, vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16mf2x3(__bf16 *rs1, vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m1x3(__bf16 *rs1, vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m2x3(__bf16 *rs1, vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x3_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg3e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x3_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg3e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c
new file mode 100644
index 000000000..f1f219558
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c
@@ -0,0 +1,42 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg4e16_v_bf16mf4x4(__bf16 *rs1, vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16mf2x4(__bf16 *rs1, vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m1x4(__bf16 *rs1, vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m2x4(__bf16 *rs1, vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x4_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg4e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x4_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg4e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c
new file mode 100644
index 000000000..e419b9d35
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c
@@ -0,0 +1,33 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg5e16_v_bf16mf4x5(__bf16 *rs1, vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16(rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16mf2x5(__bf16 *rs1, vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16(rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16m1x5(__bf16 *rs1, vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16(rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x5_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg5e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c
new file mode 100644
index 000000000..07bc65325
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c
@@ -0,0 +1,33 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg6e16_v_bf16mf4x6(__bf16 *rs1, vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16(rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16mf2x6(__bf16 *rs1, vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16(rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16m1x6(__bf16 *rs1, vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16(rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x6_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg6e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c
new file mode 100644
index 000000000..9ed16e7b0
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c
@@ -0,0 +1,33 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg7e16_v_bf16mf4x7(__bf16 *rs1, vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16(rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16mf2x7(__bf16 *rs1, vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16(rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16m1x7(__bf16 *rs1, vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16(rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x7_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg7e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c
new file mode 100644
index 000000000..c5e78e91e
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c
@@ -0,0 +1,33 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsseg8e16_v_bf16mf4x8(__bf16 *rs1, vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16(rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16mf2x8(__bf16 *rs1, vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16(rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16m1x8(__bf16 *rs1, vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16(rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x8_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg8e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c
new file mode 100644
index 000000000..4cf01c969
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c
@@ -0,0 +1,56 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vssseg2e16_v_bf16mf4x2(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16mf2x2(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m1x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x2_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg2e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m2x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x2_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg2e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m4x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4x2_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg2e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c
new file mode 100644
index 000000000..81c3084f5
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c
@@ -0,0 +1,46 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vssseg3e16_v_bf16mf4x3(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16mf2x3(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16m1x3(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x3_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg3e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16m2x3(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x3_t vs3,
+                                size_t vl) {
vl) { + return __riscv_vssseg3e16(rs1, rs2, vs3, vl); +} + +void test_vssseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x3_t vs3, size_t vl) { + return __riscv_vssseg3e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x3_t vs3, size_t vl) { + return __riscv_vssseg3e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x3_t vs3, size_t vl) { + return __riscv_vssseg3e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x3_t vs3, size_t vl) { + return __riscv_vssseg3e16(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c new file mode 100644 index 000000000..93435cac2 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c @@ -0,0 +1,46 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vssseg4e16_v_bf16mf4x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x4_t vs3, size_t vl) { + return __riscv_vssseg4e16(rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16mf2x4(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x4_t vs3, size_t vl) { + return __riscv_vssseg4e16(rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16m1x4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x4_t vs3, + size_t vl) { + return __riscv_vssseg4e16(rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16m2x4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x4_t vs3, + size_t vl) { + return __riscv_vssseg4e16(rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x4_t vs3, size_t vl) { + return __riscv_vssseg4e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x4_t vs3, size_t vl) { + return __riscv_vssseg4e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x4_t vs3, size_t vl) { + return __riscv_vssseg4e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m2x4_t vs3, size_t vl) { + return __riscv_vssseg4e16(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c new file mode 100644 index 000000000..db8cabb41 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c @@ -0,0 +1,36 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vssseg5e16_v_bf16mf4x5(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x5_t vs3, size_t vl) { + return __riscv_vssseg5e16(rs1, rs2, vs3, vl); +} + +void test_vssseg5e16_v_bf16mf2x5(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x5_t vs3, size_t vl) { + return __riscv_vssseg5e16(rs1, rs2, vs3, vl); +} + +void test_vssseg5e16_v_bf16m1x5(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x5_t vs3, + size_t vl) { + return __riscv_vssseg5e16(rs1, rs2, vs3, vl); +} + +void test_vssseg5e16_v_bf16mf4x5_m(vbool64_t vm, 
__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x5_t vs3, size_t vl) { + return __riscv_vssseg5e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x5_t vs3, size_t vl) { + return __riscv_vssseg5e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x5_t vs3, size_t vl) { + return __riscv_vssseg5e16(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c new file mode 100644 index 000000000..8f695c281 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c @@ -0,0 +1,36 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vssseg6e16_v_bf16mf4x6(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x6_t vs3, size_t vl) { + return __riscv_vssseg6e16(rs1, rs2, vs3, vl); +} + +void test_vssseg6e16_v_bf16mf2x6(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x6_t vs3, size_t vl) { + return __riscv_vssseg6e16(rs1, rs2, vs3, vl); +} + +void test_vssseg6e16_v_bf16m1x6(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x6_t vs3, + size_t vl) { + return __riscv_vssseg6e16(rs1, rs2, vs3, vl); +} + +void test_vssseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x6_t vs3, size_t vl) { + return __riscv_vssseg6e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x6_t vs3, size_t vl) { + return __riscv_vssseg6e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x6_t vs3, size_t vl) { + return __riscv_vssseg6e16(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c new file mode 100644 index 000000000..3ca13b74a --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c @@ -0,0 +1,36 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vssseg7e16_v_bf16mf4x7(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x7_t vs3, size_t vl) { + return __riscv_vssseg7e16(rs1, rs2, vs3, vl); +} + +void test_vssseg7e16_v_bf16mf2x7(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x7_t vs3, size_t vl) { + return __riscv_vssseg7e16(rs1, rs2, vs3, vl); +} + +void test_vssseg7e16_v_bf16m1x7(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x7_t vs3, + size_t vl) { + return __riscv_vssseg7e16(rs1, rs2, vs3, vl); +} + +void test_vssseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x7_t vs3, size_t vl) { + return __riscv_vssseg7e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x7_t vs3, size_t vl) { + return __riscv_vssseg7e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x7_t vs3, size_t vl) { + return __riscv_vssseg7e16(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c new 
file mode 100644 index 000000000..148a9aac2 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c @@ -0,0 +1,36 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vssseg8e16_v_bf16mf4x8(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x8_t vs3, size_t vl) { + return __riscv_vssseg8e16(rs1, rs2, vs3, vl); +} + +void test_vssseg8e16_v_bf16mf2x8(__bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x8_t vs3, size_t vl) { + return __riscv_vssseg8e16(rs1, rs2, vs3, vl); +} + +void test_vssseg8e16_v_bf16m1x8(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x8_t vs3, + size_t vl) { + return __riscv_vssseg8e16(rs1, rs2, vs3, vl); +} + +void test_vssseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf4x8_t vs3, size_t vl) { + return __riscv_vssseg8e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16mf2x8_t vs3, size_t vl) { + return __riscv_vssseg8e16(vm, rs1, rs2, vs3, vl); +} + +void test_vssseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2, + vbfloat16m1x8_t vs3, size_t vl) { + return __riscv_vssseg8e16(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c new file mode 100644 index 000000000..1b912128e --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c @@ -0,0 +1,66 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3, + size_t vl) { + return __riscv_vsuxei16(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3, + size_t vl) { + return __riscv_vsuxei16(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3, + size_t vl) { + return __riscv_vsuxei16(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3, + size_t vl) { + return __riscv_vsuxei16(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3, + size_t vl) { + return __riscv_vsuxei16(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3, + size_t vl) { + return __riscv_vsuxei16(rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2, + vbfloat16mf4_t vs3, size_t vl) { + return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2, + vbfloat16mf2_t vs3, size_t vl) { + return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2, + vbfloat16m1_t vs3, size_t vl) { + return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2, + vbfloat16m2_t vs3, size_t vl) { + return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl); +} + +void test_vsuxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2, + vbfloat16m4_t vs3, size_t vl) { + return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl); +} + +void 
test_vsuxei16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2, + vbfloat16m8_t vs3, size_t vl) { + return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c new file mode 100644 index 000000000..80932af68 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c @@ -0,0 +1,58 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x2_t vs3, + size_t vl) { + return __riscv_vsuxseg2ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x2_t vs3, + size_t vl) { + return __riscv_vsuxseg2ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2, + vbfloat16m4x2_t vs3, size_t vl) { + return __riscv_vsuxseg2ei16(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c new file mode 100644 index 000000000..cd9bb773a --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c @@ -0,0 +1,48 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 
*rs1, + vuint16mf4_t vs2, vbfloat16mf4x3_t vs3, + size_t vl) { + return __riscv_vsuxseg3ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x3_t vs3, + size_t vl) { + return __riscv_vsuxseg3ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x3_t vs3, size_t vl) { + return __riscv_vsuxseg3ei16(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c new file mode 100644 index 000000000..82e5f338e --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c @@ -0,0 +1,48 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x4_t vs3, + size_t vl) { + return __riscv_vsuxseg4ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x4_t vs3, + size_t vl) { + return __riscv_vsuxseg4ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2, + vbfloat16m2x4_t vs3, size_t vl) { + return __riscv_vsuxseg4ei16(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c new file mode 100644 index 000000000..af47c04f4 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, 
__bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x5_t vs3, + size_t vl) { + return __riscv_vsuxseg5ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x5_t vs3, + size_t vl) { + return __riscv_vsuxseg5ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x5_t vs3, size_t vl) { + return __riscv_vsuxseg5ei16(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c new file mode 100644 index 000000000..4a6bf7b1c --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x6_t vs3, + size_t vl) { + return __riscv_vsuxseg6ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x6_t vs3, + size_t vl) { + return __riscv_vsuxseg6ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x6_t vs3, size_t vl) { + return __riscv_vsuxseg6ei16(vm, rs1, vs2, vs3, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c new file mode 100644 index 000000000..623b13686 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +void test_vsuxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2, + vbfloat16mf4x7_t vs3, size_t vl) { + return __riscv_vsuxseg7ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2, + vbfloat16mf2x7_t vs3, size_t vl) { + return __riscv_vsuxseg7ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl) { + return __riscv_vsuxseg7ei16(rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, + vuint16mf4_t vs2, vbfloat16mf4x7_t vs3, + size_t vl) { + return __riscv_vsuxseg7ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, + vuint16mf2_t vs2, vbfloat16mf2x7_t vs3, + size_t vl) { + return __riscv_vsuxseg7ei16(vm, rs1, vs2, vs3, vl); +} + +void test_vsuxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2, + vbfloat16m1x7_t vs3, size_t vl) { + return __riscv_vsuxseg7ei16(vm, rs1, vs2, vs3, vl); +} diff --git 
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c
new file mode 100644
index 000000000..80cd13e64
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c
@@ -0,0 +1,38 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+void test_vsuxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vsuxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vsuxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsuxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x8_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x8_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsuxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vget.c b/auto-generated/bfloat16/overloaded-api-testing/vget.c
new file mode 100644
index 000000000..f249b3faf
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vget.c
@@ -0,0 +1,140 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16m1_t test_vget_v_bf16m2_bf16m1(vbfloat16m2_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m4_bf16m1(vbfloat16m4_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m8_bf16m1(vbfloat16m8_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m4_bf16m2(vbfloat16m4_t src, size_t index) {
+  return __riscv_vget_bf16m2(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m8_bf16m2(vbfloat16m8_t src, size_t index) {
+  return __riscv_vget_bf16m2(src, 0);
+}
+
+vbfloat16m4_t test_vget_v_bf16m8_bf16m4(vbfloat16m8_t src, size_t index) {
+  return __riscv_vget_bf16m4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x2_bf16mf4(vbfloat16mf4x2_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x3_bf16mf4(vbfloat16mf4x3_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x4_bf16mf4(vbfloat16mf4x4_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x5_bf16mf4(vbfloat16mf4x5_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x6_bf16mf4(vbfloat16mf4x6_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x7_bf16mf4(vbfloat16mf4x7_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf4(src, 0);
+}
+
+vbfloat16mf4_t test_vget_v_bf16mf4x8_bf16mf4(vbfloat16mf4x8_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf4(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x2_bf16mf2(vbfloat16mf2x2_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
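+// Note: added commentary, not produced by the generator. __riscv_vget
+// returns the idx-th EMUL=1 part of a register group (or the idx-th member
+// of a tuple type). The index operand must be a compile-time constant
+// expression, which is why these generated tests pass the literal 0 rather
+// than the runtime `index` parameter. A minimal usage sketch:
+//   vbfloat16m1_t lo = __riscv_vget_bf16m1(src, 0); // first m1 half of m2
+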
+vbfloat16mf2_t test_vget_v_bf16mf2x3_bf16mf2(vbfloat16mf2x3_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x4_bf16mf2(vbfloat16mf2x4_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x5_bf16mf2(vbfloat16mf2x5_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x6_bf16mf2(vbfloat16mf2x6_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x7_bf16mf2(vbfloat16mf2x7_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16mf2_t test_vget_v_bf16mf2x8_bf16mf2(vbfloat16mf2x8_t src,
+                                             size_t index) {
+  return __riscv_vget_bf16mf2(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x2_bf16m1(vbfloat16m1x2_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x3_bf16m1(vbfloat16m1x3_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x4_bf16m1(vbfloat16m1x4_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x5_bf16m1(vbfloat16m1x5_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x6_bf16m1(vbfloat16m1x6_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x7_bf16m1(vbfloat16m1x7_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m1_t test_vget_v_bf16m1x8_bf16m1(vbfloat16m1x8_t src, size_t index) {
+  return __riscv_vget_bf16m1(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m2x2_bf16m2(vbfloat16m2x2_t src, size_t index) {
+  return __riscv_vget_bf16m2(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m2x3_bf16m2(vbfloat16m2x3_t src, size_t index) {
+  return __riscv_vget_bf16m2(src, 0);
+}
+
+vbfloat16m2_t test_vget_v_bf16m2x4_bf16m2(vbfloat16m2x4_t src, size_t index) {
+  return __riscv_vget_bf16m2(src, 0);
+}
+
+vbfloat16m4_t test_vget_v_bf16m4x2_bf16m4(vbfloat16m4x2_t src, size_t index) {
+  return __riscv_vget_bf16m4(src, 0);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vle16.c b/auto-generated/bfloat16/overloaded-api-testing/vle16.c
new file mode 100644
index 000000000..2e0ef5c7c
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vle16.c
@@ -0,0 +1,29 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vle16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                      size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
+
+vbfloat16mf2_t test_vle16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                      size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
+
+vbfloat16m1_t test_vle16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                    size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
+
+vbfloat16m2_t test_vle16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
+
+vbfloat16m4_t test_vle16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
+
+vbfloat16m8_t test_vle16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16(vm, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vle16ff.c b/auto-generated/bfloat16/overloaded-api-testing/vle16ff.c
new file mode 100644
index 000000000..34da33989
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vle16ff.c
@@ -0,0 +1,32 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vle16ff_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                        size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16mf2_t test_vle16ff_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                        size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m1_t test_vle16ff_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m2_t test_vle16ff_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m4_t test_vle16ff_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m8_t test_vle16ff_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1,
+                                      size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff(vm, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlmul_ext_v.c b/auto-generated/bfloat16/overloaded-api-testing/vlmul_ext_v.c
new file mode 100644
index 000000000..bd60827ff
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlmul_ext_v.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf2_t test_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value) {
+  return __riscv_vlmul_ext_b16m8(value);
+}
+
+vbfloat16m1_t test_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value) {
+  return __riscv_vlmul_ext_b16m8(value);
+}
+
+vbfloat16m2_t test_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value) {
+  return __riscv_vlmul_ext_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value) {
+  return __riscv_vlmul_ext_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value) {
+  return __riscv_vlmul_ext_b16m8(value);
+}
+
+vbfloat16m4_t test_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value) {
+  return __riscv_vlmul_ext_b16m4(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value) {
+  return __riscv_vlmul_ext_b16m8(value);
+}
+
+vbfloat16m8_t test_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value) {
+  return __riscv_vlmul_ext_b16m8(value);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlmul_trunc_v.c b/auto-generated/bfloat16/overloaded-api-testing/vlmul_trunc_v.c
new file mode 100644
index 000000000..08791bc2a
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlmul_trunc_v.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value) {
+  return __riscv_vlmul_trunc_b16mf4(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value) {
+  return __riscv_vlmul_trunc_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value) {
+  return __riscv_vlmul_trunc_b16mf2(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value) {
+  return __riscv_vlmul_trunc_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value) {
+  return __riscv_vlmul_trunc_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value) {
+  return __riscv_vlmul_trunc_b16m1(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t value) {
+  return __riscv_vlmul_trunc_b16m2(value);
+}
+
+vbfloat16mf4_t test_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_b16mf4(value);
+}
+
+vbfloat16mf2_t test_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_b16mf2(value);
+}
+
+vbfloat16m1_t test_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_b16m1(value);
+}
+
+vbfloat16m2_t test_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_b16m2(value);
+}
+
+vbfloat16m4_t test_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value) {
+  return __riscv_vlmul_trunc_b16m4(value);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vloxei16.c b/auto-generated/bfloat16/overloaded-api-testing/vloxei16.c
new file mode 100644
index 000000000..a32bd147c
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vloxei16.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vloxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2,
+                                       size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vloxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2,
+                                       size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vloxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vloxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vloxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vloxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2,
+                                     size_t vl) {
+  return __riscv_vloxei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vloxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                         vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vloxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                         vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vloxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                       vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vloxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1,
+                                       vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vloxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1,
+                                       vuint16m4_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vloxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1,
+                                       vuint16m8_t rs2, size_t vl) {
+  return __riscv_vloxei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vloxseg2ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vloxseg2ei16.c
new file mode 100644
index 000000000..06999c6b0
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vloxseg2ei16.c
@@ -0,0 +1,54 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2(const __bf16 *rs1, vuint16m4_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1,
+                                               vuint16m4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vloxseg3ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vloxseg3ei16.c
new file mode 100644
index 000000000..1534ddfde
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vloxseg3ei16.c
@@ -0,0 +1,44 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vloxseg4ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vloxseg4ei16.c
new file mode 100644
index 000000000..25543e43b
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vloxseg4ei16.c
@@ -0,0 +1,44 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vloxseg5ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vloxseg5ei16.c
new file mode 100644
index 000000000..cb842f8d3
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vloxseg5ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vloxseg6ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vloxseg6ei16.c
new file mode 100644
index 000000000..866ca7f8c
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vloxseg6ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vloxseg7ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vloxseg7ei16.c
new file mode 100644
index 000000000..788934129
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vloxseg7ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vloxseg8ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vloxseg8ei16.c
new file mode 100644
index 000000000..001837f44
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vloxseg8ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vloxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlse16.c b/auto-generated/bfloat16/overloaded-api-testing/vlse16.c
new file mode 100644
index 000000000..120ce69e7
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlse16.c
@@ -0,0 +1,32 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vlse16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                       ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vlse16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                       ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vlse16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                     ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vlse16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1,
+                                     ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vlse16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1,
+                                     ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vlse16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1,
+                                     ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg2e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg2e16.c
new file mode 100644
index 000000000..4d3292d0a
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg2e16.c
@@ -0,0 +1,27 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_m(vbool64_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg2e16(vm, rs1, vl);
+}
+
+vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_m(vbool32_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg2e16(vm, rs1, vl);
+}
+
+vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg2e16(vm, rs1, vl);
+}
+
+vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg2e16(vm, rs1, vl);
+}
+
+vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg2e16(vm, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg2e16ff.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg2e16ff.c
new file mode 100644
index 000000000..53ba3ba68
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg2e16ff.c
@@ -0,0 +1,27 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_m(vbool64_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_m(vbool32_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1,
+                                              size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1,
+                                              size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1,
+                                              size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff(vm, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg3e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg3e16.c
new file mode 100644
index 000000000..a3cf2b4de
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg3e16.c
@@ -0,0 +1,22 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_m(vbool64_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg3e16(vm, rs1, vl);
+}
+
+vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_m(vbool32_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg3e16(vm, rs1, vl);
+}
+
+vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg3e16(vm, rs1, vl);
+}
+
+vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg3e16(vm, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg3e16ff.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg3e16ff.c
new file mode 100644
index 000000000..c708c12bf
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg3e16ff.c
@@ -0,0 +1,22 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_m(vbool64_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_m(vbool32_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1,
+                                              size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1,
+                                              size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff(vm, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg4e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg4e16.c
new file mode 100644
index 000000000..4d0994c2a
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg4e16.c
@@ -0,0 +1,22 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_m(vbool64_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg4e16(vm, rs1, vl);
+}
+
+vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_m(vbool32_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg4e16(vm, rs1, vl);
+}
+
+vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg4e16(vm, rs1, vl);
+}
+
+vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg4e16(vm, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg4e16ff.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg4e16ff.c
new file mode 100644
index 000000000..bdfb7c1ed
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg4e16ff.c
@@ -0,0 +1,22 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_m(vbool64_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_m(vbool32_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1,
+                                              size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1,
+                                              size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff(vm, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg5e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg5e16.c
new file mode 100644
index 000000000..0a8e634c5
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg5e16.c
@@ -0,0 +1,17 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg5e16(vm, rs1, vl);
+}
+
+vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg5e16(vm, rs1, vl);
+}
+
+vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg5e16(vm, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg5e16ff.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg5e16ff.c
new file mode 100644
index 000000000..6e8e9d1f0
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg5e16ff.c
@@ -0,0 +1,17 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1,
+                                              size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff(vm, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg6e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg6e16.c
new file mode 100644
index 000000000..9365aeb8d
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg6e16.c
@@ -0,0 +1,17 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg6e16(vm, rs1, vl);
+}
+
+vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg6e16(vm, rs1, vl);
+}
+
+vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg6e16(vm, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg6e16ff.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg6e16ff.c
new file mode 100644
index 000000000..8376dd159
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg6e16ff.c
@@ -0,0 +1,17 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff(vm, rs1, new_vl, vl);
+}
+
+vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1,
+                                              size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff(vm, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg7e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg7e16.c
new file mode 100644
index 000000000..01c7e48d8
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg7e16.c
@@ -0,0 +1,17 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg7e16(vm, rs1, vl);
+}
+
+vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1,
+                                              size_t vl) {
+  return __riscv_vlseg7e16(vm, rs1, vl);
+}
+
+vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1,
+                                            size_t vl) {
+  return __riscv_vlseg7e16(vm, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg7e16ff.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg7e16ff.c
new file mode 100644
index 000000000..8db5ec0e2
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg7e16ff.c
@@ -0,0 +1,17 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff(vm, rs1, new_vl, vl);
+}
+
test_vlseg7e16ff_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg8e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg8e16.c new file mode 100644 index 000000000..cc5804338 --- /dev/null +++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg8e16.c @@ -0,0 +1,17 @@ +#include +#include + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16(vm, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16(vm, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + size_t vl) { + return __riscv_vlseg8e16(vm, rs1, vl); +} diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlseg8e16ff.c b/auto-generated/bfloat16/overloaded-api-testing/vlseg8e16ff.c new file mode 100644 index 000000000..4011e172e --- /dev/null +++ b/auto-generated/bfloat16/overloaded-api-testing/vlseg8e16ff.c @@ -0,0 +1,17 @@ +#include +#include + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff(vm, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff(vm, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlsseg2e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlsseg2e16.c new file mode 100644 index 000000000..53c30e9e0 --- /dev/null +++ b/auto-generated/bfloat16/overloaded-api-testing/vlsseg2e16.c @@ -0,0 +1,27 @@ +#include +#include + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16(vm, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16(vm, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16(vm, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16(vm, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16(vm, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlsseg3e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlsseg3e16.c new file mode 100644 index 000000000..b3f3213fa --- /dev/null +++ b/auto-generated/bfloat16/overloaded-api-testing/vlsseg3e16.c @@ -0,0 +1,22 @@ +#include +#include + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_m(vbool64_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16(vm, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_m(vbool32_t vm, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return 
+  return __riscv_vlsseg3e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlsseg4e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlsseg4e16.c
new file mode 100644
index 000000000..b24623f0a
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlsseg4e16.c
@@ -0,0 +1,22 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_m(vbool64_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_m(vbool32_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlsseg5e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlsseg5e16.c
new file mode 100644
index 000000000..98e718b17
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlsseg5e16.c
@@ -0,0 +1,17 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_m(vbool64_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_m(vbool32_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlsseg6e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlsseg6e16.c
new file mode 100644
index 000000000..9b6a0f74a
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlsseg6e16.c
@@ -0,0 +1,17 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_m(vbool64_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_m(vbool32_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlsseg7e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlsseg7e16.c
new file mode 100644
index 000000000..4c25ff34d
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlsseg7e16.c
@@ -0,0 +1,17 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_m(vbool64_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_m(vbool32_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlsseg8e16.c b/auto-generated/bfloat16/overloaded-api-testing/vlsseg8e16.c
new file mode 100644
index 000000000..dde6175ca
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vlsseg8e16.c
@@ -0,0 +1,17 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_m(vbool64_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_m(vbool32_t vm, const __bf16 *rs1,
+                                               ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1,
+                                             ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vluxei16.c b/auto-generated/bfloat16/overloaded-api-testing/vluxei16.c
new file mode 100644
index 000000000..934f2d147
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vluxei16.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vluxei16_v_bf16mf4(const __bf16 *rs1, vuint16mf4_t rs2,
+                                       size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vluxei16_v_bf16mf2(const __bf16 *rs1, vuint16mf2_t rs2,
+                                       size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vluxei16_v_bf16m1(const __bf16 *rs1, vuint16m1_t rs2,
+                                     size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vluxei16_v_bf16m2(const __bf16 *rs1, vuint16m2_t rs2,
+                                     size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vluxei16_v_bf16m4(const __bf16 *rs1, vuint16m4_t rs2,
+                                     size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vluxei16_v_bf16m8(const __bf16 *rs1, vuint16m8_t rs2,
+                                     size_t vl) {
+  return __riscv_vluxei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vluxei16_v_bf16mf4_m(vbool64_t vm, const __bf16 *rs1,
+                                         vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vluxei16_v_bf16mf2_m(vbool32_t vm, const __bf16 *rs1,
+                                         vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vluxei16_v_bf16m1_m(vbool16_t vm, const __bf16 *rs1,
+                                       vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vluxei16_v_bf16m2_m(vbool8_t vm, const __bf16 *rs1,
+                                       vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vluxei16_v_bf16m4_m(vbool4_t vm, const __bf16 *rs1,
+                                       vuint16m4_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vluxei16_v_bf16m8_m(vbool2_t vm, const __bf16 *rs1,
+                                       vuint16m8_t rs2, size_t vl) {
+  return __riscv_vluxei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vluxseg2ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vluxseg2ei16.c
new file mode 100644
index 000000000..73f98c757
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vluxseg2ei16.c
@@ -0,0 +1,54 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2(const __bf16 *rs1, vuint16m4_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg2ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_m(vbool4_t vm, const __bf16 *rs1,
+                                               vuint16m4_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vluxseg3ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vluxseg3ei16.c
new file mode 100644
index 000000000..a63c93a80
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vluxseg3ei16.c
@@ -0,0 +1,44 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg3ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vluxseg4ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vluxseg4ei16.c
new file mode 100644
index 000000000..77ad1b7a3
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vluxseg4ei16.c
@@ -0,0 +1,44 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4(const __bf16 *rs1, vuint16m2_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg4ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_m(vbool8_t vm, const __bf16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vluxseg5ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vluxseg5ei16.c
new file mode 100644
index 000000000..e125c0b5a
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vluxseg5ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg5ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vluxseg6ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vluxseg6ei16.c
new file mode 100644
index 000000000..570414da9
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vluxseg6ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg6ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vluxseg7ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vluxseg7ei16.c
new file mode 100644
index 000000000..ecf6bb4ee
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vluxseg7ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg7ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vluxseg8ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vluxseg8ei16.c
new file mode 100644
index 000000000..bf428cc23
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vluxseg8ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8(const __bf16 *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8(const __bf16 *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8(const __bf16 *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
+  return __riscv_vluxseg8ei16(rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_m(vbool64_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_m(vbool32_t vm,
+                                                 const __bf16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16(vm, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_m(vbool16_t vm, const __bf16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16(vm, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vreinterpret.c b/auto-generated/bfloat16/overloaded-api-testing/vreinterpret.c
new file mode 100644
index 000000000..61f031c7d
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vreinterpret.c
@@ -0,0 +1,98 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vreinterpret_v_i16mf4_bf16mf4(vint16mf4_t src) {
+  return __riscv_vreinterpret_bf16mf4(src);
+}
+
+vbfloat16mf2_t test_vreinterpret_v_i16mf2_bf16mf2(vint16mf2_t src) {
+  return __riscv_vreinterpret_bf16mf2(src);
+}
+
+vbfloat16m1_t test_vreinterpret_v_i16m1_bf16m1(vint16m1_t src) {
+  return __riscv_vreinterpret_bf16m1(src);
+}
+
+vbfloat16m2_t test_vreinterpret_v_i16m2_bf16m2(vint16m2_t src) {
+  return __riscv_vreinterpret_bf16m2(src);
+}
+
+vbfloat16m4_t test_vreinterpret_v_i16m4_bf16m4(vint16m4_t src) {
+  return __riscv_vreinterpret_bf16m4(src);
+}
+
+vbfloat16m8_t test_vreinterpret_v_i16m8_bf16m8(vint16m8_t src) {
+  return __riscv_vreinterpret_bf16m8(src);
+}
+
+vbfloat16mf4_t test_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src) {
+  return __riscv_vreinterpret_bf16mf4(src);
+}
+
+vbfloat16mf2_t test_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src) {
+  return __riscv_vreinterpret_bf16mf2(src);
+}
+
+vbfloat16m1_t test_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src) {
+  return __riscv_vreinterpret_bf16m1(src);
+}
+
+vbfloat16m2_t test_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src) {
+  return __riscv_vreinterpret_bf16m2(src);
+}
+
+vbfloat16m4_t test_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src) {
+  return __riscv_vreinterpret_bf16m4(src);
+}
+
+vbfloat16m8_t test_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src) {
+  return __riscv_vreinterpret_bf16m8(src);
+}
+
+vint16mf4_t test_vreinterpret_v_bf16mf4_i16mf4(vbfloat16mf4_t src) {
+  return __riscv_vreinterpret_i16mf4(src);
+}
+
+vint16mf2_t test_vreinterpret_v_bf16mf2_i16mf2(vbfloat16mf2_t src) {
+  return __riscv_vreinterpret_i16mf2(src);
+}
+
+vint16m1_t test_vreinterpret_v_bf16m1_i16m1(vbfloat16m1_t src) {
+  return __riscv_vreinterpret_i16m1(src);
+}
+
+vint16m2_t test_vreinterpret_v_bf16m2_i16m2(vbfloat16m2_t src) {
+  return __riscv_vreinterpret_i16m2(src);
+}
+
+vint16m4_t test_vreinterpret_v_bf16m4_i16m4(vbfloat16m4_t src) {
+  return __riscv_vreinterpret_i16m4(src);
+}
+
+vint16m8_t test_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src) {
+  return __riscv_vreinterpret_i16m8(src);
+}
+
+vuint16mf4_t test_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src) {
+  return __riscv_vreinterpret_ui16mf4(src);
+}
+
+vuint16mf2_t test_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src) {
+  return __riscv_vreinterpret_ui16mf2(src);
+}
+
+vuint16m1_t test_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src) {
+  return __riscv_vreinterpret_ui16m1(src);
+}
+
+vuint16m2_t test_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src) {
+  return __riscv_vreinterpret_ui16m2(src);
+}
+
+vuint16m4_t test_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src) {
+  return __riscv_vreinterpret_ui16m4(src);
+}
+
+vuint16m8_t test_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src) {
+  return __riscv_vreinterpret_ui16m8(src);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vse16.c b/auto-generated/bfloat16/overloaded-api-testing/vse16.c
new file mode 100644
index 000000000..923f74fd4
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vse16.c
@@ -0,0 +1,56 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vse16_v_bf16mf4(__bf16 *rs1, vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16mf2(__bf16 *rs1, vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m1(__bf16 *rs1, vbfloat16m1_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m2(__bf16 *rs1, vbfloat16m2_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m4(__bf16 *rs1, vbfloat16m4_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m8(__bf16 *rs1, vbfloat16m8_t vs3, size_t vl) {
+  return __riscv_vse16(rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vbfloat16mf4_t vs3,
+                            size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vbfloat16mf2_t vs3,
+                            size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1_t vs3,
+                           size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2_t vs3,
+                           size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vbfloat16m4_t vs3,
+                           size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
+
+void test_vse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vbfloat16m8_t vs3,
+                           size_t vl) {
+  return __riscv_vse16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vset.c b/auto-generated/bfloat16/overloaded-api-testing/vset.c
new file mode 100644
index 000000000..93fe47cd1
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vset.c
@@ -0,0 +1,171 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16m2_t test_vset_v_bf16m1_bf16m2(vbfloat16m2_t dest, size_t index,
+                                        vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m4_t test_vset_v_bf16m1_bf16m4(vbfloat16m4_t dest, size_t index,
+                                        vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m4_t test_vset_v_bf16m2_bf16m4(vbfloat16m4_t dest, size_t index,
+                                        vbfloat16m2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m8_t test_vset_v_bf16m1_bf16m8(vbfloat16m8_t dest, size_t index,
+                                        vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m8_t test_vset_v_bf16m2_bf16m8(vbfloat16m8_t dest, size_t index,
+                                        vbfloat16m2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m8_t test_vset_v_bf16m4_bf16m8(vbfloat16m8_t dest, size_t index,
+                                        vbfloat16m4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x2_t test_vset_v_bf16mf4_bf16mf4x2(vbfloat16mf4x2_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x3_t test_vset_v_bf16mf4_bf16mf4x3(vbfloat16mf4x3_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x4_t test_vset_v_bf16mf4_bf16mf4x4(vbfloat16mf4x4_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x5_t test_vset_v_bf16mf4_bf16mf4x5(vbfloat16mf4x5_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x6_t test_vset_v_bf16mf4_bf16mf4x6(vbfloat16mf4x6_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x7_t test_vset_v_bf16mf4_bf16mf4x7(vbfloat16mf4x7_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf4x8_t test_vset_v_bf16mf4_bf16mf4x8(vbfloat16mf4x8_t dest,
+                                               size_t index,
+                                               vbfloat16mf4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x2_t test_vset_v_bf16mf2_bf16mf2x2(vbfloat16mf2x2_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x3_t test_vset_v_bf16mf2_bf16mf2x3(vbfloat16mf2x3_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x4_t test_vset_v_bf16mf2_bf16mf2x4(vbfloat16mf2x4_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x5_t test_vset_v_bf16mf2_bf16mf2x5(vbfloat16mf2x5_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x6_t test_vset_v_bf16mf2_bf16mf2x6(vbfloat16mf2x6_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x7_t test_vset_v_bf16mf2_bf16mf2x7(vbfloat16mf2x7_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16mf2x8_t test_vset_v_bf16mf2_bf16mf2x8(vbfloat16mf2x8_t dest,
+                                               size_t index,
+                                               vbfloat16mf2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x2_t test_vset_v_bf16m1_bf16m1x2(vbfloat16m1x2_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x3_t test_vset_v_bf16m1_bf16m1x3(vbfloat16m1x3_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x4_t test_vset_v_bf16m1_bf16m1x4(vbfloat16m1x4_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x5_t test_vset_v_bf16m1_bf16m1x5(vbfloat16m1x5_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x6_t test_vset_v_bf16m1_bf16m1x6(vbfloat16m1x6_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x7_t test_vset_v_bf16m1_bf16m1x7(vbfloat16m1x7_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m1x8_t test_vset_v_bf16m1_bf16m1x8(vbfloat16m1x8_t dest, size_t index,
+                                            vbfloat16m1_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m2x2_t test_vset_v_bf16m2_bf16m2x2(vbfloat16m2x2_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m2x3_t test_vset_v_bf16m2_bf16m2x3(vbfloat16m2x3_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m2x4_t test_vset_v_bf16m2_bf16m2x4(vbfloat16m2x4_t dest, size_t index,
+                                            vbfloat16m2_t value) {
+  return __riscv_vset(dest, 0, value);
+}
+
+vbfloat16m4x2_t test_vset_v_bf16m4_bf16m4x2(vbfloat16m4x2_t dest, size_t index,
+                                            vbfloat16m4_t value) {
+  return __riscv_vset(dest, 0, value);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsoxei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsoxei16.c
new file mode 100644
index 000000000..c642d0d2c
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsoxei16.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsoxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3,
+                             size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3,
+                             size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3,
+                            size_t vl) {
+  return __riscv_vsoxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2,
+                               vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2,
+                               vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2,
+                              vbfloat16m1_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2,
+                              vbfloat16m2_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2,
+                              vbfloat16m4_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsoxei16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2,
+                              vbfloat16m8_t vs3, size_t vl) {
+  return __riscv_vsoxei16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsoxseg2ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg2ei16.c
new file mode 100644
index 000000000..04116b6bc
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg2ei16.c
@@ -0,0 +1,54 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsoxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2,
+                                  vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x2_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x2_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2,
+                                    vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsoxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsoxseg3ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg3ei16.c
new file mode 100644
index 000000000..5b573578f
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg3ei16.c
@@ -0,0 +1,44 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsoxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x3_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x3_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vsoxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsoxseg4ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg4ei16.c
new file mode 100644
index 000000000..f20ddb725
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg4ei16.c
@@ -0,0 +1,44 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsoxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x4_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x4_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vsoxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsoxseg5ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg5ei16.c
new file mode 100644
index 000000000..b3bf5bf37
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg5ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsoxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x5_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg5ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x5_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg5ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vsoxseg5ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsoxseg6ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg6ei16.c
new file mode 100644
index 000000000..271ae1083
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg6ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsoxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x6_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg6ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x6_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg6ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vsoxseg6ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsoxseg7ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg7ei16.c
new file mode 100644
index 000000000..730c15d38
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg7ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsoxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x7_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg7ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x7_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg7ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vsoxseg7ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsoxseg8ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg8ei16.c
new file mode 100644
index 000000000..51bb463d6
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsoxseg8ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsoxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x8_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x8_t vs3,
+                                     size_t vl) {
+  return __riscv_vsoxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsoxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsoxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsse16.c b/auto-generated/bfloat16/overloaded-api-testing/vsse16.c
new file mode 100644
index 000000000..e44dc2415
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsse16.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsse16_v_bf16mf4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf4_t vs3,
+                           size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16mf2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16mf2_t vs3,
+                           size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m1(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m8(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m8_t vs3,
+                          size_t vl) {
+  return __riscv_vsse16(rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                             vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                             vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m1_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m2_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m4_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsse16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                            vbfloat16m8_t vs3, size_t vl) {
+  return __riscv_vsse16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsseg2e16.c b/auto-generated/bfloat16/overloaded-api-testing/vsseg2e16.c
new file mode 100644
index 000000000..ce09da7a2
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsseg2e16.c
@@ -0,0 +1,47 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsseg2e16_v_bf16mf4x2(__bf16 *rs1, vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16mf2x2(__bf16 *rs1, vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m1x2(__bf16 *rs1, vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m2x2(__bf16 *rs1, vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m4x2(__bf16 *rs1, vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vsseg2e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x2_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg2e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x2_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg2e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vbfloat16m4x2_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg2e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsseg3e16.c b/auto-generated/bfloat16/overloaded-api-testing/vsseg3e16.c
new file mode 100644
index 000000000..066b28cf2
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsseg3e16.c
@@ -0,0 +1,38 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsseg3e16_v_bf16mf4x3(__bf16 *rs1, vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16mf2x3(__bf16 *rs1, vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m1x3(__bf16 *rs1, vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m2x3(__bf16 *rs1, vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vsseg3e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x3_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg3e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x3_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg3e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsseg4e16.c b/auto-generated/bfloat16/overloaded-api-testing/vsseg4e16.c
new file mode 100644
index 000000000..c0ab986d3
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsseg4e16.c
@@ -0,0 +1,38 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsseg4e16_v_bf16mf4x4(__bf16 *rs1, vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16mf2x4(__bf16 *rs1, vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m1x4(__bf16 *rs1, vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m2x4(__bf16 *rs1, vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vsseg4e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x4_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg4e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vbfloat16m2x4_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg4e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsseg5e16.c b/auto-generated/bfloat16/overloaded-api-testing/vsseg5e16.c
new file mode 100644
index 000000000..2c04b9b7d
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsseg5e16.c
@@ -0,0 +1,29 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsseg5e16_v_bf16mf4x5(__bf16 *rs1, vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16(rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16mf2x5(__bf16 *rs1, vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16(rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16m1x5(__bf16 *rs1, vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16(rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vsseg5e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x5_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg5e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsseg6e16.c b/auto-generated/bfloat16/overloaded-api-testing/vsseg6e16.c
new file mode 100644
index 000000000..fe537164b
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsseg6e16.c
@@ -0,0 +1,29 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsseg6e16_v_bf16mf4x6(__bf16 *rs1, vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16(rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16mf2x6(__bf16 *rs1, vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16(rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16m1x6(__bf16 *rs1, vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16(rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vsseg6e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x6_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg6e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsseg7e16.c b/auto-generated/bfloat16/overloaded-api-testing/vsseg7e16.c
new file mode 100644
index 000000000..36f79a388
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsseg7e16.c
@@ -0,0 +1,29 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsseg7e16_v_bf16mf4x7(__bf16 *rs1, vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16(rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16mf2x7(__bf16 *rs1, vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16(rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16m1x7(__bf16 *rs1, vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16(rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vsseg7e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x7_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg7e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsseg8e16.c b/auto-generated/bfloat16/overloaded-api-testing/vsseg8e16.c
new file mode 100644
index 000000000..c6a631a0e
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsseg8e16.c
@@ -0,0 +1,29 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsseg8e16_v_bf16mf4x8(__bf16 *rs1, vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16(rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16mf2x8(__bf16 *rs1, vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16(rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16m1x8(__bf16 *rs1, vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16(rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1,
+                                  vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1,
+                                  vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vsseg8e16(vm, rs1, vs3, vl);
+}
+
+void test_vsseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vbfloat16m1x8_t vs3,
+                                 size_t vl) {
+  return __riscv_vsseg8e16(vm, rs1, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vssseg2e16.c b/auto-generated/bfloat16/overloaded-api-testing/vssseg2e16.c
new file mode 100644
index 000000000..28d76f0ae
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vssseg2e16.c
@@ -0,0 +1,52 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vssseg2e16_v_bf16mf4x2(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16mf2x2(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m1x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x2_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg2e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m2x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x2_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg2e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m4x2(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m4x2_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg2e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg2e16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vssseg2e16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vssseg3e16.c b/auto-generated/bfloat16/overloaded-api-testing/vssseg3e16.c
new file mode 100644
index 000000000..445145245
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vssseg3e16.c
@@ -0,0 +1,42 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vssseg3e16_v_bf16mf4x3(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16mf2x3(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16m1x3(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x3_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg3e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16m2x3(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x3_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg3e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg3e16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vssseg3e16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vssseg4e16.c b/auto-generated/bfloat16/overloaded-api-testing/vssseg4e16.c
new file mode 100644
index 000000000..98d2b433a
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vssseg4e16.c
@@ -0,0 +1,42 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vssseg4e16_v_bf16mf4x4(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16mf2x4(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16m1x4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x4_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg4e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16m2x4(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m2x4_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg4e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg4e16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vssseg4e16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vssseg5e16.c b/auto-generated/bfloat16/overloaded-api-testing/vssseg5e16.c
new file mode 100644
index 000000000..d6f27bf5e
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vssseg5e16.c
@@ -0,0 +1,32 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vssseg5e16_v_bf16mf4x5(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vssseg5e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg5e16_v_bf16mf2x5(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vssseg5e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg5e16_v_bf16m1x5(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x5_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg5e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg5e16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vssseg5e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg5e16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vssseg5e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg5e16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vssseg5e16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vssseg6e16.c b/auto-generated/bfloat16/overloaded-api-testing/vssseg6e16.c
new file mode 100644
index 000000000..ad952f272
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vssseg6e16.c
@@ -0,0 +1,32 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vssseg6e16_v_bf16mf4x6(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vssseg6e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg6e16_v_bf16mf2x6(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vssseg6e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg6e16_v_bf16m1x6(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x6_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg6e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg6e16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vssseg6e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg6e16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vssseg6e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg6e16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vssseg6e16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vssseg7e16.c b/auto-generated/bfloat16/overloaded-api-testing/vssseg7e16.c
new file mode 100644
index 000000000..b84d2b9db
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vssseg7e16.c
@@ -0,0 +1,32 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vssseg7e16_v_bf16mf4x7(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vssseg7e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg7e16_v_bf16mf2x7(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vssseg7e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg7e16_v_bf16m1x7(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x7_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg7e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg7e16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vssseg7e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg7e16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vssseg7e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg7e16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vssseg7e16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vssseg8e16.c b/auto-generated/bfloat16/overloaded-api-testing/vssseg8e16.c
new file mode 100644
index 000000000..195be7c8e
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vssseg8e16.c
@@ -0,0 +1,32 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vssseg8e16_v_bf16mf4x8(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vssseg8e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg8e16_v_bf16mf2x8(__bf16 *rs1, ptrdiff_t rs2,
+                                 vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vssseg8e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg8e16_v_bf16m1x8(__bf16 *rs1, ptrdiff_t rs2, vbfloat16m1x8_t vs3,
+                                size_t vl) {
+  return __riscv_vssseg8e16(rs1, rs2, vs3, vl);
+}
+
+void test_vssseg8e16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vssseg8e16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vssseg8e16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                   vbfloat16mf2x8_t vs3, size_t vl) {
+
+void test_vssseg8e16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, ptrdiff_t rs2,
+                                  vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vssseg8e16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsuxei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsuxei16.c
new file mode 100644
index 000000000..4236a87c0
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsuxei16.c
@@ -0,0 +1,62 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsuxei16_v_bf16mf4(__bf16 *rs1, vuint16mf4_t rs2, vbfloat16mf4_t vs3,
+                             size_t vl) {
+  return __riscv_vsuxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16mf2(__bf16 *rs1, vuint16mf2_t rs2, vbfloat16mf2_t vs3,
+                             size_t vl) {
+  return __riscv_vsuxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m1(__bf16 *rs1, vuint16m1_t rs2, vbfloat16m1_t vs3,
+                            size_t vl) {
+  return __riscv_vsuxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m2(__bf16 *rs1, vuint16m2_t rs2, vbfloat16m2_t vs3,
+                            size_t vl) {
+  return __riscv_vsuxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m4(__bf16 *rs1, vuint16m4_t rs2, vbfloat16m4_t vs3,
+                            size_t vl) {
+  return __riscv_vsuxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m8(__bf16 *rs1, vuint16m8_t rs2, vbfloat16m8_t vs3,
+                            size_t vl) {
+  return __riscv_vsuxei16(rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16mf4_m(vbool64_t vm, __bf16 *rs1, vuint16mf4_t rs2,
+                               vbfloat16mf4_t vs3, size_t vl) {
+  return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16mf2_m(vbool32_t vm, __bf16 *rs1, vuint16mf2_t rs2,
+                               vbfloat16mf2_t vs3, size_t vl) {
+  return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m1_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t rs2,
+                              vbfloat16m1_t vs3, size_t vl) {
+  return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t rs2,
+                              vbfloat16m2_t vs3, size_t vl) {
+  return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m4_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t rs2,
+                              vbfloat16m4_t vs3, size_t vl) {
+  return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl);
+}
+
+void test_vsuxei16_v_bf16m8_m(vbool2_t vm, __bf16 *rs1, vuint16m8_t rs2,
+                              vbfloat16m8_t vs3, size_t vl) {
+  return __riscv_vsuxei16(vm, rs1, rs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsuxseg2ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg2ei16.c
new file mode 100644
index 000000000..df05ac74e
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg2ei16.c
@@ -0,0 +1,54 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsuxseg2ei16_v_bf16mf4x2(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16mf2x2(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m1x2(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m2x2(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m4x2(__bf16 *rs1, vuint16m4_t vs2,
+                                  vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16mf4x2_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x2_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16mf2x2_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x2_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m1x2_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m2x2_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2,
+                                    vbfloat16m4x2_t vs3, size_t vl) {
+  return __riscv_vsuxseg2ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsuxseg3ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg3ei16.c
new file mode 100644
index 000000000..6ca09eb44
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg3ei16.c
@@ -0,0 +1,44 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsuxseg3ei16_v_bf16mf4x3(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x3_t vs3, size_t vl) {
+  return __riscv_vsuxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg3ei16_v_bf16mf2x3(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x3_t vs3, size_t vl) {
+  return __riscv_vsuxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg3ei16_v_bf16m1x3(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsuxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg3ei16_v_bf16m2x3(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vsuxseg3ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg3ei16_v_bf16mf4x3_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x3_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg3ei16_v_bf16mf2x3_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x3_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg3ei16_v_bf16m1x3_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x3_t vs3, size_t vl) {
+  return __riscv_vsuxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg3ei16_v_bf16m2x3_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x3_t vs3, size_t vl) {
+  return __riscv_vsuxseg3ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsuxseg4ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg4ei16.c
new file mode 100644
index 000000000..15d0841c3
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg4ei16.c
@@ -0,0 +1,44 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsuxseg4ei16_v_bf16mf4x4(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x4_t vs3, size_t vl) {
+  return __riscv_vsuxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg4ei16_v_bf16mf2x4(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x4_t vs3, size_t vl) {
+  return __riscv_vsuxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg4ei16_v_bf16m1x4(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vsuxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg4ei16_v_bf16m2x4(__bf16 *rs1, vuint16m2_t vs2,
+                                  vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vsuxseg4ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg4ei16_v_bf16mf4x4_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x4_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg4ei16_v_bf16mf2x4_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x4_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg4ei16_v_bf16m1x4_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x4_t vs3, size_t vl) {
+  return __riscv_vsuxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg4ei16_v_bf16m2x4_m(vbool8_t vm, __bf16 *rs1, vuint16m2_t vs2,
+                                    vbfloat16m2x4_t vs3, size_t vl) {
+  return __riscv_vsuxseg4ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsuxseg5ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg5ei16.c
new file mode 100644
index 000000000..7467e5cd1
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg5ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsuxseg5ei16_v_bf16mf4x5(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x5_t vs3, size_t vl) {
+  return __riscv_vsuxseg5ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg5ei16_v_bf16mf2x5(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x5_t vs3, size_t vl) {
+  return __riscv_vsuxseg5ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg5ei16_v_bf16m1x5(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vsuxseg5ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg5ei16_v_bf16mf4x5_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x5_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg5ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg5ei16_v_bf16mf2x5_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x5_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg5ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg5ei16_v_bf16m1x5_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x5_t vs3, size_t vl) {
+  return __riscv_vsuxseg5ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsuxseg6ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg6ei16.c
new file mode 100644
index 000000000..437c9778f
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg6ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsuxseg6ei16_v_bf16mf4x6(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x6_t vs3, size_t vl) {
+  return __riscv_vsuxseg6ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg6ei16_v_bf16mf2x6(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x6_t vs3, size_t vl) {
+  return __riscv_vsuxseg6ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg6ei16_v_bf16m1x6(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vsuxseg6ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg6ei16_v_bf16mf4x6_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x6_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg6ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg6ei16_v_bf16mf2x6_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x6_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg6ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg6ei16_v_bf16m1x6_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x6_t vs3, size_t vl) {
+  return __riscv_vsuxseg6ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsuxseg7ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg7ei16.c
new file mode 100644
index 000000000..7e86d2539
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg7ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsuxseg7ei16_v_bf16mf4x7(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x7_t vs3, size_t vl) {
+  return __riscv_vsuxseg7ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg7ei16_v_bf16mf2x7(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x7_t vs3, size_t vl) {
+  return __riscv_vsuxseg7ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg7ei16_v_bf16m1x7(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vsuxseg7ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg7ei16_v_bf16mf4x7_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x7_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg7ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg7ei16_v_bf16mf2x7_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x7_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg7ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg7ei16_v_bf16m1x7_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x7_t vs3, size_t vl) {
+  return __riscv_vsuxseg7ei16(vm, rs1, vs2, vs3, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vsuxseg8ei16.c b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg8ei16.c
new file mode 100644
index 000000000..eaaae3645
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vsuxseg8ei16.c
@@ -0,0 +1,34 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+void test_vsuxseg8ei16_v_bf16mf4x8(__bf16 *rs1, vuint16mf4_t vs2,
+                                   vbfloat16mf4x8_t vs3, size_t vl) {
+  return __riscv_vsuxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg8ei16_v_bf16mf2x8(__bf16 *rs1, vuint16mf2_t vs2,
+                                   vbfloat16mf2x8_t vs3, size_t vl) {
+  return __riscv_vsuxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg8ei16_v_bf16m1x8(__bf16 *rs1, vuint16m1_t vs2,
+                                  vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsuxseg8ei16(rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg8ei16_v_bf16mf4x8_m(vbool64_t vm, __bf16 *rs1,
+                                     vuint16mf4_t vs2, vbfloat16mf4x8_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg8ei16_v_bf16mf2x8_m(vbool32_t vm, __bf16 *rs1,
+                                     vuint16mf2_t vs2, vbfloat16mf2x8_t vs3,
+                                     size_t vl) {
+  return __riscv_vsuxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
+
+void test_vsuxseg8ei16_v_bf16m1x8_m(vbool16_t vm, __bf16 *rs1, vuint16m1_t vs2,
+                                    vbfloat16m1x8_t vs3, size_t vl) {
+  return __riscv_vsuxseg8ei16(vm, rs1, vs2, vs3, vl);
+}
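The overloaded stores above resolve purely on operand types: prepending a vbool mask argument selects the masked form of the same name. A minimal usage sketch under that convention (assuming a toolchain with Zvfbfmin enabled; the helper name is illustrative, not part of the generated tests):

#include <riscv_vector.h>
#include <stddef.h>

// Scatter one bf16 vector twice through the overloaded entry point; the
// vbool16_t argument in the second call selects the masked overload.
static void scatter_bf16(vbool16_t vm, __bf16 *dst, vuint16m1_t byte_offsets,
                         vbfloat16m1_t v, size_t vl) {
  __riscv_vsuxei16(dst, byte_offsets, v, vl);     // unmasked
  __riscv_vsuxei16(vm, dst, byte_offsets, v, vl); // masked
}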
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vle16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vle16.c
new file mode 100644
index 000000000..d147cdfec
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vle16.c
@@ -0,0 +1,122 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vle16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1,
+                                       size_t vl) {
+  return __riscv_vle16_v_bf16mf4_tu(vd, rs1, vl);
+}
+
+vbfloat16mf2_t test_vle16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1,
+                                       size_t vl) {
+  return __riscv_vle16_v_bf16mf2_tu(vd, rs1, vl);
+}
+
+vbfloat16m1_t test_vle16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1,
+                                     size_t vl) {
+  return __riscv_vle16_v_bf16m1_tu(vd, rs1, vl);
+}
+
+vbfloat16m2_t test_vle16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1,
+                                     size_t vl) {
+  return __riscv_vle16_v_bf16m2_tu(vd, rs1, vl);
+}
+
+vbfloat16m4_t test_vle16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1,
+                                     size_t vl) {
+  return __riscv_vle16_v_bf16m4_tu(vd, rs1, vl);
+}
+
+vbfloat16m8_t test_vle16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1,
+                                     size_t vl) {
+  return __riscv_vle16_v_bf16m8_tu(vd, rs1, vl);
+}
+
+vbfloat16mf4_t test_vle16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd,
+                                        const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16mf4_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2_t test_vle16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd,
+                                        const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16mf2_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m1_t test_vle16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd,
+                                      const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m1_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m2_t test_vle16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd,
+                                      const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m2_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m4_t test_vle16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd,
+                                      const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m4_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m8_t test_vle16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd,
+                                      const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m8_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4_t test_vle16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd,
+                                         const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16mf4_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2_t test_vle16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd,
+                                         const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16mf2_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1_t test_vle16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd,
+                                       const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m1_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m2_t test_vle16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd,
+                                       const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m2_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m4_t test_vle16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd,
+                                       const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m4_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m8_t test_vle16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd,
+                                       const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m8_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4_t test_vle16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd,
+                                       const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16mf4_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2_t test_vle16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd,
+                                       const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16mf2_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1_t test_vle16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd,
+                                     const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m1_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m2_t test_vle16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd,
+                                     const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m2_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m4_t test_vle16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd,
+                                     const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m4_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m8_t test_vle16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd,
+                                     const __bf16 *rs1, size_t vl) {
+  return __riscv_vle16_v_bf16m8_mu(vm, vd, rs1, vl);
+}
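The _tu/_tum/_tumu/_mu variants above thread a destination operand vd through the load so that tail elements (and, for the mask-undisturbed policies, inactive elements) have defined values taken from vd. A short stripmining sketch, assuming Zvfbfmin (the helper name is illustrative):

#include <riscv_vector.h>
#include <stddef.h>

// Tail-undisturbed load: elements [vl, VLMAX) keep their previous values
// from acc instead of becoming agnostic.
static vbfloat16m1_t load_keep_tail(vbfloat16m1_t acc, const __bf16 *p,
                                    size_t n) {
  size_t vl = __riscv_vsetvl_e16m1(n);
  return __riscv_vle16_v_bf16m1_tu(acc, p, vl);
}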
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vle16ff.c b/auto-generated/bfloat16/policy_funcs/api-testing/vle16ff.c
new file mode 100644
index 000000000..200105f94
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vle16ff.c
@@ -0,0 +1,140 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vle16ff_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1,
+                                         size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16mf4_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2_t test_vle16ff_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1,
+                                         size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16mf2_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1_t test_vle16ff_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1,
+                                       size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16m1_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2_t test_vle16ff_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1,
+                                       size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16m2_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m4_t test_vle16ff_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1,
+                                       size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16m4_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m8_t test_vle16ff_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1,
+                                       size_t *new_vl, size_t vl) {
+  return __riscv_vle16ff_v_bf16m8_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4_t test_vle16ff_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd,
+                                          const __bf16 *rs1, size_t *new_vl,
+                                          size_t vl) {
+  return __riscv_vle16ff_v_bf16mf4_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2_t test_vle16ff_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd,
+                                          const __bf16 *rs1, size_t *new_vl,
+                                          size_t vl) {
+  return __riscv_vle16ff_v_bf16mf2_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1_t test_vle16ff_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd,
+                                        const __bf16 *rs1, size_t *new_vl,
+                                        size_t vl) {
+  return __riscv_vle16ff_v_bf16m1_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2_t test_vle16ff_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd,
+                                        const __bf16 *rs1, size_t *new_vl,
+                                        size_t vl) {
+  return __riscv_vle16ff_v_bf16m2_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m4_t test_vle16ff_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd,
+                                        const __bf16 *rs1, size_t *new_vl,
+                                        size_t vl) {
+  return __riscv_vle16ff_v_bf16m4_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m8_t test_vle16ff_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd,
+                                        const __bf16 *rs1, size_t *new_vl,
+                                        size_t vl) {
+  return __riscv_vle16ff_v_bf16m8_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4_t test_vle16ff_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd,
+                                           const __bf16 *rs1, size_t *new_vl,
+                                           size_t vl) {
+  return __riscv_vle16ff_v_bf16mf4_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2_t test_vle16ff_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd,
+                                           const __bf16 *rs1, size_t *new_vl,
+                                           size_t vl) {
+  return __riscv_vle16ff_v_bf16mf2_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1_t test_vle16ff_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd,
+                                         const __bf16 *rs1, size_t *new_vl,
+                                         size_t vl) {
+  return __riscv_vle16ff_v_bf16m1_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2_t test_vle16ff_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd,
+                                         const __bf16 *rs1, size_t *new_vl,
+                                         size_t vl) {
+  return __riscv_vle16ff_v_bf16m2_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m4_t test_vle16ff_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd,
+                                         const __bf16 *rs1, size_t *new_vl,
+                                         size_t vl) {
+  return __riscv_vle16ff_v_bf16m4_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m8_t test_vle16ff_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd,
+                                         const __bf16 *rs1, size_t *new_vl,
+                                         size_t vl) {
+  return __riscv_vle16ff_v_bf16m8_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4_t test_vle16ff_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd,
+                                         const __bf16 *rs1, size_t *new_vl,
+                                         size_t vl) {
+  return __riscv_vle16ff_v_bf16mf4_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2_t test_vle16ff_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd,
+                                         const __bf16 *rs1, size_t *new_vl,
+                                         size_t vl) {
+  return __riscv_vle16ff_v_bf16mf2_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1_t test_vle16ff_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd,
+                                       const __bf16 *rs1, size_t *new_vl,
+                                       size_t vl) {
+  return __riscv_vle16ff_v_bf16m1_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2_t test_vle16ff_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd,
+                                       const __bf16 *rs1, size_t *new_vl,
+                                       size_t vl) {
+  return __riscv_vle16ff_v_bf16m2_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m4_t test_vle16ff_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd,
+                                       const __bf16 *rs1, size_t *new_vl,
+                                       size_t vl) {
+  return __riscv_vle16ff_v_bf16m4_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m8_t test_vle16ff_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd,
+                                       const __bf16 *rs1, size_t *new_vl,
+                                       size_t vl) {
+  return __riscv_vle16ff_v_bf16m8_mu(vm, vd, rs1, new_vl, vl);
+}
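vle16ff is the fault-only-first load: a fault on any element other than element 0 truncates the load instead of trapping, and the number of elements actually loaded is written through new_vl. A hedged sketch of the intended use (illustrative names, assuming Zvfbfmin):

#include <riscv_vector.h>
#include <stddef.h>

// Loads up to n elements, stopping early at the first faulting element;
// *done receives the count actually loaded, so the caller can resume at
// p + *done.
static vbfloat16m1_t load_until_fault(vbfloat16m1_t vd, const __bf16 *p,
                                      size_t n, size_t *done) {
  size_t vl = __riscv_vsetvl_e16m1(n);
  return __riscv_vle16ff_v_bf16m1_tu(vd, p, done, vl);
}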
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vloxei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vloxei16.c
new file mode 100644
index 000000000..75ab2d987
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vloxei16.c
@@ -0,0 +1,140 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vloxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1,
+                                          vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16mf4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vloxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1,
+                                          vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16mf2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vloxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1,
+                                        vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16m1_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vloxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1,
+                                        vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16m2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vloxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1,
+                                        vuint16m4_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16m4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vloxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1,
+                                        vuint16m8_t rs2, size_t vl) {
+  return __riscv_vloxei16_v_bf16m8_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vloxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd,
+                                           const __bf16 *rs1, vuint16mf4_t rs2,
+                                           size_t vl) {
+  return __riscv_vloxei16_v_bf16mf4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vloxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd,
+                                           const __bf16 *rs1, vuint16mf2_t rs2,
+                                           size_t vl) {
+  return __riscv_vloxei16_v_bf16mf2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vloxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd,
+                                         const __bf16 *rs1, vuint16m1_t rs2,
+                                         size_t vl) {
+  return __riscv_vloxei16_v_bf16m1_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vloxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd,
+                                         const __bf16 *rs1, vuint16m2_t rs2,
+                                         size_t vl) {
+  return __riscv_vloxei16_v_bf16m2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vloxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd,
+                                         const __bf16 *rs1, vuint16m4_t rs2,
+                                         size_t vl) {
+  return __riscv_vloxei16_v_bf16m4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vloxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd,
+                                         const __bf16 *rs1, vuint16m8_t rs2,
+                                         size_t vl) {
+  return __riscv_vloxei16_v_bf16m8_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vloxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd,
+                                            const __bf16 *rs1, vuint16mf4_t rs2,
+                                            size_t vl) {
+  return __riscv_vloxei16_v_bf16mf4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vloxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd,
+                                            const __bf16 *rs1, vuint16mf2_t rs2,
+                                            size_t vl) {
+  return __riscv_vloxei16_v_bf16mf2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vloxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd,
+                                          const __bf16 *rs1, vuint16m1_t rs2,
+                                          size_t vl) {
+  return __riscv_vloxei16_v_bf16m1_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vloxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd,
+                                          const __bf16 *rs1, vuint16m2_t rs2,
+                                          size_t vl) {
+  return __riscv_vloxei16_v_bf16m2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vloxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd,
+                                          const __bf16 *rs1, vuint16m4_t rs2,
+                                          size_t vl) {
+  return __riscv_vloxei16_v_bf16m4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vloxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd,
+                                          const __bf16 *rs1, vuint16m8_t rs2,
+                                          size_t vl) {
+  return __riscv_vloxei16_v_bf16m8_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vloxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd,
+                                          const __bf16 *rs1, vuint16mf4_t rs2,
+                                          size_t vl) {
+  return __riscv_vloxei16_v_bf16mf4_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vloxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd,
+                                          const __bf16 *rs1, vuint16mf2_t rs2,
+                                          size_t vl) {
+  return __riscv_vloxei16_v_bf16mf2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vloxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd,
+                                        const __bf16 *rs1, vuint16m1_t rs2,
+                                        size_t vl) {
+  return __riscv_vloxei16_v_bf16m1_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vloxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd,
+                                        const __bf16 *rs1, vuint16m2_t rs2,
+                                        size_t vl) {
+  return __riscv_vloxei16_v_bf16m2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vloxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd,
+                                        const __bf16 *rs1, vuint16m4_t rs2,
+                                        size_t vl) {
+  return __riscv_vloxei16_v_bf16m4_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vloxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd,
+                                        const __bf16 *rs1, vuint16m8_t rs2,
+                                        size_t vl) {
+  return __riscv_vloxei16_v_bf16m8_mu(vm, vd, rs1, rs2, vl);
+}
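The rs2 operand of these indexed loads holds unsigned byte offsets rather than element indices, and the ordered (vlox) form performs the element accesses in order. A small gather sketch, assuming Zvfbfmin (names illustrative):

#include <riscv_vector.h>
#include <stddef.h>

// Gather: element i is read from (const char *)base + byte_offsets[i], so
// an element index must be scaled by sizeof(__bf16) == 2 beforehand.
static vbfloat16m1_t gather_bf16(vbfloat16m1_t vd, const __bf16 *base,
                                 vuint16m1_t byte_offsets, size_t vl) {
  return __riscv_vloxei16_v_bf16m1_tu(vd, base, byte_offsets, vl);
}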
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg2ei16.c
new file mode 100644
index 000000000..0ed314d13
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg2ei16.c
@@ -0,0 +1,139 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf4x2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf2x2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m1x2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m2x2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m4x2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm,
+                                                   vbfloat16mf4x2_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf4x2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm,
+                                                   vbfloat16mf2x2_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf2x2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tum(vbool16_t vm,
+                                                 vbfloat16m1x2_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m1x2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tum(vbool8_t vm,
+                                                 vbfloat16m2x2_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m2x2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tum(vbool4_t vm,
+                                                 vbfloat16m4x2_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m4x2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x2_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf4x2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x2_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf2x2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm,
+                                                  vbfloat16m1x2_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m1x2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm,
+                                                  vbfloat16m2x2_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m2x2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm,
+                                                  vbfloat16m4x2_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m4x2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm,
+                                                  vbfloat16mf4x2_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf4x2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm,
+                                                  vbfloat16mf2x2_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16mf2x2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_mu(vbool16_t vm,
+                                                vbfloat16m1x2_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m1x2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m2x2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m4_t rs2, size_t vl) {
+  return __riscv_vloxseg2ei16_v_bf16m4x2_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg3ei16.c
new file mode 100644
index 000000000..7939b8fb1
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg3ei16.c
@@ -0,0 +1,113 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf4x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf2x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m1x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m2x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm,
+                                                   vbfloat16mf4x3_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf4x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm,
+                                                   vbfloat16mf2x3_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf2x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tum(vbool16_t vm,
+                                                 vbfloat16m1x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m1x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tum(vbool8_t vm,
+                                                 vbfloat16m2x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m2x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x3_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf4x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x3_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf2x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm,
+                                                  vbfloat16m1x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m1x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm,
+                                                  vbfloat16m2x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m2x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm,
+                                                  vbfloat16mf4x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf4x3_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm,
+                                                  vbfloat16mf2x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16mf2x3_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_mu(vbool16_t vm,
+                                                vbfloat16m1x3_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m1x3_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg3ei16_v_bf16m2x3_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg4ei16.c
new file mode 100644
index 000000000..d0b103679
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg4ei16.c
@@ -0,0 +1,113 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf4x4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf2x4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m1x4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m2x4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm,
+                                                   vbfloat16mf4x4_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf4x4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm,
+                                                   vbfloat16mf2x4_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf2x4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tum(vbool16_t vm,
+                                                 vbfloat16m1x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m1x4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tum(vbool8_t vm,
+                                                 vbfloat16m2x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m2x4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x4_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf4x4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x4_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf2x4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm,
+                                                  vbfloat16m1x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m1x4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm,
+                                                  vbfloat16m2x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m2x4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm,
+                                                  vbfloat16mf4x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf4x4_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm,
+                                                  vbfloat16mf2x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16mf2x4_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_mu(vbool16_t vm,
+                                                vbfloat16m1x4_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m1x4_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m2_t rs2, size_t vl) {
+  return __riscv_vloxseg4ei16_v_bf16m2x4_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg5ei16.c
new file mode 100644
index 000000000..3c915d4d9
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg5ei16.c
@@ -0,0 +1,87 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf4x5_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf2x5_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16m1x5_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm,
+                                                   vbfloat16mf4x5_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf4x5_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm,
+                                                   vbfloat16mf2x5_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf2x5_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tum(vbool16_t vm,
+                                                 vbfloat16m1x5_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16m1x5_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x5_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf4x5_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x5_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf2x5_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm,
+                                                  vbfloat16m1x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16m1x5_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm,
+                                                  vbfloat16mf4x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf4x5_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm,
+                                                  vbfloat16mf2x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16mf2x5_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_mu(vbool16_t vm,
+                                                vbfloat16m1x5_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg5ei16_v_bf16m1x5_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg6ei16.c
new file mode 100644
index 000000000..55ab43069
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg6ei16.c
@@ -0,0 +1,87 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf4x6_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf2x6_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16m1x6_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm,
+                                                   vbfloat16mf4x6_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf4x6_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm,
+                                                   vbfloat16mf2x6_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf2x6_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tum(vbool16_t vm,
+                                                 vbfloat16m1x6_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16m1x6_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x6_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf4x6_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x6_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf2x6_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm,
+                                                  vbfloat16m1x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16m1x6_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm,
+                                                  vbfloat16mf4x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf4x6_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm,
+                                                  vbfloat16mf2x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16mf2x6_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_mu(vbool16_t vm,
+                                                vbfloat16m1x6_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg6ei16_v_bf16m1x6_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg7ei16.c
new file mode 100644
index 000000000..c430c1e47
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg7ei16.c
@@ -0,0 +1,87 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf4x7_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf2x7_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16m1x7_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm,
+                                                   vbfloat16mf4x7_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf4x7_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm,
+                                                   vbfloat16mf2x7_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf2x7_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tum(vbool16_t vm,
+                                                 vbfloat16m1x7_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16m1x7_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x7_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf4x7_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x7_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf2x7_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm,
+                                                  vbfloat16m1x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16m1x7_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm,
+                                                  vbfloat16mf4x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf4x7_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm,
+                                                  vbfloat16mf2x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16mf2x7_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_mu(vbool16_t vm,
+                                                vbfloat16m1x7_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg7ei16_v_bf16m1x7_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg8ei16.c
new file mode 100644
index 000000000..564807d33
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vloxseg8ei16.c
@@ -0,0 +1,87 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf4x8_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf2x8_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16m1x8_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm,
+                                                   vbfloat16mf4x8_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf4x8_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm,
+                                                   vbfloat16mf2x8_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf2x8_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tum(vbool16_t vm,
+                                                 vbfloat16m1x8_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16m1x8_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x8_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf4x8_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x8_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf2x8_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm,
+                                                  vbfloat16m1x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16m1x8_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm,
+                                                  vbfloat16mf4x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf4x8_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm,
+                                                  vbfloat16mf2x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16mf2x8_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_mu(vbool16_t vm,
+                                                vbfloat16m1x8_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vloxseg8ei16_v_bf16m1x8_mu(vm, vd, rs1, rs2, vl);
+}
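The segment variants above return register tuples (vbfloat16m1x2_t and friends) with one field per segment member, so interleaved records are deinterleaved in a single operation. A sketch for two-field records, assuming Zvfbfmin (names illustrative):

#include <riscv_vector.h>
#include <stddef.h>

// For element i, field f of the result is read from
// (const char *)base + byte_offsets[i] + f * sizeof(__bf16).
static vbfloat16m1x2_t gather_pairs(vbfloat16m1x2_t vd, const __bf16 *base,
                                    vuint16m1_t byte_offsets, size_t vl) {
  return __riscv_vloxseg2ei16_v_bf16m1x2_tu(vd, base, byte_offsets, vl);
}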
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlse16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlse16.c
new file mode 100644
index 000000000..ece6f8cae
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlse16.c
@@ -0,0 +1,140 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vlse16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1,
+                                        ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16mf4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vlse16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1,
+                                        ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16mf2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vlse16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1,
+                                      ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16m1_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vlse16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1,
+                                      ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16m2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vlse16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1,
+                                      ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16m4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vlse16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1,
+                                      ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlse16_v_bf16m8_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vlse16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd,
+                                         const __bf16 *rs1, ptrdiff_t rs2,
+                                         size_t vl) {
+  return __riscv_vlse16_v_bf16mf4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vlse16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd,
+                                         const __bf16 *rs1, ptrdiff_t rs2,
+                                         size_t vl) {
+  return __riscv_vlse16_v_bf16mf2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vlse16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd,
+                                       const __bf16 *rs1, ptrdiff_t rs2,
+                                       size_t vl) {
+  return __riscv_vlse16_v_bf16m1_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vlse16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd,
+                                       const __bf16 *rs1, ptrdiff_t rs2,
+                                       size_t vl) {
+  return __riscv_vlse16_v_bf16m2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vlse16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd,
+                                       const __bf16 *rs1, ptrdiff_t rs2,
+                                       size_t vl) {
+  return __riscv_vlse16_v_bf16m4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vlse16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd,
+                                       const __bf16 *rs1, ptrdiff_t rs2,
+                                       size_t vl) {
+  return __riscv_vlse16_v_bf16m8_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vlse16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd,
+                                          const __bf16 *rs1, ptrdiff_t rs2,
+                                          size_t vl) {
+  return __riscv_vlse16_v_bf16mf4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vlse16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd,
+                                          const __bf16 *rs1, ptrdiff_t rs2,
+                                          size_t vl) {
+  return __riscv_vlse16_v_bf16mf2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vlse16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd,
+                                        const __bf16 *rs1, ptrdiff_t rs2,
+                                        size_t vl) {
+  return __riscv_vlse16_v_bf16m1_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vlse16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd,
+                                        const __bf16 *rs1, ptrdiff_t rs2,
+                                        size_t vl) {
+  return __riscv_vlse16_v_bf16m2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vlse16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd,
+                                        const __bf16 *rs1, ptrdiff_t rs2,
+                                        size_t vl) {
+  return __riscv_vlse16_v_bf16m4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vlse16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd,
+                                        const __bf16 *rs1, ptrdiff_t rs2,
+                                        size_t vl) {
+  return __riscv_vlse16_v_bf16m8_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4_t test_vlse16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd,
+                                        const __bf16 *rs1, ptrdiff_t rs2,
+                                        size_t vl) {
+  return __riscv_vlse16_v_bf16mf4_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vlse16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd,
+                                        const __bf16 *rs1, ptrdiff_t rs2,
+                                        size_t vl) {
+  return __riscv_vlse16_v_bf16mf2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vlse16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd,
+                                      const __bf16 *rs1, ptrdiff_t rs2,
+                                      size_t vl) {
+  return __riscv_vlse16_v_bf16m1_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2_t test_vlse16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd,
+                                      const __bf16 *rs1, ptrdiff_t rs2,
+                                      size_t vl) {
+  return __riscv_vlse16_v_bf16m2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4_t test_vlse16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd,
+                                      const __bf16 *rs1, ptrdiff_t rs2,
+                                      size_t vl) {
+  return __riscv_vlse16_v_bf16m4_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m8_t test_vlse16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd,
+                                      const __bf16 *rs1, ptrdiff_t rs2,
+                                      size_t vl) {
+  return __riscv_vlse16_v_bf16m8_mu(vm, vd, rs1, rs2, vl);
+}
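vlse16 takes a signed byte stride (ptrdiff_t), so it can step through a matrix column, and a negative stride walks memory backwards. A column-load sketch, assuming Zvfbfmin (names illustrative):

#include <riscv_vector.h>
#include <stddef.h>

// Load one bf16 from column `col` of each of the next vl rows of a
// row-major matrix with `cols` columns; the stride is given in bytes.
static vbfloat16m1_t load_column(vbfloat16m1_t vd, const __bf16 *m,
                                 size_t cols, size_t col, size_t vl) {
  return __riscv_vlse16_v_bf16m1_tu(vd, m + col,
                                    (ptrdiff_t)(cols * sizeof(__bf16)), vl);
}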
vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16m4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16m8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vlse16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16mf4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16mf2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16m1_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16m2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16m4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_v_bf16m8_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg2e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg2e16.c new file mode 100644 index 000000000..f2fbf24f5 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg2e16.c @@ -0,0 +1,108 @@ +#include +#include + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf4x2_tu(vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf2x2_tu(vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m1x2_tu(vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m2x2_tu(vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m4x2_tu(vd, rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf4x2_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf2x2_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m1x2_tum(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m2x2_tum(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m4x2_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, 
+ const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf4x2_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf2x2_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m1x2_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m2x2_tumu(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m4x2_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf4x2_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf2x2_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m1x2_mu(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m2x2_mu(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m4x2_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg2e16ff.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg2e16ff.c new file mode 100644 index 000000000..da7df9f7f --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg2e16ff.c @@ -0,0 +1,132 @@ +#include +#include + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf4x2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf2x2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m1x2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m2x2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m4x2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf4x2_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf2x2_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t 
*new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m1x2_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd,
+                                                const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m2x2_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd,
+                                                const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m4x2_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tumu(vbool64_t vm,
+                                                   vbfloat16mf4x2_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16mf4x2_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tumu(vbool32_t vm,
+                                                   vbfloat16mf2x2_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16mf2x2_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tumu(vbool16_t vm,
+                                                 vbfloat16m1x2_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m1x2_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tumu(vbool8_t vm,
+                                                 vbfloat16m2x2_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m2x2_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tumu(vbool4_t vm,
+                                                 vbfloat16m4x2_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m4x2_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_mu(vbool64_t vm,
+                                                 vbfloat16mf4x2_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16mf4x2_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_mu(vbool32_t vm,
+                                                 vbfloat16mf2x2_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16mf2x2_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m1x2_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m2x2_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg2e16ff_v_bf16m4x2_mu(vm, vd, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg3e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg3e16.c
new file mode 100644
index 000000000..550192ec0
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg3e16.c
@@ -0,0 +1,88 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16mf4x3_tu(vd, rs1, vl);
+}
+
+vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16mf2x3_tu(vd, rs1, vl);
+}
+
+vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16m1x3_tu(vd, rs1, vl);
+}
+
+vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return 
__riscv_vlseg3e16_v_bf16m2x3_tu(vd, rs1, vl);
+}
+
+vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tum(vbool64_t vm,
+                                                vbfloat16mf4x3_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16mf4x3_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tum(vbool32_t vm,
+                                                vbfloat16mf2x3_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16mf2x3_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd,
+                                              const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16m1x3_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd,
+                                              const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16m2x3_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tumu(vbool64_t vm,
+                                                 vbfloat16mf4x3_t vd,
+                                                 const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16mf4x3_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tumu(vbool32_t vm,
+                                                 vbfloat16mf2x3_t vd,
+                                                 const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16mf2x3_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16m1x3_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16m2x3_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_mu(vbool64_t vm,
+                                               vbfloat16mf4x3_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16mf4x3_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_mu(vbool32_t vm,
+                                               vbfloat16mf2x3_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16mf2x3_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16m1x3_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg3e16_v_bf16m2x3_mu(vm, vd, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg3e16ff.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg3e16ff.c
new file mode 100644
index 000000000..6be408016
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg3e16ff.c
@@ -0,0 +1,107 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16mf4x3_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16mf2x3_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tu(vbfloat16m1x3_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16m1x3_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tu(vbfloat16m2x3_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16m2x3_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tum(vbool64_t vm,
+                                                  vbfloat16mf4x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16mf4x3_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tum(vbool32_t vm,
+                                                  vbfloat16mf2x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16mf2x3_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tum(vbool16_t vm,
+                                                vbfloat16m1x3_t vd,
+                                                const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16m1x3_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd,
+                                                const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16m2x3_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tumu(vbool64_t vm,
+                                                   vbfloat16mf4x3_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16mf4x3_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tumu(vbool32_t vm,
+                                                   vbfloat16mf2x3_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16mf2x3_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tumu(vbool16_t vm,
+                                                 vbfloat16m1x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16m1x3_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tumu(vbool8_t vm,
+                                                 vbfloat16m2x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16m2x3_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_mu(vbool64_t vm,
+                                                 vbfloat16mf4x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16mf4x3_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_mu(vbool32_t vm,
+                                                 vbfloat16mf2x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16mf2x3_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16m1x3_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg3e16ff_v_bf16m2x3_mu(vm, vd, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg4e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg4e16.c
new file mode 100644
index 000000000..ba875d221
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg4e16.c
@@ -0,0 +1,88 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16mf4x4_tu(vd, rs1, vl);
+}
+
+vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16mf2x4_tu(vd, rs1, vl);
+}
+
+vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16m1x4_tu(vd, rs1, vl);
+}
+
+vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16m2x4_tu(vd, rs1, vl);
+}
+
+vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tum(vbool64_t vm,
+                                                vbfloat16mf4x4_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16mf4x4_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tum(vbool32_t vm,
+                                                
vbfloat16mf2x4_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16mf2x4_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd,
+                                              const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16m1x4_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd,
+                                              const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16m2x4_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tumu(vbool64_t vm,
+                                                 vbfloat16mf4x4_t vd,
+                                                 const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16mf4x4_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tumu(vbool32_t vm,
+                                                 vbfloat16mf2x4_t vd,
+                                                 const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16mf2x4_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16m1x4_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16m2x4_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_mu(vbool64_t vm,
+                                               vbfloat16mf4x4_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16mf4x4_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_mu(vbool32_t vm,
+                                               vbfloat16mf2x4_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16mf2x4_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16m1x4_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg4e16_v_bf16m2x4_mu(vm, vd, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg4e16ff.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg4e16ff.c
new file mode 100644
index 000000000..792e1be03
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg4e16ff.c
@@ -0,0 +1,107 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16mf4x4_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16mf2x4_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tu(vbfloat16m1x4_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16m1x4_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tu(vbfloat16m2x4_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16m2x4_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tum(vbool64_t vm,
+                                                  vbfloat16mf4x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16mf4x4_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tum(vbool32_t vm,
+                                                  vbfloat16mf2x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16mf2x4_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tum(vbool16_t vm,
+                                                vbfloat16m1x4_t vd,
+                                                const __bf16 
*rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16m1x4_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd,
+                                                const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16m2x4_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tumu(vbool64_t vm,
+                                                   vbfloat16mf4x4_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16mf4x4_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tumu(vbool32_t vm,
+                                                   vbfloat16mf2x4_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16mf2x4_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tumu(vbool16_t vm,
+                                                 vbfloat16m1x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16m1x4_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tumu(vbool8_t vm,
+                                                 vbfloat16m2x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16m2x4_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_mu(vbool64_t vm,
+                                                 vbfloat16mf4x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16mf4x4_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_mu(vbool32_t vm,
+                                                 vbfloat16mf2x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16mf2x4_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16m1x4_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg4e16ff_v_bf16m2x4_mu(vm, vd, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg5e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg5e16.c
new file mode 100644
index 000000000..37a4cdad6
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg5e16.c
@@ -0,0 +1,68 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16mf4x5_tu(vd, rs1, vl);
+}
+
+vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16mf2x5_tu(vd, rs1, vl);
+}
+
+vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16m1x5_tu(vd, rs1, vl);
+}
+
+vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tum(vbool64_t vm,
+                                                vbfloat16mf4x5_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16mf4x5_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tum(vbool32_t vm,
+                                                vbfloat16mf2x5_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16mf2x5_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd,
+                                              const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16m1x5_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tumu(vbool64_t vm,
+                                                 vbfloat16mf4x5_t vd,
+                                                 const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16mf4x5_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tumu(vbool32_t vm,
+                                                 vbfloat16mf2x5_t vd,
+                                                 const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16mf2x5_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16m1x5_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_mu(vbool64_t vm,
+                                               vbfloat16mf4x5_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16mf4x5_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_mu(vbool32_t vm,
+                                               vbfloat16mf2x5_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16mf2x5_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg5e16_v_bf16m1x5_mu(vm, vd, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg5e16ff.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg5e16ff.c
new file mode 100644
index 000000000..04d061397
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg5e16ff.c
@@ -0,0 +1,82 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16mf4x5_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16mf2x5_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tu(vbfloat16m1x5_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16m1x5_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tum(vbool64_t vm,
+                                                  vbfloat16mf4x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16mf4x5_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tum(vbool32_t vm,
+                                                  vbfloat16mf2x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16mf2x5_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tum(vbool16_t vm,
+                                                vbfloat16m1x5_t vd,
+                                                const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16m1x5_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tumu(vbool64_t vm,
+                                                   vbfloat16mf4x5_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16mf4x5_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tumu(vbool32_t vm,
+                                                   vbfloat16mf2x5_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16mf2x5_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tumu(vbool16_t vm,
+                                                 vbfloat16m1x5_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16m1x5_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_mu(vbool64_t vm,
+                                                 vbfloat16mf4x5_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16mf4x5_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_mu(vbool32_t vm,
+                                                 vbfloat16mf2x5_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16mf2x5_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x5_t 
test_vlseg5e16ff_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg5e16ff_v_bf16m1x5_mu(vm, vd, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg6e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg6e16.c
new file mode 100644
index 000000000..143635f11
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg6e16.c
@@ -0,0 +1,68 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16mf4x6_tu(vd, rs1, vl);
+}
+
+vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16mf2x6_tu(vd, rs1, vl);
+}
+
+vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16m1x6_tu(vd, rs1, vl);
+}
+
+vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tum(vbool64_t vm,
+                                                vbfloat16mf4x6_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16mf4x6_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tum(vbool32_t vm,
+                                                vbfloat16mf2x6_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16mf2x6_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd,
+                                              const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16m1x6_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tumu(vbool64_t vm,
+                                                 vbfloat16mf4x6_t vd,
+                                                 const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16mf4x6_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tumu(vbool32_t vm,
+                                                 vbfloat16mf2x6_t vd,
+                                                 const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16mf2x6_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16m1x6_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_mu(vbool64_t vm,
+                                               vbfloat16mf4x6_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16mf4x6_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_mu(vbool32_t vm,
+                                               vbfloat16mf2x6_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16mf2x6_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg6e16_v_bf16m1x6_mu(vm, vd, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg6e16ff.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg6e16ff.c
new file mode 100644
index 000000000..722c767fe
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg6e16ff.c
@@ -0,0 +1,82 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16mf4x6_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16mf2x6_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tu(vbfloat16m1x6_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16m1x6_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x6_t 
test_vlseg6e16ff_v_bf16mf4x6_tum(vbool64_t vm,
+                                                  vbfloat16mf4x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16mf4x6_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tum(vbool32_t vm,
+                                                  vbfloat16mf2x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16mf2x6_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tum(vbool16_t vm,
+                                                vbfloat16m1x6_t vd,
+                                                const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16m1x6_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tumu(vbool64_t vm,
+                                                   vbfloat16mf4x6_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16mf4x6_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tumu(vbool32_t vm,
+                                                   vbfloat16mf2x6_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16mf2x6_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tumu(vbool16_t vm,
+                                                 vbfloat16m1x6_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16m1x6_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_mu(vbool64_t vm,
+                                                 vbfloat16mf4x6_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16mf4x6_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_mu(vbool32_t vm,
+                                                 vbfloat16mf2x6_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16mf2x6_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg6e16ff_v_bf16m1x6_mu(vm, vd, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg7e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg7e16.c
new file mode 100644
index 000000000..cfc5711dd
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg7e16.c
@@ -0,0 +1,68 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg7e16_v_bf16mf4x7_tu(vd, rs1, vl);
+}
+
+vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg7e16_v_bf16mf2x7_tu(vd, rs1, vl);
+}
+
+vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg7e16_v_bf16m1x7_tu(vd, rs1, vl);
+}
+
+vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tum(vbool64_t vm,
+                                                vbfloat16mf4x7_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg7e16_v_bf16mf4x7_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tum(vbool32_t vm,
+                                                vbfloat16mf2x7_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg7e16_v_bf16mf2x7_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd,
+                                              const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg7e16_v_bf16m1x7_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tumu(vbool64_t vm,
+                                                 vbfloat16mf4x7_t vd,
+                                                 const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg7e16_v_bf16mf4x7_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tumu(vbool32_t vm,
+                                                 vbfloat16mf2x7_t vd,
+                                                 const __bf16 *rs1, 
size_t vl) {
+  return __riscv_vlseg7e16_v_bf16mf2x7_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg7e16_v_bf16m1x7_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_mu(vbool64_t vm,
+                                               vbfloat16mf4x7_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg7e16_v_bf16mf4x7_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_mu(vbool32_t vm,
+                                               vbfloat16mf2x7_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg7e16_v_bf16mf2x7_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg7e16_v_bf16m1x7_mu(vm, vd, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg7e16ff.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg7e16ff.c
new file mode 100644
index 000000000..d53541c21
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg7e16ff.c
@@ -0,0 +1,82 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff_v_bf16mf4x7_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff_v_bf16mf2x7_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tu(vbfloat16m1x7_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff_v_bf16m1x7_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tum(vbool64_t vm,
+                                                  vbfloat16mf4x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff_v_bf16mf4x7_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tum(vbool32_t vm,
+                                                  vbfloat16mf2x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff_v_bf16mf2x7_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tum(vbool16_t vm,
+                                                vbfloat16m1x7_t vd,
+                                                const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff_v_bf16m1x7_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tumu(vbool64_t vm,
+                                                   vbfloat16mf4x7_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff_v_bf16mf4x7_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tumu(vbool32_t vm,
+                                                   vbfloat16mf2x7_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff_v_bf16mf2x7_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tumu(vbool16_t vm,
+                                                 vbfloat16m1x7_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff_v_bf16m1x7_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_mu(vbool64_t vm,
+                                                 vbfloat16mf4x7_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff_v_bf16mf4x7_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_mu(vbool32_t vm,
+                                                 vbfloat16mf2x7_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg7e16ff_v_bf16mf2x7_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  
return __riscv_vlseg7e16ff_v_bf16m1x7_mu(vm, vd, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg8e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg8e16.c
new file mode 100644
index 000000000..3294997eb
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg8e16.c
@@ -0,0 +1,68 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16mf4x8_tu(vd, rs1, vl);
+}
+
+vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16mf2x8_tu(vd, rs1, vl);
+}
+
+vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16m1x8_tu(vd, rs1, vl);
+}
+
+vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tum(vbool64_t vm,
+                                                vbfloat16mf4x8_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16mf4x8_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tum(vbool32_t vm,
+                                                vbfloat16mf2x8_t vd,
+                                                const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16mf2x8_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd,
+                                              const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16m1x8_tum(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tumu(vbool64_t vm,
+                                                 vbfloat16mf4x8_t vd,
+                                                 const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16mf4x8_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tumu(vbool32_t vm,
+                                                 vbfloat16mf2x8_t vd,
+                                                 const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16mf2x8_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16m1x8_tumu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_mu(vbool64_t vm,
+                                               vbfloat16mf4x8_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16mf4x8_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_mu(vbool32_t vm,
+                                               vbfloat16mf2x8_t vd,
+                                               const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16mf2x8_mu(vm, vd, rs1, vl);
+}
+
+vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd,
+                                             const __bf16 *rs1, size_t vl) {
+  return __riscv_vlseg8e16_v_bf16m1x8_mu(vm, vd, rs1, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlseg8e16ff.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg8e16ff.c
new file mode 100644
index 000000000..029dd6297
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlseg8e16ff.c
@@ -0,0 +1,82 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg8e16ff_v_bf16mf4x8_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg8e16ff_v_bf16mf2x8_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tu(vbfloat16m1x8_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg8e16ff_v_bf16m1x8_tu(vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tum(vbool64_t vm,
+                                                  vbfloat16mf4x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return 
__riscv_vlseg8e16ff_v_bf16mf4x8_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tum(vbool32_t vm,
+                                                  vbfloat16mf2x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  size_t *new_vl, size_t vl) {
+  return __riscv_vlseg8e16ff_v_bf16mf2x8_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tum(vbool16_t vm,
+                                                vbfloat16m1x8_t vd,
+                                                const __bf16 *rs1,
+                                                size_t *new_vl, size_t vl) {
+  return __riscv_vlseg8e16ff_v_bf16m1x8_tum(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tumu(vbool64_t vm,
+                                                   vbfloat16mf4x8_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg8e16ff_v_bf16mf4x8_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tumu(vbool32_t vm,
+                                                   vbfloat16mf2x8_t vd,
+                                                   const __bf16 *rs1,
+                                                   size_t *new_vl, size_t vl) {
+  return __riscv_vlseg8e16ff_v_bf16mf2x8_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tumu(vbool16_t vm,
+                                                 vbfloat16m1x8_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg8e16ff_v_bf16m1x8_tumu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_mu(vbool64_t vm,
+                                                 vbfloat16mf4x8_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg8e16ff_v_bf16mf4x8_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_mu(vbool32_t vm,
+                                                 vbfloat16mf2x8_t vd,
+                                                 const __bf16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
+  return __riscv_vlseg8e16ff_v_bf16mf2x8_mu(vm, vd, rs1, new_vl, vl);
+}
+
+vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd,
+                                               const __bf16 *rs1,
+                                               size_t *new_vl, size_t vl) {
+  return __riscv_vlseg8e16ff_v_bf16m1x8_mu(vm, vd, rs1, new_vl, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg2e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg2e16.c
new file mode 100644
index 000000000..e15d577ae
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg2e16.c
@@ -0,0 +1,129 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16mf4x2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16mf2x2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m1x2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m2x2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m4x2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tum(vbool64_t vm,
+                                                 vbfloat16mf4x2_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16mf4x2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tum(vbool32_t vm,
+                                                 vbfloat16mf2x2_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16mf2x2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd,
+                                               const __bf16 *rs1, ptrdiff_t rs2,
+                                               size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m1x2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd,
+                                               const __bf16 *rs1, ptrdiff_t rs2,
+                                               size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m2x2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd,
+                                               const __bf16 *rs1, ptrdiff_t rs2,
+                                               size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m4x2_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tumu(vbool64_t vm,
+                                                  vbfloat16mf4x2_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16mf4x2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tumu(vbool32_t vm,
+                                                  vbfloat16mf2x2_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16mf2x2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tumu(vbool16_t vm,
+                                                vbfloat16m1x2_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m1x2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m2x2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m4x2_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_mu(vbool64_t vm,
+                                                vbfloat16mf4x2_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16mf4x2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_mu(vbool32_t vm,
+                                                vbfloat16mf2x2_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16mf2x2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m1x2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m2x2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m4x2_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg3e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg3e16.c
new file mode 100644
index 000000000..65cbc96f4
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg3e16.c
@@ -0,0 +1,105 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf4x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf2x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m1x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return 
__riscv_vlsseg3e16_v_bf16m2x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tum(vbool64_t vm,
+                                                 vbfloat16mf4x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf4x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tum(vbool32_t vm,
+                                                 vbfloat16mf2x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf2x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd,
+                                               const __bf16 *rs1, ptrdiff_t rs2,
+                                               size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m1x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd,
+                                               const __bf16 *rs1, ptrdiff_t rs2,
+                                               size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m2x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tumu(vbool64_t vm,
+                                                  vbfloat16mf4x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf4x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tumu(vbool32_t vm,
+                                                  vbfloat16mf2x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf2x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tumu(vbool16_t vm,
+                                                vbfloat16m1x3_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m1x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m2x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_mu(vbool64_t vm,
+                                                vbfloat16mf4x3_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf4x3_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_mu(vbool32_t vm,
+                                                vbfloat16mf2x3_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf2x3_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m1x3_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m2x3_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg4e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg4e16.c
new file mode 100644
index 000000000..7721cf1d2
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg4e16.c
@@ -0,0 +1,105 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16mf4x4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16mf2x4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16m1x4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return 
__riscv_vlsseg4e16_v_bf16m2x4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tum(vbool64_t vm,
+                                                 vbfloat16mf4x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16mf4x4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tum(vbool32_t vm,
+                                                 vbfloat16mf2x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16mf2x4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd,
+                                               const __bf16 *rs1, ptrdiff_t rs2,
+                                               size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16m1x4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd,
+                                               const __bf16 *rs1, ptrdiff_t rs2,
+                                               size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16m2x4_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tumu(vbool64_t vm,
+                                                  vbfloat16mf4x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16mf4x4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tumu(vbool32_t vm,
+                                                  vbfloat16mf2x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16mf2x4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tumu(vbool16_t vm,
+                                                vbfloat16m1x4_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16m1x4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16m2x4_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_mu(vbool64_t vm,
+                                                vbfloat16mf4x4_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16mf4x4_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_mu(vbool32_t vm,
+                                                vbfloat16mf2x4_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16mf2x4_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16m1x4_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg4e16_v_bf16m2x4_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg5e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg5e16.c
new file mode 100644
index 000000000..d6df0b2bd
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg5e16.c
@@ -0,0 +1,81 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16_v_bf16mf4x5_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16_v_bf16mf2x5_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg5e16_v_bf16m1x5_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tum(vbool64_t vm,
+                                                 vbfloat16mf4x5_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return 
__riscv_vlsseg5e16_v_bf16mf4x5_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tum(vbool32_t vm,
+                                                 vbfloat16mf2x5_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16_v_bf16mf2x5_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd,
+                                               const __bf16 *rs1, ptrdiff_t rs2,
+                                               size_t vl) {
+  return __riscv_vlsseg5e16_v_bf16m1x5_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tumu(vbool64_t vm,
+                                                  vbfloat16mf4x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16_v_bf16mf4x5_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tumu(vbool32_t vm,
+                                                  vbfloat16mf2x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16_v_bf16mf2x5_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tumu(vbool16_t vm,
+                                                vbfloat16m1x5_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16_v_bf16m1x5_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_mu(vbool64_t vm,
+                                                vbfloat16mf4x5_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16_v_bf16mf4x5_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_mu(vbool32_t vm,
+                                                vbfloat16mf2x5_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg5e16_v_bf16mf2x5_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg5e16_v_bf16m1x5_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg6e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg6e16.c
new file mode 100644
index 000000000..27c1cbd88
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg6e16.c
@@ -0,0 +1,81 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf4x6_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf2x6_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16m1x6_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tum(vbool64_t vm,
+                                                 vbfloat16mf4x6_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf4x6_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tum(vbool32_t vm,
+                                                 vbfloat16mf2x6_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf2x6_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd,
+                                               const __bf16 *rs1, ptrdiff_t rs2,
+                                               size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16m1x6_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tumu(vbool64_t vm,
+                                                  vbfloat16mf4x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf4x6_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tumu(vbool32_t vm,
+                                                  vbfloat16mf2x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, 
size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf2x6_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tumu(vbool16_t vm,
+                                                vbfloat16m1x6_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16m1x6_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_mu(vbool64_t vm,
+                                                vbfloat16mf4x6_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf4x6_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_mu(vbool32_t vm,
+                                                vbfloat16mf2x6_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf2x6_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16m1x6_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg7e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg7e16.c
new file mode 100644
index 000000000..872b2f0d0
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg7e16.c
@@ -0,0 +1,81 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16mf4x7_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16mf2x7_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16m1x7_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tum(vbool64_t vm,
+                                                 vbfloat16mf4x7_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16mf4x7_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tum(vbool32_t vm,
+                                                 vbfloat16mf2x7_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16mf2x7_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd,
+                                               const __bf16 *rs1, ptrdiff_t rs2,
+                                               size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16m1x7_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tumu(vbool64_t vm,
+                                                  vbfloat16mf4x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16mf4x7_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tumu(vbool32_t vm,
+                                                  vbfloat16mf2x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16mf2x7_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tumu(vbool16_t vm,
+                                                vbfloat16m1x7_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16m1x7_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_mu(vbool64_t vm,
+                                                vbfloat16mf4x7_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16mf4x7_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_mu(vbool32_t vm,
+                                                vbfloat16mf2x7_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16mf2x7_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd,
+                                              const __bf16 *rs1, 
ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg7e16_v_bf16m1x7_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg8e16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg8e16.c
new file mode 100644
index 000000000..cee5491c5
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vlsseg8e16.c
@@ -0,0 +1,81 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16mf4x8_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16mf2x8_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16m1x8_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tum(vbool64_t vm,
+                                                 vbfloat16mf4x8_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16mf4x8_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tum(vbool32_t vm,
+                                                 vbfloat16mf2x8_t vd,
+                                                 const __bf16 *rs1,
+                                                 ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16mf2x8_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd,
+                                               const __bf16 *rs1, ptrdiff_t rs2,
+                                               size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16m1x8_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tumu(vbool64_t vm,
+                                                  vbfloat16mf4x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16mf4x8_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tumu(vbool32_t vm,
+                                                  vbfloat16mf2x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16mf2x8_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tumu(vbool16_t vm,
+                                                vbfloat16m1x8_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16m1x8_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_mu(vbool64_t vm,
+                                                vbfloat16mf4x8_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16mf4x8_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_mu(vbool32_t vm,
+                                                vbfloat16mf2x8_t vd,
+                                                const __bf16 *rs1,
+                                                ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16mf2x8_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd,
+                                              const __bf16 *rs1, ptrdiff_t rs2,
+                                              size_t vl) {
+  return __riscv_vlsseg8e16_v_bf16m1x8_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vluxei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vluxei16.c
new file mode 100644
index 000000000..2b61e3f6d
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vluxei16.c
@@ -0,0 +1,140 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4_t test_vluxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1,
+                                          vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxei16_v_bf16mf4_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2_t test_vluxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1,
+                                          vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxei16_v_bf16mf2_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1_t test_vluxei16_v_bf16m1_tu(vbfloat16m1_t vd, 
const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m1_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m8_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16mf4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16mf2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m1_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16mf4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16mf2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m1_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16mf4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16mf2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m1_mu(vm, vd, rs1, 
rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vluxei16_v_bf16m8_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg2ei16.c new file mode 100644 index 000000000..4c4852bf6 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg2ei16.c @@ -0,0 +1,139 @@ +#include +#include + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m1x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m1x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return 
__riscv_vluxseg2ei16_v_bf16m1x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m1x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg3ei16.c new file mode 100644 index 000000000..2ddb3a2ff --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg3ei16.c @@ -0,0 +1,113 @@ +#include +#include + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tumu(vbool64_t 
vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg4ei16.c new file mode 100644 index 000000000..c26f49f3d --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg4ei16.c @@ -0,0 +1,113 @@ +#include +#include + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m2x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return 
__riscv_vluxseg4ei16_v_bf16m2x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m2x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m2x4_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg5ei16.c new file mode 100644 index 000000000..10e15cfcf --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg5ei16.c @@ -0,0 +1,87 @@ +#include +#include + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf4x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf4x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf4x5_tumu(vm, vd, rs1, rs2, vl); +} + 
+vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf4x5_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg6ei16.c new file mode 100644 index 000000000..618ec0ca1 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg6ei16.c @@ -0,0 +1,87 @@ +#include +#include + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, + 
vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg7ei16.c new file mode 100644 index 000000000..aca74804f --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg7ei16.c @@ -0,0 +1,87 @@ +#include +#include + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf4x7_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf2x7_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16m1x7_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf4x7_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf2x7_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16m1x7_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf4x7_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf2x7_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16m1x7_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf4x7_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16mf2x7_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_v_bf16m1x7_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg8ei16.c new file mode 100644 index 000000000..9c7f8a09e --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vluxseg8ei16.c @@ -0,0 +1,87 @@ +#include +#include + +vbfloat16mf4x8_t 
test_vluxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c new file mode 100644 index 000000000..7f867a33a --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c @@ -0,0 +1,103 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vle16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16mf4_tu(vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16mf2_tu(vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m1_tu(vd, rs1, vl); +} + +vbfloat16m2_t 
test_vle16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m2_tu(vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m4_tu(vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m8_tu(vd, rs1, vl); +} + +vbfloat16mf4_t test_vle16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16mf4_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16mf2_tum(vm, vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m1_tum(vm, vd, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m2_tum(vm, vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m4_tum(vm, vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m8_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4_t test_vle16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16mf4_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16mf2_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m1_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m2_tumu(vm, vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m4_tumu(vm, vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m8_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4_t test_vle16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16mf4_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16mf2_mu(vm, vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m1_mu(vm, vd, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m2_mu(vm, vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m4_mu(vm, vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vle16_v_bf16m8_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c new file mode 100644 index 000000000..7d322392b --- /dev/null +++ 
b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c @@ -0,0 +1,103 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16mf4_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16mf2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m1_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m4_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m8_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16mf4_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16mf2_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m1_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m2_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m4_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m8_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16mf4_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16mf2_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m1_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m2_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m4_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m8_t 
test_vle16ff_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m8_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16mf4_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16mf2_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m1_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m2_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m4_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vle16ff_v_bf16m8_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c new file mode 100644 index 000000000..e4b825dc0 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c @@ -0,0 +1,102 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16mf4_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16mf2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m1_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m8_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16mf4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16mf2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m1_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, 
size_t vl) { + return __riscv_vloxei16_v_bf16m2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16mf4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16mf2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m1_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16mf4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16mf2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m1_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { + return __riscv_vloxei16_v_bf16m8_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c new file mode 100644 index 000000000..ca3b6fabd --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c @@ -0,0 +1,86 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf4x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, const 
__bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf2x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m1x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m2x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m4x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf4x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf2x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m1x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m2x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m4x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf4x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf2x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m1x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m2x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m4x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf4x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16mf2x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m1x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return 
__riscv_vloxseg2ei16_v_bf16m2x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_v_bf16m4x2_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c new file mode 100644 index 000000000..00079d8f3 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c @@ -0,0 +1,70 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf4x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf2x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m1x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m2x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf4x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf2x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m1x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m2x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf4x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf2x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m1x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m2x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf4x3_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t 
rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16mf2x3_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m1x3_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_v_bf16m2x3_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c new file mode 100644 index 000000000..82216a3e8 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c @@ -0,0 +1,70 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf4x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf2x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m1x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m2x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf4x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf2x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m1x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m2x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf4x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf2x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m1x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m2x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, vbfloat16mf4x4_t vd, const 
__bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf4x4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16mf2x4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m1x4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_v_bf16m2x4_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c new file mode 100644 index 000000000..b58c9d736 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf4x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf2x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16m1x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf4x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf2x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16m1x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf4x5_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf2x5_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16m1x5_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf4x5_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16mf2x5_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t 
test_vloxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_v_bf16m1x5_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c new file mode 100644 index 000000000..fd1a29cb5 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf4x6_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf2x6_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16m1x6_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf4x6_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf2x6_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16m1x6_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf4x6_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf2x6_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16m1x6_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf4x6_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16mf2x6_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_v_bf16m1x6_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c new file mode 100644 index 000000000..006dbe150 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 
-target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf4x7_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf2x7_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16m1x7_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf4x7_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf2x7_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16m1x7_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf4x7_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf2x7_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16m1x7_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf4x7_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16mf2x7_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_v_bf16m1x7_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c new file mode 100644 index 000000000..31cfcbcb6 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf4x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf2x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t 
test_vloxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16m1x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf4x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf2x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16m1x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf4x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf2x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16m1x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf4x8_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16mf2x8_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_v_bf16m1x8_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c new file mode 100644 index 000000000..c92aeac61 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c @@ -0,0 +1,102 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4_t test_vlse16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16mf4_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16mf2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m1_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m8_tu(vd, rs1, rs2, 
vl); +} + +vbfloat16mf4_t test_vlse16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16mf4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16mf2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m1_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vlse16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16mf4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16mf2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m1_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vlse16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16mf4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16mf2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m1_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_v_bf16m8_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c new file mode 100644 index 000000000..e1d4f2021 --- /dev/null +++ 
b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c @@ -0,0 +1,86 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf4x2_tu(vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf2x2_tu(vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m1x2_tu(vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m2x2_tu(vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m4x2_tu(vd, rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tum(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf4x2_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tum(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf2x2_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m1x2_tum(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m2x2_tum(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m4x2_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf4x2_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf2x2_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m1x2_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m2x2_tumu(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m4x2_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_mu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf4x2_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_mu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16mf2x2_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m1x2_mu(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, 
size_t vl) { + return __riscv_vlseg2e16_v_bf16m2x2_mu(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_v_bf16m4x2_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c new file mode 100644 index 000000000..40d3eaef2 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c @@ -0,0 +1,87 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf4x2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf2x2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m1x2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m2x2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m4x2_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tum(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf4x2_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tum(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf2x2_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m1x2_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m2x2_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m4x2_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf4x2_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf2x2_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m1x2_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 
*rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m2x2_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m4x2_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_mu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf4x2_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_mu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16mf2x2_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m1x2_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m2x2_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_v_bf16m4x2_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c new file mode 100644 index 000000000..0f91bc862 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c @@ -0,0 +1,70 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16mf4x3_tu(vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16mf2x3_tu(vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m1x3_tu(vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m2x3_tu(vd, rs1, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tum(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16mf4x3_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tum(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16mf2x3_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m1x3_tum(vm, vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m2x3_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16mf4x3_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t vl) { + return 
__riscv_vlseg3e16_v_bf16mf2x3_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m1x3_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m2x3_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_mu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16mf4x3_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_mu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16mf2x3_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m1x3_mu(vm, vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_v_bf16m2x3_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c new file mode 100644 index 000000000..d3c99c86b --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c @@ -0,0 +1,71 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf4x3_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf2x3_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m1x3_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m2x3_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tum(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf4x3_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tum(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf2x3_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m1x3_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m2x3_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf4x3_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t 
test_vlseg3e16ff_v_bf16mf2x3_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf2x3_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m1x3_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m2x3_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_mu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf4x3_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_mu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16mf2x3_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m1x3_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_v_bf16m2x3_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c new file mode 100644 index 000000000..7ce765f65 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c @@ -0,0 +1,70 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf4x4_tu(vd, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf2x4_tu(vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m1x4_tu(vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m2x4_tu(vd, rs1, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tum(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf4x4_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tum(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf2x4_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m1x4_tum(vm, vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m2x4_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf4x4_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x4_t 
test_vlseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf2x4_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m1x4_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m2x4_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_mu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf4x4_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_mu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16mf2x4_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m1x4_mu(vm, vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_v_bf16m2x4_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c new file mode 100644 index 000000000..9ebbf8c2a --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c @@ -0,0 +1,71 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf4x4_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf2x4_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m1x4_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m2x4_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tum(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf4x4_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tum(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf2x4_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m1x4_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m2x4_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + 
return __riscv_vlseg4e16ff_v_bf16mf4x4_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf2x4_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m1x4_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m2x4_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_mu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf4x4_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_mu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16mf2x4_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m1x4_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_v_bf16m2x4_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c new file mode 100644 index 000000000..7867d0dd2 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf4x5_tu(vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf2x5_tu(vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16m1x5_tu(vd, rs1, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tum(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf4x5_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tum(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf2x5_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16m1x5_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf4x5_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf2x5_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t vl) { 
+ return __riscv_vlseg5e16_v_bf16m1x5_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_mu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf4x5_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_mu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16mf2x5_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_v_bf16m1x5_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c new file mode 100644 index 000000000..100a3d306 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c @@ -0,0 +1,55 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf4x5_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf2x5_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16m1x5_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tum(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf4x5_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tum(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf2x5_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16m1x5_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf4x5_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf2x5_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16m1x5_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_mu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf4x5_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_mu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16mf2x5_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t 
*new_vl, size_t vl) { + return __riscv_vlseg5e16ff_v_bf16m1x5_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c new file mode 100644 index 000000000..a7db2bf57 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf4x6_tu(vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf2x6_tu(vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16m1x6_tu(vd, rs1, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tum(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf4x6_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tum(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf2x6_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16m1x6_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf4x6_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf2x6_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16m1x6_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_mu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf4x6_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_mu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16mf2x6_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_v_bf16m1x6_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c new file mode 100644 index 000000000..875b6cf36 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c @@ -0,0 +1,55 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf4x6_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t 
test_vlseg6e16ff_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf2x6_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16m1x6_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tum(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf4x6_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tum(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf2x6_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16m1x6_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf4x6_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf2x6_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16m1x6_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_mu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf4x6_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_mu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16mf2x6_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_v_bf16m1x6_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c new file mode 100644 index 000000000..25027618d --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16mf4x7_tu(vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16mf2x7_tu(vd, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16m1x7_tu(vd, rs1, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tum(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16mf4x7_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tum(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t vl) { + return 
__riscv_vlseg7e16_v_bf16mf2x7_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16m1x7_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16mf4x7_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16mf2x7_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16m1x7_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_mu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16mf4x7_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_mu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16mf2x7_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_v_bf16m1x7_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c new file mode 100644 index 000000000..81dcc21e5 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c @@ -0,0 +1,55 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf4x7_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf2x7_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16m1x7_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tum(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf4x7_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tum(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf2x7_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16m1x7_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf4x7_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf2x7_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t 
test_vlseg7e16ff_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16m1x7_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_mu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf4x7_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_mu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16mf2x7_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_v_bf16m1x7_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c new file mode 100644 index 000000000..3ce55cfef --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf4x8_tu(vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf2x8_tu(vd, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16m1x8_tu(vd, rs1, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tum(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf4x8_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tum(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf2x8_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16m1x8_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf4x8_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf2x8_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16m1x8_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_mu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf4x8_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_mu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16mf2x8_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_v_bf16m1x8_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c 
b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c new file mode 100644 index 000000000..fcdd9e2a9 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c @@ -0,0 +1,55 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf4x8_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf2x8_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16m1x8_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tum(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf4x8_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tum(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf2x8_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16m1x8_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf4x8_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf2x8_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16m1x8_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_mu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf4x8_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_mu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16mf2x8_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_v_bf16m1x8_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c new file mode 100644 index 000000000..170f32729 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c @@ -0,0 +1,86 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x2_t 
test_vlsseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf4x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf2x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m1x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m2x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m4x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tum(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf4x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tum(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf2x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m1x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m2x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m4x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf4x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf2x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m1x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m2x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m4x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_mu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf4x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_mu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16mf2x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_v_bf16m1x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t 
test_vlsseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m2x2_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg2e16_v_bf16m4x2_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c
new file mode 100644
index 000000000..9ce1303db
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c
@@ -0,0 +1,70 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf4x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf2x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m1x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m2x3_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tum(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf4x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tum(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf2x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m1x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m2x3_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf4x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf2x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m1x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16m2x3_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_mu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg3e16_v_bf16mf4x3_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_mu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16
*rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_v_bf16mf2x3_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_v_bf16m1x3_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_v_bf16m2x3_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c new file mode 100644 index 000000000..61987c255 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c @@ -0,0 +1,70 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf4x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf2x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16m1x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16m2x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tum(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf4x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tum(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf2x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16m1x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16m2x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf4x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf2x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16m1x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16m2x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_mu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf4x4_mu(vm, 
vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_mu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16mf2x4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16m1x4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_v_bf16m2x4_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c new file mode 100644 index 000000000..016a97c3b --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf4x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf2x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16m1x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tum(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf4x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tum(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf2x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16m1x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf4x5_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf2x5_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16m1x5_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_mu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf4x5_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_mu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16mf2x5_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_v_bf16m1x5_mu(vm, vd, rs1, rs2, vl); +} diff --git 
a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c
new file mode 100644
index 000000000..07183d1f2
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c
@@ -0,0 +1,54 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf4x6_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf2x6_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16m1x6_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tum(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf4x6_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tum(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf2x6_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16m1x6_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf4x6_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf2x6_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16m1x6_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_mu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf4x6_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_mu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16mf2x6_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return __riscv_vlsseg6e16_v_bf16m1x6_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c
new file mode 100644
index 000000000..f3168d419
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c
@@ -0,0 +1,54 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) {
+  return
__riscv_vlsseg7e16_v_bf16mf4x7_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf2x7_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16m1x7_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tum(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf4x7_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tum(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf2x7_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16m1x7_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf4x7_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf2x7_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16m1x7_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_mu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf4x7_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_mu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16mf2x7_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_v_bf16m1x7_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c new file mode 100644 index 000000000..1ec2c9ad3 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf4x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf2x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16m1x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tum(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf4x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t 
test_vlsseg8e16_v_bf16mf2x8_tum(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf2x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16m1x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf4x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf2x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16m1x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_mu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf4x8_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_mu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16mf2x8_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_v_bf16m1x8_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c new file mode 100644 index 000000000..771f246bd --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c @@ -0,0 +1,102 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf4_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m1_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m8_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf2_tum(vm, vd, rs1, rs2, vl); +} + 
+vbfloat16m1_t test_vluxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m1_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m1_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16mf2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m1_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { + return __riscv_vluxei16_v_bf16m8_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c new file mode 100644 index 000000000..53889a648 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c @@ -0,0 +1,86 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck 
--check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m1x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m1x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m1x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf4x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16mf2x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, 
vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m1x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m2x2_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_v_bf16m4x2_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c new file mode 100644 index 000000000..fdbf90f7e --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c @@ -0,0 +1,70 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t 
test_vluxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf4x3_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16mf2x3_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m1x3_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_v_bf16m2x3_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c new file mode 100644 index 000000000..f43a84004 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c @@ -0,0 +1,70 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m2x4_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m2x4_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4_tumu(vm, vd, rs1, rs2, 
vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m2x4_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf4x4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16mf2x4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m1x4_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_v_bf16m2x4_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c new file mode 100644 index 000000000..d3b4e0e9d --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf4x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf4x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf4x5_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return 
__riscv_vluxseg5ei16_v_bf16mf4x5_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16mf2x5_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_v_bf16m1x5_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c new file mode 100644 index 000000000..7aea5181d --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c @@ -0,0 +1,54 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf4x6_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16mf2x6_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_v_bf16m1x6_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c 
b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c
new file mode 100644
index 000000000..b1b054efc
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c
@@ -0,0 +1,54 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16mf4x7_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16mf2x7_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16m1x7_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16mf4x7_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16mf2x7_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16m1x7_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16mf4x7_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16mf2x7_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16m1x7_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16mf4x7_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16mf2x7_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_v_bf16m1x7_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c
new file mode 100644
index 000000000..f51f026b4
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c
@@ -0,0 +1,54 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2,
size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf4x8_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16mf2x8_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_v_bf16m1x8_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c new file mode 100644 index 000000000..22e3de754 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c @@ -0,0 +1,127 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vle16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_tu(vbfloat16m4_t vd, const 
__bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16mf4_t test_vle16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4_t test_vle16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4_t test_vle16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c new file mode 100644 index 000000000..833af8360 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c @@ -0,0 +1,145 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ 
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const 
__bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c new file mode 100644 index 000000000..055a93c69 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c @@ -0,0 +1,144 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, 
+ const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c new file mode 100644 index 000000000..a362719ed --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c @@ -0,0 +1,143 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + 
return __riscv_vloxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c new file mode 100644 index 000000000..1583c701e --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c @@ -0,0 +1,117 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o 
- | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c new file mode 100644 index 000000000..1fe84c1e0 --- /dev/null +++ 
b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c @@ -0,0 +1,117 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git 
a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c new file mode 100644 index 000000000..f40b058f9 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c @@ -0,0 +1,91 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg5ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg5ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg5ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg5ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c new file mode 100644 index 000000000..21658c08e --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c @@ -0,0 +1,91 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t 
vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg6ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg6ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg6ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg6ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c new file mode 100644 index 000000000..ef77dd579 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c @@ -0,0 +1,91 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + 
size_t vl) { + return __riscv_vloxseg7ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg7ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg7ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg7ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c new file mode 100644 index 000000000..656f0c9cf --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c @@ -0,0 +1,91 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg8ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg8ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, 
+ size_t vl) { + return __riscv_vloxseg8ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg8ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c new file mode 100644 index 000000000..f8c41d6d8 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c @@ -0,0 +1,144 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vlse16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vlse16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + 
const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vlse16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vlse16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c new file mode 100644 index 000000000..b9f222a67 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c @@ -0,0 +1,112 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tu(vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tu(vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tu(vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tu(vd, rs1, vl); +} 
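+
+// Note: the suffixes below follow the intrinsics policy naming scheme: _tu is
+// the unmasked tail-undisturbed form (tail elements come from vd), _tum is
+// the masked tail-undisturbed form, _tumu additionally keeps masked-off
+// elements from vd, and _mu keeps masked-off elements with the tail agnostic.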
+ +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c new file mode 100644 index 000000000..f4da7648e --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c @@ -0,0 +1,137 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tu(vd, rs1, new_vl, vl); +} + 
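+
+// Note: in the fault-only-first (ff) variants, *new_vl receives the updated
+// vl, i.e. the number of segments actually loaded when an element past
+// index 0 would otherwise trap.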
+vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_mu(vm, vd, rs1, new_vl, vl); +} 
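The generated tests above exercise every (type, policy) combination mechanically. For orientation, the minimal sketch below shows how a fault-only-first segment load is typically strip-mined. It is an illustration only: it assumes the non-policy `__riscv_vlseg2e16ff_v_bf16m1x2` form defined earlier in this series, and the function name `scan_pairs` and the elided processing step are invented for the example.

#include <riscv_vector.h>

// Illustrative sketch: walk n two-field bfloat16 segments starting at src.
void scan_pairs(const __bf16 *src, size_t n) {
  size_t avl = n;
  while (avl > 0) {
    size_t vl = __riscv_vsetvl_e16m1(avl);
    size_t new_vl;
    // Loads up to vl segments; stops early (new_vl < vl) instead of trapping
    // if an element past index 0 faults. A fault on element 0 still traps,
    // so new_vl >= 1 whenever the call returns.
    vbfloat16m1x2_t seg = __riscv_vlseg2e16ff_v_bf16m1x2(src, &new_vl, vl);
    // ... process the two fields of seg (e.g. via vget) ...
    avl -= new_vl;
    src += 2 * new_vl; // each segment holds two __bf16 elements
  }
}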
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c new file mode 100644 index 000000000..ada8862cc --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c @@ -0,0 +1,92 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tu(vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tu(vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c new file mode 100644 index 000000000..39ba8bc15 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c @@ -0,0 +1,112 @@ +// REQUIRES: 
riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c 
b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c new file mode 100644 index 000000000..c99a67e18 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c @@ -0,0 +1,92 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tu(vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tu(vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c new file mode 100644 index 000000000..21b323760 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c @@ -0,0 +1,112 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ 
+// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c new file mode 100644 index 000000000..557fb3830 --- /dev/null +++ 
b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c @@ -0,0 +1,72 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tu(vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c new file mode 100644 index 000000000..6448d8224 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c @@ -0,0 +1,87 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tum(vbool64_t vm, 
+ vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c new file mode 100644 index 000000000..bcf162249 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c @@ -0,0 +1,72 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tu(vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 
*rs1, size_t vl) { + return __riscv_vlseg6e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c new file mode 100644 index 000000000..b86639e89 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c @@ -0,0 +1,87 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_mu(vm, vd, rs1, new_vl, vl); +} + 
+vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c new file mode 100644 index 000000000..75ee14121 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c @@ -0,0 +1,72 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tu(vd, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c new file mode 100644 index 000000000..a1055a16b --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c @@ -0,0 +1,87 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tu(vd, rs1, new_vl, vl); +} + 
+vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c new file mode 100644 index 000000000..422a03063 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c @@ -0,0 +1,72 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tu(vd, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x8_t 
test_vlseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c new file mode 100644 index 000000000..8f481a921 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c @@ -0,0 +1,87 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tumu(vm, vd, rs1, 
new_vl, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c new file mode 100644 index 000000000..b2425cea0 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c @@ -0,0 +1,133 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + 
return __riscv_vlsseg2e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c new file mode 100644 index 000000000..f5de0447d --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c @@ -0,0 +1,109 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + 
ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c new file mode 100644 index 000000000..5ac99c723 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c @@ -0,0 +1,109 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, + 
vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c new file mode 100644 index 000000000..5ae1c9339 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c @@ -0,0 +1,85 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg5e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg5e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tumu(vm, vd, rs1, rs2, vl); 
+} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg5e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c new file mode 100644 index 000000000..10ee4813e --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c @@ -0,0 +1,85 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg6e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg6e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + 
size_t vl) { + return __riscv_vlsseg6e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c new file mode 100644 index 000000000..f470b0411 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c @@ -0,0 +1,85 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c new file mode 100644 index 000000000..4d39fd4b3 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c @@ -0,0 +1,85 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + 
ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c new file mode 100644 index 000000000..292f41ef5 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c @@ -0,0 +1,144 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, 
rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} diff --git 
a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c new file mode 100644 index 000000000..424a7553c --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c @@ -0,0 +1,143 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, 
+ vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c new file mode 100644 index 000000000..fb030ff84 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c @@ -0,0 +1,117 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + 
vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c new file mode 100644 index 000000000..1d7889b7e --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c @@ -0,0 +1,117 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return 
__riscv_vluxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c new file mode 100644 index 000000000..c3d3a8930 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c @@ -0,0 +1,91 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg5ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg5ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg5ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg5ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return 
__riscv_vluxseg5ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg5ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c new file mode 100644 index 000000000..925c82dd7 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c @@ -0,0 +1,91 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg6ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg6ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg6ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg6ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg6ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return 
__riscv_vluxseg6ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c new file mode 100644 index 000000000..b8826ce6a --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c @@ -0,0 +1,91 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg7ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg7ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg7ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg7ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg7ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c new file mode 100644 index 000000000..2f3d5d06a --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c @@ -0,0 +1,91 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> +
+vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg8ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg8ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg8ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg8ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vle16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vle16.c new file mode 100644 index 000000000..c62108cd5 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vle16.c @@ -0,0 +1,122 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4_t test_vle16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t vl) { + return __riscv_vle16_tu(vd, rs1, vl); +} + +vbfloat16mf4_t
test_vle16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4_t test_vle16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4_t test_vle16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2_t test_vle16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1_t test_vle16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} + +vbfloat16m2_t test_vle16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} + +vbfloat16m4_t test_vle16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} + +vbfloat16m8_t test_vle16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vle16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vle16ff.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vle16ff.c new file mode 100644 index 000000000..8311d7fa1 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vle16ff.c @@ -0,0 +1,140 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) {
+ return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vle16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4_t test_vle16ff_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4_t test_vle16ff_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2_t test_vle16ff_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1_t test_vle16ff_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2_t test_vle16ff_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4_t 
test_vle16ff_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m8_t test_vle16ff_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { + return __riscv_vle16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxei16.c new file mode 100644 index 000000000..053e6dd94 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxei16.c @@ -0,0 +1,140 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl) { + return __riscv_vloxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vloxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t
test_vloxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vloxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vloxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vloxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vloxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vloxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vloxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vloxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vloxei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg2ei16.c new file mode 100644 index 000000000..cebce8595 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg2ei16.c @@ -0,0 +1,139 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t
test_vloxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vloxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg3ei16.c new file mode 100644 index 000000000..7dc1de409 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg3ei16.c @@ -0,0 +1,113 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, +
vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg3ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg4ei16.c new file mode 100644 index 000000000..a8db59018 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg4ei16.c @@ -0,0 +1,113 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, +
vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vloxseg4ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg5ei16.c new file mode 100644 index 000000000..28cb437cb --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg5ei16.c @@ -0,0 +1,87 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg5ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg5ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2,
size_t vl) { + return __riscv_vloxseg5ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg5ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg5ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg5ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg6ei16.c new file mode 100644 index 000000000..9745d16e8 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg6ei16.c @@ -0,0 +1,87 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg6ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg6ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg6ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg6ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t
rs2, size_t vl) { + return __riscv_vloxseg6ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg6ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg7ei16.c new file mode 100644 index 000000000..6b64fef2c --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg7ei16.c @@ -0,0 +1,87 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg7ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg7ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg7ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg7ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg7ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg8ei16.c new file mode 100644 index 000000000..e5b6607d2 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vloxseg8ei16.c @@ -0,0 +1,87 @@ +#include <riscv_vector.h> +#include <stdint.h>
+ +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg8ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg8ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vloxseg8ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vloxseg8ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vloxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlse16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlse16.c new file mode 100644 index 000000000..f31b6dae1 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlse16.c @@ -0,0 +1,140 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4_t test_vlse16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m8_t
test_vlse16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlse16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vlse16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vlse16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vlse16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vlse16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vlse16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vlse16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vlse16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vlse16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlse16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg2e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg2e16.c new file mode 100644 
index 000000000..adf0bcfd7 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg2e16.c @@ -0,0 +1,108 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tu(vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tu(vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tu(vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg2e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg2e16ff.c
b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg2e16ff.c new file mode 100644 index 000000000..94daad69a --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg2e16ff.c @@ -0,0 +1,132 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return
__riscv_vlseg2e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg2e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg3e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg3e16.c new file mode 100644 index 000000000..cf0d583ff --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg3e16.c @@ -0,0 +1,88 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tu(vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tu(vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, +
const __bf16 *rs1, size_t vl) { + return __riscv_vlseg3e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg3e16ff.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg3e16ff.c new file mode 100644 index 000000000..24a610a5d --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg3e16ff.c @@ -0,0 +1,107 @@ +#include <riscv_vector.h> +#include <stdint.h> + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg3e16ff_mu(vm, vd, rs1,
new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg4e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg4e16.c new file mode 100644 index 000000000..a0311857a --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg4e16.c @@ -0,0 +1,88 @@ +#include +#include + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tu(vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tu(vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg4e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg4e16ff.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg4e16ff.c new file mode 100644 index 000000000..cc7cd3e8f --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg4e16ff.c @@ -0,0 +1,107 @@ +#include +#include + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t 
test_vlseg4e16ff_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg4e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg5e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg5e16.c new file mode 100644 index 000000000..07e8b5d4e --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg5e16.c @@ -0,0 +1,68 @@ +#include +#include + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { + return 
__riscv_vlseg5e16_tu(vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg5e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg5e16ff.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg5e16ff.c new file mode 100644 index 000000000..e13f2ef80 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg5e16ff.c @@ -0,0 +1,82 @@ +#include +#include + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + 
size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg5e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg6e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg6e16.c new file mode 100644 index 000000000..58af0751a --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg6e16.c @@ -0,0 +1,68 @@ +#include +#include + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tu(vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg6e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg6e16ff.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg6e16ff.c new file mode 100644 index 000000000..b27f2357d --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg6e16ff.c @@ -0,0 +1,82 
@@ +#include +#include + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg6e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg7e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg7e16.c new file mode 100644 index 000000000..4bfca3c35 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg7e16.c @@ -0,0 +1,68 @@ +#include +#include + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tu(vd, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x7_t 
test_vlseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg7e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg7e16ff.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg7e16ff.c new file mode 100644 index 000000000..af9b65e7e --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg7e16ff.c @@ -0,0 +1,82 @@ +#include +#include + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x7_t 
test_vlseg7e16ff_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg7e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg8e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg8e16.c new file mode 100644 index 000000000..653938350 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg8e16.c @@ -0,0 +1,68 @@ +#include +#include + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tu(vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tu(vd, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tu(vd, rs1, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tum(vm, vd, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tum(vm, vd, rs1, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_tumu(vm, vd, rs1, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_mu(vm, vd, rs1, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_mu(vm, vd, rs1, vl); +} + +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { + return __riscv_vlseg8e16_mu(vm, vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg8e16ff.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg8e16ff.c new file mode 100644 index 000000000..a4c013385 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlseg8e16ff.c @@ -0,0 +1,82 @@ +#include +#include + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + 
return __riscv_vlseg8e16ff_tu(vd, rs1, new_vl, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tum(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_tumu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_mu(vm, vd, rs1, new_vl, vl); +} + +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { + return __riscv_vlseg8e16ff_mu(vm, vd, rs1, new_vl, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg2e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg2e16.c new file mode 100644 index 000000000..8fb0cd0fb --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg2e16.c @@ -0,0 +1,129 @@ +#include +#include + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tum(vbool16_t vm, 
vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg2e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg2e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg3e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg3e16.c new file mode 100644 index 000000000..cd0bf487b --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg3e16.c @@ -0,0 +1,105 @@ +#include +#include + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t 
test_vlsseg3e16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg3e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg3e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg4e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg4e16.c new file mode 100644 index 000000000..533804a3f --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg4e16.c @@ -0,0 +1,105 @@ +#include +#include + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tum(vm, vd, rs1, rs2, 
vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg4e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg4e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg5e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg5e16.c new file mode 100644 index 000000000..677e6f2ec --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg5e16.c @@ -0,0 +1,81 @@ +#include +#include + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg5e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + 
return __riscv_vlsseg5e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg5e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg5e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg6e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg6e16.c new file mode 100644 index 000000000..bdae126e0 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg6e16.c @@ -0,0 +1,81 @@ +#include +#include + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg6e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg6e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_mu(vbool32_t vm, + 
vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg6e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg6e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg7e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg7e16.c new file mode 100644 index 000000000..efd8b3a9d --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg7e16.c @@ -0,0 +1,81 @@ +#include +#include + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg7e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg7e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg8e16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg8e16.c new file mode 100644 index 000000000..97fd79283 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vlsseg8e16.c @@ -0,0 +1,81 @@ +#include +#include + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 
*rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { + return __riscv_vlsseg8e16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { + return __riscv_vlsseg8e16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxei16.c new file mode 100644 index 000000000..226dec981 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxei16.c @@ -0,0 +1,140 @@ +#include +#include + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl) { + return __riscv_vluxei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + 
+vbfloat16mf2_t test_vluxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vluxei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vluxei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4_t test_vluxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2_t test_vluxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1_t test_vluxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2_t test_vluxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4_t test_vluxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m8_t test_vluxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { + return __riscv_vluxei16_mu(vm, vd, rs1, rs2, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg2ei16.c new file mode 100644 index 000000000..dec0690d9 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg2ei16.c @@ -0,0 +1,139 @@ +#include +#include + +vbfloat16mf4x2_t 
test_vluxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tu(vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tum(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { + return __riscv_vluxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_tumu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg2ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m2x2_t 
test_vluxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd,
+                                const __bf16 *rs1,
+                                vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m4_t rs2, size_t vl) {
+  return __riscv_vluxseg2ei16_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg3ei16.c
new file mode 100644
index 000000000..127d97bb5
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg3ei16.c
@@ -0,0 +1,113 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm,
+                                                   vbfloat16mf4x3_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg3ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm,
+                                                   vbfloat16mf2x3_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg3ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tum(vbool16_t vm,
+                                                 vbfloat16m1x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tum(vbool8_t vm,
+                                                 vbfloat16m2x3_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x3_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg3ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x3_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg3ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm,
+                                                  vbfloat16m1x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm,
+                                                  vbfloat16m2x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm,
+                                                  vbfloat16mf4x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm,
+                                                  vbfloat16mf2x3_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_mu(vbool16_t vm,
+                                                vbfloat16m1x3_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg3ei16_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg4ei16.c
new file mode 100644
index 000000000..387738336
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg4ei16.c
@@ -0,0 +1,113 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm,
+                                                   vbfloat16mf4x4_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg4ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm,
+                                                   vbfloat16mf2x4_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg4ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tum(vbool16_t vm,
+                                                 vbfloat16m1x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tum(vbool8_t vm,
+                                                 vbfloat16m2x4_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x4_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg4ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x4_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg4ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm,
+                                                  vbfloat16m1x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm,
+                                                  vbfloat16m2x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm,
+                                                  vbfloat16mf4x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm,
+                                                  vbfloat16mf2x4_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_mu(vbool16_t vm,
+                                                vbfloat16m1x4_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m2_t rs2, size_t vl) {
+  return __riscv_vluxseg4ei16_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg5ei16.c
new file mode 100644
index 000000000..e44715aab
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg5ei16.c
@@ -0,0 +1,87 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm,
+                                                   vbfloat16mf4x5_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg5ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm,
+                                                   vbfloat16mf2x5_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg5ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tum(vbool16_t vm,
+                                                 vbfloat16m1x5_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x5_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg5ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x5_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg5ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm,
+                                                  vbfloat16m1x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm,
+                                                  vbfloat16mf4x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm,
+                                                  vbfloat16mf2x5_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_mu(vbool16_t vm,
+                                                vbfloat16m1x5_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg5ei16_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg6ei16.c
new file mode 100644
index 000000000..86655a32a
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg6ei16.c
@@ -0,0 +1,87 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm,
+                                                   vbfloat16mf4x6_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg6ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm,
+                                                   vbfloat16mf2x6_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg6ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tum(vbool16_t vm,
+                                                 vbfloat16m1x6_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x6_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg6ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x6_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg6ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm,
+                                                  vbfloat16m1x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm,
+                                                  vbfloat16mf4x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm,
+                                                  vbfloat16mf2x6_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_mu(vbool16_t vm,
+                                                vbfloat16m1x6_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg6ei16_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg7ei16.c
new file mode 100644
index 000000000..f0473d13d
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg7ei16.c
@@ -0,0 +1,87 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm,
+                                                   vbfloat16mf4x7_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg7ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm,
+                                                   vbfloat16mf2x7_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg7ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tum(vbool16_t vm,
+                                                 vbfloat16m1x7_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x7_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg7ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x7_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg7ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm,
+                                                  vbfloat16m1x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm,
+                                                  vbfloat16mf4x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm,
+                                                  vbfloat16mf2x7_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_mu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_mu(vbool16_t vm,
+                                                vbfloat16m1x7_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg7ei16_mu(vm, vd, rs1, rs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg8ei16.c
new file mode 100644
index 000000000..07ed8156f
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vluxseg8ei16.c
@@ -0,0 +1,87 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16_tu(vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm,
+                                                   vbfloat16mf4x8_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf4_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg8ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm,
+                                                   vbfloat16mf2x8_t vd,
+                                                   const __bf16 *rs1,
+                                                   vuint16mf2_t rs2,
+                                                   size_t vl) {
+  return __riscv_vluxseg8ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tum(vbool16_t vm,
+                                                 vbfloat16m1x8_t vd,
+                                                 const __bf16 *rs1,
+                                                 vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16_tum(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm,
+                                                    vbfloat16mf4x8_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf4_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg8ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x8_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
+  return __riscv_vluxseg8ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm,
+                                                  vbfloat16m1x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
+  return __riscv_vluxseg8ei16_tumu(vm, vd, rs1, rs2, vl);
+}
+
+vbfloat16mf4x8_t 
test_vluxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} + +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { + return __riscv_vluxseg8ei16_mu(vm, vd, rs1, rs2, vl); +} From 24b6819c3916560d6ce3091478e57ccb1a677e13 Mon Sep 17 00:00:00 2001 From: eopXD Date: Sat, 4 Nov 2023 10:30:33 -0700 Subject: [PATCH 006/151] Define BFloat16 convert intrinsics vfncvtbf16.f.f.w vd, vs2, vm vfwcvtbf16.f.f.v vd, vs2, vm --- .../rvv_intrinsic_gen/bfloat16_inst.py | 16 ++++++++- .../templates/cvt_op_template.py | 33 +++++++++++++++---- 2 files changed, 41 insertions(+), 8 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py index 771f4fb14..d391b9bde 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py @@ -29,9 +29,11 @@ from templates import reint_op_template from templates import get_set_diff_lmul_op_template from templates import misc_op_template -from constants import LMULS +from templates import cvt_op_template +from constants import LMULS, WLMULS, NCVTLMULS SEWS = [16] +NSEWS = [32] TYPES = ["bfloat"] @@ -103,6 +105,18 @@ def gen(g): "vector-indexed-segment-store", ["vsoxseg", "vsuxseg"], TYPES, SEWS, LMULS, decorators.has_masking_no_maskedoff) + #################################################################### + g.start_group("BFloat16 Convert Intrinsics") + + g.function_group(cvt_op_template, "Vector Narrowing Convert Intrinsics", + "bf16-vector-narrow-convert", ["ncvtbf16"], "bfloat16", + NSEWS, NCVTLMULS, + decorators.has_masking_maskedoff_policy_frm) + + g.function_group(cvt_op_template, "Vector Widening Convert Intrinsics", + "bf16-vector-widening-convert", ["wcvtbf16"], "bfloat16", + SEWS, WLMULS, decorators.has_masking_maskedoff_policy) + #################################################################### g.start_group("BFloat16 Miscellaneous Vector Utility Intrinsics") diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py index 48b0a62e7..512a7fe75 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py @@ -40,6 +40,13 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): # [dst_type, dst_type_short, src_type, src_type_short] if type_list == ITYPES: convert_set = [["int", "x", "int", "x"], ["uint", "x", "uint", "x"]] + elif type_list == "bfloat16": + if "ncvtbf16" in op_list: + convert_set = [["bfloat", "bf", "float", "f"]] + elif "wcvtbf16" in op_list: + convert_set = [["float", "f", "bfloat", "bf"]] + else: + assert False, "Unhandled instruction with type_list = 'bfloat16'" else: convert_set = [["int", "x", "float", "f"], ["uint", "xu", "float", "f"], ["float", "f", "int", "x"], ["float", "f", "uint", "xu"], @@ -63,7 +70,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): # A double-width IEEE floating-point value can always represent a # single-width IEEE 
floating-point value exactly. # So we don't need frm variant for vfwcvt.f.f, and vfwcvt.f.x(u) here - if op == "wcvt" and decorator.flags & ExtraAttr.HAS_FRM and\ + if "wcvt" in op and decorator.flags & ExtraAttr.HAS_FRM and\ (args["TYPES0"] == args["TYPES2"] or\ ("float" in args["TYPES0"] and "int" in args["TYPES2"])): continue @@ -75,16 +82,16 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): args["MIDDLE"] = "v" factor = "" - if op == "wcvt": + if "wcvt" in op: factor = "W" - if op == "ncvt": + if "ncvt" in op: factor = "N" args["MIDDLE"] = "w" args["LLMUL"] = args[factor + "LMUL"] args["LSEW"] = args[factor + "SEW"] - if args["TYPES1"] == "f" or args["TYPES3"] == "f": + if "f" in args["TYPES1"] or "f" in args["TYPES3"]: args["OP"] = "f" + args["OP"] if args["TYPES0"] == "uint": @@ -115,9 +122,17 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): if not type_helper.valid_vtype(dst_type) or\ not type_helper.valid_vtype(src_type): continue - func_name = \ - "{OP}_{TYPES1}_{TYPES3}_{MIDDLE}_{D_TYPE}{LSEW}m{LLMUL}".format_map\ - (args) + if type_list == "bfloat16": + if "ncvt" in args["OP"]: + func_name = "{OP}_f_f_w_bf{LSEW}m{LLMUL}".format_map(args) + elif "wcvt" in args["OP"]: + func_name = "{OP}_f_f_v_f{LSEW}m{LLMUL}".format_map(args) + else: + assert False, "Unhandled instruction for bfloat16 type" + else: + func_name = \ + "{OP}_{TYPES1}_{TYPES3}_{MIDDLE}_{D_TYPE}{LSEW}m{LLMUL}".format_map\ + (args) G.func( inst_info, name=func_name + decorator.func_suffix, @@ -134,6 +149,10 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): if decorator.flags & ExtraAttr.HAS_FRM: continue + # BFloat16 converts do not have `_rod`/`_rtz` instructions + if type_list == "bfloat16": + continue + if args["TYPES1"] != args["TYPES3"] and args["TYPES3"] == "f": args["OP"] = args["OP"] + "_rtz" inst_info = InstInfo.get( From 4c77e23687566c55aecfcb3d1a2ecf1a68811b2d Mon Sep 17 00:00:00 2001 From: eopXD Date: Sat, 4 Nov 2023 10:37:03 -0700 Subject: [PATCH 007/151] [Auto-gen] Update bfloat16 documents under ../auto-generated. 
(make git-commit-autogen-bf16-doc) --- auto-generated/bfloat16/intrinsic_funcs.adoc | 76 ++++++ .../02_bfloat16_convert_intrinsics.adoc | 76 ++++++ ...cellaneous_vector_utility_intrinsics.adoc} | 0 .../bfloat16/overloaded_intrinsic_funcs.adoc | 59 +++++ .../02_bfloat16_convert_intrinsics.adoc | 59 +++++ ...cellaneous_vector_utility_intrinsics.adoc} | 0 .../policy_funcs/intrinsic_funcs.adoc | 220 ++++++++++++++++++ .../02_bfloat16_convert_intrinsics.adoc | 220 ++++++++++++++++++ ...cellaneous_vector_utility_intrinsics.adoc} | 0 .../overloaded_intrinsic_funcs.adoc | 160 +++++++++++++ .../02_bfloat16_convert_intrinsics.adoc | 160 +++++++++++++ ...cellaneous_vector_utility_intrinsics.adoc} | 0 12 files changed, 1030 insertions(+) create mode 100644 auto-generated/bfloat16/intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc rename auto-generated/bfloat16/intrinsic_funcs/{02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc => 03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc} (100%) create mode 100644 auto-generated/bfloat16/overloaded_intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc rename auto-generated/bfloat16/overloaded_intrinsic_funcs/{02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc => 03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc} (100%) create mode 100644 auto-generated/bfloat16/policy_funcs/intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc rename auto-generated/bfloat16/policy_funcs/intrinsic_funcs/{02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc => 03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc} (100%) create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc rename auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/{02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc => 03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc} (100%) diff --git a/auto-generated/bfloat16/intrinsic_funcs.adoc b/auto-generated/bfloat16/intrinsic_funcs.adoc index 1ac981ad9..f9304847d 100644 --- a/auto-generated/bfloat16/intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/intrinsic_funcs.adoc @@ -1338,6 +1338,82 @@ void __riscv_vsuxseg2ei16_v_bf16m4x2_m(vbool4_t vm, __bf16 *rs1, size_t vl); ---- +=== BFloat16 Convert Intrinsics + +[[bf16-vector-narrow-convert]] +==== Vector Narrowing Convert Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4(vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2(vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1(vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2(vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4(vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_m(vbool64_t vm, + vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_m(vbool32_t vm, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_m(vbool16_t vm, vfloat32m2_t vs2, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_m(vbool8_t vm, vfloat32m4_t vs2, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_m(vbool4_t vm, vfloat32m8_t vs2, + size_t vl); +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm(vfloat32mf2_t vs2, + unsigned int frm, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm(vfloat32m1_t vs2, + unsigned int frm, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm(vfloat32m2_t vs2, + unsigned int frm, 
size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm(vfloat32m4_t vs2, + unsigned int frm, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm(vfloat32m8_t vs2, + unsigned int frm, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_m(vbool64_t vm, + vfloat32mf2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_m(vbool32_t vm, + vfloat32m1_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm_m(vbool16_t vm, + vfloat32m2_t vs2, + unsigned int frm, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm_m(vbool8_t vm, + vfloat32m4_t vs2, + unsigned int frm, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm_m(vbool4_t vm, + vfloat32m8_t vs2, + unsigned int frm, size_t vl); +---- + +[[bf16-vector-widening-convert]] +==== Vector Widening Convert Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2(vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1(vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2(vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4(vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8(vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2_m(vbool64_t vm, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1_m(vbool32_t vm, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2_m(vbool16_t vm, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4_m(vbool8_t vm, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_m(vbool4_t vm, vbfloat16m4_t vs2, + size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics [[reinterpret-cast-conversion]] diff --git a/auto-generated/bfloat16/intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc b/auto-generated/bfloat16/intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc new file mode 100644 index 000000000..a6e7b0277 --- /dev/null +++ b/auto-generated/bfloat16/intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc @@ -0,0 +1,76 @@ + +=== BFloat16 Convert Intrinsics + +[[bf16-vector-narrow-convert]] +==== Vector Narrowing Convert Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4(vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2(vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1(vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2(vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4(vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_m(vbool64_t vm, + vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_m(vbool32_t vm, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_m(vbool16_t vm, vfloat32m2_t vs2, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_m(vbool8_t vm, vfloat32m4_t vs2, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_m(vbool4_t vm, vfloat32m8_t vs2, + size_t vl); +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm(vfloat32mf2_t vs2, + unsigned int frm, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm(vfloat32m1_t vs2, + unsigned int frm, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm(vfloat32m2_t vs2, + unsigned int frm, size_t vl); 
+vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm(vfloat32m4_t vs2, + unsigned int frm, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm(vfloat32m8_t vs2, + unsigned int frm, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_m(vbool64_t vm, + vfloat32mf2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_m(vbool32_t vm, + vfloat32m1_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm_m(vbool16_t vm, + vfloat32m2_t vs2, + unsigned int frm, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm_m(vbool8_t vm, + vfloat32m4_t vs2, + unsigned int frm, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm_m(vbool4_t vm, + vfloat32m8_t vs2, + unsigned int frm, size_t vl); +---- + +[[bf16-vector-widening-convert]] +==== Vector Widening Convert Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2(vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1(vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2(vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4(vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8(vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2_m(vbool64_t vm, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1_m(vbool32_t vm, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2_m(vbool16_t vm, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4_m(vbool8_t vm, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_m(vbool4_t vm, vbfloat16m4_t vs2, + size_t vl); +---- diff --git a/auto-generated/bfloat16/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc similarity index 100% rename from auto-generated/bfloat16/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc rename to auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc index c00a11ebb..9692805cf 100644 --- a/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc @@ -951,6 +951,65 @@ void __riscv_vsuxseg2ei16(vbool4_t vm, __bf16 *rs1, vuint16m4_t vs2, vbfloat16m4x2_t vs3, size_t vl); ---- +=== BFloat16 Convert Intrinsics + +[[overloaded-bf16-vector-narrow-convert]] +==== Vector Narrowing Convert Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vfncvtbf16_f(vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f(vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f(vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f(vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f(vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f(vbool64_t vm, vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f(vbool32_t vm, vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f(vbool16_t vm, vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f(vbool8_t vm, vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f(vbool4_t vm, vfloat32m8_t vs2, size_t vl); 
+vbfloat16mf4_t __riscv_vfncvtbf16_f(vfloat32mf2_t vs2, unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f(vfloat32m1_t vs2, unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f(vfloat32m2_t vs2, unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f(vfloat32m4_t vs2, unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f(vfloat32m8_t vs2, unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f(vbool64_t vm, vfloat32mf2_t vs2, + unsigned int frm, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f(vbool32_t vm, vfloat32m1_t vs2, + unsigned int frm, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f(vbool16_t vm, vfloat32m2_t vs2, + unsigned int frm, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f(vbool8_t vm, vfloat32m4_t vs2, + unsigned int frm, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f(vbool4_t vm, vfloat32m8_t vs2, + unsigned int frm, size_t vl); +---- + +[[overloaded-bf16-vector-widening-convert]] +==== Vector Widening Convert Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwcvtbf16_f(vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f(vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f(vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f(vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f(vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f(vbool64_t vm, vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f(vbool32_t vm, vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f(vbool16_t vm, vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f(vbool8_t vm, vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f(vbool4_t vm, vbfloat16m4_t vs2, size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics [[overloaded-reinterpret-cast-conversion]] diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc new file mode 100644 index 000000000..151c6c4ec --- /dev/null +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc @@ -0,0 +1,59 @@ + +=== BFloat16 Convert Intrinsics + +[[overloaded-bf16-vector-narrow-convert]] +==== Vector Narrowing Convert Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vfncvtbf16_f(vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f(vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f(vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f(vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f(vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f(vbool64_t vm, vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f(vbool32_t vm, vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f(vbool16_t vm, vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f(vbool8_t vm, vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f(vbool4_t vm, vfloat32m8_t vs2, size_t vl); +vbfloat16mf4_t __riscv_vfncvtbf16_f(vfloat32mf2_t vs2, unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f(vfloat32m1_t vs2, unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f(vfloat32m2_t vs2, unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f(vfloat32m4_t vs2, unsigned int frm, + size_t vl); +vbfloat16m4_t 
__riscv_vfncvtbf16_f(vfloat32m8_t vs2, unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f(vbool64_t vm, vfloat32mf2_t vs2, + unsigned int frm, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f(vbool32_t vm, vfloat32m1_t vs2, + unsigned int frm, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f(vbool16_t vm, vfloat32m2_t vs2, + unsigned int frm, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f(vbool8_t vm, vfloat32m4_t vs2, + unsigned int frm, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f(vbool4_t vm, vfloat32m8_t vs2, + unsigned int frm, size_t vl); +---- + +[[overloaded-bf16-vector-widening-convert]] +==== Vector Widening Convert Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwcvtbf16_f(vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f(vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f(vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f(vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f(vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f(vbool64_t vm, vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f(vbool32_t vm, vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f(vbool16_t vm, vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f(vbool8_t vm, vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f(vbool4_t vm, vbfloat16m4_t vs2, size_t vl); +---- diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc similarity index 100% rename from auto-generated/bfloat16/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc rename to auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc index 25c99db86..78157d29a 100644 --- a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc @@ -2362,6 +2362,226 @@ vbfloat16m4x2_t __riscv_vluxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, ==== Vector Indexed Segment Store Intrinsics Intrinsics here don't have a policy variant. 
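NOTE: The following is an editorial sketch, not generated output. It shows one
way the policy segment loads above might be used: `vd` carries the previous
register-group contents, so a `_tu` call leaves tail elements undisturbed. The
wrapper name and the byte-offset index vector are illustrative assumptions.

[,c]
----
vbfloat16m1x2_t load_bf16_pairs_tu(vbfloat16m1x2_t vd, const __bf16 *rs1,
                                   vuint16m1_t rs2, size_t vl) {
  // rs2 holds byte offsets; elements past vl keep their old values from vd
  return __riscv_vluxseg2ei16_v_bf16m1x2_tu(vd, rs1, rs2, vl);
}
----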
+=== BFloat16 Convert Intrinsics + +[[policy-variant-bf16-vector-narrow-convert]] +==== Vector Narrowing Convert Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_tu(vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_tu(vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_tu(vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_tu(vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_tu(vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_tum(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_tum(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_tum(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_tumu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_tumu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_tumu(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_tumu(vbool8_t vm, + vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_tumu(vbool4_t vm, + vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_mu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_mu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tu(vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tu(vbfloat16mf2_t vd, + vfloat32m1_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tu(vbfloat16m1_t vd, + vfloat32m2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tu(vbfloat16m2_t vd, + vfloat32m4_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tu(vbfloat16m4_t vd, + vfloat32m8_t vs2, + unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tum(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, + unsigned int frm, + 
size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tum(vbool8_t vm, + vbfloat16m2_t vd, + vfloat32m4_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tum(vbool4_t vm, + vbfloat16m4_t vd, + vfloat32m8_t vs2, + unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vbool8_t vm, + vbfloat16m2_t vd, + vfloat32m4_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vbool4_t vm, + vbfloat16m4_t vd, + vfloat32m8_t vs2, + unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm_mu(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm_mu(vbool8_t vm, + vbfloat16m2_t vd, + vfloat32m4_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm_mu(vbool4_t vm, + vbfloat16m4_t vd, + vfloat32m8_t vs2, + unsigned int frm, + size_t vl); +---- + +[[policy-variant-bf16-vector-widening-convert]] +==== Vector Widening Convert Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2_tu(vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1_tu(vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2_tu(vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4_tu(vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_tu(vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2_tum(vbool64_t vm, + vfloat32mf2_t vd, + vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2_tumu(vbool64_t vm, + vfloat32mf2_t vd, + vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, 
size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics [[policy-variant-reinterpret-cast-conversion]] diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc new file mode 100644 index 000000000..c807ad197 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc @@ -0,0 +1,220 @@ + +=== BFloat16 Convert Intrinsics + +[[policy-variant-bf16-vector-narrow-convert]] +==== Vector Narrowing Convert Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_tu(vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_tu(vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_tu(vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_tu(vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_tu(vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_tum(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_tum(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_tum(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_tumu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_tumu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_tumu(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_tumu(vbool8_t vm, + vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_tumu(vbool4_t vm, + vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_mu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_mu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); 
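// NOTE (editorial sketch, not generated output): the _rm variants below take
// an explicit rounding mode rather than reading the dynamic frm. A minimal
// sketch, assuming the __RISCV_FRM_RNE constant from the main intrinsics
// specification is in scope; the wrapper name narrow_rne_tu is hypothetical.
// vbfloat16m1_t narrow_rne_tu(vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) {
//   return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl);
// }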
+vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tu(vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tu(vbfloat16mf2_t vd, + vfloat32m1_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tu(vbfloat16m1_t vd, + vfloat32m2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tu(vbfloat16m2_t vd, + vfloat32m4_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tu(vbfloat16m4_t vd, + vfloat32m8_t vs2, + unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tum(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tum(vbool8_t vm, + vbfloat16m2_t vd, + vfloat32m4_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tum(vbool4_t vm, + vbfloat16m4_t vd, + vfloat32m8_t vs2, + unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vbool8_t vm, + vbfloat16m2_t vd, + vfloat32m4_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vbool4_t vm, + vbfloat16m4_t vd, + vfloat32m8_t vs2, + unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_f_w_bf16m1_rm_mu(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_f_w_bf16m2_rm_mu(vbool8_t vm, + vbfloat16m2_t vd, + vfloat32m4_t vs2, + unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_f_w_bf16m4_rm_mu(vbool4_t vm, + vbfloat16m4_t vd, + vfloat32m8_t vs2, + unsigned int frm, + size_t vl); +---- + +[[policy-variant-bf16-vector-widening-convert]] +==== Vector Widening Convert Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2_tu(vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1_tu(vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2_tu(vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4_tu(vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_tu(vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2_tum(vbool64_t vm, + vfloat32mf2_t vd, + 
vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2_tumu(vbool64_t vm, + vfloat32mf2_t vd, + vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_f_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_f_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_f_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_f_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +---- diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc similarity index 100% rename from auto-generated/bfloat16/policy_funcs/intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc rename to auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc index 99bd83e3b..8f77e40d0 100644 --- a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc @@ -1677,6 +1677,166 @@ vbfloat16m4x2_t __riscv_vluxseg2ei16_mu(vbool4_t vm, vbfloat16m4x2_t vd, ==== Vector Indexed Segment Store Intrinsics Intrinsics here don't have a policy variant. 
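NOTE: Editorial sketch, not generated output. With the overloaded `_mu` loads
above, elements whose mask bit is clear keep their old values from `vd`; the
wrapper name below is an illustrative assumption.

[,c]
----
vbfloat16m1x2_t load_bf16_pairs_mu(vbool16_t vm, vbfloat16m1x2_t vd,
                                   const __bf16 *rs1, vuint16m1_t rs2,
                                   size_t vl) {
  // masked-off elements of each segment register keep their prior contents
  return __riscv_vluxseg2ei16_mu(vm, vd, rs1, rs2, vl);
}
----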
+=== BFloat16 Convert Intrinsics + +[[policy-variant-overloadedbf16-vector-narrow-convert]] +==== Vector Narrowing Convert Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vfncvtbf16_f_tu(vbfloat16mf4_t vd, vfloat32mf2_t vs2, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tu(vbfloat16mf2_t vd, vfloat32m1_t vs2, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tu(vbfloat16m1_t vd, vfloat32m2_t vs2, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tu(vbfloat16m2_t vd, vfloat32m4_t vs2, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tu(vbfloat16m4_t vd, vfloat32m8_t vs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_tum(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tum(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tum(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_tumu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tumu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tumu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tumu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tumu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_mu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_mu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +vbfloat16mf4_t __riscv_vfncvtbf16_f_tu(vbfloat16mf4_t vd, vfloat32mf2_t vs2, + unsigned int frm, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tu(vbfloat16mf2_t vd, vfloat32m1_t vs2, + unsigned int frm, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tu(vbfloat16m1_t vd, vfloat32m2_t vs2, + unsigned int frm, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tu(vbfloat16m2_t vd, vfloat32m4_t vs2, + unsigned int frm, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tu(vbfloat16m4_t vd, vfloat32m8_t vs2, + unsigned int frm, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_tum(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tum(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tum(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_tumu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, unsigned int frm, + 
size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tumu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tumu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tumu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tumu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_mu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_mu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, unsigned int frm, + size_t vl); +---- + +[[policy-variant-overloadedbf16-vector-widening-convert]] +==== Vector Widening Convert Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwcvtbf16_f_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_tu(vfloat32m1_t vd, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_tu(vfloat32m2_t vd, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_tu(vfloat32m4_t vd, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_tu(vfloat32m8_t vd, vbfloat16m4_t vs2, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics [[policy-variant-overloadedreinterpret-cast-conversion]] diff --git 
a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc new file mode 100644 index 000000000..94b1ff8f3 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/02_bfloat16_convert_intrinsics.adoc @@ -0,0 +1,160 @@ + +=== BFloat16 Convert Intrinsics + +[[policy-variant-overloadedbf16-vector-narrow-convert]] +==== Vector Narrowing Convert Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vfncvtbf16_f_tu(vbfloat16mf4_t vd, vfloat32mf2_t vs2, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tu(vbfloat16mf2_t vd, vfloat32m1_t vs2, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tu(vbfloat16m1_t vd, vfloat32m2_t vs2, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tu(vbfloat16m2_t vd, vfloat32m4_t vs2, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tu(vbfloat16m4_t vd, vfloat32m8_t vs2, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_tum(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tum(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tum(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_tumu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tumu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tumu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tumu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tumu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_mu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_mu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl); +vbfloat16mf4_t __riscv_vfncvtbf16_f_tu(vbfloat16mf4_t vd, vfloat32mf2_t vs2, + unsigned int frm, size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tu(vbfloat16mf2_t vd, vfloat32m1_t vs2, + unsigned int frm, size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tu(vbfloat16m1_t vd, vfloat32m2_t vs2, + unsigned int frm, size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tu(vbfloat16m2_t vd, vfloat32m4_t vs2, + unsigned int frm, size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tu(vbfloat16m4_t vd, vfloat32m8_t vs2, + unsigned int frm, size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_tum(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tum(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tum(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, unsigned int frm, + size_t 
vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_tumu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_tumu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_tumu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_tumu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_tumu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, unsigned int frm, + size_t vl); +// masked functions +vbfloat16mf4_t __riscv_vfncvtbf16_f_mu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, unsigned int frm, + size_t vl); +vbfloat16mf2_t __riscv_vfncvtbf16_f_mu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, unsigned int frm, + size_t vl); +vbfloat16m1_t __riscv_vfncvtbf16_f_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, unsigned int frm, + size_t vl); +vbfloat16m2_t __riscv_vfncvtbf16_f_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, unsigned int frm, + size_t vl); +vbfloat16m4_t __riscv_vfncvtbf16_f_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, unsigned int frm, + size_t vl); +---- + +[[policy-variant-overloadedbf16-vector-widening-convert]] +==== Vector Widening Convert Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwcvtbf16_f_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_tu(vfloat32m1_t vd, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_tu(vfloat32m2_t vd, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_tu(vfloat32m4_t vd, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_tu(vfloat32m8_t vd, vbfloat16m4_t vs2, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwcvtbf16_f_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwcvtbf16_f_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwcvtbf16_f_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwcvtbf16_f_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwcvtbf16_f_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t 
__riscv_vfwcvtbf16_f_mu(vbool8_t vm, vfloat32m4_t vd,
+                                     vbfloat16m2_t vs2, size_t vl);
+vfloat32m8_t __riscv_vfwcvtbf16_f_mu(vbool4_t vm, vfloat32m8_t vd,
+                                     vbfloat16m4_t vs2, size_t vl);
+----
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc
similarity index 100%
rename from auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/02_bfloat16_miscellaneous_vector_utility_intrinsics.adoc
rename to auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc

From 38a351a6ec5d233ba56305c978e3e2db01cb8c59 Mon Sep 17 00:00:00 2001
From: eopXD
Date: Sat, 4 Nov 2023 10:37:04 -0700
Subject: [PATCH 008/151] [Auto-gen] Update bfloat16 tests under
 ../auto-generated. (make git-commit-autogen-bf16-test)

---
 .../bfloat16/api-testing/vfncvtbf16.c         |  92 +++++++
 .../bfloat16/api-testing/vfwcvtbf16.c         |  47 ++++
 .../bfloat16/llvm-api-tests/vfncvtbf16.c      |  97 +++++++
 .../bfloat16/llvm-api-tests/vfwcvtbf16.c      |  52 ++++
 .../llvm-overloaded-tests/vfncvtbf16.c        |  97 +++++++
 .../llvm-overloaded-tests/vfwcvtbf16.c        |  52 ++++
 .../overloaded-api-testing/vfncvtbf16.c       |  92 +++++++
 .../overloaded-api-testing/vfwcvtbf16.c       |  47 ++++
 .../policy_funcs/api-testing/vfncvtbf16.c     | 243 ++++++++++++++++++
 .../policy_funcs/api-testing/vfwcvtbf16.c     | 102 ++++++++
 .../policy_funcs/llvm-api-tests/vfncvtbf16.c  | 167 ++++++++++++
 .../policy_funcs/llvm-api-tests/vfwcvtbf16.c  |  87 +++++++
 .../llvm-overloaded-tests/vfncvtbf16.c        | 233 +++++++++++++++++
 .../llvm-overloaded-tests/vfwcvtbf16.c        | 107 ++++++++
 .../overloaded-api-testing/vfncvtbf16.c       | 228 ++++++++++++++++
 .../overloaded-api-testing/vfwcvtbf16.c       | 102 ++++++++
 16 files changed, 1845 insertions(+)
 create mode 100644 auto-generated/bfloat16/api-testing/vfncvtbf16.c
 create mode 100644 auto-generated/bfloat16/api-testing/vfwcvtbf16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vfncvtbf16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vfwcvtbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vfncvtbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vfwcvtbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfncvtbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfwcvtbf16.c

diff --git a/auto-generated/bfloat16/api-testing/vfncvtbf16.c b/auto-generated/bfloat16/api-testing/vfncvtbf16.c
new file mode 100644
index 000000000..ca33a95f1
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vfncvtbf16.c
@@ -0,0 +1,92 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4(vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4(vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2(vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2(vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1(vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1(vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2(vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2(vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4(vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4(vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_m(vbool64_t vm, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4_m(vm, vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_m(vbool32_t vm, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2_m(vm, vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_m(vbool16_t vm, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1_m(vm, vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_m(vbool8_t vm, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2_m(vm, vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_m(vbool4_t vm, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4_m(vm, vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm(vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm(vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm(vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1_rm(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm(vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2_rm(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm(vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4_rm(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_m(vbool64_t vm, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_m(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_m(vbool32_t vm, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_m(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_m(vbool16_t vm, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_m(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_m(vbool8_t vm, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_m(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_m(vbool4_t vm, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_m(vm, vs2, __RISCV_FRM_RNE, vl);
+}
diff --git a/auto-generated/bfloat16/api-testing/vfwcvtbf16.c b/auto-generated/bfloat16/api-testing/vfwcvtbf16.c
new file mode 100644
index 000000000..762fa909d
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vfwcvtbf16.c
@@ -0,0 +1,47 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2(vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2(vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1(vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1(vs2, vl);
+}
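// Editor's sketch, not part of the auto-generated patch: how the widening
// conversion is typically used in a stripmined loop. It assumes the scalar
// type __bf16, the bf16 load intrinsic __riscv_vle16_v_bf16m1 defined by the
// load/store patch earlier in this series, and the standard
// __riscv_vsetvl_e16m1 and __riscv_vse32_v_f32m2 intrinsics; the names are
// believed correct but are illustrative, not normative.
static void widen_bf16_to_f32(const __bf16 *src, float *dst, size_t n) {
  while (n > 0) {
    size_t vl = __riscv_vsetvl_e16m1(n);               // elements this pass
    vbfloat16m1_t v = __riscv_vle16_v_bf16m1(src, vl); // load bf16 source
    vfloat32m2_t w = __riscv_vfwcvtbf16_f_f_v_f32m2(v, vl); // widen to f32
    __riscv_vse32_v_f32m2(dst, w, vl);                 // store f32 result
    src += vl;
    dst += vl;
    n -= vl;
  }
}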
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2(vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2(vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4(vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4(vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8(vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8(vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_m(vbool64_t vm, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2_m(vm, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_m(vbool32_t vm, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1_m(vm, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_m(vbool16_t vm, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2_m(vm, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_m(vbool8_t vm, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4_m(vm, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_m(vbool4_t vm, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8_m(vm, vs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c
new file mode 100644
index 000000000..758e0275a
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c
@@ -0,0 +1,97 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4(vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4(vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2(vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2(vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1(vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1(vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2(vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2(vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4(vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4(vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_m(vbool64_t vm, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4_m(vm, vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_m(vbool32_t vm, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2_m(vm, vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_m(vbool16_t vm, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1_m(vm, vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_m(vbool8_t vm, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2_m(vm, vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_m(vbool4_t vm, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4_m(vm, vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm(vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm(vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm(vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1_rm(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm(vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2_rm(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm(vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4_rm(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_m(vbool64_t vm, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_m(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_m(vbool32_t vm, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_m(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_m(vbool16_t vm, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_m(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_m(vbool8_t vm, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_m(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_m(vbool4_t vm, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_m(vm, vs2, __RISCV_FRM_RNE, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c
new file mode 100644
index 000000000..3be23d2d7
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c
@@ -0,0 +1,52 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2(vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2(vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1(vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1(vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2(vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2(vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4(vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4(vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8(vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8(vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_m(vbool64_t vm, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2_m(vm, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_m(vbool32_t vm, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1_m(vm, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_m(vbool16_t vm, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2_m(vm, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_m(vbool8_t vm, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4_m(vm, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_m(vbool4_t vm, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8_m(vm, vs2, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
new file mode 100644
index 000000000..cca27ae83
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
@@ -0,0 +1,97 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4(vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2(vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1(vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2(vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4(vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_m(vbool64_t vm, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_m(vbool32_t vm, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_m(vbool16_t vm, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_m(vbool8_t vm, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_m(vbool4_t vm, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm(vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm(vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm(vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm(vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm(vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_m(vbool64_t vm, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_m(vbool32_t vm, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_m(vbool16_t vm, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_m(vbool8_t vm, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_m(vbool4_t vm, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, __RISCV_FRM_RNE, vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
new file mode 100644
index 000000000..1668c7b2b
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
@@ -0,0 +1,52 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2(vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1(vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2(vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4(vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8(vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_m(vbool64_t vm, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vm, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_m(vbool32_t vm, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vm, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_m(vbool16_t vm, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vm, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_m(vbool8_t vm, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vm, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_m(vbool4_t vm, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vm, vs2, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vfncvtbf16.c b/auto-generated/bfloat16/overloaded-api-testing/vfncvtbf16.c
new file mode 100644
index 000000000..d402fd187
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vfncvtbf16.c
@@ -0,0 +1,92 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4(vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2(vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1(vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2(vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4(vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_m(vbool64_t vm, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_m(vbool32_t vm, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_m(vbool16_t vm, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_m(vbool8_t vm, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_m(vbool4_t vm, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm(vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm(vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm(vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm(vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm(vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_m(vbool64_t vm, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_m(vbool32_t vm, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_m(vbool16_t vm, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_m(vbool8_t vm, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_m(vbool4_t vm, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f(vm, vs2, __RISCV_FRM_RNE, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vfwcvtbf16.c b/auto-generated/bfloat16/overloaded-api-testing/vfwcvtbf16.c
new file mode 100644
index 000000000..9e0306536
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vfwcvtbf16.c
@@ -0,0 +1,47 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2(vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1(vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2(vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4(vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8(vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_m(vbool64_t vm, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vm, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_m(vbool32_t vm, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vm, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_m(vbool16_t vm, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vm, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_m(vbool8_t vm, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vm, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_m(vbool4_t vm, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f(vm, vs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vfncvtbf16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vfncvtbf16.c
new file mode 100644
index 000000000..c408c7a42
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vfncvtbf16.c
@@ -0,0 +1,243 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tu(vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4_tu(vd, vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tu(vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2_tu(vd, vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tu(vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1_tu(vd, vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tu(vbfloat16m2_t vd,
vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_tu(vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tu(vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_tu(vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tum(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_tum(vm, vd, vs2, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tum(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_tum(vm, vd, vs2, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_tum(vm, vd, vs2, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_tum(vm, vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_tum(vm, vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tumu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_tumu(vm, vd, vs2, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tumu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_tumu(vm, vd, vs2, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_tumu(vm, vd, vs2, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_tumu(vm, vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_tumu(vm, vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_mu(vm, vd, vs2, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_mu(vm, vd, vs2, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_mu(vm, vd, vs2, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_mu(vm, vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_mu(vm, vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tu(vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tu(vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tu(vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tu(vd, vs2, 
__RISCV_FRM_RNE, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tu(vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tu(vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tum(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vbool8_t vm, + vbfloat16m2_t vd, + vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vbool4_t vm, + vbfloat16m4_t vd, + vfloat32m8_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); +} diff --git 
a/auto-generated/bfloat16/policy_funcs/api-testing/vfwcvtbf16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vfwcvtbf16.c
new file mode 100644
index 000000000..9ce24c6ba
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vfwcvtbf16.c
@@ -0,0 +1,102 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2_tu(vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tu(vfloat32m1_t vd, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1_tu(vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2_tu(vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4_tu(vd, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8_tu(vd, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8_tum(vm, vd, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8_mu(vm, vd, vs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c
new file mode 100644
index 000000000..d60ec839a
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c
@@ -0,0 +1,167 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tu(vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4_tu(vd, vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tu(vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2_tu(vd, vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tu(vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1_tu(vd, vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tu(vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2_tu(vd, vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tu(vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4_tu(vd, vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4_tum(vm, vd, vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2_tum(vm, vd, vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1_tum(vm, vd, vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2_tum(vm, vd, vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4_tum(vm, vd, vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4_tumu(vm, vd, vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2_tumu(vm, vd, vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1_tumu(vm, vd, vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2_tumu(vm, vd, vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4_tumu(vm, vd, vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t
vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_mu(vm, vd, vs2, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_mu(vm, vd, vs2, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_mu(vm, vd, vs2, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_mu(vm, vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_mu(vm, vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tu(vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tu(vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tu(vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tu(vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tu(vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tum(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tum(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tum(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + 
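// Editor's note, not part of the auto-generated file: the policy suffixes
// exercised above select what happens to destination elements the operation
// does not write.
//   _tu   - tail undisturbed, unmasked: tail elements keep the value of vd
//   _tum  - tail undisturbed, masked: inactive elements are agnostic
//   _tumu - tail undisturbed, masked: inactive elements also keep vd
//   _mu   - tail agnostic, masked: inactive elements keep vd
// The _rm variants additionally pass an explicit rounding mode (here
// __RISCV_FRM_RNE) instead of relying on the current value of the frm CSR.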
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_mu(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_mu(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_mu(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c
new file mode 100644
index 000000000..40457cca7
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c
@@ -0,0 +1,87 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2_tu(vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tu(vfloat32m1_t vd, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1_tu(vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2_tu(vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4_tu(vd, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8_tu(vd, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8_tum(vm, vd, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32mf2_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m1_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m2_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m4_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_f_v_f32m8_mu(vm, vd, vs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c
new file mode 100644
index 000000000..c7ff40760
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c
@@ -0,0 +1,233 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tu(vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tu(vd, vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tu(vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tu(vd, vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tu(vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tu(vd, vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tu(vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tu(vd, vs2, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tu(vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tu(vd, vs2, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) {
+  return
__riscv_vfncvtbf16_f_tum(vm, vd, vs2, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tumu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tumu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tu(vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tu(vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tu(vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tu(vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tu(vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tum(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +} + 
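+// ---- Editorial example (not part of the generated patch) ----
+// The overloaded policy intrinsics exercised above share one name between
+// two flavors: a call without a rounding-mode argument rounds according to
+// the dynamic `frm` CSR, while passing a `__RISCV_FRM_*` constant selects a
+// static rounding mode that is applied for that operation only and then
+// restored. A minimal sketch, assuming the same intrinsics and types used
+// by this test file (the helper name `narrow_f32_to_bf16_rne` is ours):
+static inline vbfloat16m1_t narrow_f32_to_bf16_rne(vbfloat16m1_t vd,
+                                                   vfloat32m2_t vs2,
+                                                   size_t vl) {
+  // The extra __RISCV_FRM_RNE argument resolves this call to the
+  // rounding-mode (_rm) overload of the tail-undisturbed conversion.
+  return __riscv_vfncvtbf16_f_tu(vd, vs2, __RISCV_FRM_RNE, vl);
+}
+// --------------------------------------------------------------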
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tum(vbool8_t vm, vbfloat16m2_t vd,
+                                                  vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tum(vbool4_t vm, vbfloat16m4_t vd,
+                                                  vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vbool64_t vm,
+                                                     vbfloat16mf4_t vd,
+                                                     vfloat32mf2_t vs2,
+                                                     size_t vl) {
+  return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vbool32_t vm,
+                                                     vbfloat16mf2_t vd,
+                                                     vfloat32m1_t vs2,
+                                                     size_t vl) {
+  return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vbool16_t vm,
+                                                   vbfloat16m1_t vd,
+                                                   vfloat32m2_t vs2,
+                                                   size_t vl) {
+  return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vbool8_t vm,
+                                                   vbfloat16m2_t vd,
+                                                   vfloat32m4_t vs2,
+                                                   size_t vl) {
+  return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vbool4_t vm,
+                                                   vbfloat16m4_t vd,
+                                                   vfloat32m8_t vs2,
+                                                   size_t vl) {
+  return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vbool64_t vm,
+                                                   vbfloat16mf4_t vd,
+                                                   vfloat32mf2_t vs2,
+                                                   size_t vl) {
+  return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vbool32_t vm,
+                                                   vbfloat16mf2_t vd,
+                                                   vfloat32m1_t vs2,
+                                                   size_t vl) {
+  return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_mu(vbool16_t vm, vbfloat16m1_t vd,
+                                                 vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_mu(vbool8_t vm, vbfloat16m2_t vd,
+                                                 vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_mu(vbool4_t vm, vbfloat16m4_t vd,
+                                                 vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c
new file mode 100644
index 000000000..833cbe02b
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c
@@ -0,0 +1,107 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tu(vfloat32mf2_t vd,
+                                              vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tu(vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tu(vfloat32m1_t vd, vbfloat16mf2_t vs2,
+                                            size_t vl) {
+  return __riscv_vfwcvtbf16_f_tu(vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs2,
+                                            size_t vl) {
+  return __riscv_vfwcvtbf16_f_tu(vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs2,
+                                            size_t vl) {
+  return __riscv_vfwcvtbf16_f_tu(vd, vs2, vl);
+}
+
+vfloat32m8_t
test_vfwcvtbf16_f_f_v_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs2,
+                               size_t vl) {
+  return __riscv_vfwcvtbf16_f_tu(vd, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd,
+                                               vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+                                             vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd,
+                                             vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd,
+                                             vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tum(vm, vd, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd,
+                                             vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tum(vm, vd, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd,
+                                                vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd,
+                                              vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd,
+                                              vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd,
+                                              vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                              vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tumu(vm, vd, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                              vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd,
+                                            vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd,
+                                            vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd,
+                                            vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_mu(vm, vd, vs2, vl);
+}
+
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+                                            vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_mu(vm, vd, vs2, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfncvtbf16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfncvtbf16.c
new file mode 100644
index 000000000..9e3542923
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfncvtbf16.c
@@ -0,0 +1,228 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tu(vbfloat16mf4_t vd,
+                                                vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tu(vd, vs2, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tu(vbfloat16mf2_t vd,
+                                                vfloat32m1_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tu(vd, vs2, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tu(vbfloat16m1_t vd,
+                                              vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tu(vd, vs2, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tu(vbfloat16m2_t vd,
+                                              vfloat32m4_t vs2, size_t
vl) { + return __riscv_vfncvtbf16_f_tu(vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tu(vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tu(vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tum(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tum(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tumu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tumu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, vl); +} + +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tu(vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tu(vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tu(vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tu(vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_tu(vd, vs2, __RISCV_FRM_RNE, vl); +} + +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tu(vbfloat16m4_t vd, + 
vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tu(vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vbool64_t vm,
+                                                    vbfloat16mf4_t vd,
+                                                    vfloat32mf2_t vs2,
+                                                    size_t vl) {
+  return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vbool32_t vm,
+                                                    vbfloat16mf2_t vd,
+                                                    vfloat32m1_t vs2,
+                                                    size_t vl) {
+  return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tum(vbool16_t vm,
+                                                  vbfloat16m1_t vd,
+                                                  vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tum(vbool8_t vm, vbfloat16m2_t vd,
+                                                  vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tum(vbool4_t vm, vbfloat16m4_t vd,
+                                                  vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vbool64_t vm,
+                                                     vbfloat16mf4_t vd,
+                                                     vfloat32mf2_t vs2,
+                                                     size_t vl) {
+  return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vbool32_t vm,
+                                                     vbfloat16mf2_t vd,
+                                                     vfloat32m1_t vs2,
+                                                     size_t vl) {
+  return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vbool16_t vm,
+                                                   vbfloat16m1_t vd,
+                                                   vfloat32m2_t vs2,
+                                                   size_t vl) {
+  return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vbool8_t vm,
+                                                   vbfloat16m2_t vd,
+                                                   vfloat32m4_t vs2,
+                                                   size_t vl) {
+  return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vbool4_t vm,
+                                                   vbfloat16m4_t vd,
+                                                   vfloat32m8_t vs2,
+                                                   size_t vl) {
+  return __riscv_vfncvtbf16_f_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vbool64_t vm,
+                                                   vbfloat16mf4_t vd,
+                                                   vfloat32mf2_t vs2,
+                                                   size_t vl) {
+  return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vbool32_t vm,
+                                                   vbfloat16mf2_t vd,
+                                                   vfloat32m1_t vs2,
+                                                   size_t vl) {
+  return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_mu(vbool16_t vm, vbfloat16m1_t vd,
+                                                 vfloat32m2_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_mu(vbool8_t vm, vbfloat16m2_t vd,
+                                                 vfloat32m4_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_mu(vbool4_t vm, vbfloat16m4_t vd,
+                                                 vfloat32m8_t vs2, size_t vl) {
+  return __riscv_vfncvtbf16_f_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfwcvtbf16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfwcvtbf16.c
new file mode 100644
index 000000000..dbf0a4d7d
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfwcvtbf16.c
@@ -0,0 +1,102 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tu(vfloat32mf2_t vd,
+                                              vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwcvtbf16_f_tu(vd, vs2, vl);
+}
+
+vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tu(vfloat32m1_t
vd, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwcvtbf16_f_tu(vd, vs2, vl); +} + +vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwcvtbf16_f_tu(vd, vs2, vl); +} + +vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwcvtbf16_f_tu(vd, vs2, vl); +} + +vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwcvtbf16_f_tu(vd, vs2, vl); +} + +vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_tum(vm, vd, vs2, vl); +} + +vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_tum(vm, vd, vs2, vl); +} + +vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_tum(vm, vd, vs2, vl); +} + +vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_tum(vm, vd, vs2, vl); +} + +vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_tum(vm, vd, vs2, vl); +} + +vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_tumu(vm, vd, vs2, vl); +} + +vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_mu(vm, vd, vs2, vl); +} + +vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_mu(vm, vd, vs2, vl); +} + +vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_mu(vm, vd, vs2, vl); +} + +vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_mu(vm, vd, vs2, vl); +} + +vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwcvtbf16_f_mu(vm, vd, vs2, vl); +} From 392c05fb05ec3f69621d560165ee76e1d9353edb Mon Sep 17 00:00:00 2001 From: eopXD Date: Sat, 4 Nov 2023 10:54:17 -0700 Subject: [PATCH 009/151] Define BFloat16 widening-accumulate intrinsics vfwmaccbf16.vv vd, vs1, vs2, vm vfwmaccbf16.vf vd, rs1, vs2, vm --- .../rvv_intrinsic_gen/bfloat16_inst.py | 9 +++++++++ .../templates/mac_template.py | 20 +++++++++++++------ 2 files changed, 23 insertions(+), 6 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py 
b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py index d391b9bde..a0f4925fc 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py @@ -30,6 +30,7 @@ from templates import get_set_diff_lmul_op_template from templates import misc_op_template from templates import cvt_op_template +from templates import mac_template from constants import LMULS, WLMULS, NCVTLMULS SEWS = [16] @@ -117,6 +118,14 @@ def gen(g): "bf16-vector-widening-convert", ["wcvtbf16"], "bfloat16", SEWS, WLMULS, decorators.has_masking_maskedoff_policy) + #################################################################### + g.start_group("BFloat16 Arithmetic Intrinsics") + + g.function_group(mac_template, + "Vector Widening Multiply-Accumulate Intrinsics", + "bf16-widening-multiply-accumulate", ["wmaccbf16"], TYPES, + SEWS, WLMULS, decorators.has_masking_no_maskedoff_policy_frm) + #################################################################### g.start_group("BFloat16 Miscellaneous Vector Utility Intrinsics") diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py index 0900eda42..da18f5c0a 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py @@ -41,7 +41,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): if "int" in data_type and decorator.flags & ExtraAttr.HAS_FRM: continue - if data_type == "float": + if "float" in data_type: args["S_TYPE"] = "f" args["OP"] = "f" + op inst_type = InstType.VVF @@ -129,14 +129,22 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): rs1=type_helper.s, vs2=type_helper.v, vl=type_helper.size_t) - elif data_type == "float" and "w" in op: + elif "float" in data_type and "w" in op: + # Vector BF16 widening multiply-accumulate computes into FP32 values + if args["TYPE"] == "bfloat": + args["TYPE"] = "float" + dst_type_helper = TypeHelper(**args) + dst_type = dst_type_helper.wv + else: + dst_type = type_helper.wv + G.func( inst_info_vv, name="{OP}_vv_{TYPE}{WSEW}m{WLMUL}".format_map(args) + decorator.func_suffix, - return_type=type_helper.wv, + return_type=dst_type, **decorator.mask_args(type_helper.m, type_helper.v), - vd=type_helper.wv, + vd=dst_type, vs1=type_helper.v, vs2=type_helper.v, **decorator.extra_csr_args(type_helper.uint), @@ -145,9 +153,9 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): inst_info_vs, name="{OP}_v{S_TYPE}_{TYPE}{WSEW}m{WLMUL}".format_map(args) + decorator.func_suffix, - return_type=type_helper.wv, + return_type=dst_type, **decorator.mask_args(type_helper.m, type_helper.v), - vd=type_helper.wv, + vd=dst_type, vs1=type_helper.s, vs2=type_helper.v, **decorator.extra_csr_args(type_helper.uint), From c02ec96feb07f509e6c64c9ccbb52d1517629cae Mon Sep 17 00:00:00 2001 From: eopXD Date: Sat, 4 Nov 2023 10:55:36 -0700 Subject: [PATCH 010/151] [Auto-gen] Update bfloat16 documents under ../auto-generated. 
(make git-commit-autogen-bf16-doc) --- auto-generated/bfloat16/intrinsic_funcs.adoc | 129 +++++++++ .../03_bfloat16_arithmetic_intrinsics.adoc | 129 +++++++++ ...cellaneous_vector_utility_intrinsics.adoc} | 0 .../bfloat16/overloaded_intrinsic_funcs.adoc | 113 ++++++++ .../03_bfloat16_arithmetic_intrinsics.adoc | 113 ++++++++ ...cellaneous_vector_utility_intrinsics.adoc} | 0 .../policy_funcs/intrinsic_funcs.adoc | 273 ++++++++++++++++++ .../03_bfloat16_arithmetic_intrinsics.adoc | 273 ++++++++++++++++++ ...cellaneous_vector_utility_intrinsics.adoc} | 0 .../overloaded_intrinsic_funcs.adoc | 232 +++++++++++++++ .../03_bfloat16_arithmetic_intrinsics.adoc | 232 +++++++++++++++ ...cellaneous_vector_utility_intrinsics.adoc} | 0 12 files changed, 1494 insertions(+) create mode 100644 auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc rename auto-generated/bfloat16/intrinsic_funcs/{03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc => 04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc} (100%) create mode 100644 auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc rename auto-generated/bfloat16/overloaded_intrinsic_funcs/{03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc => 04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc} (100%) create mode 100644 auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc rename auto-generated/bfloat16/policy_funcs/intrinsic_funcs/{03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc => 04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc} (100%) create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc rename auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/{03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc => 04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc} (100%) diff --git a/auto-generated/bfloat16/intrinsic_funcs.adoc b/auto-generated/bfloat16/intrinsic_funcs.adoc index f9304847d..08af0f88f 100644 --- a/auto-generated/bfloat16/intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/intrinsic_funcs.adoc @@ -1414,6 +1414,135 @@ vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_m(vbool4_t vm, vbfloat16m4_t vs2, size_t vl); ---- +=== BFloat16 Arithmetic Intrinsics + +[[bf16-widening-multiply-accumulate]] +==== Vector Widening Multiply-Accumulate Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +// 
masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_m(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_m(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_m(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_m(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_rm(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_rm(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm(vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_rm(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); 
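// Editorial note (illustrative aside, not generated text): in each prototype
// of this listing, vd is the FP32 accumulator that is also the return value,
// vs1 and vs2 are the BF16 multiplicands, vm is the mask, frm takes a
// __RISCV_FRM_* constant, and vl is the element count. For example, a masked
// round-to-nearest-even multiply-add at LMUL=1 (variable names are ours)
// would be:
//   vfloat32m1_t acc = __riscv_vfwmaccbf16_vv_f32m1_rm_m(
//       vm, acc0, x_bf16mf2, y_bf16mf2, __RISCV_FRM_RNE, vl);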
+vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics [[reinterpret-cast-conversion]] diff --git a/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc b/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc new file mode 100644 index 000000000..830e11a4b --- /dev/null +++ b/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc @@ -0,0 +1,129 @@ + +=== BFloat16 Arithmetic Intrinsics + +[[bf16-widening-multiply-accumulate]] +==== Vector Widening Multiply-Accumulate Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_m(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_m(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_m(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_m(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t 
vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_rm(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_rm(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm(vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_rm(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +---- diff --git a/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc similarity index 100% rename from auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc rename to auto-generated/bfloat16/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc index 9692805cf..9d42647ce 100644 --- a/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc +++ 
b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc @@ -1010,6 +1010,119 @@ vfloat32m4_t __riscv_vfwcvtbf16_f(vbool8_t vm, vbfloat16m2_t vs2, size_t vl); vfloat32m8_t __riscv_vfwcvtbf16_f(vbool4_t vm, vbfloat16m4_t vs2, size_t vl); ---- +=== BFloat16 Arithmetic Intrinsics + +[[overloaded-bf16-widening-multiply-accumulate]] +==== Vector Widening Multiply-Accumulate Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwmaccbf16(vfloat32mf2_t vd, vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16(vfloat32mf2_t vd, vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t 
__riscv_vfwmaccbf16(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics [[overloaded-reinterpret-cast-conversion]] diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc new file mode 100644 index 000000000..f62b14fba --- /dev/null +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc @@ -0,0 +1,113 @@ + +=== BFloat16 Arithmetic Intrinsics + +[[overloaded-bf16-widening-multiply-accumulate]] +==== Vector Widening Multiply-Accumulate Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwmaccbf16(vfloat32mf2_t vd, vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vbool32_t vm, vfloat32m1_t vd, + 
vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16(vfloat32mf2_t vd, vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +---- diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc 
b/auto-generated/bfloat16/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc similarity index 100% rename from auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc rename to auto-generated/bfloat16/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc index 78157d29a..37161ceff 100644 --- a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc @@ -2582,6 +2582,279 @@ vfloat32m8_t __riscv_vfwcvtbf16_f_f_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl); ---- +=== BFloat16 Arithmetic Intrinsics + +[[policy-variant-bf16-widening-multiply-accumulate]] +==== Vector Widening Multiply-Accumulate Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_tu(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_tu(vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, 
size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_rm_tu(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_rm_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm_tu(vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_rm_tu(vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm_tu(vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, + 
unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm_tu(vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_rm_tum(vbool64_t vm, + vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_rm_tum(vbool64_t vm, + vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +// masked functions +vfloat32mf2_t +__riscv_vfwmaccbf16_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t +__riscv_vfwmaccbf16_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_rm_mu(vbool64_t vm, + vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + unsigned int frm, 
size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_rm_mu(vbool64_t vm, + vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics [[policy-variant-reinterpret-cast-conversion]] diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc new file mode 100644 index 000000000..15acd4a2c --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc @@ -0,0 +1,273 @@ + +=== BFloat16 Arithmetic Intrinsics + +[[policy-variant-bf16-widening-multiply-accumulate]] +==== Vector Widening Multiply-Accumulate Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_tu(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_tu(vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t 
vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl); +vfloat32mf2_t 
__riscv_vfwmaccbf16_vv_f32mf2_rm_tu(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_rm_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm_tu(vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_rm_tu(vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm_tu(vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm_tu(vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_rm_tum(vbool64_t vm, + vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_rm_tum(vbool64_t vm, + vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +// masked functions +vfloat32mf2_t +__riscv_vfwmaccbf16_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t +__riscv_vfwmaccbf16_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t 
__riscv_vfwmaccbf16_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_vv_f32mf2_rm_mu(vbool64_t vm, + vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_vf_f32mf2_rm_mu(vbool64_t vm, + vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +---- diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc similarity index 100% rename from auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc rename to auto-generated/bfloat16/policy_funcs/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc index 8f77e40d0..266e06b4c 100644 --- a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc @@ -1837,6 +1837,238 @@ vfloat32m8_t __riscv_vfwcvtbf16_f_mu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl); ---- +=== BFloat16 Arithmetic Intrinsics + +[[policy-variant-overloadedbf16-widening-multiply-accumulate]] +==== Vector Widening Multiply-Accumulate Intrinsics + +[,c] 
+---- +vfloat32mf2_t __riscv_vfwmaccbf16_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tum(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tum(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tum(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tum(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tumu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tumu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tumu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tumu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_mu(vbool64_t vm, 
vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_mu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_mu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_mu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_mu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tum(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tum(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tum(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tum(vbool4_t vm, 
vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tum(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tumu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tumu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tumu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tumu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_mu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_mu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_mu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_mu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics [[policy-variant-overloadedreinterpret-cast-conversion]] diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc new file mode 100644 index 000000000..64c886112 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc @@ -0,0 +1,232 @@ 
+ +=== BFloat16 Arithmetic Intrinsics + +[[policy-variant-overloadedbf16-widening-multiply-accumulate]] +==== Vector Widening Multiply-Accumulate Intrinsics + +[,c] +---- +vfloat32mf2_t __riscv_vfwmaccbf16_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tum(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tum(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tum(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tum(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tumu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tumu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tumu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl); +vfloat32m8_t 
__riscv_vfwmaccbf16_tumu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_mu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_mu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_mu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_mu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tum(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tum(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t 
__riscv_vfwmaccbf16_tum(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tum(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_tumu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_tumu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_tumu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_tumu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +// masked functions +vfloat32mf2_t __riscv_vfwmaccbf16_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, + unsigned int frm, size_t vl); +vfloat32mf2_t __riscv_vfwmaccbf16_mu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, unsigned int frm, + size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + unsigned int frm, size_t vl); +vfloat32m1_t __riscv_vfwmaccbf16_mu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, unsigned int frm, + size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + unsigned int frm, size_t vl); +vfloat32m2_t __riscv_vfwmaccbf16_mu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, unsigned int frm, + size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + unsigned int frm, size_t vl); +vfloat32m4_t __riscv_vfwmaccbf16_mu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, unsigned int frm, + size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + unsigned int frm, size_t vl); +vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, unsigned int frm, + size_t vl); +---- diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc similarity index 100% rename from 
auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_miscellaneous_vector_utility_intrinsics.adoc
rename to auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc

From 80a9a2525d10c8879c2a4ab949a3bbf64bccd6e6 Mon Sep 17 00:00:00 2001
From: eopXD
Date: Sat, 4 Nov 2023 10:55:38 -0700
Subject: [PATCH 011/151] [Auto-gen] Update bfloat16 tests under
 ../auto-generated. (make git-commit-autogen-bf16-test)

---
 .../bfloat16/api-testing/vfwmaccbf16.c        | 233 ++++++++
 .../bfloat16/llvm-api-tests/vfwmaccbf16.c     | 238 +++++++++
 .../llvm-overloaded-tests/vfwmaccbf16.c       | 228 ++++++++
 .../overloaded-api-testing/vfwmaccbf16.c      | 223 ++++++++
 .../policy_funcs/api-testing/vfwmaccbf16.c    | 496 ++++++++++++++++++
 .../policy_funcs/llvm-api-tests/vfwmaccbf16.c | 327 ++++++++++++
 .../llvm-overloaded-tests/vfwmaccbf16.c       | 471 +++++++++++++++++
 .../overloaded-api-testing/vfwmaccbf16.c      | 466 ++++++++++++++++
 8 files changed, 2682 insertions(+)
 create mode 100644 auto-generated/bfloat16/api-testing/vfwmaccbf16.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vfwmaccbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vfwmaccbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfwmaccbf16.c

diff --git a/auto-generated/bfloat16/api-testing/vfwmaccbf16.c b/auto-generated/bfloat16/api-testing/vfwmaccbf16.c
new file mode 100644
index 000000000..5e48e1b89
--- /dev/null
+++ b/auto-generated/bfloat16/api-testing/vfwmaccbf16.c
@@ -0,0 +1,233 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2(vfloat32mf2_t vd, vbfloat16mf4_t vs1,
+                                         vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32mf2(vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2(vfloat32mf2_t vd, __bf16 vs1,
+                                         vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32mf2(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vv_f32m1(vfloat32m1_t vd, vbfloat16mf2_t vs1,
+                                       vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m1(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1(vfloat32m1_t vd, __bf16 vs1,
+                                       vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m1(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vv_f32m2(vfloat32m2_t vd, vbfloat16m1_t vs1,
+                                       vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m2(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vf_f32m2(vfloat32m2_t vd, __bf16 vs1,
+                                       vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m2(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vv_f32m4(vfloat32m4_t vd, vbfloat16m2_t vs1,
+                                       vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m4(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4(vfloat32m4_t vd, __bf16 vs1,
+                                       vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m4(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8(vfloat32m8_t vd, vbfloat16m4_t vs1,
+                                       vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m8(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8(vfloat32m8_t vd, __bf16 vs1,
+                                       
vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8(vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_m(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_m(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_m(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_m(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_m(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_m(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm(vd, vs1, vs2, 
__RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c new file mode 100644 index 000000000..3caecfecf --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c @@ -0,0 +1,238 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2(vfloat32mf2_t vd, vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2(vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2(vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1(vfloat32m1_t 
vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1(vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1(vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2(vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2(vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4(vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4(vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8(vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8(vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_m(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_m(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_m(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_m(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_m(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_m(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_m(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl) { + return 
__riscv_vfwmaccbf16_vf_f32mf2_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_rm(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm_m(vm, vd, vs1, vs2, 
__RISCV_FRM_RNE,
+                                           vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd,
+                                            __bf16 vs1, vbfloat16m4_t vs2,
+                                            size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m8_rm_m(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                           vl);
+}
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
new file mode 100644
index 000000000..da2042680
--- /dev/null
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
@@ -0,0 +1,228 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2(vfloat32mf2_t vd, vbfloat16mf4_t vs1,
+                                         vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2(vfloat32mf2_t vd, __bf16 vs1,
+                                         vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vv_f32m1(vfloat32m1_t vd, vbfloat16mf2_t vs1,
+                                       vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1(vfloat32m1_t vd, __bf16 vs1,
+                                       vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vv_f32m2(vfloat32m2_t vd, vbfloat16m1_t vs1,
+                                       vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vf_f32m2(vfloat32m2_t vd, __bf16 vs1,
+                                       vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vv_f32m4(vfloat32m4_t vd, vbfloat16m2_t vs1,
+                                       vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4(vfloat32m4_t vd, __bf16 vs1,
+                                       vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8(vfloat32m8_t vd, vbfloat16m4_t vs1,
+                                       vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8(vfloat32m8_t vd, __bf16 vs1,
+                                       vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd,
+                                           vbfloat16mf4_t vs1,
+                                           vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd,
+                                           __bf16 vs1, vbfloat16mf4_t vs2,
+                                           size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vv_f32m1_m(vbool32_t vm, vfloat32m1_t vd,
+                                         vbfloat16mf2_t vs1, vbfloat16mf2_t vs2,
+                                         size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1_m(vbool32_t vm, vfloat32m1_t vd,
+                                         __bf16 vs1, vbfloat16mf2_t vs2,
+                                         size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vv_f32m2_m(vbool16_t vm, vfloat32m2_t vd,
+                                         vbfloat16m1_t vs1, vbfloat16m1_t vs2,
+                                         size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vf_f32m2_m(vbool16_t vm, vfloat32m2_t vd,
+                                         __bf16 vs1, vbfloat16m1_t vs2,
+                                         size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t 
test_vfwmaccbf16_vv_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t 
test_vfwmaccbf16_vf_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd,
+                                            __bf16 vs1, vbfloat16m1_t vs2,
+                                            size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd,
+                                            vbfloat16m2_t vs1,
+                                            vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd,
+                                            __bf16 vs1, vbfloat16m2_t vs2,
+                                            size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd,
+                                            vbfloat16m4_t vs1,
+                                            vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd,
+                                            __bf16 vs1, vbfloat16m4_t vs2,
+                                            size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
diff --git a/auto-generated/bfloat16/overloaded-api-testing/vfwmaccbf16.c b/auto-generated/bfloat16/overloaded-api-testing/vfwmaccbf16.c
new file mode 100644
index 000000000..19c317e42
--- /dev/null
+++ b/auto-generated/bfloat16/overloaded-api-testing/vfwmaccbf16.c
@@ -0,0 +1,223 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2(vfloat32mf2_t vd, vbfloat16mf4_t vs1,
+                                         vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2(vfloat32mf2_t vd, __bf16 vs1,
+                                         vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vv_f32m1(vfloat32m1_t vd, vbfloat16mf2_t vs1,
+                                       vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1(vfloat32m1_t vd, __bf16 vs1,
+                                       vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vv_f32m2(vfloat32m2_t vd, vbfloat16m1_t vs1,
+                                       vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vf_f32m2(vfloat32m2_t vd, __bf16 vs1,
+                                       vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vv_f32m4(vfloat32m4_t vd, vbfloat16m2_t vs1,
+                                       vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4(vfloat32m4_t vd, __bf16 vs1,
+                                       vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8(vfloat32m8_t vd, vbfloat16m4_t vs1,
+                                       vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8(vfloat32m8_t vd, __bf16 vs1,
+                                       vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd,
+                                           vbfloat16mf4_t vs1,
+                                           vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_m(vbool64_t vm, vfloat32mf2_t vd,
+                                           __bf16 vs1, vbfloat16mf4_t vs2,
+                                           size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vv_f32m1_m(vbool32_t vm, vfloat32m1_t vd,
+                                         vbfloat16mf2_t vs1, vbfloat16mf2_t vs2,
+                                         size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1_m(vbool32_t vm, vfloat32m1_t vd,
+                                         __bf16 vs1, 
vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_m(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_m(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_m(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_m(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_m(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return 
__riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_m(vbool32_t vm, vfloat32m1_t vd,
+                                            __bf16 vs1, vbfloat16mf2_t vs2,
+                                            size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd,
+                                            vbfloat16m1_t vs1,
+                                            vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_m(vbool16_t vm, vfloat32m2_t vd,
+                                            __bf16 vs1, vbfloat16m1_t vs2,
+                                            size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd,
+                                            vbfloat16m2_t vs1,
+                                            vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_m(vbool8_t vm, vfloat32m4_t vd,
+                                            __bf16 vs1, vbfloat16m2_t vs2,
+                                            size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd,
+                                            vbfloat16m4_t vs1,
+                                            vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd,
+                                            __bf16 vs1, vbfloat16m4_t vs2,
+                                            size_t vl) {
+  return __riscv_vfwmaccbf16(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vfwmaccbf16.c b/auto-generated/bfloat16/policy_funcs/api-testing/vfwmaccbf16.c
new file mode 100644
index 000000000..bc8c20900
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/api-testing/vfwmaccbf16.c
@@ -0,0 +1,496 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tu(vfloat32mf2_t vd,
+                                            vbfloat16mf4_t vs1,
+                                            vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32mf2_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tu(vfloat32mf2_t vd, __bf16 vs1,
+                                            vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32mf2_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1,
+                                          vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m1_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tu(vfloat32m1_t vd, __bf16 vs1,
+                                          vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m1_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs1,
+                                          vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m2_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tu(vfloat32m2_t vd, __bf16 vs1,
+                                          vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m2_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs1,
+                                          vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m4_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tu(vfloat32m4_t vd, __bf16 vs1,
+                                          vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m4_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs1,
+                                          vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m8_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tu(vfloat32m8_t vd, __bf16 vs1,
+                                          vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m8_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t 
test_vfwmaccbf16_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_tumu(vm, vd, vs1, vs2, vl); 
+} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tu(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tu(vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); 
+} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, + __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, + __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t 
vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t 
vd,
+                                             vbfloat16m4_t vs1,
+                                             vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                            vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                             __bf16 vs1, vbfloat16m4_t vs2,
+                                             size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                            vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c
new file mode 100644
index 000000000..7f553ef2c
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c
@@ -0,0 +1,327 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32mf2_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tu(vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32mf2_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m1_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tu(vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m1_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m2_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tu(vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m2_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m4_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tu(vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m4_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m8_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tu(vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m8_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32mf2_tum(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32mf2_tum(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m1_tum(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m1_tum(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tum(vbool16_t vm, 
vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, 
size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tu(vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tu(vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tu(vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tu(vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tu(vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { + return 
__riscv_vfwmaccbf16_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, 
vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_vf_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c
new file mode 100644
index 000000000..042be3c8d
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c
@@ -0,0 +1,471 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN:   -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tu(vfloat32mf2_t vd,
+                                            vbfloat16mf4_t vs1,
+                                            vbfloat16mf4_t vs2, size_t vl) {
+  return 
__riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t 
vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, 
vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tu(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tu(vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) 
{ + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, 
size_t vl) {
+  return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+                                             __bf16 vs1, vbfloat16m1_t vs2,
+                                             size_t vl) {
+  return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+                                             vbfloat16m2_t vs1,
+                                             vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+                                             __bf16 vs1, vbfloat16m2_t vs2,
+                                             size_t vl) {
+  return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                             vbfloat16m4_t vs1,
+                                             vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                             __bf16 vs1, vbfloat16m4_t vs2,
+                                             size_t vl) {
+  return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+}
diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfwmaccbf16.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfwmaccbf16.c
new file mode 100644
index 000000000..c20b7c37d
--- /dev/null
+++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfwmaccbf16.c
@@ -0,0 +1,466 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tu(vfloat32mf2_t vd,
+                                            vbfloat16mf4_t vs1,
+                                            vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tu(vfloat32mf2_t vd, __bf16 vs1,
+                                            vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1,
+                                          vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tu(vfloat32m1_t vd, __bf16 vs1,
+                                          vbfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs1,
+                                          vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tu(vfloat32m2_t vd, __bf16 vs1,
+                                          vbfloat16m1_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs1,
+                                          vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tu(vfloat32m4_t vd, __bf16 vs1,
+                                          vbfloat16m2_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs1,
+                                          vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tu(vfloat32m8_t vd, __bf16 vs1,
+                                          vbfloat16m4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd,
+                                             vbfloat16mf4_t vs1,
+                                             vbfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd,
+                                             __bf16 vs1, vbfloat16mf4_t vs2,
+                                             size_t vl) {
+  return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl);
+}
+
+vfloat32m1_t 
test_vfwmaccbf16_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return 
__riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tu(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tu(vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tu(vd, 
vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return 
__riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} + +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +} From 58281b48018a452503963faa6db7109d033a587c Mon Sep 17 00:00:00 2001 From: eopXD Date: Sat, 4 Nov 2023 11:07:21 -0700 Subject: [PATCH 012/151] Add note that specification uses __bf16 to represent scalar BFloat16 types Signed-off-by: eop Chen --- doc/vector-bfloat16-spec.adoc | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/doc/vector-bfloat16-spec.adoc b/doc/vector-bfloat16-spec.adoc index 523577759..2779af9b9 100644 --- a/doc/vector-bfloat16-spec.adoc +++ b/doc/vector-bfloat16-spec.adoc @@ -17,13 +17,15 @@ The BFloat16 intrinsics follows provides the same control of the vector programm Floating-point types have EEW and EMUL encoded into the type. The first row describes the EMUL and the first column describes the data type and element width of the scalar type. 
-Floating-point types with element widths of 16 (Types=`bfloat16_t`) require the `zfbfmin` and `zvfbfmin` extension to be specified in the architecture. +Floating-point types with element widths of 16 (Types=`__bf16`) require the `zfbfmin` and `zvfbfmin` extension to be specified in the architecture. + +NOTE: Although C++23 introduces `<stdfloat>` for fixed-width floating-point types, this latest standard is not yet supported in the upstream RISC-V compiler. The specification (along with the prototype lists in the appendix) uses `__bf16` to represent the BFloat16 floating-point type. .BFloat16 types [options="autowidth,header",float="center",align="center",cols="<1,<2,<2,<2,<2,<2,<2,<2"] |=== | Types | EMUL=1/8 | EMUL=1/4 | EMUL=1/ 2 | EMUL=1 | EMUL=2 | EMUL=4 | EMUL=8 -| bfloat16_t | N/A | vbfloat16m4_t | vbfloat16mf2_t | vbfloat16m1_t | vbfloat16m2_t | vbfloat16m4_t | vbfloat16m8_t +| __bf16 | N/A | vbfloat16m4_t | vbfloat16mf2_t | vbfloat16m1_t | vbfloat16m2_t | vbfloat16m4_t | vbfloat16m8_t |=== [[bf16-pseudo-intrinsics]] From cbb88c44a2e897c65ccf4a3d80dafb446fc814f5 Mon Sep 17 00:00:00 2001 From: eopXD Date: Sat, 4 Nov 2023 11:46:59 -0700 Subject: [PATCH 013/151] Add tuple types table for BFloat16 types Signed-off-by: eop Chen --- doc/vector-bfloat16-spec.adoc | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/doc/vector-bfloat16-spec.adoc b/doc/vector-bfloat16-spec.adoc index 2779af9b9..77d041a45 100644 --- a/doc/vector-bfloat16-spec.adoc +++ b/doc/vector-bfloat16-spec.adoc @@ -25,7 +25,18 @@ NOTE: Although C++23 introduces `<stdfloat>` for fixed-width floating-point type [options="autowidth,header",float="center",align="center",cols="<1,<2,<2,<2,<2,<2,<2,<2"] |=== | Types | EMUL=1/8 | EMUL=1/4 | EMUL=1/ 2 | EMUL=1 | EMUL=2 | EMUL=4 | EMUL=8 -| __bf16 | N/A | vbfloat16m4_t | vbfloat16mf2_t | vbfloat16m1_t | vbfloat16m2_t | vbfloat16m4_t | vbfloat16m8_t +| __bf16 | N/A | vbfloat16mf4_t | vbfloat16mf2_t | vbfloat16m1_t | vbfloat16m2_t | vbfloat16m4_t | vbfloat16m8_t |=== + +.Tuple types +[options="autowidth,header",float="center",align="center",cols="<1,<2,<2,<2,<2,<2,<2,<2"] +|=== +| Non-tuple Types (NFIELD=1) | NFIELD=2 | NFIELD=3 | NFIELD=4 | NFIELD=5 | NFIELD=6 | NFIELD=7 | NFIELD=8 +| vbfloat16mf4_t | vbfloat16mf4x2_t | vbfloat16mf4x3_t | vbfloat16mf4x4_t | vbfloat16mf4x5_t | vbfloat16mf4x6_t | vbfloat16mf4x7_t | vbfloat16mf4x8_t +| vbfloat16mf2_t | vbfloat16mf2x2_t | vbfloat16mf2x3_t | vbfloat16mf2x4_t | vbfloat16mf2x5_t | vbfloat16mf2x6_t | vbfloat16mf2x7_t | vbfloat16mf2x8_t +| vbfloat16m1_t | vbfloat16m1x2_t | vbfloat16m1x3_t | vbfloat16m1x4_t | vbfloat16m1x5_t | vbfloat16m1x6_t | vbfloat16m1x7_t | vbfloat16m1x8_t +| vbfloat16m2_t | vbfloat16m2x2_t | vbfloat16m2x3_t | vbfloat16m2x4_t | N/A | N/A | N/A | N/A +| vbfloat16m4_t | vbfloat16m4x2_t | N/A | N/A | N/A | N/A | N/A | N/A +|=== [[bf16-pseudo-intrinsics]] From 9206f82526e071afc07cd7cbd5a5f5631d29089a Mon Sep 17 00:00:00 2001 From: eopXD Date: Fri, 10 Nov 2023 16:54:09 -0800 Subject: [PATCH 014/151] Fix type abbreviation in reinterpret intrinsics for bfloat16 Signed-off-by: eop Chen --- .../rvv_intrinsic_gen/templates/reint_op_template.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py index 987f48b63..452cec078 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py +++ 
b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py @@ -39,9 +39,9 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): # [dst type, dst short type, src type, src short type] if type_list == "bfloat16": convert_set = [["bfloat", "bf", "int", - "i"], ["bfloat", "bf", "uint", "ui"], + "i"], ["bfloat", "bf", "uint", "u"], ["int", "i", "bfloat", "bf"], - ["uint", "ui", "bfloat", "bf"]] + ["uint", "u", "bfloat", "bf"]] else: convert_set = [["float", "f", "int", "i"], ["float", "f", "uint", "u"], ["uint", "u", "int", "i"], ["int", "i", "uint", "u"], From d851bb573d2b9c8b82d1cdc95b25730014d83b05 Mon Sep 17 00:00:00 2001 From: eopXD Date: Fri, 10 Nov 2023 16:54:22 -0800 Subject: [PATCH 015/151] [Auto-gen] Update bfloat16 documents under ../auto-generated. (make git-commit-autogen-bf16-doc) --- auto-generated/bfloat16/intrinsic_funcs.adoc | 24 +++++++++---------- ...scellaneous_vector_utility_intrinsics.adoc | 24 +++++++++---------- .../bfloat16/overloaded_intrinsic_funcs.adoc | 12 +++++----- ...scellaneous_vector_utility_intrinsics.adoc | 12 +++++----- 4 files changed, 36 insertions(+), 36 deletions(-) diff --git a/auto-generated/bfloat16/intrinsic_funcs.adoc b/auto-generated/bfloat16/intrinsic_funcs.adoc index 08af0f88f..ab7d0febb 100644 --- a/auto-generated/bfloat16/intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/intrinsic_funcs.adoc @@ -1557,24 +1557,24 @@ vbfloat16m1_t __riscv_vreinterpret_v_i16m1_bf16m1(vint16m1_t src); vbfloat16m2_t __riscv_vreinterpret_v_i16m2_bf16m2(vint16m2_t src); vbfloat16m4_t __riscv_vreinterpret_v_i16m4_bf16m4(vint16m4_t src); vbfloat16m8_t __riscv_vreinterpret_v_i16m8_bf16m8(vint16m8_t src); -vbfloat16mf4_t __riscv_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src); -vbfloat16mf2_t __riscv_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src); -vbfloat16m1_t __riscv_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src); -vbfloat16m2_t __riscv_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src); -vbfloat16m4_t __riscv_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src); -vbfloat16m8_t __riscv_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src); +vbfloat16mf4_t __riscv_vreinterpret_v_u16mf4_bf16mf4(vuint16mf4_t src); +vbfloat16mf2_t __riscv_vreinterpret_v_u16mf2_bf16mf2(vuint16mf2_t src); +vbfloat16m1_t __riscv_vreinterpret_v_u16m1_bf16m1(vuint16m1_t src); +vbfloat16m2_t __riscv_vreinterpret_v_u16m2_bf16m2(vuint16m2_t src); +vbfloat16m4_t __riscv_vreinterpret_v_u16m4_bf16m4(vuint16m4_t src); +vbfloat16m8_t __riscv_vreinterpret_v_u16m8_bf16m8(vuint16m8_t src); vint16mf4_t __riscv_vreinterpret_v_bf16mf4_i16mf4(vbfloat16mf4_t src); vint16mf2_t __riscv_vreinterpret_v_bf16mf2_i16mf2(vbfloat16mf2_t src); vint16m1_t __riscv_vreinterpret_v_bf16m1_i16m1(vbfloat16m1_t src); vint16m2_t __riscv_vreinterpret_v_bf16m2_i16m2(vbfloat16m2_t src); vint16m4_t __riscv_vreinterpret_v_bf16m4_i16m4(vbfloat16m4_t src); vint16m8_t __riscv_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src); -vuint16mf4_t __riscv_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src); -vuint16mf2_t __riscv_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src); -vuint16m1_t __riscv_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src); -vuint16m2_t __riscv_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src); -vuint16m4_t __riscv_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src); -vuint16m8_t __riscv_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src); +vuint16mf4_t __riscv_vreinterpret_v_bf16mf4_u16mf4(vbfloat16mf4_t src); +vuint16mf2_t __riscv_vreinterpret_v_bf16mf2_u16mf2(vbfloat16mf2_t src); +vuint16m1_t 
__riscv_vreinterpret_v_bf16m1_u16m1(vbfloat16m1_t src); +vuint16m2_t __riscv_vreinterpret_v_bf16m2_u16m2(vbfloat16m2_t src); +vuint16m4_t __riscv_vreinterpret_v_bf16m4_u16m4(vbfloat16m4_t src); +vuint16m8_t __riscv_vreinterpret_v_bf16m8_u16m8(vbfloat16m8_t src); ---- [[vector-lmul-extensionn]] diff --git a/auto-generated/bfloat16/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc index 5c8c2a665..ddbf93b7f 100644 --- a/auto-generated/bfloat16/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc +++ b/auto-generated/bfloat16/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc @@ -13,24 +13,24 @@ vbfloat16m1_t __riscv_vreinterpret_v_i16m1_bf16m1(vint16m1_t src); vbfloat16m2_t __riscv_vreinterpret_v_i16m2_bf16m2(vint16m2_t src); vbfloat16m4_t __riscv_vreinterpret_v_i16m4_bf16m4(vint16m4_t src); vbfloat16m8_t __riscv_vreinterpret_v_i16m8_bf16m8(vint16m8_t src); -vbfloat16mf4_t __riscv_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src); -vbfloat16mf2_t __riscv_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src); -vbfloat16m1_t __riscv_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src); -vbfloat16m2_t __riscv_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src); -vbfloat16m4_t __riscv_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src); -vbfloat16m8_t __riscv_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src); +vbfloat16mf4_t __riscv_vreinterpret_v_u16mf4_bf16mf4(vuint16mf4_t src); +vbfloat16mf2_t __riscv_vreinterpret_v_u16mf2_bf16mf2(vuint16mf2_t src); +vbfloat16m1_t __riscv_vreinterpret_v_u16m1_bf16m1(vuint16m1_t src); +vbfloat16m2_t __riscv_vreinterpret_v_u16m2_bf16m2(vuint16m2_t src); +vbfloat16m4_t __riscv_vreinterpret_v_u16m4_bf16m4(vuint16m4_t src); +vbfloat16m8_t __riscv_vreinterpret_v_u16m8_bf16m8(vuint16m8_t src); vint16mf4_t __riscv_vreinterpret_v_bf16mf4_i16mf4(vbfloat16mf4_t src); vint16mf2_t __riscv_vreinterpret_v_bf16mf2_i16mf2(vbfloat16mf2_t src); vint16m1_t __riscv_vreinterpret_v_bf16m1_i16m1(vbfloat16m1_t src); vint16m2_t __riscv_vreinterpret_v_bf16m2_i16m2(vbfloat16m2_t src); vint16m4_t __riscv_vreinterpret_v_bf16m4_i16m4(vbfloat16m4_t src); vint16m8_t __riscv_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src); -vuint16mf4_t __riscv_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src); -vuint16mf2_t __riscv_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src); -vuint16m1_t __riscv_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src); -vuint16m2_t __riscv_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src); -vuint16m4_t __riscv_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src); -vuint16m8_t __riscv_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src); +vuint16mf4_t __riscv_vreinterpret_v_bf16mf4_u16mf4(vbfloat16mf4_t src); +vuint16mf2_t __riscv_vreinterpret_v_bf16mf2_u16mf2(vbfloat16mf2_t src); +vuint16m1_t __riscv_vreinterpret_v_bf16m1_u16m1(vbfloat16m1_t src); +vuint16m2_t __riscv_vreinterpret_v_bf16m2_u16m2(vbfloat16m2_t src); +vuint16m4_t __riscv_vreinterpret_v_bf16m4_u16m4(vbfloat16m4_t src); +vuint16m8_t __riscv_vreinterpret_v_bf16m8_u16m8(vbfloat16m8_t src); ---- [[vector-lmul-extensionn]] diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc index 9d42647ce..78326373a 100644 --- a/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc @@ -1149,12 +1149,12 @@ vint16m1_t __riscv_vreinterpret_i16m1(vbfloat16m1_t src); vint16m2_t 
__riscv_vreinterpret_i16m2(vbfloat16m2_t src); vint16m4_t __riscv_vreinterpret_i16m4(vbfloat16m4_t src); vint16m8_t __riscv_vreinterpret_i16m8(vbfloat16m8_t src); -vuint16mf4_t __riscv_vreinterpret_ui16mf4(vbfloat16mf4_t src); -vuint16mf2_t __riscv_vreinterpret_ui16mf2(vbfloat16mf2_t src); -vuint16m1_t __riscv_vreinterpret_ui16m1(vbfloat16m1_t src); -vuint16m2_t __riscv_vreinterpret_ui16m2(vbfloat16m2_t src); -vuint16m4_t __riscv_vreinterpret_ui16m4(vbfloat16m4_t src); -vuint16m8_t __riscv_vreinterpret_ui16m8(vbfloat16m8_t src); +vuint16mf4_t __riscv_vreinterpret_u16mf4(vbfloat16mf4_t src); +vuint16mf2_t __riscv_vreinterpret_u16mf2(vbfloat16mf2_t src); +vuint16m1_t __riscv_vreinterpret_u16m1(vbfloat16m1_t src); +vuint16m2_t __riscv_vreinterpret_u16m2(vbfloat16m2_t src); +vuint16m4_t __riscv_vreinterpret_u16m4(vbfloat16m4_t src); +vuint16m8_t __riscv_vreinterpret_u16m8(vbfloat16m8_t src); ---- [[overloaded-vector-lmul-extensionn]] diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc index e0557f220..f06c83b9e 100644 --- a/auto-generated/bfloat16/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc @@ -25,12 +25,12 @@ vint16m1_t __riscv_vreinterpret_i16m1(vbfloat16m1_t src); vint16m2_t __riscv_vreinterpret_i16m2(vbfloat16m2_t src); vint16m4_t __riscv_vreinterpret_i16m4(vbfloat16m4_t src); vint16m8_t __riscv_vreinterpret_i16m8(vbfloat16m8_t src); -vuint16mf4_t __riscv_vreinterpret_ui16mf4(vbfloat16mf4_t src); -vuint16mf2_t __riscv_vreinterpret_ui16mf2(vbfloat16mf2_t src); -vuint16m1_t __riscv_vreinterpret_ui16m1(vbfloat16m1_t src); -vuint16m2_t __riscv_vreinterpret_ui16m2(vbfloat16m2_t src); -vuint16m4_t __riscv_vreinterpret_ui16m4(vbfloat16m4_t src); -vuint16m8_t __riscv_vreinterpret_ui16m8(vbfloat16m8_t src); +vuint16mf4_t __riscv_vreinterpret_u16mf4(vbfloat16mf4_t src); +vuint16mf2_t __riscv_vreinterpret_u16mf2(vbfloat16mf2_t src); +vuint16m1_t __riscv_vreinterpret_u16m1(vbfloat16m1_t src); +vuint16m2_t __riscv_vreinterpret_u16m2(vbfloat16m2_t src); +vuint16m4_t __riscv_vreinterpret_u16m4(vbfloat16m4_t src); +vuint16m8_t __riscv_vreinterpret_u16m8(vbfloat16m8_t src); ---- [[overloaded-vector-lmul-extensionn]] From 9d22b93a7a600bd363f24b7b9f57d8d7c85387f7 Mon Sep 17 00:00:00 2001 From: eopXD Date: Fri, 10 Nov 2023 16:54:24 -0800 Subject: [PATCH 016/151] [Auto-gen] Update bfloat16 tests under ../auto-generated. 
(make git-commit-autogen-bf16-test) --- .../bfloat16/api-testing/vreinterpret.c | 48 +++++++++---------- .../bfloat16/llvm-api-tests/vreinterpret.c | 48 +++++++++---------- .../llvm-overloaded-tests/vreinterpret.c | 36 +++++++------- .../overloaded-api-testing/vreinterpret.c | 36 +++++++------- 4 files changed, 84 insertions(+), 84 deletions(-) diff --git a/auto-generated/bfloat16/api-testing/vreinterpret.c b/auto-generated/bfloat16/api-testing/vreinterpret.c index 64576fffa..44975a392 100644 --- a/auto-generated/bfloat16/api-testing/vreinterpret.c +++ b/auto-generated/bfloat16/api-testing/vreinterpret.c @@ -25,28 +25,28 @@ vbfloat16m8_t test_vreinterpret_v_i16m8_bf16m8(vint16m8_t src) { return __riscv_vreinterpret_v_i16m8_bf16m8(src); } -vbfloat16mf4_t test_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src) { - return __riscv_vreinterpret_v_ui16mf4_bf16mf4(src); +vbfloat16mf4_t test_vreinterpret_v_u16mf4_bf16mf4(vuint16mf4_t src) { + return __riscv_vreinterpret_v_u16mf4_bf16mf4(src); } -vbfloat16mf2_t test_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src) { - return __riscv_vreinterpret_v_ui16mf2_bf16mf2(src); +vbfloat16mf2_t test_vreinterpret_v_u16mf2_bf16mf2(vuint16mf2_t src) { + return __riscv_vreinterpret_v_u16mf2_bf16mf2(src); } -vbfloat16m1_t test_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src) { - return __riscv_vreinterpret_v_ui16m1_bf16m1(src); +vbfloat16m1_t test_vreinterpret_v_u16m1_bf16m1(vuint16m1_t src) { + return __riscv_vreinterpret_v_u16m1_bf16m1(src); } -vbfloat16m2_t test_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src) { - return __riscv_vreinterpret_v_ui16m2_bf16m2(src); +vbfloat16m2_t test_vreinterpret_v_u16m2_bf16m2(vuint16m2_t src) { + return __riscv_vreinterpret_v_u16m2_bf16m2(src); } -vbfloat16m4_t test_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src) { - return __riscv_vreinterpret_v_ui16m4_bf16m4(src); +vbfloat16m4_t test_vreinterpret_v_u16m4_bf16m4(vuint16m4_t src) { + return __riscv_vreinterpret_v_u16m4_bf16m4(src); } -vbfloat16m8_t test_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src) { - return __riscv_vreinterpret_v_ui16m8_bf16m8(src); +vbfloat16m8_t test_vreinterpret_v_u16m8_bf16m8(vuint16m8_t src) { + return __riscv_vreinterpret_v_u16m8_bf16m8(src); } vint16mf4_t test_vreinterpret_v_bf16mf4_i16mf4(vbfloat16mf4_t src) { @@ -73,26 +73,26 @@ vint16m8_t test_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src) { return __riscv_vreinterpret_v_bf16m8_i16m8(src); } -vuint16mf4_t test_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src) { - return __riscv_vreinterpret_v_bf16mf4_ui16mf4(src); +vuint16mf4_t test_vreinterpret_v_bf16mf4_u16mf4(vbfloat16mf4_t src) { + return __riscv_vreinterpret_v_bf16mf4_u16mf4(src); } -vuint16mf2_t test_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src) { - return __riscv_vreinterpret_v_bf16mf2_ui16mf2(src); +vuint16mf2_t test_vreinterpret_v_bf16mf2_u16mf2(vbfloat16mf2_t src) { + return __riscv_vreinterpret_v_bf16mf2_u16mf2(src); } -vuint16m1_t test_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src) { - return __riscv_vreinterpret_v_bf16m1_ui16m1(src); +vuint16m1_t test_vreinterpret_v_bf16m1_u16m1(vbfloat16m1_t src) { + return __riscv_vreinterpret_v_bf16m1_u16m1(src); } -vuint16m2_t test_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src) { - return __riscv_vreinterpret_v_bf16m2_ui16m2(src); +vuint16m2_t test_vreinterpret_v_bf16m2_u16m2(vbfloat16m2_t src) { + return __riscv_vreinterpret_v_bf16m2_u16m2(src); } -vuint16m4_t test_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src) { - return __riscv_vreinterpret_v_bf16m4_ui16m4(src); +vuint16m4_t 
test_vreinterpret_v_bf16m4_u16m4(vbfloat16m4_t src) { + return __riscv_vreinterpret_v_bf16m4_u16m4(src); } -vuint16m8_t test_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src) { - return __riscv_vreinterpret_v_bf16m8_ui16m8(src); +vuint16m8_t test_vreinterpret_v_bf16m8_u16m8(vbfloat16m8_t src) { + return __riscv_vreinterpret_v_bf16m8_u16m8(src); } diff --git a/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c b/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c index fbd501fa3..1921103df 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c +++ b/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c @@ -30,28 +30,28 @@ vbfloat16m8_t test_vreinterpret_v_i16m8_bf16m8(vint16m8_t src) { return __riscv_vreinterpret_v_i16m8_bf16m8(src); } -vbfloat16mf4_t test_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src) { - return __riscv_vreinterpret_v_ui16mf4_bf16mf4(src); +vbfloat16mf4_t test_vreinterpret_v_u16mf4_bf16mf4(vuint16mf4_t src) { + return __riscv_vreinterpret_v_u16mf4_bf16mf4(src); } -vbfloat16mf2_t test_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src) { - return __riscv_vreinterpret_v_ui16mf2_bf16mf2(src); +vbfloat16mf2_t test_vreinterpret_v_u16mf2_bf16mf2(vuint16mf2_t src) { + return __riscv_vreinterpret_v_u16mf2_bf16mf2(src); } -vbfloat16m1_t test_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src) { - return __riscv_vreinterpret_v_ui16m1_bf16m1(src); +vbfloat16m1_t test_vreinterpret_v_u16m1_bf16m1(vuint16m1_t src) { + return __riscv_vreinterpret_v_u16m1_bf16m1(src); } -vbfloat16m2_t test_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src) { - return __riscv_vreinterpret_v_ui16m2_bf16m2(src); +vbfloat16m2_t test_vreinterpret_v_u16m2_bf16m2(vuint16m2_t src) { + return __riscv_vreinterpret_v_u16m2_bf16m2(src); } -vbfloat16m4_t test_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src) { - return __riscv_vreinterpret_v_ui16m4_bf16m4(src); +vbfloat16m4_t test_vreinterpret_v_u16m4_bf16m4(vuint16m4_t src) { + return __riscv_vreinterpret_v_u16m4_bf16m4(src); } -vbfloat16m8_t test_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src) { - return __riscv_vreinterpret_v_ui16m8_bf16m8(src); +vbfloat16m8_t test_vreinterpret_v_u16m8_bf16m8(vuint16m8_t src) { + return __riscv_vreinterpret_v_u16m8_bf16m8(src); } vint16mf4_t test_vreinterpret_v_bf16mf4_i16mf4(vbfloat16mf4_t src) { @@ -78,26 +78,26 @@ vint16m8_t test_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src) { return __riscv_vreinterpret_v_bf16m8_i16m8(src); } -vuint16mf4_t test_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src) { - return __riscv_vreinterpret_v_bf16mf4_ui16mf4(src); +vuint16mf4_t test_vreinterpret_v_bf16mf4_u16mf4(vbfloat16mf4_t src) { + return __riscv_vreinterpret_v_bf16mf4_u16mf4(src); } -vuint16mf2_t test_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src) { - return __riscv_vreinterpret_v_bf16mf2_ui16mf2(src); +vuint16mf2_t test_vreinterpret_v_bf16mf2_u16mf2(vbfloat16mf2_t src) { + return __riscv_vreinterpret_v_bf16mf2_u16mf2(src); } -vuint16m1_t test_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src) { - return __riscv_vreinterpret_v_bf16m1_ui16m1(src); +vuint16m1_t test_vreinterpret_v_bf16m1_u16m1(vbfloat16m1_t src) { + return __riscv_vreinterpret_v_bf16m1_u16m1(src); } -vuint16m2_t test_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src) { - return __riscv_vreinterpret_v_bf16m2_ui16m2(src); +vuint16m2_t test_vreinterpret_v_bf16m2_u16m2(vbfloat16m2_t src) { + return __riscv_vreinterpret_v_bf16m2_u16m2(src); } -vuint16m4_t test_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src) { - return __riscv_vreinterpret_v_bf16m4_ui16m4(src); +vuint16m4_t 
test_vreinterpret_v_bf16m4_u16m4(vbfloat16m4_t src) { + return __riscv_vreinterpret_v_bf16m4_u16m4(src); } -vuint16m8_t test_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src) { - return __riscv_vreinterpret_v_bf16m8_ui16m8(src); +vuint16m8_t test_vreinterpret_v_bf16m8_u16m8(vbfloat16m8_t src) { + return __riscv_vreinterpret_v_bf16m8_u16m8(src); } diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c b/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c index 1ea482fca..fc27aafcc 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c @@ -30,27 +30,27 @@ vbfloat16m8_t test_vreinterpret_v_i16m8_bf16m8(vint16m8_t src) { return __riscv_vreinterpret_bf16m8(src); } -vbfloat16mf4_t test_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src) { +vbfloat16mf4_t test_vreinterpret_v_u16mf4_bf16mf4(vuint16mf4_t src) { return __riscv_vreinterpret_bf16mf4(src); } -vbfloat16mf2_t test_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src) { +vbfloat16mf2_t test_vreinterpret_v_u16mf2_bf16mf2(vuint16mf2_t src) { return __riscv_vreinterpret_bf16mf2(src); } -vbfloat16m1_t test_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src) { +vbfloat16m1_t test_vreinterpret_v_u16m1_bf16m1(vuint16m1_t src) { return __riscv_vreinterpret_bf16m1(src); } -vbfloat16m2_t test_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src) { +vbfloat16m2_t test_vreinterpret_v_u16m2_bf16m2(vuint16m2_t src) { return __riscv_vreinterpret_bf16m2(src); } -vbfloat16m4_t test_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src) { +vbfloat16m4_t test_vreinterpret_v_u16m4_bf16m4(vuint16m4_t src) { return __riscv_vreinterpret_bf16m4(src); } -vbfloat16m8_t test_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src) { +vbfloat16m8_t test_vreinterpret_v_u16m8_bf16m8(vuint16m8_t src) { return __riscv_vreinterpret_bf16m8(src); } @@ -78,26 +78,26 @@ vint16m8_t test_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src) { return __riscv_vreinterpret_i16m8(src); } -vuint16mf4_t test_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src) { - return __riscv_vreinterpret_ui16mf4(src); +vuint16mf4_t test_vreinterpret_v_bf16mf4_u16mf4(vbfloat16mf4_t src) { + return __riscv_vreinterpret_u16mf4(src); } -vuint16mf2_t test_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src) { - return __riscv_vreinterpret_ui16mf2(src); +vuint16mf2_t test_vreinterpret_v_bf16mf2_u16mf2(vbfloat16mf2_t src) { + return __riscv_vreinterpret_u16mf2(src); } -vuint16m1_t test_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src) { - return __riscv_vreinterpret_ui16m1(src); +vuint16m1_t test_vreinterpret_v_bf16m1_u16m1(vbfloat16m1_t src) { + return __riscv_vreinterpret_u16m1(src); } -vuint16m2_t test_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src) { - return __riscv_vreinterpret_ui16m2(src); +vuint16m2_t test_vreinterpret_v_bf16m2_u16m2(vbfloat16m2_t src) { + return __riscv_vreinterpret_u16m2(src); } -vuint16m4_t test_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src) { - return __riscv_vreinterpret_ui16m4(src); +vuint16m4_t test_vreinterpret_v_bf16m4_u16m4(vbfloat16m4_t src) { + return __riscv_vreinterpret_u16m4(src); } -vuint16m8_t test_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src) { - return __riscv_vreinterpret_ui16m8(src); +vuint16m8_t test_vreinterpret_v_bf16m8_u16m8(vbfloat16m8_t src) { + return __riscv_vreinterpret_u16m8(src); } diff --git a/auto-generated/bfloat16/overloaded-api-testing/vreinterpret.c b/auto-generated/bfloat16/overloaded-api-testing/vreinterpret.c index 61f031c7d..457fecd65 100644 --- 
a/auto-generated/bfloat16/overloaded-api-testing/vreinterpret.c +++ b/auto-generated/bfloat16/overloaded-api-testing/vreinterpret.c @@ -25,27 +25,27 @@ vbfloat16m8_t test_vreinterpret_v_i16m8_bf16m8(vint16m8_t src) { return __riscv_vreinterpret_bf16m8(src); } -vbfloat16mf4_t test_vreinterpret_v_ui16mf4_bf16mf4(vuint16mf4_t src) { +vbfloat16mf4_t test_vreinterpret_v_u16mf4_bf16mf4(vuint16mf4_t src) { return __riscv_vreinterpret_bf16mf4(src); } -vbfloat16mf2_t test_vreinterpret_v_ui16mf2_bf16mf2(vuint16mf2_t src) { +vbfloat16mf2_t test_vreinterpret_v_u16mf2_bf16mf2(vuint16mf2_t src) { return __riscv_vreinterpret_bf16mf2(src); } -vbfloat16m1_t test_vreinterpret_v_ui16m1_bf16m1(vuint16m1_t src) { +vbfloat16m1_t test_vreinterpret_v_u16m1_bf16m1(vuint16m1_t src) { return __riscv_vreinterpret_bf16m1(src); } -vbfloat16m2_t test_vreinterpret_v_ui16m2_bf16m2(vuint16m2_t src) { +vbfloat16m2_t test_vreinterpret_v_u16m2_bf16m2(vuint16m2_t src) { return __riscv_vreinterpret_bf16m2(src); } -vbfloat16m4_t test_vreinterpret_v_ui16m4_bf16m4(vuint16m4_t src) { +vbfloat16m4_t test_vreinterpret_v_u16m4_bf16m4(vuint16m4_t src) { return __riscv_vreinterpret_bf16m4(src); } -vbfloat16m8_t test_vreinterpret_v_ui16m8_bf16m8(vuint16m8_t src) { +vbfloat16m8_t test_vreinterpret_v_u16m8_bf16m8(vuint16m8_t src) { return __riscv_vreinterpret_bf16m8(src); } @@ -73,26 +73,26 @@ vint16m8_t test_vreinterpret_v_bf16m8_i16m8(vbfloat16m8_t src) { return __riscv_vreinterpret_i16m8(src); } -vuint16mf4_t test_vreinterpret_v_bf16mf4_ui16mf4(vbfloat16mf4_t src) { - return __riscv_vreinterpret_ui16mf4(src); +vuint16mf4_t test_vreinterpret_v_bf16mf4_u16mf4(vbfloat16mf4_t src) { + return __riscv_vreinterpret_u16mf4(src); } -vuint16mf2_t test_vreinterpret_v_bf16mf2_ui16mf2(vbfloat16mf2_t src) { - return __riscv_vreinterpret_ui16mf2(src); +vuint16mf2_t test_vreinterpret_v_bf16mf2_u16mf2(vbfloat16mf2_t src) { + return __riscv_vreinterpret_u16mf2(src); } -vuint16m1_t test_vreinterpret_v_bf16m1_ui16m1(vbfloat16m1_t src) { - return __riscv_vreinterpret_ui16m1(src); +vuint16m1_t test_vreinterpret_v_bf16m1_u16m1(vbfloat16m1_t src) { + return __riscv_vreinterpret_u16m1(src); } -vuint16m2_t test_vreinterpret_v_bf16m2_ui16m2(vbfloat16m2_t src) { - return __riscv_vreinterpret_ui16m2(src); +vuint16m2_t test_vreinterpret_v_bf16m2_u16m2(vbfloat16m2_t src) { + return __riscv_vreinterpret_u16m2(src); } -vuint16m4_t test_vreinterpret_v_bf16m4_ui16m4(vbfloat16m4_t src) { - return __riscv_vreinterpret_ui16m4(src); +vuint16m4_t test_vreinterpret_v_bf16m4_u16m4(vbfloat16m4_t src) { + return __riscv_vreinterpret_u16m4(src); } -vuint16m8_t test_vreinterpret_v_bf16m8_ui16m8(vbfloat16m8_t src) { - return __riscv_vreinterpret_ui16m8(src); +vuint16m8_t test_vreinterpret_v_bf16m8_u16m8(vbfloat16m8_t src) { + return __riscv_vreinterpret_u16m8(src); } From 53103d6412f06821ac5cc4f5ab577d9eaa39374d Mon Sep 17 00:00:00 2001 From: Brandon Wu Date: Tue, 23 Apr 2024 08:46:16 -0700 Subject: [PATCH 017/151] Support bfloat16 in floating point test case header --- .../bfloat16/llvm-api-tests/vcreate.c | 5 ++++- .../bfloat16/llvm-api-tests/vfncvtbf16.c | 4 +++- .../bfloat16/llvm-api-tests/vfwcvtbf16.c | 4 +++- .../bfloat16/llvm-api-tests/vfwmaccbf16.c | 4 +++- auto-generated/bfloat16/llvm-api-tests/vget.c | 4 +++- auto-generated/bfloat16/llvm-api-tests/vle16.c | 4 +++- .../bfloat16/llvm-api-tests/vle16ff.c | 4 +++- .../bfloat16/llvm-api-tests/vloxei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vloxseg2ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vloxseg3ei16.c | 5 ++++- 
.../bfloat16/llvm-api-tests/vloxseg4ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vloxseg5ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vloxseg6ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vloxseg7ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vloxseg8ei16.c | 5 ++++- auto-generated/bfloat16/llvm-api-tests/vlse16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlseg2e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlseg2e16ff.c | 4 +++- .../bfloat16/llvm-api-tests/vlseg3e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlseg3e16ff.c | 4 +++- .../bfloat16/llvm-api-tests/vlseg4e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlseg4e16ff.c | 4 +++- .../bfloat16/llvm-api-tests/vlseg5e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlseg5e16ff.c | 4 +++- .../bfloat16/llvm-api-tests/vlseg6e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlseg6e16ff.c | 4 +++- .../bfloat16/llvm-api-tests/vlseg7e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlseg7e16ff.c | 4 +++- .../bfloat16/llvm-api-tests/vlseg8e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlseg8e16ff.c | 4 +++- .../bfloat16/llvm-api-tests/vlsseg2e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlsseg3e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlsseg4e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlsseg5e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlsseg6e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlsseg7e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vlsseg8e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vluxei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vluxseg2ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vluxseg3ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vluxseg4ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vluxseg5ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vluxseg6ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vluxseg7ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vluxseg8ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vreinterpret.c | 4 +++- auto-generated/bfloat16/llvm-api-tests/vse16.c | 5 ++++- auto-generated/bfloat16/llvm-api-tests/vset.c | 5 ++++- .../bfloat16/llvm-api-tests/vsoxei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsoxseg2ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsoxseg3ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsoxseg4ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsoxseg5ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsoxseg6ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsoxseg7ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsoxseg8ei16.c | 5 ++++- auto-generated/bfloat16/llvm-api-tests/vsse16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsseg2e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsseg3e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsseg4e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsseg5e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsseg6e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsseg7e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsseg8e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vssseg2e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vssseg3e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vssseg4e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vssseg5e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vssseg6e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vssseg7e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vssseg8e16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsuxei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsuxseg2ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsuxseg3ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsuxseg4ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsuxseg5ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsuxseg6ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vsuxseg7ei16.c | 5 ++++- 
.../bfloat16/llvm-api-tests/vsuxseg8ei16.c | 5 ++++- .../bfloat16/llvm-api-tests/vundefined.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vfncvtbf16.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vfwcvtbf16.c | 4 +++- .../llvm-overloaded-tests/vfwmaccbf16.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vget.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vle16.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vle16ff.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vloxei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg2ei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg3ei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg4ei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg5ei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg6ei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg7ei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg8ei16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vlse16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vlseg2e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg2e16ff.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vlseg3e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg3e16ff.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vlseg4e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg4e16ff.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vlseg5e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg5e16ff.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vlseg6e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg6e16ff.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vlseg7e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg7e16ff.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vlseg8e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg8e16ff.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vlsseg2e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vlsseg3e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vlsseg4e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vlsseg5e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vlsseg6e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vlsseg7e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vlsseg8e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vluxei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg2ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg3ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg4ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg5ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg6ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg7ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg8ei16.c | 5 ++++- .../llvm-overloaded-tests/vreinterpret.c | 4 +++- .../bfloat16/llvm-overloaded-tests/vse16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vset.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vsoxei16.c | 5 ++++- .../llvm-overloaded-tests/vsoxseg2ei16.c | 5 ++++- .../llvm-overloaded-tests/vsoxseg3ei16.c | 5 ++++- .../llvm-overloaded-tests/vsoxseg4ei16.c | 5 ++++- .../llvm-overloaded-tests/vsoxseg5ei16.c | 5 ++++- .../llvm-overloaded-tests/vsoxseg6ei16.c | 5 ++++- .../llvm-overloaded-tests/vsoxseg7ei16.c | 5 ++++- .../llvm-overloaded-tests/vsoxseg8ei16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vsse16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vsseg2e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vsseg3e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vsseg4e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vsseg5e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vsseg6e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vsseg7e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vsseg8e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vssseg2e16.c | 5 ++++- 
.../bfloat16/llvm-overloaded-tests/vssseg3e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vssseg4e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vssseg5e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vssseg6e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vssseg7e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vssseg8e16.c | 5 ++++- .../bfloat16/llvm-overloaded-tests/vsuxei16.c | 5 ++++- .../llvm-overloaded-tests/vsuxseg2ei16.c | 5 ++++- .../llvm-overloaded-tests/vsuxseg3ei16.c | 5 ++++- .../llvm-overloaded-tests/vsuxseg4ei16.c | 5 ++++- .../llvm-overloaded-tests/vsuxseg5ei16.c | 5 ++++- .../llvm-overloaded-tests/vsuxseg6ei16.c | 5 ++++- .../llvm-overloaded-tests/vsuxseg7ei16.c | 5 ++++- .../llvm-overloaded-tests/vsuxseg8ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vfncvtbf16.c | 4 +++- .../policy_funcs/llvm-api-tests/vfwcvtbf16.c | 4 +++- .../policy_funcs/llvm-api-tests/vfwmaccbf16.c | 4 +++- .../policy_funcs/llvm-api-tests/vle16.c | 4 +++- .../policy_funcs/llvm-api-tests/vle16ff.c | 4 +++- .../policy_funcs/llvm-api-tests/vloxei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vloxseg2ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vloxseg3ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vloxseg4ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vloxseg5ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vloxseg6ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vloxseg7ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vloxseg8ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlse16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlseg2e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlseg2e16ff.c | 4 +++- .../policy_funcs/llvm-api-tests/vlseg3e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlseg3e16ff.c | 4 +++- .../policy_funcs/llvm-api-tests/vlseg4e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlseg4e16ff.c | 4 +++- .../policy_funcs/llvm-api-tests/vlseg5e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlseg5e16ff.c | 4 +++- .../policy_funcs/llvm-api-tests/vlseg6e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlseg6e16ff.c | 4 +++- .../policy_funcs/llvm-api-tests/vlseg7e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlseg7e16ff.c | 4 +++- .../policy_funcs/llvm-api-tests/vlseg8e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlseg8e16ff.c | 4 +++- .../policy_funcs/llvm-api-tests/vlsseg2e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlsseg3e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlsseg4e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlsseg5e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlsseg6e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlsseg7e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vlsseg8e16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vluxei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vluxseg2ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vluxseg3ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vluxseg4ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vluxseg5ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vluxseg6ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vluxseg7ei16.c | 5 ++++- .../policy_funcs/llvm-api-tests/vluxseg8ei16.c | 5 ++++- .../llvm-overloaded-tests/vfncvtbf16.c | 4 +++- .../llvm-overloaded-tests/vfwcvtbf16.c | 4 +++- .../llvm-overloaded-tests/vfwmaccbf16.c | 4 +++- .../policy_funcs/llvm-overloaded-tests/vle16.c | 4 +++- .../llvm-overloaded-tests/vle16ff.c | 4 +++- .../llvm-overloaded-tests/vloxei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg2ei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg3ei16.c | 5 ++++- 
.../llvm-overloaded-tests/vloxseg4ei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg5ei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg6ei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg7ei16.c | 5 ++++- .../llvm-overloaded-tests/vloxseg8ei16.c | 5 ++++- .../policy_funcs/llvm-overloaded-tests/vlse16.c | 5 ++++- .../llvm-overloaded-tests/vlseg2e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg2e16ff.c | 4 +++- .../llvm-overloaded-tests/vlseg3e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg3e16ff.c | 4 +++- .../llvm-overloaded-tests/vlseg4e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg4e16ff.c | 4 +++- .../llvm-overloaded-tests/vlseg5e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg5e16ff.c | 4 +++- .../llvm-overloaded-tests/vlseg6e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg6e16ff.c | 4 +++- .../llvm-overloaded-tests/vlseg7e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg7e16ff.c | 4 +++- .../llvm-overloaded-tests/vlseg8e16.c | 5 ++++- .../llvm-overloaded-tests/vlseg8e16ff.c | 4 +++- .../llvm-overloaded-tests/vlsseg2e16.c | 5 ++++- .../llvm-overloaded-tests/vlsseg3e16.c | 5 ++++- .../llvm-overloaded-tests/vlsseg4e16.c | 5 ++++- .../llvm-overloaded-tests/vlsseg5e16.c | 5 ++++- .../llvm-overloaded-tests/vlsseg6e16.c | 5 ++++- .../llvm-overloaded-tests/vlsseg7e16.c | 5 ++++- .../llvm-overloaded-tests/vlsseg8e16.c | 5 ++++- .../llvm-overloaded-tests/vluxei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg2ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg3ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg4ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg5ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg6ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg7ei16.c | 5 ++++- .../llvm-overloaded-tests/vluxseg8ei16.c | 5 ++++- .../rvv_intrinsic_gen/generator.py | 17 ++++++++++++++--- 245 files changed, 937 insertions(+), 247 deletions(-) diff --git a/auto-generated/bfloat16/llvm-api-tests/vcreate.c b/auto-generated/bfloat16/llvm-api-tests/vcreate.c index 7e58be2af..5b5dab6a8 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vcreate.c +++ b/auto-generated/bfloat16/llvm-api-tests/vcreate.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c index 758e0275a..4a81a4e71 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c index 3be23d2d7..0572244e5 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c +++ 
b/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c index 3caecfecf..c1ba47c29 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vget.c b/auto-generated/bfloat16/llvm-api-tests/vget.c index e2ff800e2..61473a4ea 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vget.c +++ b/auto-generated/bfloat16/llvm-api-tests/vget.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vle16.c b/auto-generated/bfloat16/llvm-api-tests/vle16.c index 706e5a3d2..db5ed90dc 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vle16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vle16.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vle16ff.c b/auto-generated/bfloat16/llvm-api-tests/vle16ff.c index d11f38c52..c1bd752af 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vle16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vle16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxei16.c index b6f66f876..a8e7d4dc5 100644 
--- a/auto-generated/bfloat16/llvm-api-tests/vloxei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c index 0f665b784..31478d5a5 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c index e7230dbfb..a0e1c2eb3 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c index c6cd684be..0c3b9c66f 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c index d182402b2..99b75edcd 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg 
| \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c index 331b62970..d700d64e2 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c index 82512cb83..218746d8d 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c index c58f38d51..1e4fa305e 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlse16.c b/auto-generated/bfloat16/llvm-api-tests/vlse16.c index 6022f983b..f9cb9fb39 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlse16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlse16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c index 04cbe00c1..98770a402 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature 
+experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c index a4d658aaa..72f3d77cd 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c index 6d369cd3b..e7f6fd2d1 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c index 255f184fc..a71b248e6 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c index 438025115..597738d92 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c index cb31af531..1531d9221 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 
-triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c index df0aa5c75..1c894ce64 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c index d8266918a..3672f2061 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c index a491aed92..554ec2d93 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c index 23045a077..419c4ab8e 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c index db9b1d308..42ffe0707 100644 --- 
a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c index 55c892349..8926f6553 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c index 573492dd1..fa7278cb6 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c index ff2c20890..61accef57 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c index 638e86ea2..cd13c384c 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s 
diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c index 6a4d657ba..68f6d7be4 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c index 482158e65..ae5296c76 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c index 39ef6a491..10f84f99a 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c index df164cc46..c39c63830 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c index cbb3b4ba2..943c4a19e 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin 
\ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c index 47d2c6b78..cda03cedf 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxei16.c index ae522bdb5..0bacd5b35 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vluxei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vluxei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c index f8aaf16c9..c54b55e80 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c index 879699ef3..146f46088 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c index f0a39b2f6..3bb62589b 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v 
-disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c index f5204e631..42b121802 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c index e45755a1f..3059d8cad 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c index c65fe2725..9ec7135aa 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c index c2c40bd07..5fc52ed9a 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c b/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c index 1921103df..b9143958e 100644 --- 
a/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c +++ b/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vse16.c b/auto-generated/bfloat16/llvm-api-tests/vse16.c index c08e753e3..1e38c43cc 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vse16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vse16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vset.c b/auto-generated/bfloat16/llvm-api-tests/vset.c index 684944f27..b38841089 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vset.c +++ b/auto-generated/bfloat16/llvm-api-tests/vset.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c index 687d1ca3e..83f1fd347 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c index bb7579a9e..c15bada5f 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git 
a/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c index bc1ccabdc..65fcbd53e 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c index 72343e757..a19267e36 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c index 418bf6b76..4ed520162 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c index d8b35331f..c7f6d2afb 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c index b4a0b7ad9..1546a88f2 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: 
-target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c index 2ae10d065..c507e28db 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsse16.c b/auto-generated/bfloat16/llvm-api-tests/vsse16.c index b14ca1790..a066460f5 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsse16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsse16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c index 149887fd7..2888efb67 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c index a9627a0d3..dffee040e 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c index 9f808d494..87e8309e8 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v 
-disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c index 920af0849..2bcd2d84c 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c index 6d1b46b04..4520fc7eb 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c index 6dbc90c56..293726b5e 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c index 0169db97f..7245244d8 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c index 1af94e9a1..763b20621 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c +++ 
b/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c index 4a3efc6d7..fb8ff1d5b 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c index 34b822db7..f5d97c2cc 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c index a4b10760f..9c4cef9d2 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c index ccb7fd991..d0c431508 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git 
a/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c index e8ca20934..7a1f763b2 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c index f8b1755b8..ee2b52988 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c index eaed69275..67212ebfb 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c index 7f251c5a1..43721fe78 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c index e18b7ae84..840e504bf 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ 
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c index 19381a4ab..ad768d4ab 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c index 47c57f8cf..3cdf1d112 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c index 4b627226e..32d69a6a3 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c index e4c378f52..a041297a8 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c index bff55d0bc..61f23e3aa 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 
-target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-api-tests/vundefined.c b/auto-generated/bfloat16/llvm-api-tests/vundefined.c
index 317226a6f..0683dd94e 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vundefined.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vundefined.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
index cca27ae83..4abf6b8b5 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
index 1668c7b2b..b9fd6c616 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
index da2042680..beac2b32a 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vget.c b/auto-generated/bfloat16/llvm-overloaded-tests/vget.c
index 7e40b6803..b29a6dcea 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vget.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vget.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
index 44216082e..62d9a4461 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
index 2a31a4bdf..16c0e3a6a 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c
index a45e15a72..7a110728b 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c
index a7e9fe153..c90723a64 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c
index a1ffafeeb..932af6df8 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c
index 6aa3b8b8b..0248eceb9 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c
index 85f3d7cfb..0b6a5545a 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c
index 58b6d16de..15bab6e22 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c
index c08ab2e27..35f3110eb 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c
index b0b7671a0..da2ae96f3 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c
index 81cd36f7c..919bcd2f8 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c
index 153ae20e8..03050648a 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c
index 4e7d8b491..c793a9c54 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c
index 9da26c4e7..49020d2a7 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c
index edc0f10a7..d70aad5d6 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c
index 59cd0aa8c..63c9ea79a 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c
index de5a83c0f..0d64c4f33 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c
index 633901209..75127d1eb 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c
index 34c4f9a64..c5a2a0154 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c
index a9bd7d4d7..e5fcc000a 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c
index 8692c352c..685309270 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c
index 2d530f29c..c00fb5fee 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c
index eb6f7209e..2cf1a2a78 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c
index eb8cf5abc..eaf8cce70 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c
index 4fb0315c2..16fc33400 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c
index c72674659..42950ac4f 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c
index 23835bac4..9f016d5ed 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c
index 34a27b713..f712d7d7b 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c
index 1210a75cb..0add09a89 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c
index 5a6eab3b3..1b0d9eadd 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c
index 55bb1f469..02ab02068 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c
index 8a570e3ba..1b8457c0d 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c
index 93b88bfa9..96a8514fa 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c
index 85550720b..e0a2958f6 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c
index 817a9422e..9a4f56698 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c
index 2b7f3ec0e..b2dbece41 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c
index 5eb3f2650..f2bff6a59 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c
index 60faacb65..bfb5959fe 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c
index 37aaab710..2ba7386ca 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c
index 3004d79da..4b0e1bb01 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c b/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c
index fc27aafcc..83d0af7de 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c
index 1a06e8510..a87045603 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vset.c b/auto-generated/bfloat16/llvm-overloaded-tests/vset.c
index 6bedaa3dc..849480e67 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vset.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vset.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c
index b4acd8965..007b74642 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c
index 033cfa2b3..9a27bb5d2 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c
index 7d172c80e..19cd07be9 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c
index 4067814b2..e21805337 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c
index f8d0e1fe2..fa96e304d 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c
index e6e8650f8..3572e5116 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c
index d79d49e70..7b99b6239 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c
index 4bd5455bb..989430e15 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c
index 9c9fde087..98f228e23 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c
index 0ddd0c89a..581644b69 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c
index 095aefebc..d68d5d475 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c
index f1f219558..edc7e396e 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c
index e419b9d35..7f59d47b4 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c
index 07bc65325..5388501b2 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c
index 9ed16e7b0..1d67708e0 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c
index c5e78e91e..2ca7b5488 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c
index 4cf01c969..a923b682f 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c
index 81c3084f5..4ceac5fc2 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c
index 93435cac2..46f7ad43f 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c
index db8cabb41..8d2c14637 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c
index 8f695c281..d73f4b713 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c
index 3ca13b74a..d70ca8db5 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c
index 148a9aac2..98c949644 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c
index 1b912128e..99298c79d 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c
index 80932af68..6ef154030 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c
index cd9bb773a..a1c1c3435 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c
index 82e5f338e..c5f3c5cb4 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c
index af47c04f4..3c7d47dd3 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c
index 4a6bf7b1c..942e4ff4f 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c
index 623b13686..68ec0a391 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c
index 80cd13e64..838e6b721 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c
index d60ec839a..83ce220b6 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c
index 40457cca7..8d122d46d 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c
index 7f553ef2c..d75f88788 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c
index 7f867a33a..b3fb167d5 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c
index 7d322392b..56c726802 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c
index e4b825dc0..cfd980a4c 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c
index ca3b6fabd..5c08a6033 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c
index 00079d8f3..7d0430ef0 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c
index 82216a3e8..a1efb7a50 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c
index b58c9d736..4ab0d7765 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c
index fd1a29cb5..f70929941 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c
index 006dbe150..0acfa2ed3 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c
index 31cfcbcb6..9be5f86ee 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c
index c92aeac61..b8a27eb29 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c
index e1d4f2021..9b875c7a7 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c
index 40d3eaef2..a3fcb918e 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c
@@ -1,6 +1,8 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +experimental-zvfh \
+// RUN: -target-feature +experimental-zvfbfmin \
+// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c
index 0f91bc862..524c2e04d 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c
+++
b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c index d3c99c86b..4d7259c39 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c index 7ce765f65..911393203 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c index 9ebbf8c2a..9be138700 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c index 7867d0dd2..1733dfada 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: 
-target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c index 100a3d306..20ac32642 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c index a7db2bf57..81aee1fb4 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c index 875b6cf36..2f518e669 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c index 25027618d..de7cc556d 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c index 81dcc21e5..01d911246 100644 --- 
a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c index 3ce55cfef..65cc25d2f 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c index fcdd9e2a9..5131aa333 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c index 170f32729..945a45ea8 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c index 9ce1303db..bc7156ecf 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature 
+experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c index 61987c255..58d9f21a0 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c index 016a97c3b..89f3d4ee5 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c index 07183d1f2..923639412 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c index f3168d419..cfdf51e07 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c 
b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c index 1ec2c9ad3..37c8ad0f3 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c index 771f246bd..ee779135d 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c index 53889a648..ae282f833 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c index fdbf90f7e..92d8c8e6e 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c index f43a84004..f5cb0229f 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 
-target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c index d3b4e0e9d..8b30d05fb 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c index 7aea5181d..c882cb570 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c index b1b054efc..dd2a0c5bd 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c index f51f026b4..d7c2f914b 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S 
-passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c index c7ff40760..57b08ec56 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c index 833cbe02b..f3cf547d3 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c index 042be3c8d..7e97888b8 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c index 22e3de754..b4d522202 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c index 833af8360..1606b1fbc 100644 --- 
a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c index 055a93c69..030ec50fe 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c index a362719ed..bfd06a7c9 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c index 1583c701e..bc2236ffe 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c index 1fe84c1e0..bf63f5b4d 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 
-target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c index f40b058f9..e96465773 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c index 21658c08e..9e6ba4c51 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c index ef77dd579..c928c645d 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c index 656f0c9cf..9d84571b0 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// 
RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c index f8c41d6d8..e6ea668b7 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c index b9f222a67..b2dcd6244 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c index f4da7648e..10ff7fa72 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c index ada8862cc..99aafb0ea 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c 
b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c index 39ba8bc15..f60dc61d2 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c index c99a67e18..6044ae82c 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c index 21b323760..d84438041 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c index 557fb3830..e45b62947 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c index 6448d8224..f2e2a29e8 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c @@ -1,6 
+1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c index bcf162249..5a8553113 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c index b86639e89..9cc747362 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c index 75ee14121..597884116 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c index a1055a16b..5fe5b9d44 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature 
+experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c index 422a03063..dfbe9f89e 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c index 8f481a921..fa0322aae 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c @@ -1,6 +1,8 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c index b2425cea0..976c21a17 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c index f5de0447d..ed857c46d 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git 
a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c index 5ac99c723..311d4477d 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c index 5ae1c9339..a47eb41a5 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c index 10ee4813e..df9e43fd1 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c index f470b0411..e0dee0e84 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c index 4d39fd4b3..c41b7d405 100644 --- 
a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c index 292f41ef5..154bb4c04 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c index 424a7553c..ec299a4a7 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c index fb030ff84..8639b8a86 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c index 1d7889b7e..be038080b 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple 
riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c index c3d3a8930..86786793d 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c index 925c82dd7..252b00479 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c index b8826ce6a..485d088be 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c index 2f3d5d06a..b0e17955e 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature 
+experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 949ad08a1..517904f4c 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -443,7 +443,7 @@ def __init__(self, f, is_overloaded, toolchain_type, has_tail_policy): # different op name self.test_file_names = [] - def write_file_header(self, has_float_type): + def write_file_header(self, has_float_type, has_bfloat16_type): #pylint: disable=line-too-long int_llvm_header = (r"""// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ @@ -457,6 +457,14 @@ def write_file_header(self, has_float_type): // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s +""") + bfloat16_llvm_header = (r"""// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + """) gnu_header = ( r"""/* { dg-do compile } */ @@ -465,7 +473,9 @@ def write_file_header(self, has_float_type): """) if self.toolchain_type == ToolChainType.LLVM: - if has_float_type: + if has_bfloat16_type: + self.fd.write(bfloat16_llvm_header) + elif has_float_type: self.fd.write(float_llvm_header) else: self.fd.write(int_llvm_header) @@ -528,6 +538,7 @@ def func(self, inst_info, name, return_type, **kwargs): # righteously, there should be a function to determine if an intrinsic # has a floating-point variant and have the header emission depend on it. has_float_type = func_decl.find("vfloat") != -1 + has_bfloat16_type = func_decl.find("bf16") != -1 # NOTE(FIXME): This is logic as a hard fix to test case header emission. 
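# Note: both feature probes above are plain substring matches on the
# generated declaration string: "vfloat" marks an f16/f32/f64 test file,
# while "bf16" (the type suffix in generated intrinsic names such as
# __riscv_..._bf16m1) marks a bfloat16 test file; write_file_header()
# checks has_bfloat16_type first, so the bfloat16 RUN-line header wins
# when both match.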
has_float_type_variant_inst = [ "macc", "nmacc", "msac", "nmsac", "madd", "nmadd", "msub", "nmsub", @@ -540,7 +551,7 @@ def func(self, inst_info, name, return_type, **kwargs): has_float_type = True if header: - self.write_file_header(has_float_type) + self.write_file_header(has_float_type, has_bfloat16_type) def output_call_arg(arg_name, type_name): if ((name.startswith("vget") or name.startswith("vset")) \ From c783839de5a59480d09f1a9b69257fca4096434e Mon Sep 17 00:00:00 2001 From: Brandon Wu Date: Tue, 23 Apr 2024 09:46:57 -0700 Subject: [PATCH 018/151] Handle bfloat16 in misc_op_template.py --- .../bfloat16/api-testing/vlmul_ext_v.c | 60 ++++++++--------- .../bfloat16/api-testing/vlmul_trunc_v.c | 60 ++++++++--------- auto-generated/bfloat16/intrinsic_funcs.adoc | 60 ++++++++--------- ...scellaneous_vector_utility_intrinsics.adoc | 60 ++++++++--------- .../bfloat16/llvm-api-tests/vlmul_ext_v.c | 65 ++++++++++--------- .../bfloat16/llvm-api-tests/vlmul_trunc_v.c | 65 ++++++++++--------- .../llvm-overloaded-tests/vlmul_ext_v.c | 65 ++++++++++--------- .../llvm-overloaded-tests/vlmul_trunc_v.c | 65 ++++++++++--------- .../overloaded-api-testing/vlmul_ext_v.c | 60 ++++++++--------- .../overloaded-api-testing/vlmul_trunc_v.c | 60 ++++++++--------- .../bfloat16/overloaded_intrinsic_funcs.adoc | 60 ++++++++--------- ...scellaneous_vector_utility_intrinsics.adoc | 60 ++++++++--------- .../templates/misc_op_template.py | 5 +- 13 files changed, 380 insertions(+), 365 deletions(-) diff --git a/auto-generated/bfloat16/api-testing/vlmul_ext_v.c b/auto-generated/bfloat16/api-testing/vlmul_ext_v.c index 1b9fdf349..75285f967 100644 --- a/auto-generated/bfloat16/api-testing/vlmul_ext_v.c +++ b/auto-generated/bfloat16/api-testing/vlmul_ext_v.c @@ -1,62 +1,62 @@ #include #include -vbfloat16mf2_t test_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_v_b16mf4_b16mf2(value); +vbfloat16mf2_t test_vlmul_ext_v_bf16mf4_bf16mf2(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_bf16mf4_bf16mf2(value); } -vbfloat16m1_t test_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_v_b16mf4_b16m1(value); +vbfloat16m1_t test_vlmul_ext_v_bf16mf4_bf16m1(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_bf16mf4_bf16m1(value); } -vbfloat16m2_t test_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_v_b16mf4_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16mf4_bf16m2(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_bf16mf4_bf16m2(value); } -vbfloat16m4_t test_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_v_b16mf4_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16mf4_bf16m4(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_bf16mf4_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_v_b16mf4_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16mf4_bf16m8(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_bf16mf4_bf16m8(value); } -vbfloat16m1_t test_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_v_b16mf2_b16m1(value); +vbfloat16m1_t test_vlmul_ext_v_bf16mf2_bf16m1(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_bf16mf2_bf16m1(value); } -vbfloat16m2_t test_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_v_b16mf2_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16mf2_bf16m2(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_bf16mf2_bf16m2(value); } -vbfloat16m4_t 
test_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_v_b16mf2_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16mf2_bf16m4(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_bf16mf2_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_v_b16mf2_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16mf2_bf16m8(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_bf16mf2_bf16m8(value); } -vbfloat16m2_t test_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value) { - return __riscv_vlmul_ext_v_b16m1_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16m1_bf16m2(vbfloat16m1_t value) { + return __riscv_vlmul_ext_v_bf16m1_bf16m2(value); } -vbfloat16m4_t test_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value) { - return __riscv_vlmul_ext_v_b16m1_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16m1_bf16m4(vbfloat16m1_t value) { + return __riscv_vlmul_ext_v_bf16m1_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value) { - return __riscv_vlmul_ext_v_b16m1_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m1_bf16m8(vbfloat16m1_t value) { + return __riscv_vlmul_ext_v_bf16m1_bf16m8(value); } -vbfloat16m4_t test_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value) { - return __riscv_vlmul_ext_v_b16m2_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16m2_bf16m4(vbfloat16m2_t value) { + return __riscv_vlmul_ext_v_bf16m2_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value) { - return __riscv_vlmul_ext_v_b16m2_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m2_bf16m8(vbfloat16m2_t value) { + return __riscv_vlmul_ext_v_bf16m2_bf16m8(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value) { - return __riscv_vlmul_ext_v_b16m4_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m4_bf16m8(vbfloat16m4_t value) { + return __riscv_vlmul_ext_v_bf16m4_bf16m8(value); } diff --git a/auto-generated/bfloat16/api-testing/vlmul_trunc_v.c b/auto-generated/bfloat16/api-testing/vlmul_trunc_v.c index 62c0d056a..97495502a 100644 --- a/auto-generated/bfloat16/api-testing/vlmul_trunc_v.c +++ b/auto-generated/bfloat16/api-testing/vlmul_trunc_v.c @@ -1,62 +1,62 @@ #include #include -vbfloat16mf4_t test_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value) { - return __riscv_vlmul_trunc_v_b16mf2_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16mf2_bf16mf4(vbfloat16mf2_t value) { + return __riscv_vlmul_trunc_v_bf16mf2_bf16mf4(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value) { - return __riscv_vlmul_trunc_v_b16m1_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m1_bf16mf4(vbfloat16m1_t value) { + return __riscv_vlmul_trunc_v_bf16m1_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value) { - return __riscv_vlmul_trunc_v_b16m1_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m1_bf16mf2(vbfloat16m1_t value) { + return __riscv_vlmul_trunc_v_bf16m1_bf16mf2(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value) { - return __riscv_vlmul_trunc_v_b16m2_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m2_bf16mf4(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_v_bf16m2_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value) { - return __riscv_vlmul_trunc_v_b16m2_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m2_bf16mf2(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_v_bf16m2_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value) { - return 
__riscv_vlmul_trunc_v_b16m2_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m2_bf16m1(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_v_bf16m2_bf16m1(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_v_b16m4_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m4_bf16mf4(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_v_bf16m4_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_v_b16m4_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m4_bf16mf2(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_v_bf16m4_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_v_b16m4_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m4_bf16m1(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_v_bf16m4_bf16m1(value); } -vbfloat16m2_t test_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_v_b16m4_b16m2(value); +vbfloat16m2_t test_vlmul_trunc_v_bf16m4_bf16m2(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_v_bf16m4_bf16m2(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_v_b16m8_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m8_bf16mf4(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_bf16m8_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_v_b16m8_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m8_bf16mf2(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_bf16m8_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_v_b16m8_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m8_bf16m1(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_bf16m8_bf16m1(value); } -vbfloat16m2_t test_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_v_b16m8_b16m2(value); +vbfloat16m2_t test_vlmul_trunc_v_bf16m8_bf16m2(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_bf16m8_bf16m2(value); } -vbfloat16m4_t test_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_v_b16m8_b16m4(value); +vbfloat16m4_t test_vlmul_trunc_v_bf16m8_bf16m4(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_bf16m8_bf16m4(value); } diff --git a/auto-generated/bfloat16/intrinsic_funcs.adoc b/auto-generated/bfloat16/intrinsic_funcs.adoc index ab7d0febb..3bd1a4222 100644 --- a/auto-generated/bfloat16/intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/intrinsic_funcs.adoc @@ -1582,21 +1582,21 @@ vuint16m8_t __riscv_vreinterpret_v_bf16m8_u16m8(vbfloat16m8_t src); [,c] ---- -vbfloat16mf2_t __riscv_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value); -vbfloat16m1_t __riscv_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value); -vbfloat16m2_t __riscv_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t value); -vbfloat16m4_t __riscv_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value); -vbfloat16m8_t __riscv_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value); -vbfloat16m1_t __riscv_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value); -vbfloat16m2_t __riscv_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value); -vbfloat16m4_t __riscv_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value); -vbfloat16m8_t __riscv_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value); -vbfloat16m2_t __riscv_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value); -vbfloat16m4_t __riscv_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value); -vbfloat16m8_t 
__riscv_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value); -vbfloat16m4_t __riscv_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value); -vbfloat16m8_t __riscv_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value); -vbfloat16m8_t __riscv_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value); +vbfloat16mf2_t __riscv_vlmul_ext_v_bf16mf4_bf16mf2(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_v_bf16mf4_bf16m1(vbfloat16mf4_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_bf16mf4_bf16m2(vbfloat16mf4_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_bf16mf4_bf16m4(vbfloat16mf4_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_bf16mf4_bf16m8(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_v_bf16mf2_bf16m1(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_bf16mf2_bf16m2(vbfloat16mf2_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_bf16mf2_bf16m4(vbfloat16mf2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_bf16mf2_bf16m8(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_bf16m1_bf16m2(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_bf16m1_bf16m4(vbfloat16m1_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_bf16m1_bf16m8(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_bf16m2_bf16m4(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_bf16m2_bf16m8(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_bf16m4_bf16m8(vbfloat16m4_t value); ---- [[vector-lmul-truncation]] @@ -1604,21 +1604,21 @@ vbfloat16m8_t __riscv_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value); [,c] ---- -vbfloat16mf4_t __riscv_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value); -vbfloat16m1_t __riscv_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t value); -vbfloat16m1_t __riscv_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value); -vbfloat16m2_t __riscv_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value); -vbfloat16m1_t __riscv_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value); -vbfloat16m2_t __riscv_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value); -vbfloat16m4_t __riscv_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_bf16mf2_bf16mf4(vbfloat16mf2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_bf16m1_bf16mf4(vbfloat16m1_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_bf16m1_bf16mf2(vbfloat16m1_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_bf16m2_bf16mf4(vbfloat16m2_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_bf16m2_bf16mf2(vbfloat16m2_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_bf16m2_bf16m1(vbfloat16m2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_bf16m4_bf16mf4(vbfloat16m4_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_bf16m4_bf16mf2(vbfloat16m4_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_bf16m4_bf16m1(vbfloat16m4_t value); +vbfloat16m2_t __riscv_vlmul_trunc_v_bf16m4_bf16m2(vbfloat16m4_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_bf16m8_bf16mf4(vbfloat16m8_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_bf16m8_bf16mf2(vbfloat16m8_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_bf16m8_bf16m1(vbfloat16m8_t value); +vbfloat16m2_t 
__riscv_vlmul_trunc_v_bf16m8_bf16m2(vbfloat16m8_t value); +vbfloat16m4_t __riscv_vlmul_trunc_v_bf16m8_bf16m4(vbfloat16m8_t value); ---- [[vector-initialization]] diff --git a/auto-generated/bfloat16/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc index ddbf93b7f..9843290f7 100644 --- a/auto-generated/bfloat16/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc +++ b/auto-generated/bfloat16/intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc @@ -38,21 +38,21 @@ vuint16m8_t __riscv_vreinterpret_v_bf16m8_u16m8(vbfloat16m8_t src); [,c] ---- -vbfloat16mf2_t __riscv_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value); -vbfloat16m1_t __riscv_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value); -vbfloat16m2_t __riscv_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t value); -vbfloat16m4_t __riscv_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value); -vbfloat16m8_t __riscv_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value); -vbfloat16m1_t __riscv_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value); -vbfloat16m2_t __riscv_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value); -vbfloat16m4_t __riscv_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value); -vbfloat16m8_t __riscv_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value); -vbfloat16m2_t __riscv_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value); -vbfloat16m4_t __riscv_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value); -vbfloat16m8_t __riscv_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value); -vbfloat16m4_t __riscv_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value); -vbfloat16m8_t __riscv_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value); -vbfloat16m8_t __riscv_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value); +vbfloat16mf2_t __riscv_vlmul_ext_v_bf16mf4_bf16mf2(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_v_bf16mf4_bf16m1(vbfloat16mf4_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_bf16mf4_bf16m2(vbfloat16mf4_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_bf16mf4_bf16m4(vbfloat16mf4_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_bf16mf4_bf16m8(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_v_bf16mf2_bf16m1(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_bf16mf2_bf16m2(vbfloat16mf2_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_bf16mf2_bf16m4(vbfloat16mf2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_bf16mf2_bf16m8(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_v_bf16m1_bf16m2(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_bf16m1_bf16m4(vbfloat16m1_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_bf16m1_bf16m8(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_v_bf16m2_bf16m4(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_bf16m2_bf16m8(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_v_bf16m4_bf16m8(vbfloat16m4_t value); ---- [[vector-lmul-truncation]] @@ -60,21 +60,21 @@ vbfloat16m8_t __riscv_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value); [,c] ---- -vbfloat16mf4_t __riscv_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value); -vbfloat16m1_t __riscv_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t 
value); -vbfloat16m1_t __riscv_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value); -vbfloat16m2_t __riscv_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value); -vbfloat16m1_t __riscv_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value); -vbfloat16m2_t __riscv_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value); -vbfloat16m4_t __riscv_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_bf16mf2_bf16mf4(vbfloat16mf2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_bf16m1_bf16mf4(vbfloat16m1_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_bf16m1_bf16mf2(vbfloat16m1_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_bf16m2_bf16mf4(vbfloat16m2_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_bf16m2_bf16mf2(vbfloat16m2_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_bf16m2_bf16m1(vbfloat16m2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_bf16m4_bf16mf4(vbfloat16m4_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_bf16m4_bf16mf2(vbfloat16m4_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_bf16m4_bf16m1(vbfloat16m4_t value); +vbfloat16m2_t __riscv_vlmul_trunc_v_bf16m4_bf16m2(vbfloat16m4_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_v_bf16m8_bf16mf4(vbfloat16m8_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_v_bf16m8_bf16mf2(vbfloat16m8_t value); +vbfloat16m1_t __riscv_vlmul_trunc_v_bf16m8_bf16m1(vbfloat16m8_t value); +vbfloat16m2_t __riscv_vlmul_trunc_v_bf16m8_bf16m2(vbfloat16m8_t value); +vbfloat16m4_t __riscv_vlmul_trunc_v_bf16m8_bf16m4(vbfloat16m8_t value); ---- [[vector-initialization]] diff --git a/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c b/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c index 11c86330d..a9cc1c367 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c @@ -1,66 +1,69 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vbfloat16mf2_t test_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_v_b16mf4_b16mf2(value); +vbfloat16mf2_t test_vlmul_ext_v_bf16mf4_bf16mf2(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_bf16mf4_bf16mf2(value); } -vbfloat16m1_t test_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_v_b16mf4_b16m1(value); +vbfloat16m1_t test_vlmul_ext_v_bf16mf4_bf16m1(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_bf16mf4_bf16m1(value); } -vbfloat16m2_t test_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_v_b16mf4_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16mf4_bf16m2(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_bf16mf4_bf16m2(value); } -vbfloat16m4_t test_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_v_b16mf4_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16mf4_bf16m4(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_bf16mf4_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_v_b16mf4_b16m8(value); +vbfloat16m8_t 
test_vlmul_ext_v_bf16mf4_bf16m8(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_v_bf16mf4_bf16m8(value); } -vbfloat16m1_t test_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_v_b16mf2_b16m1(value); +vbfloat16m1_t test_vlmul_ext_v_bf16mf2_bf16m1(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_bf16mf2_bf16m1(value); } -vbfloat16m2_t test_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_v_b16mf2_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16mf2_bf16m2(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_bf16mf2_bf16m2(value); } -vbfloat16m4_t test_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_v_b16mf2_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16mf2_bf16m4(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_bf16mf2_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_v_b16mf2_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16mf2_bf16m8(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_v_bf16mf2_bf16m8(value); } -vbfloat16m2_t test_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value) { - return __riscv_vlmul_ext_v_b16m1_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16m1_bf16m2(vbfloat16m1_t value) { + return __riscv_vlmul_ext_v_bf16m1_bf16m2(value); } -vbfloat16m4_t test_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value) { - return __riscv_vlmul_ext_v_b16m1_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16m1_bf16m4(vbfloat16m1_t value) { + return __riscv_vlmul_ext_v_bf16m1_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value) { - return __riscv_vlmul_ext_v_b16m1_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m1_bf16m8(vbfloat16m1_t value) { + return __riscv_vlmul_ext_v_bf16m1_bf16m8(value); } -vbfloat16m4_t test_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value) { - return __riscv_vlmul_ext_v_b16m2_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16m2_bf16m4(vbfloat16m2_t value) { + return __riscv_vlmul_ext_v_bf16m2_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value) { - return __riscv_vlmul_ext_v_b16m2_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m2_bf16m8(vbfloat16m2_t value) { + return __riscv_vlmul_ext_v_bf16m2_bf16m8(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value) { - return __riscv_vlmul_ext_v_b16m4_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m4_bf16m8(vbfloat16m4_t value) { + return __riscv_vlmul_ext_v_bf16m4_bf16m8(value); } diff --git a/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c b/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c index dcb7ffdad..9bdca7bca 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c @@ -1,66 +1,69 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vbfloat16mf4_t test_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value) { - return __riscv_vlmul_trunc_v_b16mf2_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16mf2_bf16mf4(vbfloat16mf2_t value) { + return __riscv_vlmul_trunc_v_bf16mf2_bf16mf4(value); } 
-vbfloat16mf4_t test_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value) { - return __riscv_vlmul_trunc_v_b16m1_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m1_bf16mf4(vbfloat16m1_t value) { + return __riscv_vlmul_trunc_v_bf16m1_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value) { - return __riscv_vlmul_trunc_v_b16m1_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m1_bf16mf2(vbfloat16m1_t value) { + return __riscv_vlmul_trunc_v_bf16m1_bf16mf2(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value) { - return __riscv_vlmul_trunc_v_b16m2_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m2_bf16mf4(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_v_bf16m2_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value) { - return __riscv_vlmul_trunc_v_b16m2_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m2_bf16mf2(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_v_bf16m2_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value) { - return __riscv_vlmul_trunc_v_b16m2_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m2_bf16m1(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_v_bf16m2_bf16m1(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_v_b16m4_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m4_bf16mf4(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_v_bf16m4_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_v_b16m4_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m4_bf16mf2(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_v_bf16m4_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_v_b16m4_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m4_bf16m1(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_v_bf16m4_bf16m1(value); } -vbfloat16m2_t test_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_v_b16m4_b16m2(value); +vbfloat16m2_t test_vlmul_trunc_v_bf16m4_bf16m2(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_v_bf16m4_bf16m2(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_v_b16m8_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m8_bf16mf4(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_bf16m8_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_v_b16m8_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m8_bf16mf2(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_bf16m8_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_v_b16m8_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m8_bf16m1(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_bf16m8_bf16m1(value); } -vbfloat16m2_t test_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_v_b16m8_b16m2(value); +vbfloat16m2_t test_vlmul_trunc_v_bf16m8_bf16m2(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_bf16m8_bf16m2(value); } -vbfloat16m4_t test_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_v_b16m8_b16m4(value); +vbfloat16m4_t test_vlmul_trunc_v_bf16m8_bf16m4(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_v_bf16m8_bf16m4(value); } diff --git 
a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c index 311acc90b..d8b6216c7 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c @@ -1,66 +1,69 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vbfloat16mf2_t test_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_b16mf2(value); +vbfloat16mf2_t test_vlmul_ext_v_bf16mf4_bf16mf2(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_bf16mf2(value); } -vbfloat16m1_t test_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_b16m1(value); +vbfloat16m1_t test_vlmul_ext_v_bf16mf4_bf16m1(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_bf16m1(value); } -vbfloat16m2_t test_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16mf4_bf16m2(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_bf16m2(value); } -vbfloat16m4_t test_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16mf4_bf16m4(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16mf4_bf16m8(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_bf16m8(value); } -vbfloat16m1_t test_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_b16m1(value); +vbfloat16m1_t test_vlmul_ext_v_bf16mf2_bf16m1(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_bf16m1(value); } -vbfloat16m2_t test_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16mf2_bf16m2(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_bf16m2(value); } -vbfloat16m4_t test_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16mf2_bf16m4(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16mf2_bf16m8(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_bf16m8(value); } -vbfloat16m2_t test_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value) { - return __riscv_vlmul_ext_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16m1_bf16m2(vbfloat16m1_t value) { + return __riscv_vlmul_ext_bf16m2(value); } -vbfloat16m4_t test_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value) { - return __riscv_vlmul_ext_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16m1_bf16m4(vbfloat16m1_t value) { + return __riscv_vlmul_ext_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value) { - return __riscv_vlmul_ext_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m1_bf16m8(vbfloat16m1_t value) { + return __riscv_vlmul_ext_bf16m8(value); } -vbfloat16m4_t test_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value) { - return 
__riscv_vlmul_ext_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16m2_bf16m4(vbfloat16m2_t value) { + return __riscv_vlmul_ext_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value) { - return __riscv_vlmul_ext_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m2_bf16m8(vbfloat16m2_t value) { + return __riscv_vlmul_ext_bf16m8(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value) { - return __riscv_vlmul_ext_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m4_bf16m8(vbfloat16m4_t value) { + return __riscv_vlmul_ext_bf16m8(value); } diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c index 6965aa520..826c0938c 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c @@ -1,66 +1,69 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ +// RUN: -target-feature +experimental-zvfh \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vbfloat16mf4_t test_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value) { - return __riscv_vlmul_trunc_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16mf2_bf16mf4(vbfloat16mf2_t value) { + return __riscv_vlmul_trunc_bf16mf4(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value) { - return __riscv_vlmul_trunc_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m1_bf16mf4(vbfloat16m1_t value) { + return __riscv_vlmul_trunc_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value) { - return __riscv_vlmul_trunc_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m1_bf16mf2(vbfloat16m1_t value) { + return __riscv_vlmul_trunc_bf16mf2(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value) { - return __riscv_vlmul_trunc_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m2_bf16mf4(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value) { - return __riscv_vlmul_trunc_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m2_bf16mf2(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value) { - return __riscv_vlmul_trunc_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m2_bf16m1(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_bf16m1(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m4_bf16mf4(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m4_bf16mf2(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m4_bf16m1(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_bf16m1(value); } -vbfloat16m2_t test_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t 
value) { - return __riscv_vlmul_trunc_b16m2(value); +vbfloat16m2_t test_vlmul_trunc_v_bf16m4_bf16m2(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_bf16m2(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m8_bf16mf4(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m8_bf16mf2(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m8_bf16m1(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_bf16m1(value); } -vbfloat16m2_t test_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_b16m2(value); +vbfloat16m2_t test_vlmul_trunc_v_bf16m8_bf16m2(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_bf16m2(value); } -vbfloat16m4_t test_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_b16m4(value); +vbfloat16m4_t test_vlmul_trunc_v_bf16m8_bf16m4(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_bf16m4(value); } diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlmul_ext_v.c b/auto-generated/bfloat16/overloaded-api-testing/vlmul_ext_v.c index bd60827ff..b26e1401c 100644 --- a/auto-generated/bfloat16/overloaded-api-testing/vlmul_ext_v.c +++ b/auto-generated/bfloat16/overloaded-api-testing/vlmul_ext_v.c @@ -1,62 +1,62 @@ #include #include -vbfloat16mf2_t test_vlmul_ext_v_b16mf4_b16mf2(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_b16mf2(value); +vbfloat16mf2_t test_vlmul_ext_v_bf16mf4_bf16mf2(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_bf16mf2(value); } -vbfloat16m1_t test_vlmul_ext_v_b16mf4_b16m1(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_b16m1(value); +vbfloat16m1_t test_vlmul_ext_v_bf16mf4_bf16m1(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_bf16m1(value); } -vbfloat16m2_t test_vlmul_ext_v_b16mf4_b16m2(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16mf4_bf16m2(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_bf16m2(value); } -vbfloat16m4_t test_vlmul_ext_v_b16mf4_b16m4(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16mf4_bf16m4(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16mf4_b16m8(vbfloat16mf4_t value) { - return __riscv_vlmul_ext_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16mf4_bf16m8(vbfloat16mf4_t value) { + return __riscv_vlmul_ext_bf16m8(value); } -vbfloat16m1_t test_vlmul_ext_v_b16mf2_b16m1(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_b16m1(value); +vbfloat16m1_t test_vlmul_ext_v_bf16mf2_bf16m1(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_bf16m1(value); } -vbfloat16m2_t test_vlmul_ext_v_b16mf2_b16m2(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16mf2_bf16m2(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_bf16m2(value); } -vbfloat16m4_t test_vlmul_ext_v_b16mf2_b16m4(vbfloat16mf2_t value) { - return __riscv_vlmul_ext_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16mf2_bf16m4(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16mf2_b16m8(vbfloat16mf2_t value) { - return 
__riscv_vlmul_ext_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16mf2_bf16m8(vbfloat16mf2_t value) { + return __riscv_vlmul_ext_bf16m8(value); } -vbfloat16m2_t test_vlmul_ext_v_b16m1_b16m2(vbfloat16m1_t value) { - return __riscv_vlmul_ext_b16m2(value); +vbfloat16m2_t test_vlmul_ext_v_bf16m1_bf16m2(vbfloat16m1_t value) { + return __riscv_vlmul_ext_bf16m2(value); } -vbfloat16m4_t test_vlmul_ext_v_b16m1_b16m4(vbfloat16m1_t value) { - return __riscv_vlmul_ext_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16m1_bf16m4(vbfloat16m1_t value) { + return __riscv_vlmul_ext_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m1_b16m8(vbfloat16m1_t value) { - return __riscv_vlmul_ext_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m1_bf16m8(vbfloat16m1_t value) { + return __riscv_vlmul_ext_bf16m8(value); } -vbfloat16m4_t test_vlmul_ext_v_b16m2_b16m4(vbfloat16m2_t value) { - return __riscv_vlmul_ext_b16m4(value); +vbfloat16m4_t test_vlmul_ext_v_bf16m2_bf16m4(vbfloat16m2_t value) { + return __riscv_vlmul_ext_bf16m4(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m2_b16m8(vbfloat16m2_t value) { - return __riscv_vlmul_ext_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m2_bf16m8(vbfloat16m2_t value) { + return __riscv_vlmul_ext_bf16m8(value); } -vbfloat16m8_t test_vlmul_ext_v_b16m4_b16m8(vbfloat16m4_t value) { - return __riscv_vlmul_ext_b16m8(value); +vbfloat16m8_t test_vlmul_ext_v_bf16m4_bf16m8(vbfloat16m4_t value) { + return __riscv_vlmul_ext_bf16m8(value); } diff --git a/auto-generated/bfloat16/overloaded-api-testing/vlmul_trunc_v.c b/auto-generated/bfloat16/overloaded-api-testing/vlmul_trunc_v.c index 08791bc2a..96b46c1e8 100644 --- a/auto-generated/bfloat16/overloaded-api-testing/vlmul_trunc_v.c +++ b/auto-generated/bfloat16/overloaded-api-testing/vlmul_trunc_v.c @@ -1,62 +1,62 @@ #include #include -vbfloat16mf4_t test_vlmul_trunc_v_b16mf2_b16mf4(vbfloat16mf2_t value) { - return __riscv_vlmul_trunc_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16mf2_bf16mf4(vbfloat16mf2_t value) { + return __riscv_vlmul_trunc_bf16mf4(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m1_b16mf4(vbfloat16m1_t value) { - return __riscv_vlmul_trunc_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m1_bf16mf4(vbfloat16m1_t value) { + return __riscv_vlmul_trunc_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m1_b16mf2(vbfloat16m1_t value) { - return __riscv_vlmul_trunc_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m1_bf16mf2(vbfloat16m1_t value) { + return __riscv_vlmul_trunc_bf16mf2(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m2_b16mf4(vbfloat16m2_t value) { - return __riscv_vlmul_trunc_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m2_bf16mf4(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m2_b16mf2(vbfloat16m2_t value) { - return __riscv_vlmul_trunc_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m2_bf16mf2(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m2_b16m1(vbfloat16m2_t value) { - return __riscv_vlmul_trunc_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m2_bf16m1(vbfloat16m2_t value) { + return __riscv_vlmul_trunc_bf16m1(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m4_b16mf4(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m4_bf16mf4(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m4_b16mf2(vbfloat16m4_t value) { - return 
__riscv_vlmul_trunc_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m4_bf16mf2(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m4_b16m1(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m4_bf16m1(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_bf16m1(value); } -vbfloat16m2_t test_vlmul_trunc_v_b16m4_b16m2(vbfloat16m4_t value) { - return __riscv_vlmul_trunc_b16m2(value); +vbfloat16m2_t test_vlmul_trunc_v_bf16m4_bf16m2(vbfloat16m4_t value) { + return __riscv_vlmul_trunc_bf16m2(value); } -vbfloat16mf4_t test_vlmul_trunc_v_b16m8_b16mf4(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_b16mf4(value); +vbfloat16mf4_t test_vlmul_trunc_v_bf16m8_bf16mf4(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_bf16mf4(value); } -vbfloat16mf2_t test_vlmul_trunc_v_b16m8_b16mf2(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_b16mf2(value); +vbfloat16mf2_t test_vlmul_trunc_v_bf16m8_bf16mf2(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_bf16mf2(value); } -vbfloat16m1_t test_vlmul_trunc_v_b16m8_b16m1(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_b16m1(value); +vbfloat16m1_t test_vlmul_trunc_v_bf16m8_bf16m1(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_bf16m1(value); } -vbfloat16m2_t test_vlmul_trunc_v_b16m8_b16m2(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_b16m2(value); +vbfloat16m2_t test_vlmul_trunc_v_bf16m8_bf16m2(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_bf16m2(value); } -vbfloat16m4_t test_vlmul_trunc_v_b16m8_b16m4(vbfloat16m8_t value) { - return __riscv_vlmul_trunc_b16m4(value); +vbfloat16m4_t test_vlmul_trunc_v_bf16m8_bf16m4(vbfloat16m8_t value) { + return __riscv_vlmul_trunc_bf16m4(value); } diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc index 78326373a..b5200a485 100644 --- a/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc @@ -1162,21 +1162,21 @@ vuint16m8_t __riscv_vreinterpret_u16m8(vbfloat16m8_t src); [,c] ---- -vbfloat16mf2_t __riscv_vlmul_ext_b16mf2(vbfloat16mf4_t value); -vbfloat16m1_t __riscv_vlmul_ext_b16m1(vbfloat16mf4_t value); -vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16mf4_t value); -vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16mf4_t value); -vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16mf4_t value); -vbfloat16m1_t __riscv_vlmul_ext_b16m1(vbfloat16mf2_t value); -vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16mf2_t value); -vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16mf2_t value); -vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16mf2_t value); -vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16m1_t value); -vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16m1_t value); -vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m1_t value); -vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16m2_t value); -vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m2_t value); -vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m4_t value); +vbfloat16mf2_t __riscv_vlmul_ext_bf16mf2(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_bf16m1(vbfloat16mf4_t value); +vbfloat16m2_t __riscv_vlmul_ext_bf16m2(vbfloat16mf4_t value); +vbfloat16m4_t __riscv_vlmul_ext_bf16m4(vbfloat16mf4_t value); +vbfloat16m8_t __riscv_vlmul_ext_bf16m8(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_bf16m1(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_bf16m2(vbfloat16mf2_t value); +vbfloat16m4_t 
__riscv_vlmul_ext_bf16m4(vbfloat16mf2_t value); +vbfloat16m8_t __riscv_vlmul_ext_bf16m8(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_bf16m2(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_bf16m4(vbfloat16m1_t value); +vbfloat16m8_t __riscv_vlmul_ext_bf16m8(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_bf16m4(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_bf16m8(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_bf16m8(vbfloat16m4_t value); ---- [[overloaded-vector-lmul-truncation]] @@ -1184,21 +1184,21 @@ vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m4_t value); [,c] ---- -vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16mf2_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m1_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m1_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m2_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m2_t value); -vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m2_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m4_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m4_t value); -vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m4_t value); -vbfloat16m2_t __riscv_vlmul_trunc_b16m2(vbfloat16m4_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m8_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m8_t value); -vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m8_t value); -vbfloat16m2_t __riscv_vlmul_trunc_b16m2(vbfloat16m8_t value); -vbfloat16m4_t __riscv_vlmul_trunc_b16m4(vbfloat16m8_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_bf16mf4(vbfloat16mf2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_bf16mf4(vbfloat16m1_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_bf16mf2(vbfloat16m1_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_bf16mf4(vbfloat16m2_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_bf16mf2(vbfloat16m2_t value); +vbfloat16m1_t __riscv_vlmul_trunc_bf16m1(vbfloat16m2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_bf16mf4(vbfloat16m4_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_bf16mf2(vbfloat16m4_t value); +vbfloat16m1_t __riscv_vlmul_trunc_bf16m1(vbfloat16m4_t value); +vbfloat16m2_t __riscv_vlmul_trunc_bf16m2(vbfloat16m4_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_bf16mf4(vbfloat16m8_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_bf16mf2(vbfloat16m8_t value); +vbfloat16m1_t __riscv_vlmul_trunc_bf16m1(vbfloat16m8_t value); +vbfloat16m2_t __riscv_vlmul_trunc_bf16m2(vbfloat16m8_t value); +vbfloat16m4_t __riscv_vlmul_trunc_bf16m4(vbfloat16m8_t value); ---- [[overloaded-vector-initialization]] diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc index f06c83b9e..70ab53219 100644 --- a/auto-generated/bfloat16/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs/04_bfloat16_miscellaneous_vector_utility_intrinsics.adoc @@ -38,21 +38,21 @@ vuint16m8_t __riscv_vreinterpret_u16m8(vbfloat16m8_t src); [,c] ---- -vbfloat16mf2_t __riscv_vlmul_ext_b16mf2(vbfloat16mf4_t value); -vbfloat16m1_t __riscv_vlmul_ext_b16m1(vbfloat16mf4_t value); -vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16mf4_t value); -vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16mf4_t value); -vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16mf4_t value); -vbfloat16m1_t __riscv_vlmul_ext_b16m1(vbfloat16mf2_t 
value); -vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16mf2_t value); -vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16mf2_t value); -vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16mf2_t value); -vbfloat16m2_t __riscv_vlmul_ext_b16m2(vbfloat16m1_t value); -vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16m1_t value); -vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m1_t value); -vbfloat16m4_t __riscv_vlmul_ext_b16m4(vbfloat16m2_t value); -vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m2_t value); -vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m4_t value); +vbfloat16mf2_t __riscv_vlmul_ext_bf16mf2(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_bf16m1(vbfloat16mf4_t value); +vbfloat16m2_t __riscv_vlmul_ext_bf16m2(vbfloat16mf4_t value); +vbfloat16m4_t __riscv_vlmul_ext_bf16m4(vbfloat16mf4_t value); +vbfloat16m8_t __riscv_vlmul_ext_bf16m8(vbfloat16mf4_t value); +vbfloat16m1_t __riscv_vlmul_ext_bf16m1(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_bf16m2(vbfloat16mf2_t value); +vbfloat16m4_t __riscv_vlmul_ext_bf16m4(vbfloat16mf2_t value); +vbfloat16m8_t __riscv_vlmul_ext_bf16m8(vbfloat16mf2_t value); +vbfloat16m2_t __riscv_vlmul_ext_bf16m2(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_bf16m4(vbfloat16m1_t value); +vbfloat16m8_t __riscv_vlmul_ext_bf16m8(vbfloat16m1_t value); +vbfloat16m4_t __riscv_vlmul_ext_bf16m4(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_bf16m8(vbfloat16m2_t value); +vbfloat16m8_t __riscv_vlmul_ext_bf16m8(vbfloat16m4_t value); ---- [[overloaded-vector-lmul-truncation]] @@ -60,21 +60,21 @@ vbfloat16m8_t __riscv_vlmul_ext_b16m8(vbfloat16m4_t value); [,c] ---- -vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16mf2_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m1_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m1_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m2_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m2_t value); -vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m2_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m4_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m4_t value); -vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m4_t value); -vbfloat16m2_t __riscv_vlmul_trunc_b16m2(vbfloat16m4_t value); -vbfloat16mf4_t __riscv_vlmul_trunc_b16mf4(vbfloat16m8_t value); -vbfloat16mf2_t __riscv_vlmul_trunc_b16mf2(vbfloat16m8_t value); -vbfloat16m1_t __riscv_vlmul_trunc_b16m1(vbfloat16m8_t value); -vbfloat16m2_t __riscv_vlmul_trunc_b16m2(vbfloat16m8_t value); -vbfloat16m4_t __riscv_vlmul_trunc_b16m4(vbfloat16m8_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_bf16mf4(vbfloat16mf2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_bf16mf4(vbfloat16m1_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_bf16mf2(vbfloat16m1_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_bf16mf4(vbfloat16m2_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_bf16mf2(vbfloat16m2_t value); +vbfloat16m1_t __riscv_vlmul_trunc_bf16m1(vbfloat16m2_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_bf16mf4(vbfloat16m4_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_bf16mf2(vbfloat16m4_t value); +vbfloat16m1_t __riscv_vlmul_trunc_bf16m1(vbfloat16m4_t value); +vbfloat16m2_t __riscv_vlmul_trunc_bf16m2(vbfloat16m4_t value); +vbfloat16mf4_t __riscv_vlmul_trunc_bf16mf4(vbfloat16m8_t value); +vbfloat16mf2_t __riscv_vlmul_trunc_bf16mf2(vbfloat16m8_t value); +vbfloat16m1_t __riscv_vlmul_trunc_bf16m1(vbfloat16m8_t value); +vbfloat16m2_t __riscv_vlmul_trunc_bf16m2(vbfloat16m8_t value); +vbfloat16m4_t 
__riscv_vlmul_trunc_bf16m4(vbfloat16m8_t value);
----

[[overloaded-vector-initialization]]
diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py
index 9d38a0a9b..95b9a29ec 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py
@@ -106,7 +106,10 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list):
       continue
     type_helper = TypeHelper(**args)
     inst_info = InstInfo.get(args, decorator, inst_type)
-    args["TYPE1"] = args["TYPE"][0]
+    if args["TYPE"] == "bfloat":
+      args["TYPE1"] = args["TYPE"][0:2]
+    else:
+      args["TYPE1"] = args["TYPE"][0]
     func_name = "{OP}_{TYPE1}{SEW}m{LMUL}_{TYPE1}{SEW}m{DST_LMUL}".format_map(
         args)

From 910b3c862a5bbbe79c7f46fca5600fae7deaacec Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Wed, 5 Jun 2024 11:11:56 +0800
Subject: [PATCH 019/151] Make Generator class an ABC

- The following member functions will return NotImplemented if not
  implemented in derived classes, to let users be aware that the function
  call is not functional:
  - write()
  - write_title()
  - gen_prologue()
  - inst_group_prologue()
  - inst_group_epilogue()
  - post_gen()
- The func function is set as an abstract method; all the derived classes
  should have their own implementation
- The original func implementation is copied to DocGenerator and
  APITestGenerator, replacing the calls to the base func() implementation

Signed-off-by: Jerry Zhang Jian
---
 .../rvv_intrinsic_gen/generator.py | 45 ++++++++++---------
 1 file changed, 24 insertions(+), 21 deletions(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index 517904f4c..8efd5b1ed 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -17,6 +17,7 @@
 Generator classes that controls structures of the output.
 """

+from abc import ABC, abstractmethod
 import os
 import collections
 import re
@@ -25,7 +26,7 @@

 from enums import ToolChainType


-class Generator():
+class Generator(ABC):
   """ Base class for all generators.
   """

@@ -36,29 +37,23 @@ def __init__(self):
     pass

   def write(self, text):
-    pass
+    return NotImplemented

   def write_title(self, text, link):
-    pass
+    return NotImplemented

   def gen_prologue(self):
-    pass
+    return NotImplemented

   def inst_group_prologue(self):
-    return ""
+    return NotImplemented

   def inst_group_epilogue(self):
-    return ""
+    return NotImplemented

+  @abstractmethod
   def func(self, inst_info, name, return_type, **kwargs):
-    # pylint: disable=unused-argument
-    # FIXME: inst_info is currently only used by RIFGenerator.
- self.generated_functions_set.add(name) - args = ", ".join(map(lambda a: f"{a[1]} {a[0]}", kwargs.items())) - # "T * name" to "T *name" - args = args.replace("* ", "*") - s = f"{return_type} {name} ({args});\n" - return s + return NotImplemented def function_group(self, template, title, link, op_list, type_list, sew_list, lmul_list, decorator_list): @@ -74,7 +69,7 @@ def function_group(self, template, title, link, op_list, type_list, sew_list, decorator_list=decorator_list) def start_group(self, group_name): - pass + return NotImplemented @staticmethod def func_name(name): @@ -296,7 +291,7 @@ def report_summary(self): \x1b[0mfunctions") def post_gen(self): - pass + return NotImplemented class DocGenerator(Generator): @@ -358,7 +353,13 @@ def function_group(self, template, title, link, op_list, type_list, sew_list, def func(self, inst_info, name, return_type, **kwargs): name = Generator.func_name(name) - s = super().func(inst_info, name, return_type, **kwargs) + # pylint: disable=unused-argument + # FIXME: inst_info is currently only used by RIFGenerator. + self.generated_functions_set.add(name) + args = ", ".join(map(lambda a: f"{a[1]} {a[0]}", kwargs.items())) + # "T * name" to "T *name" + args = args.replace("* ", "*") + s = f"{return_type} {name} ({args});\n" self.write(s) def start_group(self, group_name): @@ -517,10 +518,12 @@ def func(self, inst_info, name, return_type, **kwargs): os.path.join(self.folder, test_file_name), mode, encoding="utf-8") stripped_prefix_non_overloaded_func_name = non_overloaded_func_name[8:] - func_decl = super().func(inst_info, - "test_" + stripped_prefix_non_overloaded_func_name, - return_type, **kwargs) - func_decl = func_decl.replace(" (", "(") + non_overloaded_func_name = "test_" + stripped_prefix_non_overloaded_func_name + self.generated_functions_set.add(non_overloaded_func_name) + args = ", ".join(map(lambda a: f"{a[1]} {a[0]}", kwargs.items())) + # "T * name" to "T *name" + args = args.replace("* ", "*") + func_decl = f"{return_type} {non_overloaded_func_name}({args});\n" # Strip redundant parameters in function declaration because the intrinsic # requires an immediate to be provided to the parameter. 
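NOTE: As a reading aid only (not part of the patch series), the sketch below illustrates the Python ABC pattern that patch 019 applies to the Generator class; the `Base` and `Derived` names are hypothetical and do not come from `rvv_intrinsic_gen`. A class deriving from `abc.ABC` with an `@abstractmethod` cannot be instantiated directly, while the plain hooks that merely `return NotImplemented` stay callable but signal that the call is not functional until a derived class overrides them.

[,python]
----
# Hypothetical illustration of the ABC pattern described above; Base and
# Derived are not classes from the repository.
from abc import ABC, abstractmethod


class Base(ABC):

  def write(self, text):
    # Optional hook: callable on any subclass, but the NotImplemented
    # return value signals that the call is not functional here.
    return NotImplemented

  @abstractmethod
  def func(self, name, return_type):
    # Abstract method: every concrete subclass must override this.
    return NotImplemented


class Derived(Base):

  def func(self, name, return_type):
    return f"{return_type} {name} ();"


# Base() would raise TypeError: can't instantiate abstract class Base.
d = Derived()
assert d.func("foo", "void") == "void foo ();"
assert d.write("text") is NotImplemented
----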
From 61e2a5c58c7d3cee01af35aa100e5d83d9c59df7 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Thu, 6 Jun 2024 16:17:48 +0800 Subject: [PATCH 020/151] [NFC] fix C0301: line too long Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 8efd5b1ed..6d7b68695 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -518,7 +518,8 @@ def func(self, inst_info, name, return_type, **kwargs): os.path.join(self.folder, test_file_name), mode, encoding="utf-8") stripped_prefix_non_overloaded_func_name = non_overloaded_func_name[8:] - non_overloaded_func_name = "test_" + stripped_prefix_non_overloaded_func_name + non_overloaded_func_name = "test_" + \ + stripped_prefix_non_overloaded_func_name self.generated_functions_set.add(non_overloaded_func_name) args = ", ".join(map(lambda a: f"{a[1]} {a[0]}", kwargs.items())) # "T * name" to "T *name" From c8875e83a7dcd368863533c31385ba900c78bed3 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Thu, 6 Jun 2024 16:18:41 +0800 Subject: [PATCH 021/151] [NFC] fix C0325: Unnecessary parens after '=' Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/generator.py | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 6d7b68695..0b8a5df35 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -446,27 +446,27 @@ def __init__(self, f, is_overloaded, toolchain_type, has_tail_policy): def write_file_header(self, has_float_type, has_bfloat16_type): #pylint: disable=line-too-long - int_llvm_header = (r"""// REQUIRES: riscv-registered-target + int_llvm_header = r"""// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s -""") - float_llvm_header = (r"""// REQUIRES: riscv-registered-target +""" + float_llvm_header = r"""// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ // RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s -""") - bfloat16_llvm_header = (r"""// REQUIRES: riscv-registered-target +""" + bfloat16_llvm_header = r"""// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s -""") +""" gnu_header = ( r"""/* { dg-do compile } */ /* { dg-options """ + '"' + "-march=rv64gcv_zvfh -mabi=lp64d" + From 4dba2255a4fdf28ea34ae34f29e5dc4e5b7eca57 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Thu, 6 Jun 2024 16:19:30 +0800 Subject: [PATCH 022/151] [NFC] fix W0613: Unused argument Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/generator.py | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git 
a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 0b8a5df35..46d0e23a2 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -37,19 +37,19 @@ def __init__(self): pass def write(self, text): - return NotImplemented + raise NotImplementedError def write_title(self, text, link): - return NotImplemented + raise NotImplementedError def gen_prologue(self): - return NotImplemented + raise NotImplementedError def inst_group_prologue(self): - return NotImplemented + raise NotImplementedError def inst_group_epilogue(self): - return NotImplemented + raise NotImplementedError @abstractmethod def func(self, inst_info, name, return_type, **kwargs): @@ -69,7 +69,7 @@ def function_group(self, template, title, link, op_list, type_list, sew_list, decorator_list=decorator_list) def start_group(self, group_name): - return NotImplemented + raise NotImplementedError @staticmethod def func_name(name): @@ -291,7 +291,7 @@ def report_summary(self): \x1b[0mfunctions") def post_gen(self): - return NotImplemented + raise NotImplementedError class DocGenerator(Generator): From 12183b88580517bd6867d4f239a44ea93969afd1 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Thu, 6 Jun 2024 16:26:26 +0800 Subject: [PATCH 023/151] [NFC] fix E0606: Possibly using variable 's_op2' before assignment Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/templates/binary_op_template.py | 1 + 1 file changed, 1 insertion(+) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py index 2232609b6..fa2223ada 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py @@ -61,6 +61,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): type_helper = TypeHelper(**args) + s_op2 = None if (op in ["mulhsu", "ssra", "sra"] and data_type == "uint") or \ (op in ["ssrl", "srl"] and data_type == "int"): # Unsigned mulhsu and ssra are unsupported, signed ssrl is unsupported From 88530011f83f5417058af4f9e6fd426ebdf78aca Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Thu, 6 Jun 2024 16:26:51 +0800 Subject: [PATCH 024/151] [NFC] fix E0606: Possibly using variable 'inst_type' before assignment Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/templates/misc_op_template.py | 1 + 1 file changed, 1 insertion(+) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py index 95b9a29ec..43b757b79 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py @@ -40,6 +40,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): break decorator.write_text_header(G) + inst_type = None for args in prod(OP=op_list, TYPE=type_list, SEW=sew_list, LMUL=lmul_list): type_helper = TypeHelper(**args) if args["OP"] not in ["vundefined"]: From 502a5a6147fce991c0fb99c42148c976d7da8f1c Mon Sep 17 00:00:00 2001 From: Brandon Wu Date: Fri, 7 Jun 2024 07:01:07 -0700 Subject: [PATCH 025/151] Fix typo of vreinterpret in bfloat --- rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git 
a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py index a0f4925fc..7f4a79b3d 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py @@ -130,7 +130,7 @@ def gen(g): g.start_group("BFloat16 Miscellaneous Vector Utility Intrinsics") g.function_group(reint_op_template, "Reinterpret Cast Conversion Intrinsics", - "reinterpret-cast-conversion", ["reinterpret"], "bfloat16", + "reinterpret-cast-conversion", ["vreinterpret"], "bfloat16", SEWS, LMULS, decorators.has_no_masking) g.function_group(misc_op_template, "Vector LMUL Extension Intrinsics", From 216c66a354a89ca715d38ee68f0f7bf0c4dd1826 Mon Sep 17 00:00:00 2001 From: Brandon Wu Date: Fri, 7 Jun 2024 07:01:41 -0700 Subject: [PATCH 026/151] [Auto-gen] Update bfloat16 tests under ../auto-generated. (make git-commit-autogen-bf16-test) --- auto-generated/bfloat16/llvm-api-tests/vcreate.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vget.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vle16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vle16ff.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vloxei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlse16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vluxei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c | 3 +-- 
auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vreinterpret.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vse16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vset.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsoxei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsse16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsuxei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c | 3 +-- auto-generated/bfloat16/llvm-api-tests/vundefined.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vget.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vle16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c | 3 
+-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vse16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vset.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c | 
3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c | 3 +-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c | 3 +-- auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c | 3 +-- auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c | 3 +-- auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c | 3 +-- auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c | 3 +-- auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c 
| 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c | 3 +-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c | 3 +-- 248 files changed, 248 insertions(+), 496 deletions(-) diff --git a/auto-generated/bfloat16/llvm-api-tests/vcreate.c b/auto-generated/bfloat16/llvm-api-tests/vcreate.c index 5b5dab6a8..5bee817b5 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vcreate.c +++ b/auto-generated/bfloat16/llvm-api-tests/vcreate.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git 
a/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c index 4a81a4e71..1b63183e2 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c index 0572244e5..14fed192d 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c index c1ba47c29..e012e6146 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vget.c b/auto-generated/bfloat16/llvm-api-tests/vget.c index 61473a4ea..8c7096ec2 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vget.c +++ b/auto-generated/bfloat16/llvm-api-tests/vget.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vle16.c b/auto-generated/bfloat16/llvm-api-tests/vle16.c index db5ed90dc..875873a93 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vle16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vle16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vle16ff.c b/auto-generated/bfloat16/llvm-api-tests/vle16ff.c index c1bd752af..7b9ca9702 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vle16ff.c +++ 
b/auto-generated/bfloat16/llvm-api-tests/vle16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c b/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c index a9cc1c367..016db30c0 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c b/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c index 9bdca7bca..d0a0519a0 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxei16.c index a8e7d4dc5..d2e354548 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c index 31478d5a5..82390eb8e 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c index a0e1c2eb3..24955edb0 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 
-target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c index 0c3b9c66f..b12fb2fbb 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c index 99b75edcd..285f7f3be 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c index d700d64e2..f21a83835 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c index 218746d8d..f255edc8b 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c index 1e4fa305e..f7ee5d636 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 
-target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlse16.c b/auto-generated/bfloat16/llvm-api-tests/vlse16.c index f9cb9fb39..8fc01c254 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlse16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlse16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c index 98770a402..23b147817 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c index 72f3d77cd..54a4edf98 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c index e7f6fd2d1..5a736ae11 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c index a71b248e6..9f99544e7 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S 
-passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c index 597738d92..9286edcae 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c index 1531d9221..ecbea325e 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c index 1c894ce64..5640889e3 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c index 3672f2061..5991ba812 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c index 554ec2d93..70ff93569 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c index 419c4ab8e..9703905a7 
100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c index 42ffe0707..414cefbb7 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c index 8926f6553..972a04ba6 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c index fa7278cb6..d47c9997f 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c index 61accef57..a21065727 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c index cd13c384c..c751979eb 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c @@ -1,6 +1,5 @@ // REQUIRES: 
riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c index 68f6d7be4..3079f01a4 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c index ae5296c76..521a65015 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c index 10f84f99a..d54014f0e 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c index c39c63830..5a392a834 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c index 943c4a19e..bbde68805 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c +++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 
-triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c
index cda03cedf..2b071f52e 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxei16.c
index 0bacd5b35..100df4d39 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c
index c54b55e80..db172b75e 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c
index 146f46088..fc0a1357c 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c
index 3bb62589b..a2c52c77f 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c
index 42b121802..1ea269f4c 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c
index 3059d8cad..6036ec2fe 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c
index 9ec7135aa..f12742e52 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c
index 5fc52ed9a..9f83601f1 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c b/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c
index b9143958e..df0b40f8b 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vse16.c b/auto-generated/bfloat16/llvm-api-tests/vse16.c
index 1e38c43cc..322d8ad07 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vse16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vse16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vset.c b/auto-generated/bfloat16/llvm-api-tests/vset.c
index b38841089..b8ca7d76d 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vset.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vset.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c
index 83f1fd347..451c88117 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c
index c15bada5f..1c25a7306 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c
index 65fcbd53e..540e4e68c 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c
index a19267e36..b57b09633 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c
index 4ed520162..8f25b6940 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c
index c7f6d2afb..5e01fdfea 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c
index 1546a88f2..ab7ffcf52 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c
index c507e28db..0cc2ca54c 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsse16.c b/auto-generated/bfloat16/llvm-api-tests/vsse16.c
index a066460f5..3d9833b87 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsse16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsse16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c
index 2888efb67..3aadce048 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c
index dffee040e..33bc2410c 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c
index 87e8309e8..cf651d433 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c
index 2bcd2d84c..92f7b4b43 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c
index 4520fc7eb..efdb7d290 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c
index 293726b5e..633461d2a 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c
index 7245244d8..0c4a8110c 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c
index 763b20621..c4949ac8d 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c
index fb8ff1d5b..ca7542e17 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c
index f5d97c2cc..6e5fda871 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c
index 9c4cef9d2..1baa6d6ff 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c
index d0c431508..d358d0067 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c
index 7a1f763b2..b63482059 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c
index ee2b52988..4fc42633e 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c
index 67212ebfb..b756de688 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c
index 43721fe78..6bff93d68 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c
index 840e504bf..266ba9ab2 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c
index ad768d4ab..d284218bc 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c
index 3cdf1d112..567c43a02 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c
index 32d69a6a3..dff8843a5 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c
index a041297a8..e4e9e86f7 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c
index 61f23e3aa..f23db8ab0 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-api-tests/vundefined.c b/auto-generated/bfloat16/llvm-api-tests/vundefined.c
index 0683dd94e..a5bf8ce56 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vundefined.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vundefined.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
index 4abf6b8b5..0dce388cd 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
index b9fd6c616..fe7ecb30d 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
index beac2b32a..1177ef063 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vget.c b/auto-generated/bfloat16/llvm-overloaded-tests/vget.c
index b29a6dcea..338c8c6d5 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vget.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vget.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
index 62d9a4461..8571e4566 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
index 16c0e3a6a..d7267eab2 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c
index d8b6216c7..2d8cb4387 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c
index 826c0938c..4efaaf438 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c
index 7a110728b..a42f9894d 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c
index c90723a64..bdb4e561c 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c
index 932af6df8..cb767ad64 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c
index 0248eceb9..bcd5508d9 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c
index 0b6a5545a..4612825fb 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c
index 15bab6e22..ce78b6255 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c
index 35f3110eb..e491872f7 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c
index da2ae96f3..ef8b13a94 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c
index 919bcd2f8..80023ae79 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c
index 03050648a..612b250cc 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c
index c793a9c54..0fd4bae02 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c
index 49020d2a7..d215785e7 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c
index d70aad5d6..5b750a20a 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c
index 63c9ea79a..2863e915b 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c
index 0d64c4f33..f4d7235d2 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c
index 75127d1eb..5eea8bba4 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c
index c5a2a0154..b73e3c519 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c
index e5fcc000a..76cda4f5f 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c
index 685309270..f063eb6cc 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c
index c00fb5fee..a09537ff6 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c
index 2cf1a2a78..7455290f8 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c
index eaf8cce70..3b41fd8fb 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c
index 16fc33400..6d31e52db 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c
index 42950ac4f..4eb1854a1 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c
index 9f016d5ed..e16c9112e 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c
index f712d7d7b..43958d69e 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c
index 0add09a89..39163ff7d 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c
index 1b0d9eadd..c99545243 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c
index 02ab02068..78db84c06 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c
index 1b8457c0d..31c5ad044 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c
index 96a8514fa..fe4609d5d 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c
index e0a2958f6..fceca59f1 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c
index 9a4f56698..6f3335875 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c
index b2dbece41..8193c2ec2 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c
index f2bff6a59..cf208dcb0 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c
index bfb5959fe..72fcdf884 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c
index 2ba7386ca..fdb447eb7 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c
index 4b0e1bb01..f73adf8a3 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c b/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c
index 83d0af7de..d72de04e2 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c
index a87045603..e96601610 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vset.c b/auto-generated/bfloat16/llvm-overloaded-tests/vset.c
index 849480e67..64c570d71 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vset.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vset.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c
index 007b74642..53c2a50a2 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c
index 9a27bb5d2..1a50e3145 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c
index 19cd07be9..d9f420ea8 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c
index e21805337..5f6138034 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c
index fa96e304d..1d7a8e6e5 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c
index 3572e5116..bfec2ad54 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c
index 7b99b6239..9748e3033 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c
index 989430e15..b542d6c5b 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c
index 98f228e23..e2f381ece 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c
@@ -1,6 +1,5 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh \
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
 // RUN: -target-feature +experimental-zvfbfmin \
 // RUN:
-target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c index 581644b69..0134205f2 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c index d68d5d475..384f7bbe9 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c index edc7e396e..9c228977a 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c index 7f59d47b4..d902e0fd8 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c index 5388501b2..c2b484a82 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma 
-disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c index 1d67708e0..58684aa5a 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c index 2ca7b5488..73df0a904 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c index a923b682f..ab743e95c 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c index 4ceac5fc2..a590c8c22 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c index 46f7ad43f..08851d114 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: 
-emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c index 8d2c14637..2b971cad2 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c index d73f4b713..6094a05fd 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c index d70ca8db5..547a4b544 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c index 98c949644..29bdfb2da 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c index 99298c79d..cb48003e1 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S 
-passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c index 6ef154030..f53fdd0a9 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c index a1c1c3435..45ba0a1e5 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c index c5f3c5cb4..90b589259 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c index 3c7d47dd3..34d92b164 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c index 942e4ff4f..e2110fcf8 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o 
- | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c index 68ec0a391..0c5fd93fb 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c index 838e6b721..f8e7c6613 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c index 83ce220b6..333494a48 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c index 8d122d46d..0e78ae270 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c index d75f88788..22817f534 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature 
+experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c index b3fb167d5..274053f2f 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c index 56c726802..ab26e32cb 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c index cfd980a4c..e25fba5e2 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c index 5c08a6033..0d70f67fa 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c index 7d0430ef0..d91d6f43c 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // 
RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c index a1efb7a50..6cf74ee4a 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c index 4ab0d7765..01eadde37 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c index f70929941..320ae3aef 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c index 0acfa2ed3..385d8e2ce 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c index 9be5f86ee..6e6d31469 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature 
+v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c index b8a27eb29..07c9b1b7f 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c index 9b875c7a7..283ae4fa4 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c index a3fcb918e..5a1b7eb43 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c index 524c2e04d..9d1466934 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c index 4d7259c39..e8700e9c3 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: 
riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c index 911393203..b2e34cce5 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c index 9be138700..0dedf996f 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c index 1733dfada..1db782076 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c index 20ac32642..23adbd9ad 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c index 81aee1fb4..9e3ea7100 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c +++ 
b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c index 2f518e669..61863d533 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c index de7cc556d..ad74111ed 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c index 01d911246..dad750088 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c index 65cc25d2f..b79d35e72 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c index 
5131aa333..3843bae0e 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c index 945a45ea8..5ab99d3a8 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c index bc7156ecf..dc2616626 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c index 58d9f21a0..0cfb4ccc4 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c index 89f3d4ee5..0f8127fc6 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git 
a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c index 923639412..90cbf219e 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c index cfdf51e07..de961527d 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c index 37c8ad0f3..85aa5df54 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c index ee779135d..53f04185c 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c index ae282f833..93cfa2358 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature 
+experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c index 92d8c8e6e..2214b6480 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c index f5cb0229f..0cd291e45 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c index 8b30d05fb..bf222f0ca 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c index c882cb570..1cf082fad 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c index dd2a0c5bd..9c4c4aec8 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ 
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c index d7c2f914b..ecee70c35 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c index 57b08ec56..f7c8df74a 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c index f3cf547d3..6af9f4bd2 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c index 7e97888b8..1a2b5d98c 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c index b4d522202..1678cf93b 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c +++ 
b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c index 1606b1fbc..54cf2a926 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c index 030ec50fe..9acdddb89 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c index bfd06a7c9..48f32828e 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c index bc2236ffe..1bb001134 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git 
a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c index bf63f5b4d..9bf27d95c 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c index e96465773..949463275 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c index 9e6ba4c51..218c4a634 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c index c928c645d..eafe8b250 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c index 9d84571b0..03c183a14 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: 
-target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c index e6ea668b7..d34157204 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c index b2dcd6244..6f5d9528a 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c index 10ff7fa72..3615486da 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c index 99aafb0ea..4b8a5f935 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c index f60dc61d2..e79a0e026 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c +++ 
b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c index 6044ae82c..1414f0d6d 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c index d84438041..894ff6b77 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c index e45b62947..997e027e7 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c index f2e2a29e8..e36634022 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git 
a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c index 5a8553113..fdeff17bd 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c index 9cc747362..795ad0c5b 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c index 597884116..2e2cbb8a3 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c index 5fe5b9d44..4ece05100 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c index dfbe9f89e..84c0e8569 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: 
%clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c index fa0322aae..9f4a07daa 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c index 976c21a17..5d8f0db54 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c index ed857c46d..4d274ff7e 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c index 311d4477d..c2f9820f8 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c index a47eb41a5..fb02a1ece 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c +++ 
b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c index df9e43fd1..c1809f985 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c index e0dee0e84..114222500 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c index c41b7d405..9da589eb9 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c index 154bb4c04..9c9bc8053 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git 
a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c index ec299a4a7..8d9943c0e 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c index 8639b8a86..72336389a 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c index be038080b..0c80732fb 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c index 86786793d..715e33139 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c index 252b00479..f41308f31 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: 
-target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c index 485d088be..9d444118e 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c index b0e17955e..3a419dbea 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c @@ -1,6 +1,5 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ // RUN: -target-feature +experimental-zvfbfmin \ // RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ From ad200271181741acdbda8aa22112ccd80fbd23c3 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Fri, 7 Jun 2024 16:10:36 +0800 Subject: [PATCH 027/151] [NFC] fix type-check errors by adding assertion Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/templates/binary_intcarry_template.py | 2 ++ .../rvv_intrinsic_gen/templates/binary_nop_template.py | 1 + .../rvv_intrinsic_gen/templates/binary_op_template.py | 1 + .../rvv_intrinsic_gen/templates/binary_wop_template.py | 1 + .../rvv_intrinsic_gen/templates/cmp_template.py | 1 + .../rvv_intrinsic_gen/templates/cvt_op_template.py | 1 + .../rvv_intrinsic_gen/templates/load_template.py | 1 + .../rvv_intrinsic_gen/templates/mac_template.py | 1 + .../rvv_intrinsic_gen/templates/mask_template.py | 2 ++ .../rvv_intrinsic_gen/templates/misc_op_template.py | 4 ++++ .../rvv_intrinsic_gen/templates/reduction_template.py | 1 + .../rvv_intrinsic_gen/templates/reint_op_template.py | 2 ++ .../rvv_intrinsic_gen/templates/seg_load_template.py | 1 + .../rvv_intrinsic_gen/templates/seg_store_template.py | 1 + .../rvv_intrinsic_gen/templates/store_template.py | 1 + 15 files changed, 21 insertions(+) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py index ce439a027..af00f7700 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py @@ -38,6 +38,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): s = type_helper.s m = type_helper.m + assert args["OP"] is not None args["OP"] = "v" + args["OP"] inst_info_vvm = 
InstInfo.get(args, decorator, InstType.VVVM) @@ -71,6 +72,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): s = type_helper.s m = type_helper.m + assert args["OP"] is not None args["OP"] = "v" + args["OP"] inst_info_vvm = InstInfo.get(args, None, InstType.VVVM) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py index 82e1f27a8..a83f3e1eb 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py @@ -45,6 +45,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): SEW=sew_list, LMUL=lmul_list, OP2=["v", "s"]): + assert args["OP"] is not None data_type = args["TYPE"] op = args["OP"] op2 = args["OP2"] diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py index fa2223ada..444d6e4c5 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py @@ -40,6 +40,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): SEW=sew_list, LMUL=lmul_list, OP2=["v", "s"]): + assert args["OP"] is not None data_type = args["TYPE"] op = args["OP"] sew = args["SEW"] diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py index b8a50d23f..f6bb93f87 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py @@ -33,6 +33,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): for decorator in decorator_list: decorator.write_text_header(G) for args in prod(OP=op_list, TYPE=type_list, SEW=sew_list, LMUL=lmul_list): + assert args["OP"] is not None data_type = args["TYPE"] op = args["OP"] diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py index 8731d3744..7ad320038 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py @@ -38,6 +38,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): SEW=sew_list, LMUL=lmul_list, OP2=["v", "s"]): + assert args["OP"] is not None data_type = args["TYPE"] op = args["OP"] op2 = args["OP2"] diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py index 512a7fe75..12ac356ab 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py @@ -53,6 +53,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): ["float", "f", "float", "f"]] for args in prod( OP=op_list, SEW=sew_list, TYPES=convert_set, LMUL=lmul_list): + assert args["TYPES"] is not None op = args["OP"] type_helper = TypeHelper(**args) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py index 4d2529a6d..ee73dc9c0 100644 --- 
a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py @@ -37,6 +37,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): decorator.write_text_header(G) for args in prod( OP=op_list, TYPE=type_list, SEW=sew_list, EEW=sew_list, LMUL=lmul_list): + assert args["OP"] is not None op = args["OP"] sew = args["SEW"] eew = args["EEW"] diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py index da18f5c0a..36b274e1a 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py @@ -34,6 +34,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): for decorator in decorator_list: decorator.write_text_header(G) for args in prod(OP=op_list, TYPE=type_list, SEW=sew_list, LMUL=lmul_list): + assert args["TYPE"] is not None data_type = args["TYPE"] op = args["OP"] diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py index c450033ae..91582c3bf 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py @@ -33,6 +33,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): decorator.write_text_header(G) # treat sew_list as MLEN for args in prod(OP=op_list, TYPE=type_list, MLEN=sew_list): + assert args["OP"] is not None op = args["OP"] if op not in ["cpop", "first"]: args["OP"] = "m" + args["OP"] @@ -94,6 +95,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): vl=type_helper.size_t) for args in prod(OP=op_list, TYPE=type_list, SEW=sew_list, LMUL=lmul_list): + assert args["OP"] is not None op = args["OP"] type_helper = TypeHelper(**args) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py index 43b757b79..123f1bf11 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py @@ -43,6 +43,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): inst_type = None for args in prod(OP=op_list, TYPE=type_list, SEW=sew_list, LMUL=lmul_list): type_helper = TypeHelper(**args) + inst_type = InstType.UNKNOWN if args["OP"] not in ["vundefined"]: break if args["TYPE"] == "float" and args["SEW"] == 8: @@ -73,6 +74,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): LMUL=lmul_list, NF=nf_list): type_helper = TypeHelper(**args) + inst_type = InstType.UNKNOWN if args["OP"] not in ["vundefined"]: break if args["TYPE"] == "float" and args["SEW"] == 8: @@ -94,6 +96,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): SEW=sew_list, LMUL=lmul_list, DST_LMUL=lmul_list): + assert args["TYPE"] is not None op = args["OP"] src_lmul = args["LMUL"] dst_lmul = args["DST_LMUL"] @@ -174,6 +177,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): SEW=sew_list, LMUL=lmul_list, NF=nf_list): + assert args["NF"] is not None type_helper = TypeHelper(**args) # This intrinsic appears after v0.12 diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py 
b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py
index 086fc425c..8d66fe4a9 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py
@@ -34,6 +34,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list):
   for decorator in decorator_list:
     decorator.write_text_header(G)
   for args in prod(OP=op_list, TYPE=type_list, SEW=sew_list, LMUL=lmul_list):
+    assert args["OP"] is not None
     data_type = args["TYPE"]
     op = args["OP"]
     sew = args["SEW"]
diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py
index 452cec078..e10c5395a 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py
@@ -50,6 +50,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list):
   for args in prod(
       OP=op_list, SEW=sew_list, TYPES=convert_set, LMUL=lmul_list):
     sew = args["SEW"]
+    assert args["TYPES"] is not None
     type_helper = TypeHelper(**args)

     args["TYPES0"] = args["TYPES"][0]
@@ -128,6 +129,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list):
   convert_set = [["int", "i"], ["uint", "u"]]
   for args in prod(
       OP=op_list, SEW=sew_list, TYPES=convert_set, LMUL=lmul_list):
+    assert args["TYPES"] is not None
     type_helper = TypeHelper(**args)

     args["TYPES0"] = args["TYPES"][0]
diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py
index 52691ea81..5741b680d 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py
@@ -47,6 +47,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list):
       EEW=sew_list,
       LMUL=lmul_list,
       NF=nf_list):
+    assert args["OP"] is not None
     op = args["OP"]
     nf = str(args["NF"])
     sew = args["SEW"]
diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py
index 290ee4fac..a95f99fbd 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py
@@ -47,6 +47,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list):
       EEW=sew_list,
       LMUL=lmul_list,
       NF=nf_list):
+    assert args["OP"] is not None
     op = args["OP"]
     nf = str(args["NF"])
     sew = args["SEW"]
diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py
index 98476e6ba..12d3136d2 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py
@@ -36,6 +36,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list):
     decorator.write_text_header(G)
   for args in prod(
       OP=op_list, TYPE=type_list, SEW=sew_list, EEW=sew_list, LMUL=lmul_list):
+    assert args["OP"] is not None
     op = args["OP"]
     sew = args["SEW"]
     eew = args["EEW"]
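The assertions added throughout PATCH 027 are aimed at static type checkers rather than runtime behavior: the argument dictionaries produced by `prod()` carry values a checker treats as possibly `None`, and an explicit `assert ... is not None` narrows the type before the value is used. Below is a minimal, self-contained sketch of that narrowing pattern; the `prod()` helper here is a stand-in written for this example, not the generator's real implementation.

[,python]
----
from itertools import product
from typing import Optional

def prod(**lists):
  # Stand-in cartesian-product helper: yields one dict per combination,
  # so each value looks Optional from a type checker's point of view.
  keys = list(lists.keys())
  for combo in product(*lists.values()):
    yield dict(zip(keys, combo))

def render(op_list, sew_list):
  for args in prod(OP=op_list, SEW=sew_list):
    op: Optional[str] = args["OP"]
    assert op is not None  # narrows Optional[str] to str for the checker
    name = "v" + op        # safe: op is known to be a str here
    print(name, args["SEW"])

render(["add", "sub"], [8, 16])
----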
From 61295e052e154e176d32db70cbb61c21e844f35e Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Fri, 7 Jun 2024 17:57:32 +0800
Subject: [PATCH 028/151] DocGenerator: remove start_group call of base class

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 1 -
 1 file changed, 1 deletion(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index 46d0e23a2..02e9660fa 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -367,7 +367,6 @@ def start_group(self, group_name):
     # NOTE: If is_all_in_one is False, separate files of the grouped intrinsics
     # will be created, therefore we are allowing overriding the file descriptor
     # here.
-    super().start_group(group_name)
     if not self.is_all_in_one:
       file_name = f"{self.group_counter:02d}_{group_name}.adoc"
       file_name = file_name.replace(" ", "_")
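With the base-class call removed in PATCH 028, `DocGenerator.start_group` is left with one job: open a numbered per-group output file when the document is not generated as a single file. The naming logic visible in the hunk above can be read in isolation roughly as follows (a simplified extract, with the class state passed in as plain arguments):

[,python]
----
def group_file_name(group_counter, group_name):
  # e.g. (6, "vector mask intrinsics") -> "06_vector_mask_intrinsics.adoc"
  file_name = f"{group_counter:02d}_{group_name}.adoc"
  return file_name.replace(" ", "_")

print(group_file_name(6, "vector mask intrinsics"))
----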
From b435de1c9d4d58cd82585a2b27a8ed5eb6f2beac Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Fri, 7 Jun 2024 17:59:13 +0800
Subject: [PATCH 029/151] APITestGenerator: implement dummy start_group

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index 02e9660fa..1655ab1a7 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -443,6 +443,9 @@ def __init__(self, f, is_overloaded, toolchain_type, has_tail_policy):
     # different op name
     self.test_file_names = []

+  def start_group(self, group_name):
+    pass
+
   def write_file_header(self, has_float_type, has_bfloat16_type):
     #pylint: disable=line-too-long
     int_llvm_header = r"""// REQUIRES: riscv-registered-target

From 1d3b27932188b7289ed2a7a513ad4b08ceeedcb6 Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Fri, 7 Jun 2024 18:00:22 +0800
Subject: [PATCH 030/151] APITestGenerator: implement dummy inst_group_prologue

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index 1655ab1a7..f5c28812e 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -446,6 +446,9 @@ def __init__(self, f, is_overloaded, toolchain_type, has_tail_policy):
   def start_group(self, group_name):
     pass

+  def inst_group_prologue(self):
+    return ""
+
   def write_file_header(self, has_float_type, has_bfloat16_type):
     #pylint: disable=line-too-long
     int_llvm_header = r"""// REQUIRES: riscv-registered-target

From 2427080b12e2df9113b850b902222f1c56f4b887 Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Fri, 7 Jun 2024 18:00:56 +0800
Subject: [PATCH 031/151] APITestGenerator: implement dummy inst_group_epilogue

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index f5c28812e..0dcc766f6 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -449,6 +449,9 @@ def start_group(self, group_name):
     pass

   def inst_group_prologue(self):
     return ""

+  def inst_group_epilogue(self):
+    return ""
+
   def write_file_header(self, has_float_type, has_bfloat16_type):
     #pylint: disable=line-too-long
     int_llvm_header = r"""// REQUIRES: riscv-registered-target

From 990d5c7b76085d2d6966155624879cf7dee6fed2 Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Fri, 7 Jun 2024 03:38:49 -0700
Subject: [PATCH 032/151] APITestGenerator: implement dummy write function

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index 0dcc766f6..8c74bab64 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -443,6 +443,9 @@ def __init__(self, f, is_overloaded, toolchain_type, has_tail_policy):
     # different op name
     self.test_file_names = []

+  def write(self, text):
+    pass
+
   def start_group(self, group_name):
     pass

From 9acdc084d093c403ba6715c6e83891021838b6e3 Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Fri, 7 Jun 2024 03:30:22 -0700
Subject: [PATCH 033/151] CompatibleHeaderGenerator: implement dummy start_group

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index 8c74bab64..cd13de0c3 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -809,6 +809,9 @@ def gen_prologue(self):
   def write(self, text):
     self.fd.write(text)

+  def start_group(self, group_name):
+    pass
+
   def function_group(self, template, title, link, op_list,
                      type_list, sew_list, lmul_list, decorator_list):
     if self.has_tail_policy and len(decorator_list) == 0:

From fac6b61645e8ac92d9d960e6095ca88541b68e95 Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Fri, 7 Jun 2024 03:30:49 -0700
Subject: [PATCH 034/151] CompatibleHeaderGenerator: implement dummy inst_group_prologue

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index cd13de0c3..f263f77d2 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -812,6 +812,9 @@ def write(self, text):
     self.fd.write(text)

   def start_group(self, group_name):
     pass

+  def inst_group_prologue(self):
+    return ""
+
   def function_group(self, template, title, link, op_list,
                      type_list, sew_list, lmul_list, decorator_list):
     if self.has_tail_policy and len(decorator_list) == 0:

From 751b7a3265fcd86843029ad7265367a6efbc7580 Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Fri, 7 Jun 2024 03:31:09 -0700
Subject: [PATCH 035/151] CompatibleHeaderGenerator: implement dummy inst_group_epilogue

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index f263f77d2..1b3136298 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -815,6 +815,9 @@ def start_group(self, group_name):
     pass

   def inst_group_prologue(self):
     return ""

+  def inst_group_epilogue(self):
+    return ""
+
   def function_group(self, template, title, link, op_list,
                      type_list, sew_list, lmul_list, decorator_list):
     if self.has_tail_policy and len(decorator_list) == 0:
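Patches 029 through 035 apply one pattern in two generator subclasses: a hook that a given generator has no use for (`write`, `start_group`, `inst_group_prologue`, `inst_group_epilogue`) is overridden with a no-op instead of inheriting a `raise NotImplementedError` stub, so shared driver code can call every hook unconditionally. The following is a condensed sketch of that idea; the class bodies are abbreviated and the `drive()` caller is hypothetical, not code from `generator.py`.

[,python]
----
class Generator:
  """Base interface: hooks the drivers call on every generator."""

  def write(self, text):
    raise NotImplementedError

  def start_group(self, group_name):
    raise NotImplementedError

  def inst_group_prologue(self):
    raise NotImplementedError

  def inst_group_epilogue(self):
    raise NotImplementedError


class APITestGenerator(Generator):
  """Emits standalone test files, so the grouping hooks become no-ops."""

  def write(self, text):
    pass  # dummy: test content is written per test file, not via this hook

  def start_group(self, group_name):
    pass  # dummy: no per-group section headers in test output

  def inst_group_prologue(self):
    return ""  # dummy: nothing to print before an instruction group

  def inst_group_epilogue(self):
    return ""  # dummy: nothing to print after an instruction group


def drive(g):
  # The driver no longer needs to know which hooks a subclass cares about.
  g.start_group("vector mask intrinsics")
  g.write(g.inst_group_prologue() + g.inst_group_epilogue())


drive(APITestGenerator())
----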
From 2d709dcf495f36be071d6416df2aafad198c663c Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Tue, 11 Jun 2024 17:32:47 +0800
Subject: [PATCH 036/151] Let gen_prologue() pass execution by default

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index 1b3136298..1f4f9ada9 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -43,7 +43,7 @@ def write_title(self, text, link):
     raise NotImplementedError

   def gen_prologue(self):
-    raise NotImplementedError
+    pass

   def inst_group_prologue(self):
     raise NotImplementedError

From 894176bda6cd1168cc24fcc455407f3ea36b524f Mon Sep 17 00:00:00 2001
From: Craig Topper
Date: Mon, 10 Jun 2024 14:12:22 -0700
Subject: [PATCH 037/151] Revert "Change return type for vcpop and vfirst to uint and int"

This reverts commit 995cbb8c52ca1bd5cb96e4b9e34f9882170b909f.
---
 .../rvv_intrinsic_gen/templates/mask_template.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py
index 91582c3bf..9e10bbbb1 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py
@@ -72,7 +72,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list):
     G.func(
         inst_info_m,
         name="{OP}_m_b{MLEN}".format_map(args) + decorator.func_suffix,
-        return_type=type_helper.uint,
+        return_type=type_helper.ulong,
         **decorator.mask_args(type_helper.m),
         vs2=type_helper.m,
         vl=type_helper.size_t)
@@ -80,7 +80,7 @@
     G.func(
         inst_info_m,
         name="{OP}_m_b{MLEN}".format_map(args) + decorator.func_suffix,
-        return_type=type_helper.int,
+        return_type=type_helper.long,
         **decorator.mask_args(type_helper.m),
         vs2=type_helper.m,
         vl=type_helper.size_t)

From 0f172cf8b6084ee4852edcc5a792bd8fe1a63a15 Mon Sep 17 00:00:00 2001
From: Craig Topper
Date: Mon, 10 Jun 2024 14:13:15 -0700
Subject: [PATCH 038/151] Revert "[Auto-gen] Update documents under ../auto-generated. (make git-commit-autogen-doc)"

This reverts commit ed8de04f2415599f5ebdd8ff940ee7b3c189d79f.
--- auto-generated/intrinsic_funcs.adoc | 56 +++++++++---------- .../06_vector_mask_intrinsics.adoc | 56 +++++++++---------- .../overloaded_intrinsic_funcs.adoc | 56 +++++++++---------- .../06_vector_mask_intrinsics.adoc | 56 +++++++++---------- 4 files changed, 112 insertions(+), 112 deletions(-) diff --git a/auto-generated/intrinsic_funcs.adoc b/auto-generated/intrinsic_funcs.adoc index ef39a9c24..40ce7be66 100644 --- a/auto-generated/intrinsic_funcs.adoc +++ b/auto-generated/intrinsic_funcs.adoc @@ -48390,21 +48390,21 @@ vbool64_t __riscv_vmnot_m_b64(vbool64_t vs, size_t vl); [,c] ---- -unsigned int __riscv_vcpop_m_b1(vbool1_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b2(vbool2_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b4(vbool4_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b8(vbool8_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b16(vbool16_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b32(vbool32_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b64(vbool64_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b1(vbool1_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b2(vbool2_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b4(vbool4_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b8(vbool8_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b16(vbool16_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b32(vbool32_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b64(vbool64_t vs2, size_t vl); // masked functions -unsigned int __riscv_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl); ---- [[vfirst-find-first-set-mask-bit]] @@ -48412,21 +48412,21 @@ unsigned int __riscv_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl); [,c] ---- -int __riscv_vfirst_m_b1(vbool1_t vs2, size_t vl); -int __riscv_vfirst_m_b2(vbool2_t vs2, size_t vl); -int __riscv_vfirst_m_b4(vbool4_t vs2, size_t vl); -int __riscv_vfirst_m_b8(vbool8_t vs2, size_t vl); -int __riscv_vfirst_m_b16(vbool16_t vs2, size_t vl); -int __riscv_vfirst_m_b32(vbool32_t vs2, size_t vl); -int __riscv_vfirst_m_b64(vbool64_t vs2, size_t vl); +long __riscv_vfirst_m_b1(vbool1_t vs2, size_t vl); +long __riscv_vfirst_m_b2(vbool2_t vs2, size_t vl); +long __riscv_vfirst_m_b4(vbool4_t vs2, size_t vl); +long __riscv_vfirst_m_b8(vbool8_t vs2, size_t vl); +long __riscv_vfirst_m_b16(vbool16_t vs2, size_t vl); +long __riscv_vfirst_m_b32(vbool32_t vs2, size_t vl); +long __riscv_vfirst_m_b64(vbool64_t vs2, size_t vl); // masked functions -int __riscv_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl); -int __riscv_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t 
vl); -int __riscv_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl); -int __riscv_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl); -int __riscv_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl); -int __riscv_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl); -int __riscv_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl); +long __riscv_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl); +long __riscv_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl); +long __riscv_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl); +long __riscv_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl); +long __riscv_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl); +long __riscv_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl); +long __riscv_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl); ---- [[vmsbfm-set-before-first-mask-bit]] diff --git a/auto-generated/intrinsic_funcs/06_vector_mask_intrinsics.adoc b/auto-generated/intrinsic_funcs/06_vector_mask_intrinsics.adoc index a55682d08..12d8f777c 100644 --- a/auto-generated/intrinsic_funcs/06_vector_mask_intrinsics.adoc +++ b/auto-generated/intrinsic_funcs/06_vector_mask_intrinsics.adoc @@ -97,21 +97,21 @@ vbool64_t __riscv_vmnot_m_b64(vbool64_t vs, size_t vl); [,c] ---- -unsigned int __riscv_vcpop_m_b1(vbool1_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b2(vbool2_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b4(vbool4_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b8(vbool8_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b16(vbool16_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b32(vbool32_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b64(vbool64_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b1(vbool1_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b2(vbool2_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b4(vbool4_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b8(vbool8_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b16(vbool16_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b32(vbool32_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b64(vbool64_t vs2, size_t vl); // masked functions -unsigned int __riscv_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl); -unsigned int __riscv_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl); +unsigned long __riscv_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl); ---- [[vfirst-find-first-set-mask-bit]] @@ -119,21 +119,21 @@ unsigned int __riscv_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl); [,c] ---- -int __riscv_vfirst_m_b1(vbool1_t vs2, size_t vl); -int __riscv_vfirst_m_b2(vbool2_t vs2, size_t vl); -int __riscv_vfirst_m_b4(vbool4_t vs2, size_t vl); -int __riscv_vfirst_m_b8(vbool8_t vs2, size_t vl); -int 
__riscv_vfirst_m_b16(vbool16_t vs2, size_t vl); -int __riscv_vfirst_m_b32(vbool32_t vs2, size_t vl); -int __riscv_vfirst_m_b64(vbool64_t vs2, size_t vl); +long __riscv_vfirst_m_b1(vbool1_t vs2, size_t vl); +long __riscv_vfirst_m_b2(vbool2_t vs2, size_t vl); +long __riscv_vfirst_m_b4(vbool4_t vs2, size_t vl); +long __riscv_vfirst_m_b8(vbool8_t vs2, size_t vl); +long __riscv_vfirst_m_b16(vbool16_t vs2, size_t vl); +long __riscv_vfirst_m_b32(vbool32_t vs2, size_t vl); +long __riscv_vfirst_m_b64(vbool64_t vs2, size_t vl); // masked functions -int __riscv_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl); -int __riscv_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl); -int __riscv_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl); -int __riscv_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl); -int __riscv_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl); -int __riscv_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl); -int __riscv_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl); +long __riscv_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl); +long __riscv_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl); +long __riscv_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl); +long __riscv_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl); +long __riscv_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl); +long __riscv_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl); +long __riscv_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl); ---- [[vmsbfm-set-before-first-mask-bit]] diff --git a/auto-generated/overloaded_intrinsic_funcs.adoc b/auto-generated/overloaded_intrinsic_funcs.adoc index c0185119a..5e71ee363 100644 --- a/auto-generated/overloaded_intrinsic_funcs.adoc +++ b/auto-generated/overloaded_intrinsic_funcs.adoc @@ -38718,21 +38718,21 @@ vbool64_t __riscv_vmnot(vbool64_t vs, size_t vl); [,c] ---- -unsigned int __riscv_vcpop(vbool1_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool2_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool4_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool8_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool16_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool32_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool64_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool1_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool2_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool4_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool8_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool16_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool32_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool64_t vs2, size_t vl); // masked functions -unsigned int __riscv_vcpop(vbool1_t vm, vbool1_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool2_t vm, vbool2_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool4_t vm, vbool4_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool8_t vm, vbool8_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool16_t vm, vbool16_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool32_t vm, vbool32_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool64_t vm, vbool64_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool1_t vm, vbool1_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool2_t vm, vbool2_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool4_t vm, vbool4_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool8_t vm, vbool8_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool16_t vm, vbool16_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool32_t vm, vbool32_t vs2, size_t vl); 
+unsigned long __riscv_vcpop(vbool64_t vm, vbool64_t vs2, size_t vl); ---- [[overloaded-vfirst-find-first-set-mask-bit]] @@ -38740,21 +38740,21 @@ unsigned int __riscv_vcpop(vbool64_t vm, vbool64_t vs2, size_t vl); [,c] ---- -int __riscv_vfirst(vbool1_t vs2, size_t vl); -int __riscv_vfirst(vbool2_t vs2, size_t vl); -int __riscv_vfirst(vbool4_t vs2, size_t vl); -int __riscv_vfirst(vbool8_t vs2, size_t vl); -int __riscv_vfirst(vbool16_t vs2, size_t vl); -int __riscv_vfirst(vbool32_t vs2, size_t vl); -int __riscv_vfirst(vbool64_t vs2, size_t vl); +long __riscv_vfirst(vbool1_t vs2, size_t vl); +long __riscv_vfirst(vbool2_t vs2, size_t vl); +long __riscv_vfirst(vbool4_t vs2, size_t vl); +long __riscv_vfirst(vbool8_t vs2, size_t vl); +long __riscv_vfirst(vbool16_t vs2, size_t vl); +long __riscv_vfirst(vbool32_t vs2, size_t vl); +long __riscv_vfirst(vbool64_t vs2, size_t vl); // masked functions -int __riscv_vfirst(vbool1_t vm, vbool1_t vs2, size_t vl); -int __riscv_vfirst(vbool2_t vm, vbool2_t vs2, size_t vl); -int __riscv_vfirst(vbool4_t vm, vbool4_t vs2, size_t vl); -int __riscv_vfirst(vbool8_t vm, vbool8_t vs2, size_t vl); -int __riscv_vfirst(vbool16_t vm, vbool16_t vs2, size_t vl); -int __riscv_vfirst(vbool32_t vm, vbool32_t vs2, size_t vl); -int __riscv_vfirst(vbool64_t vm, vbool64_t vs2, size_t vl); +long __riscv_vfirst(vbool1_t vm, vbool1_t vs2, size_t vl); +long __riscv_vfirst(vbool2_t vm, vbool2_t vs2, size_t vl); +long __riscv_vfirst(vbool4_t vm, vbool4_t vs2, size_t vl); +long __riscv_vfirst(vbool8_t vm, vbool8_t vs2, size_t vl); +long __riscv_vfirst(vbool16_t vm, vbool16_t vs2, size_t vl); +long __riscv_vfirst(vbool32_t vm, vbool32_t vs2, size_t vl); +long __riscv_vfirst(vbool64_t vm, vbool64_t vs2, size_t vl); ---- [[overloaded-vmsbfm-set-before-first-mask-bit]] diff --git a/auto-generated/overloaded_intrinsic_funcs/06_vector_mask_intrinsics.adoc b/auto-generated/overloaded_intrinsic_funcs/06_vector_mask_intrinsics.adoc index 2cb224185..99968b005 100644 --- a/auto-generated/overloaded_intrinsic_funcs/06_vector_mask_intrinsics.adoc +++ b/auto-generated/overloaded_intrinsic_funcs/06_vector_mask_intrinsics.adoc @@ -83,21 +83,21 @@ vbool64_t __riscv_vmnot(vbool64_t vs, size_t vl); [,c] ---- -unsigned int __riscv_vcpop(vbool1_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool2_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool4_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool8_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool16_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool32_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool64_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool1_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool2_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool4_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool8_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool16_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool32_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool64_t vs2, size_t vl); // masked functions -unsigned int __riscv_vcpop(vbool1_t vm, vbool1_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool2_t vm, vbool2_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool4_t vm, vbool4_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool8_t vm, vbool8_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool16_t vm, vbool16_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool32_t vm, vbool32_t vs2, size_t vl); -unsigned int __riscv_vcpop(vbool64_t vm, vbool64_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool1_t vm, vbool1_t vs2, 
size_t vl); +unsigned long __riscv_vcpop(vbool2_t vm, vbool2_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool4_t vm, vbool4_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool8_t vm, vbool8_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool16_t vm, vbool16_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool32_t vm, vbool32_t vs2, size_t vl); +unsigned long __riscv_vcpop(vbool64_t vm, vbool64_t vs2, size_t vl); ---- [[overloaded-vfirst-find-first-set-mask-bit]] @@ -105,21 +105,21 @@ unsigned int __riscv_vcpop(vbool64_t vm, vbool64_t vs2, size_t vl); [,c] ---- -int __riscv_vfirst(vbool1_t vs2, size_t vl); -int __riscv_vfirst(vbool2_t vs2, size_t vl); -int __riscv_vfirst(vbool4_t vs2, size_t vl); -int __riscv_vfirst(vbool8_t vs2, size_t vl); -int __riscv_vfirst(vbool16_t vs2, size_t vl); -int __riscv_vfirst(vbool32_t vs2, size_t vl); -int __riscv_vfirst(vbool64_t vs2, size_t vl); +long __riscv_vfirst(vbool1_t vs2, size_t vl); +long __riscv_vfirst(vbool2_t vs2, size_t vl); +long __riscv_vfirst(vbool4_t vs2, size_t vl); +long __riscv_vfirst(vbool8_t vs2, size_t vl); +long __riscv_vfirst(vbool16_t vs2, size_t vl); +long __riscv_vfirst(vbool32_t vs2, size_t vl); +long __riscv_vfirst(vbool64_t vs2, size_t vl); // masked functions -int __riscv_vfirst(vbool1_t vm, vbool1_t vs2, size_t vl); -int __riscv_vfirst(vbool2_t vm, vbool2_t vs2, size_t vl); -int __riscv_vfirst(vbool4_t vm, vbool4_t vs2, size_t vl); -int __riscv_vfirst(vbool8_t vm, vbool8_t vs2, size_t vl); -int __riscv_vfirst(vbool16_t vm, vbool16_t vs2, size_t vl); -int __riscv_vfirst(vbool32_t vm, vbool32_t vs2, size_t vl); -int __riscv_vfirst(vbool64_t vm, vbool64_t vs2, size_t vl); +long __riscv_vfirst(vbool1_t vm, vbool1_t vs2, size_t vl); +long __riscv_vfirst(vbool2_t vm, vbool2_t vs2, size_t vl); +long __riscv_vfirst(vbool4_t vm, vbool4_t vs2, size_t vl); +long __riscv_vfirst(vbool8_t vm, vbool8_t vs2, size_t vl); +long __riscv_vfirst(vbool16_t vm, vbool16_t vs2, size_t vl); +long __riscv_vfirst(vbool32_t vm, vbool32_t vs2, size_t vl); +long __riscv_vfirst(vbool64_t vm, vbool64_t vs2, size_t vl); ---- [[overloaded-vmsbfm-set-before-first-mask-bit]] From 72aac686f7027090bcffe032c78f213b3052dacf Mon Sep 17 00:00:00 2001 From: Craig Topper Date: Mon, 10 Jun 2024 14:16:08 -0700 Subject: [PATCH 039/151] Revert "[Auto-gen] Update tests under ../auto-generated. (make git-commit-autogen-test)" This reverts commit 26057454d3e25b1ae82b07bcbc53deda0790a1c4. 
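For readers tracking this type change: `vcpop.m` and `vfirst.m` write an XLEN-wide scalar to a GPR, and `vfirst` returns -1 when no active mask bit is set, so the intrinsics return `unsigned long`/`long` rather than `unsigned int`/`int`. Below is a minimal usage sketch, not part of this patch, assuming an RV64 toolchain with the V extension enabled (e.g. `-march=rv64gcv`); the helper name `first_zero_byte` is invented for illustration.

```c
// Editorial sketch, not part of this patch; assumes rv64gcv.
#include <stddef.h>
#include <stdint.h>
#include <riscv_vector.h>

// Returns the index of the first zero byte in src, or -1 if none.
// Note the long return type: it matches what __riscv_vfirst_m_b8 yields.
long first_zero_byte(const uint8_t *src, size_t n) {
  for (size_t i = 0; i < n;) {
    size_t vl = __riscv_vsetvl_e8m1(n - i);           // strip-mine the buffer
    vuint8m1_t v = __riscv_vle8_v_u8m1(src + i, vl);  // load a chunk
    vbool8_t z = __riscv_vmseq_vx_u8m1_b8(v, 0, vl);  // mask of zero bytes
    long pos = __riscv_vfirst_m_b8(z, vl);            // -1 if no bit is set
    if (pos >= 0)
      return (long)i + pos;
    i += vl;
  }
  return -1;
}
```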
--- auto-generated/api-testing/vcpop.c | 28 +++++++++---------- auto-generated/api-testing/vfirst.c | 28 +++++++++---------- auto-generated/gnu-api-tests/vcpop.c | 28 +++++++++---------- auto-generated/gnu-api-tests/vfirst.c | 28 +++++++++---------- auto-generated/gnu-overloaded-tests/vcpop.c | 28 +++++++++---------- auto-generated/gnu-overloaded-tests/vfirst.c | 28 +++++++++---------- auto-generated/llvm-api-tests/vcpop.c | 28 +++++++++---------- auto-generated/llvm-api-tests/vfirst.c | 28 +++++++++---------- auto-generated/llvm-overloaded-tests/vcpop.c | 28 +++++++++---------- auto-generated/llvm-overloaded-tests/vfirst.c | 28 +++++++++---------- auto-generated/overloaded-api-testing/vcpop.c | 28 +++++++++---------- .../overloaded-api-testing/vfirst.c | 28 +++++++++---------- 12 files changed, 168 insertions(+), 168 deletions(-) diff --git a/auto-generated/api-testing/vcpop.c b/auto-generated/api-testing/vcpop.c index 06e64c836..11f2e55ac 100644 --- a/auto-generated/api-testing/vcpop.c +++ b/auto-generated/api-testing/vcpop.c @@ -1,58 +1,58 @@ #include #include -unsigned int test_vcpop_m_b1(vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1(vbool1_t vs2, size_t vl) { return __riscv_vcpop_m_b1(vs2, vl); } -unsigned int test_vcpop_m_b2(vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vcpop_m_b2(vs2, vl); } -unsigned int test_vcpop_m_b4(vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vcpop_m_b4(vs2, vl); } -unsigned int test_vcpop_m_b8(vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vcpop_m_b8(vs2, vl); } -unsigned int test_vcpop_m_b16(vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vcpop_m_b16(vs2, vl); } -unsigned int test_vcpop_m_b32(vbool32_t vs2, size_t vl) { +unsigned long test_vcpop_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vcpop_m_b32(vs2, vl); } -unsigned int test_vcpop_m_b64(vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vcpop_m_b64(vs2, vl); } -unsigned int test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vcpop_m_b1_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vcpop_m_b2_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vcpop_m_b4_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return __riscv_vcpop_m_b8_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vcpop_m_b16_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +unsigned long test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vcpop_m_b32_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vcpop_m_b64_m(vm, vs2, vl); } diff --git 
a/auto-generated/api-testing/vfirst.c b/auto-generated/api-testing/vfirst.c index 508967af6..96e72970f 100644 --- a/auto-generated/api-testing/vfirst.c +++ b/auto-generated/api-testing/vfirst.c @@ -1,58 +1,58 @@ #include #include -int test_vfirst_m_b1(vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1(vbool1_t vs2, size_t vl) { return __riscv_vfirst_m_b1(vs2, vl); } -int test_vfirst_m_b2(vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vfirst_m_b2(vs2, vl); } -int test_vfirst_m_b4(vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vfirst_m_b4(vs2, vl); } -int test_vfirst_m_b8(vbool8_t vs2, size_t vl) { +long test_vfirst_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vfirst_m_b8(vs2, vl); } -int test_vfirst_m_b16(vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vfirst_m_b16(vs2, vl); } -int test_vfirst_m_b32(vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vfirst_m_b32(vs2, vl); } -int test_vfirst_m_b64(vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vfirst_m_b64(vs2, vl); } -int test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vfirst_m_b1_m(vm, vs2, vl); } -int test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vfirst_m_b2_m(vm, vs2, vl); } -int test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vfirst_m_b4_m(vm, vs2, vl); } -int test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +long test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return __riscv_vfirst_m_b8_m(vm, vs2, vl); } -int test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vfirst_m_b16_m(vm, vs2, vl); } -int test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vfirst_m_b32_m(vm, vs2, vl); } -int test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vfirst_m_b64_m(vm, vs2, vl); } diff --git a/auto-generated/gnu-api-tests/vcpop.c b/auto-generated/gnu-api-tests/vcpop.c index 7ebc6140c..038d03754 100644 --- a/auto-generated/gnu-api-tests/vcpop.c +++ b/auto-generated/gnu-api-tests/vcpop.c @@ -3,59 +3,59 @@ #include -unsigned int test_vcpop_m_b1(vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1(vbool1_t vs2, size_t vl) { return __riscv_vcpop_m_b1(vs2, vl); } -unsigned int test_vcpop_m_b2(vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vcpop_m_b2(vs2, vl); } -unsigned int test_vcpop_m_b4(vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vcpop_m_b4(vs2, vl); } -unsigned int test_vcpop_m_b8(vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vcpop_m_b8(vs2, vl); } -unsigned int test_vcpop_m_b16(vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vcpop_m_b16(vs2, vl); } -unsigned int test_vcpop_m_b32(vbool32_t vs2, size_t vl) { +unsigned long 
test_vcpop_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vcpop_m_b32(vs2, vl); } -unsigned int test_vcpop_m_b64(vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vcpop_m_b64(vs2, vl); } -unsigned int test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vcpop_m_b1_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vcpop_m_b2_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vcpop_m_b4_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return __riscv_vcpop_m_b8_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vcpop_m_b16_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +unsigned long test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vcpop_m_b32_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vcpop_m_b64_m(vm, vs2, vl); } /* { dg-final { scan-assembler-times {vseti?vli\s+[a-z0-9]+,\s*[a-z0-9]+,\s*e[0-9]+,\s*mf?[1248],\s*t[au],\s*m[au]\s+vcpop\.[ivxfswum.]+\s+} 14 } } */ diff --git a/auto-generated/gnu-api-tests/vfirst.c b/auto-generated/gnu-api-tests/vfirst.c index e2de33c9b..37a4a8f15 100644 --- a/auto-generated/gnu-api-tests/vfirst.c +++ b/auto-generated/gnu-api-tests/vfirst.c @@ -3,59 +3,59 @@ #include -int test_vfirst_m_b1(vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1(vbool1_t vs2, size_t vl) { return __riscv_vfirst_m_b1(vs2, vl); } -int test_vfirst_m_b2(vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vfirst_m_b2(vs2, vl); } -int test_vfirst_m_b4(vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vfirst_m_b4(vs2, vl); } -int test_vfirst_m_b8(vbool8_t vs2, size_t vl) { +long test_vfirst_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vfirst_m_b8(vs2, vl); } -int test_vfirst_m_b16(vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vfirst_m_b16(vs2, vl); } -int test_vfirst_m_b32(vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vfirst_m_b32(vs2, vl); } -int test_vfirst_m_b64(vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vfirst_m_b64(vs2, vl); } -int test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vfirst_m_b1_m(vm, vs2, vl); } -int test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vfirst_m_b2_m(vm, vs2, vl); } -int test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vfirst_m_b4_m(vm, vs2, vl); } -int test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +long 
test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return __riscv_vfirst_m_b8_m(vm, vs2, vl); } -int test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vfirst_m_b16_m(vm, vs2, vl); } -int test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vfirst_m_b32_m(vm, vs2, vl); } -int test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vfirst_m_b64_m(vm, vs2, vl); } /* { dg-final { scan-assembler-times {vseti?vli\s+[a-z0-9]+,\s*[a-z0-9]+,\s*e[0-9]+,\s*mf?[1248],\s*t[au],\s*m[au]\s+vfirst\.[ivxfswum.]+\s+} 14 } } */ diff --git a/auto-generated/gnu-overloaded-tests/vcpop.c b/auto-generated/gnu-overloaded-tests/vcpop.c index c0e42e89a..b6bb5a6fb 100644 --- a/auto-generated/gnu-overloaded-tests/vcpop.c +++ b/auto-generated/gnu-overloaded-tests/vcpop.c @@ -3,59 +3,59 @@ #include -unsigned int test_vcpop_m_b1(vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1(vbool1_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b2(vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b4(vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b8(vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b16(vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b32(vbool32_t vs2, size_t vl) { +unsigned long test_vcpop_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b64(vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +unsigned long test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } /* { dg-final { scan-assembler-times {vseti?vli\s+[a-z0-9]+,\s*[a-z0-9]+,\s*e[0-9]+,\s*mf?[1248],\s*t[au],\s*m[au]\s+vcpop\.[ivxfswum.]+\s+} 14 } } */ 
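As a usage-level complement to these overloaded tests, here is a sketch of how the overloaded form reads in application code. It is not part of the auto-generated sources; it assumes rv64gcv, and the helper name `count_above` is invented. `__riscv_vcpop` resolves on the mask type and yields `unsigned long` under the updated signatures.

```c
// Editorial sketch, not part of the auto-generated tests; assumes rv64gcv.
#include <stddef.h>
#include <stdint.h>
#include <riscv_vector.h>

// Counts the elements of src that are strictly greater than bound.
unsigned long count_above(const int32_t *src, size_t n, int32_t bound) {
  unsigned long total = 0;
  for (size_t i = 0; i < n;) {
    size_t vl = __riscv_vsetvl_e32m4(n - i);
    vint32m4_t v = __riscv_vle32_v_i32m4(src + i, vl);
    vbool8_t gt = __riscv_vmsgt_vx_i32m4_b8(v, bound, vl);  // v > bound
    total += __riscv_vcpop(gt, vl);  // overload picks the b8 variant
    i += vl;
  }
  return total;
}
```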
diff --git a/auto-generated/gnu-overloaded-tests/vfirst.c b/auto-generated/gnu-overloaded-tests/vfirst.c index ea2a7b730..e215b4fcf 100644 --- a/auto-generated/gnu-overloaded-tests/vfirst.c +++ b/auto-generated/gnu-overloaded-tests/vfirst.c @@ -3,59 +3,59 @@ #include -int test_vfirst_m_b1(vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1(vbool1_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b2(vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b4(vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b8(vbool8_t vs2, size_t vl) { +long test_vfirst_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b16(vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b32(vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b64(vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +long test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } /* { dg-final { scan-assembler-times {vseti?vli\s+[a-z0-9]+,\s*[a-z0-9]+,\s*e[0-9]+,\s*mf?[1248],\s*t[au],\s*m[au]\s+vfirst\.[ivxfswum.]+\s+} 14 } } */ diff --git a/auto-generated/llvm-api-tests/vcpop.c b/auto-generated/llvm-api-tests/vcpop.c index 215a2443e..1c0ff63ca 100644 --- a/auto-generated/llvm-api-tests/vcpop.c +++ b/auto-generated/llvm-api-tests/vcpop.c @@ -6,58 +6,58 @@ #include -unsigned int test_vcpop_m_b1(vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1(vbool1_t vs2, size_t vl) { return __riscv_vcpop_m_b1(vs2, vl); } -unsigned int test_vcpop_m_b2(vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vcpop_m_b2(vs2, vl); } -unsigned int test_vcpop_m_b4(vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vcpop_m_b4(vs2, vl); } -unsigned int test_vcpop_m_b8(vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vcpop_m_b8(vs2, vl); } -unsigned int test_vcpop_m_b16(vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vcpop_m_b16(vs2, 
vl); } -unsigned int test_vcpop_m_b32(vbool32_t vs2, size_t vl) { +unsigned long test_vcpop_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vcpop_m_b32(vs2, vl); } -unsigned int test_vcpop_m_b64(vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vcpop_m_b64(vs2, vl); } -unsigned int test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vcpop_m_b1_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vcpop_m_b2_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vcpop_m_b4_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return __riscv_vcpop_m_b8_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vcpop_m_b16_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +unsigned long test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vcpop_m_b32_m(vm, vs2, vl); } -unsigned int test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vcpop_m_b64_m(vm, vs2, vl); } diff --git a/auto-generated/llvm-api-tests/vfirst.c b/auto-generated/llvm-api-tests/vfirst.c index e0c651c03..770afdb2d 100644 --- a/auto-generated/llvm-api-tests/vfirst.c +++ b/auto-generated/llvm-api-tests/vfirst.c @@ -5,58 +5,58 @@ #include -int test_vfirst_m_b1(vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1(vbool1_t vs2, size_t vl) { return __riscv_vfirst_m_b1(vs2, vl); } -int test_vfirst_m_b2(vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vfirst_m_b2(vs2, vl); } -int test_vfirst_m_b4(vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vfirst_m_b4(vs2, vl); } -int test_vfirst_m_b8(vbool8_t vs2, size_t vl) { +long test_vfirst_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vfirst_m_b8(vs2, vl); } -int test_vfirst_m_b16(vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vfirst_m_b16(vs2, vl); } -int test_vfirst_m_b32(vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vfirst_m_b32(vs2, vl); } -int test_vfirst_m_b64(vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vfirst_m_b64(vs2, vl); } -int test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vfirst_m_b1_m(vm, vs2, vl); } -int test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vfirst_m_b2_m(vm, vs2, vl); } -int test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vfirst_m_b4_m(vm, vs2, vl); } -int test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +long test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return 
__riscv_vfirst_m_b8_m(vm, vs2, vl); } -int test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vfirst_m_b16_m(vm, vs2, vl); } -int test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vfirst_m_b32_m(vm, vs2, vl); } -int test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vfirst_m_b64_m(vm, vs2, vl); } diff --git a/auto-generated/llvm-overloaded-tests/vcpop.c b/auto-generated/llvm-overloaded-tests/vcpop.c index 1b3ed7bec..1735b9838 100644 --- a/auto-generated/llvm-overloaded-tests/vcpop.c +++ b/auto-generated/llvm-overloaded-tests/vcpop.c @@ -6,58 +6,58 @@ #include -unsigned int test_vcpop_m_b1(vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1(vbool1_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b2(vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b4(vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b8(vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b16(vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b32(vbool32_t vs2, size_t vl) { +unsigned long test_vcpop_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b64(vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +unsigned long test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } diff --git a/auto-generated/llvm-overloaded-tests/vfirst.c b/auto-generated/llvm-overloaded-tests/vfirst.c index 481e07716..c8f2b1b12 100644 --- a/auto-generated/llvm-overloaded-tests/vfirst.c +++ b/auto-generated/llvm-overloaded-tests/vfirst.c @@ -5,58 +5,58 @@ #include -int test_vfirst_m_b1(vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1(vbool1_t vs2, size_t 
vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b2(vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b4(vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b8(vbool8_t vs2, size_t vl) { +long test_vfirst_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b16(vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b32(vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b64(vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +long test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } diff --git a/auto-generated/overloaded-api-testing/vcpop.c b/auto-generated/overloaded-api-testing/vcpop.c index f52439d1f..aeeeeb878 100644 --- a/auto-generated/overloaded-api-testing/vcpop.c +++ b/auto-generated/overloaded-api-testing/vcpop.c @@ -1,58 +1,58 @@ #include #include -unsigned int test_vcpop_m_b1(vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1(vbool1_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b2(vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b4(vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b8(vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b16(vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b32(vbool32_t vs2, size_t vl) { +unsigned long test_vcpop_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b64(vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vcpop(vs2, vl); } -unsigned int test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +unsigned long test_vcpop_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } 
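The masked overload exercised above computes the population count of `vs2` restricted to positions where `vm` is active, i.e. popcount(vm & vs2). A hedged sketch of how that composes in application code follows (not part of the auto-generated tests; rv64gcv assumed; `count_nonneg_odd` is an invented name).

```c
// Editorial sketch, not part of the auto-generated tests; assumes rv64gcv.
#include <stddef.h>
#include <stdint.h>
#include <riscv_vector.h>

// Counts elements that are both non-negative and odd: the first predicate
// serves as the mask operand vm, the second as the source mask vs2, so
// __riscv_vcpop(vm, vs2, vl) yields popcount(vm & vs2).
unsigned long count_nonneg_odd(const int64_t *src, size_t n) {
  unsigned long total = 0;
  for (size_t i = 0; i < n;) {
    size_t vl = __riscv_vsetvl_e64m8(n - i);
    vint64m8_t v = __riscv_vle64_v_i64m8(src + i, vl);
    vbool8_t nonneg = __riscv_vmsge_vx_i64m8_b8(v, 0, vl);  // v >= 0
    vint64m8_t low = __riscv_vand_vx_i64m8(v, 1, vl);       // v & 1
    vbool8_t odd = __riscv_vmsne_vx_i64m8_b8(low, 0, vl);   // (v & 1) != 0
    total += __riscv_vcpop(nonneg, odd, vl);
    i += vl;
  }
  return total;
}
```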
-unsigned int test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +unsigned long test_vcpop_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +unsigned long test_vcpop_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +unsigned long test_vcpop_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +unsigned long test_vcpop_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +unsigned long test_vcpop_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } -unsigned int test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +unsigned long test_vcpop_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vcpop(vm, vs2, vl); } diff --git a/auto-generated/overloaded-api-testing/vfirst.c b/auto-generated/overloaded-api-testing/vfirst.c index 72fdc0a2a..aa45333f4 100644 --- a/auto-generated/overloaded-api-testing/vfirst.c +++ b/auto-generated/overloaded-api-testing/vfirst.c @@ -1,58 +1,58 @@ #include #include -int test_vfirst_m_b1(vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1(vbool1_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b2(vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2(vbool2_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b4(vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4(vbool4_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b8(vbool8_t vs2, size_t vl) { +long test_vfirst_m_b8(vbool8_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b16(vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16(vbool16_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b32(vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32(vbool32_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b64(vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64(vbool64_t vs2, size_t vl) { return __riscv_vfirst(vs2, vl); } -int test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { +long test_vfirst_m_b1_m(vbool1_t vm, vbool1_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { +long test_vfirst_m_b2_m(vbool2_t vm, vbool2_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { +long test_vfirst_m_b4_m(vbool4_t vm, vbool4_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { +long test_vfirst_m_b8_m(vbool8_t vm, vbool8_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { +long test_vfirst_m_b16_m(vbool16_t vm, vbool16_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { +long test_vfirst_m_b32_m(vbool32_t vm, vbool32_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } -int test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { +long test_vfirst_m_b64_m(vbool64_t vm, vbool64_t vs2, size_t vl) { return __riscv_vfirst(vm, vs2, vl); } From 
e343feb0597a771bc2fb529f4717cf042194b14b Mon Sep 17 00:00:00 2001 From: eopXD Date: Thu, 1 Jun 2023 02:58:55 -0700 Subject: [PATCH 040/151] [Makefile] Add interface for providing more flags into the generator Signed-off-by: eop Chen --- rvv-intrinsic-generator/Makefile | 57 ++++++++++++-------------------- 1 file changed, 22 insertions(+), 35 deletions(-) diff --git a/rvv-intrinsic-generator/Makefile b/rvv-intrinsic-generator/Makefile index 5044f51ff..31de080e5 100644 --- a/rvv-intrinsic-generator/Makefile +++ b/rvv-intrinsic-generator/Makefile @@ -55,6 +55,8 @@ MAIN := rvv_intrinsic_gen.main BF16_INST := $(RVV_INTRINSIC_GEN_PATH)/bfloat16_inst.py # Script to clang-format the auto-generated adoc files CLANG_FORMAT_ADOC = clang_format_autogen +# Extra flags specified when calling rvv_intrinsic_gen.main +EXTRA_FLAG := # Main output directory is default to auto-generated OUTPUT_DIR := ../auto-generated # Derives output directory for each set of intrinsics @@ -164,50 +166,38 @@ gen-gnu-test: gnu-overloaded-test gnu-non-overloaded-test # Generate all-in-one document for non-overloaded intrinsics non-overloaded-doc: - $(call gen_doc,$(DIR),intrinsic_funcs.adoc,$@,) - $(call gen_doc,$(POLICY_DIR),intrinsic_funcs.adoc,$@,--has-policy) - $(call clang_format_adoc, --file, $(DIR)/intrinsic_funcs.adoc) - $(call clang_format_adoc, --file, $(POLICY_DIR)/intrinsic_funcs.adoc) + $(call gen_doc,$(DIR),intrinsic_funcs.md,$@,$(EXTRA_FLAG)) + $(call gen_doc,$(POLICY_DIR),intrinsic_funcs.md,$@,--has-policy $(EXTRA_FLAG)) # Generate grouped documents for non-overloaded intrinsics non-overloaded-docs: - $(call gen_docs,$(DIR),intrinsic_funcs,$@,) - $(call gen_docs,$(POLICY_DIR),intrinsic_funcs,$@,--has-policy) - $(call clang_format_adoc, --folder, $(DIR)/intrinsic_funcs) - $(call clang_format_adoc, --folder, $(POLICY_DIR)/intrinsic_funcs) + $(call gen_docs,$(DIR),intrinsic_funcs,$@,$(EXTRA_FLAG)) + $(call gen_docs,$(POLICY_DIR),intrinsic_funcs,$@,--has-policy $(EXTRA_FLAG)) # Generate all-in-one document for overloaded intrinsics overloaded-doc: - $(call gen_doc,$(DIR),overloaded_intrinsic_funcs.adoc,$@,) - $(call gen_doc,$(POLICY_DIR),overloaded_intrinsic_funcs.adoc,$@,--has-policy) - $(call clang_format_adoc, --file, $(DIR)/overloaded_intrinsic_funcs.adoc) - $(call clang_format_adoc, --file, $(POLICY_DIR)/overloaded_intrinsic_funcs.adoc) + $(call gen_doc,$(DIR),overloaded_intrinsic_funcs.md,$@,$(EXTRA_FLAG)) + $(call gen_doc,$(POLICY_DIR),overloaded_intrinsic_funcs.md,$@,--has-policy $(EXTRA_FLAG)) # Generate grouped documents for overloaded intrinsics overloaded-docs: - $(call gen_docs,$(DIR),overloaded_intrinsic_funcs,$@,) - $(call gen_docs,$(POLICY_DIR),overloaded_intrinsic_funcs,$@,--has-policy) - $(call clang_format_adoc, --folder, $(DIR)/overloaded_intrinsic_funcs) - $(call clang_format_adoc, --folder, $(POLICY_DIR)/overloaded_intrinsic_funcs) + $(call gen_docs,$(DIR),overloaded_intrinsic_funcs,$@,$(EXTRA_FLAG)) + $(call gen_docs,$(POLICY_DIR),overloaded_intrinsic_funcs,$@,--has-policy $(EXTRA_FLAG)) # Generate non-overloaded intrinsic testing C source files non-overloaded-test: - $(call gen_tests,$(DIR)/api-testing,non-overloaded-test,) - $(call gen_tests,$(POLICY_DIR)/api-testing,non-overloaded-test,--has-policy) - clang-format -i $(DIR)/api-testing/* - clang-format -i $(POLICY_DIR)/api-testing/* + $(call gen_tests,$(DIR)/api-testing,non-overloaded-test,$(EXTRA_FLAG)) + $(call gen_tests,$(POLICY_DIR)/api-testing,non-overloaded-test,--has-policy $(EXTRA_FLAG)) # Generate overloaded intrinsic testing C 
source files overloaded-test: - $(call gen_tests,$(DIR)/overloaded-api-testing,overloaded-test,) - $(call gen_tests,$(POLICY_DIR)/overloaded-api-testing,overloaded-test,--has-policy) - clang-format -i $(DIR)/overloaded-api-testing/* - clang-format -i $(POLICY_DIR)/overloaded-api-testing/* + $(call gen_tests,$(DIR)/overloaded-api-testing,overloaded-test,$(EXTRA_FLAG)) + $(call gen_tests,$(POLICY_DIR)/overloaded-api-testing,overloaded-test,--has-policy $(EXTRA_FLAG)) # Generate non-overloaded intrinsic testing C source files llvm-non-overloaded-test: - $(call gen_tests,$(DIR)/llvm-api-tests,non-overloaded-test,--toolchain-type llvm) - $(call gen_tests,$(POLICY_DIR)/llvm-api-tests,non-overloaded-test,--toolchain-type llvm --has-policy) + $(call gen_tests,$(DIR)/llvm-api-tests,non-overloaded-test,--toolchain-type llvm $(EXTRA_FLAG)) + $(call gen_tests,$(POLICY_DIR)/llvm-api-tests,non-overloaded-test,--toolchain-type llvm --has-policy $(EXTRA_FLAG)) $(call replace_float, $(DIR)/llvm-api-tests) $(call replace_float, $(POLICY_DIR)/llvm-api-tests) clang-format -i $(DIR)/llvm-api-tests/* @@ -215,8 +205,8 @@ llvm-non-overloaded-test: # Generate overloaded intrinsic testing C source files llvm-overloaded-test: - $(call gen_tests,$(DIR)/llvm-overloaded-tests,overloaded-test,--toolchain-type llvm) - $(call gen_tests,$(POLICY_DIR)/llvm-overloaded-tests,overloaded-test,--toolchain-type llvm --has-policy) + $(call gen_tests,$(DIR)/llvm-overloaded-tests,overloaded-test,--toolchain-type llvm $(EXTRA_FLAG)) + $(call gen_tests,$(POLICY_DIR)/llvm-overloaded-tests,overloaded-test,--toolchain-type llvm --has-policy $(EXTRA_FLAG)) $(call replace_float, $(DIR)/llvm-overloaded-tests) $(call replace_float, $(POLICY_DIR)/llvm-overloaded-tests) clang-format -i $(DIR)/llvm-overloaded-tests/* @@ -292,18 +282,15 @@ bf16-llvm-overloaded-test: # Generate the adaptor header for v0.10 non-policy-compatible-header: - $(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,non-policy.h,non-overloaded-compatible-header,) + $(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,non-policy.h,non-overloaded-compatible-header,$(EXTRA_FLAG)) policy-compatible-header: - $(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,policy.h,non-overloaded-compatible-header,--has-policy) - clang-format -i $(DIR)/rvv-v0p10-compatible-headers/* + $(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,policy.h,non-overloaded-compatible-header,--has-policy $(EXTRA_FLAG)) non-policy-overloaded-compatible-header: - $(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,overloaded-non-policy.h,overloaded-compatible-header,) - clang-format -i $(DIR)/rvv-v0p10-compatible-headers/* + $(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,overloaded-non-policy.h,overloaded-compatible-header,$(EXTRA_FLAG)) policy-overloaded-compatible-header: - $(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,overloaded-policy.h,overloaded-compatible-header,--has-policy) - clang-format -i $(DIR)/rvv-v0p10-compatible-headers/* + $(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,overloaded-policy.h,overloaded-compatible-header,--has-policy $(EXTRA_FLAG)) ############################################################################### From f3f34d32ff3051a1d7cb75181c6fe72e63516d25 Mon Sep 17 00:00:00 2001 From: eopXD Date: Thu, 1 Jun 2023 03:36:39 -0700 Subject: [PATCH 041/151] [vector-crypto] Define intrinsics for the Zvbb extension Signed-off-by: eop Chen --- .../rvv_intrinsic_gen/constants.py | 1 + .../rvv_intrinsic_gen/main.py | 18 ++- .../templates/vector_crypto_template.py | 104 
++++++++++++++++++ .../rvv_intrinsic_gen/vector_crypto_inst.py | 65 +++++++++++ 4 files changed, 186 insertions(+), 2 deletions(-) create mode 100644 rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py create mode 100644 rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/constants.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/constants.py index e2ae21964..5d3f20c6c 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/constants.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/constants.py @@ -28,6 +28,7 @@ NSEWS = [16, 32, 64] TYPES = ["float", "int", "uint"] ITYPES = ["int", "uint"] +UITYPE = ["uint"] FTYPES = ["float"] MTYPES = ["bool"] MLENS = [1, 2, 4, 8, 16, 32, 64] diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/main.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/main.py index f9b84daf1..fe0205d1b 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/main.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/main.py @@ -24,6 +24,7 @@ import importlib.util import inspect import inst +import vector_crypto_inst import generator from enums import ToolChainType @@ -104,6 +105,7 @@ class GenTypes: parser.add_argument("--skip-default-inst", default=False, action="store_true") parser.add_argument("--vendor-generator-script") parser.add_argument("--vendor-generator-name") + parser.add_argument("--gen-vector-crypto", default=False, action="store_true") parser.add_argument("--out") args = parser.parse_args() @@ -137,6 +139,12 @@ class GenTypes: GenTypes.NON_OVERLOADED_COMPATIBLE_HEADER, GenTypes.OVERLOADED_COMPATIBLE_HEADER ]: + # Vector crypto does not need compatible header because we don't have + # them before v0.10 + if mode in (GenTypes.NON_OVERLOADED_COMPATIBLE_HEADER, + GenTypes.OVERLOADED_COMPATIBLE_HEADER) and\ + args.gen_vector_crypto: + return with open(args.out, "w", encoding="utf-8") as f: if mode == GenTypes.NON_OVERLOADED_DOC: g = generator.DocGenerator(f, True, args.has_policy) @@ -150,7 +158,10 @@ class GenTypes: assert False if not args.skip_default_inst: - inst.gen(g) + if args.gen_vector_crypto: + vector_crypto_inst.gen(g) + else: + inst.gen(g) else: print("Skipping default RVV instructions (--skip-default-inst)") if vendor_gen is not None: @@ -173,7 +184,10 @@ class GenTypes: else: assert False if not args.skip_default_inst: - inst.gen(g) + if args.gen_vector_crypto: + vector_crypto_inst.gen(g) + else: + inst.gen(g) else: print("Skipping default RVV instructions (--skip-default-inst)") if vendor_gen is not None: diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py new file mode 100644 index 000000000..9574bbbb8 --- /dev/null +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py @@ -0,0 +1,104 @@ +""" +Template for rendering vector crypto intrinsics. +Current version is for v20230531. 
+https://github.com/riscv/riscv-crypto/blob/v20230531/doc/vector/riscv-crypto-spec-vector.adoc +""" + +from utils import prod +from utils import TypeHelper +from enums import InstInfo +from enums import InstType +from enums import ExtraAttr + +operand_mnemonic_dict = {} +# Zvbb: Vector Bit-manipulation used in Cryptography +operand_mnemonic_dict["vandn"] = ["vv", "vx"] +operand_mnemonic_dict["vbrev"] = ["v"] +operand_mnemonic_dict["vbrev8"] = ["v"] +operand_mnemonic_dict["vrev8"] = ["v"] +operand_mnemonic_dict["vclz"] = ["v"] +operand_mnemonic_dict["vctz"] = ["v"] +operand_mnemonic_dict["vcpop"] = ["v"] +operand_mnemonic_dict["vrol"] = ["vv", "vx"] +operand_mnemonic_dict["vror"] = ["vv", "vx"] # saving the `vi` variant +operand_mnemonic_dict["vwsll"] = ["vv", "vx"] # saving the `vi` variant + + +def has_vs1_input(name): + has_vs1_input_inst_set = {"vandn", "vrol", "vror", "vwsll"} + + return name in has_vs1_input_inst_set + + +def has_rs1_input(name): + has_rs1_input_inst_set = {"vandn", "vrol", "vror", "vwsll"} + + return name in has_rs1_input_inst_set + + +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): + #pylint: disable=invalid-name + # FIXME: Renaming 'G' to 'g' all in once later. + G.inst_group_prologue() + + for decorator in decorator_list: + decorator.write_text_header(G) + for args in prod(OP=op_list, TYPE=type_list, SEW=sew_list, LMUL=lmul_list): + op = args["OP"] + for operand_mnemonic in operand_mnemonic_dict[op]: + if operand_mnemonic in ("vv", "vs"): + if op == "vwsll": + inst_info = InstInfo.get(args, decorator, InstType.WVV, + ExtraAttr.NO_ATTR) + else: + inst_info = InstInfo.get(args, decorator, InstType.VV, + ExtraAttr.NO_ATTR) + elif operand_mnemonic == "vx": + if op == "vwsll": + inst_info = InstInfo.get(args, decorator, InstType.WVX, + ExtraAttr.NO_ATTR) + else: + inst_info = InstInfo.get(args, decorator, InstType.VX, + ExtraAttr.NO_ATTR) + elif operand_mnemonic == "v": + inst_info = InstInfo.get(args, decorator, InstType.V, + ExtraAttr.NO_ATTR) + else: + assert False, "Unreachable, unrecognized mnemonic" + + args["MNEMONIC"] = operand_mnemonic + type_helper = TypeHelper(**args) + kwargs = {} + if op == "vwsll": + kwargs["return_type"] = type_helper.wv + else: + kwargs["return_type"] = type_helper.v + kwargs = {**kwargs, **decorator.mask_args(type_helper.m, type_helper.v)} + if op == "vwsll": + kwargs = {**kwargs, **decorator.tu_dest_args(type_helper.wv)} + else: + kwargs = {**kwargs, **decorator.tu_dest_args(type_helper.v)} + + kwargs["vs2"] = type_helper.v + + if operand_mnemonic == "vv" and has_vs1_input(op): + kwargs["vs1"] = type_helper.v + if operand_mnemonic == "vx" and has_rs1_input(op): + if op in ["vwsll", "vrol", "vror"]: + kwargs["rs1"] = type_helper.size_t + else: + kwargs["rs1"] = type_helper.s + + kwargs["vl"] = type_helper.size_t + + if op == "vwsll": + args["SEW"] = args["WSEW"] + args["LMUL"] = args["WLMUL"] + + G.func( + inst_info, + name="{OP}_{MNEMONIC}_{TYPE}{SEW}m{LMUL}".format_map(args) + + decorator.func_suffix, + **kwargs) + + G.inst_group_epilogue() diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py new file mode 100644 index 000000000..5cfe60263 --- /dev/null +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py @@ -0,0 +1,65 @@ +""" +Declares the vector crypto intrinsics through the vector crypto template. 
+""" + +from intrinsic_decorator import IntrinsicDecorators +from templates import vector_crypto_template +from constants import LMULS, WLMULS, SEWS, WSEWS, UITYPE + + +def gen(g): + decorators = IntrinsicDecorators(g.has_tail_policy) + + g.start_group("Zvbb - Vector Bit-manipulation used in Cryptography") + + g.function_group( + vector_crypto_template, + "Vector Bit-manipulation used in Cryptography - Bitwise And-Not", + "", # FIXME: We probably have a separate document for vector-crypto + ["vandn"], + UITYPE, + SEWS, + LMULS, + decorators.has_masking_maskedoff_policy) + + g.function_group( + vector_crypto_template, + "Vector Bit-manipulation used in Cryptography - Reverse Bits", + "", # FIXME: We probably have a separate document for vector-crypto + ["vbrev", "vbrev8", "vrev8"], + UITYPE, + SEWS, + LMULS, + decorators.has_masking_maskedoff_policy) + + g.function_group( + vector_crypto_template, + "Vector Bit-manipulation used in Cryptography - Count Bits", + "", # FIXME: We probably have a separate document for vector-crypto + ["vclz", "vctz"], + UITYPE, + SEWS, + LMULS, + decorators.has_masking_no_maskedoff) + + g.function_group( + vector_crypto_template, + "Vector Bit-manipulation used in Cryptography - Rotate", + "", # FIXME: We probably have a separate document for vector-crypto + ["vrol", "vror"], + UITYPE, + SEWS, + LMULS, + decorators.has_masking_maskedoff_policy) + + g.function_group( + vector_crypto_template, + "Vector Bit-manipulation used in Cryptography - Shift", + "", # FIXME: We probably have a separate document for vector-crypto + ["vwsll"], + UITYPE, + WSEWS, + WLMULS, + decorators.has_masking_maskedoff_policy) + + #################################################################### From 7bd20f7bf616f167cb347af2ce45460481cd85fe Mon Sep 17 00:00:00 2001 From: eopXD Date: Mon, 17 Jul 2023 10:59:47 -0700 Subject: [PATCH 042/151] [Auto-gen] Update documents under ../auto-generated/vector-crypto. 
(make git-commit-autogen-doc) --- .../vector-crypto/intrinsic_funcs.md | 581 +++++++++++ ...r_bit-manipulation_used_in_cryptography.md | 581 +++++++++++ .../overloaded_intrinsic_funcs.md | 581 +++++++++++ ...r_bit-manipulation_used_in_cryptography.md | 581 +++++++++++ .../policy_funcs/intrinsic_funcs.md | 953 ++++++++++++++++++ ...r_bit-manipulation_used_in_cryptography.md | 953 ++++++++++++++++++ .../overloaded_intrinsic_funcs.md | 953 ++++++++++++++++++ ...r_bit-manipulation_used_in_cryptography.md | 953 ++++++++++++++++++ 8 files changed, 6136 insertions(+) create mode 100644 auto-generated/vector-crypto/intrinsic_funcs.md create mode 100644 auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md create mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs.md create mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md new file mode 100644 index 000000000..80f99bfc5 --- /dev/null +++ b/auto-generated/vector-crypto/intrinsic_funcs.md @@ -0,0 +1,581 @@ + +## Zvbb - Vector Bit-manipulation used in Cryptography: + +### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vandn_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8 (vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4 (vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2 (vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1 (vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2 (vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4 (vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8 (vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4 (vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2 (vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1 (vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2 
(vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4 (vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8 (vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2 (vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1 (vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2 (vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4 (vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8 (vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t 
__riscv_vandn_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vbrev_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4 (vuint16m4_t vs2, size_t vl); 
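+
+// ---- Editor's note: illustrative usage sketch, not generator output ----
+// A strip-mined helper that bit-reverses every 16-bit element with vbrev.
+// The helper name and buffers are hypothetical; <riscv_vector.h> and the
+// Zvbb extension are assumed.
+static void bitrev16_buf(uint16_t *dst, const uint16_t *src, size_t n) {
+  for (size_t vl; n > 0; n -= vl, src += vl, dst += vl) {
+    vl = __riscv_vsetvl_e16m4(n);                    // elements this pass
+    vuint16m4_t v = __riscv_vle16_v_u16m4(src, vl);  // load
+    v = __riscv_vbrev_v_u16m4(v, vl);                // reverse bits per element
+    __riscv_vse16_v_u16m4(dst, v, vl);               // store back
+  }
+}
+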
+vuint16m8_t __riscv_vbrev_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8 (vuint64m8_t vs2, size_t vl); +// masked functions 
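+
+// ---- Editor's note: illustrative sketch, not generator output ----
+// The _m variants below apply the operation only to active elements
+// (mask bit set). A minimal wrapper that bit-reverses selected 32-bit
+// elements; the helper name is hypothetical and the mask m is assumed
+// to be produced elsewhere (e.g. by a compare intrinsic):
+static inline vuint32m1_t bitrev_where(vbool32_t m, vuint32m1_t v, size_t vl) {
+  return __riscv_vbrev_v_u32m1_m(m, v, vl);  // bit-reverse only where m is set
+}
+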
+vuint8mf8_t __riscv_vbrev_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, 
size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Count Bits](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vclz_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t 
__riscv_vctz_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8 (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t 
vl); +vuint8m8_t __riscv_vctz_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Rotate](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vrol_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); 
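+
+// ---- Editor's note: illustrative sketch, not generator output ----
+// vrol/vror rotate each element left/right; the rotate amount (vs1 or
+// rs1) is taken modulo SEW. A fixed left-rotate by 7, as used in ARX
+// ciphers such as ChaCha (hypothetical helper name, <riscv_vector.h>
+// assumed):
+static inline vuint32m1_t rotl7_u32m1(vuint32m1_t v, size_t vl) {
+  return __riscv_vrol_vx_u32m1(v, 7, vl);  // (v << 7) | (v >> 25) per element
+}
+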
+vuint32mf2_t __riscv_vrol_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2 (vuint32mf2_t vs2, size_t 
rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t 
__riscv_vrol_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, 
size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Shift](): + +**Prototypes:** +``` C +vuint16mf4_t __riscv_vwsll_vv_u16mf4 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4 (vuint8m2_t vs2, 
vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t 
__riscv_vwsll_vv_u32m4_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md new file mode 100644 index 000000000..80f99bfc5 --- /dev/null +++ b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md @@ -0,0 +1,581 @@ + +## Zvbb - Vector Bit-manipulation used in Cryptography: + +### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vandn_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8 (vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4 (vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2 (vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1 (vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2 (vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4 (vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8 (vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4 (vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2 (vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1 (vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2 (vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4 
(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4 (vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8 (vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2 (vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1 (vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2 (vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4 (vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8 (vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl); 
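+
+// ---- Editor's note: illustrative sketch, not generator output ----
+// vandn computes vs2 & ~vs1 (vector) or vs2 & ~rs1 (scalar), i.e. it
+// clears the bits selected by the second operand. Clearing the low
+// nibble of every byte (hypothetical helper name, <riscv_vector.h>
+// assumed):
+static inline vuint8m1_t clear_low_nibble(vuint8m1_t v, size_t vl) {
+  return __riscv_vandn_vx_u8m1(v, 0x0f, vl);  // v & ~0x0f per element
+}
+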
+vuint16m1_t __riscv_vandn_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vbrev_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t 
__riscv_vbrev_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8 (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); 
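+
+// ---- Editor's note: illustrative sketch, not generator output ----
+// The three reverses differ in granularity: vbrev reverses all bits of
+// an element, vbrev8 reverses the bits within each byte, and vrev8
+// reverses the byte order of each element (an element-wise byte swap).
+// A masked 64-bit byte swap (hypothetical helper name):
+static inline vuint64m1_t bswap64_where(vbool64_t m, vuint64m1_t v, size_t vl) {
+  return __riscv_vrev8_v_u64m1_m(m, v, vl);  // swap bytes where m is set
+}
+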
+vuint8mf4_t __riscv_vbrev_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, 
size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Count Bits](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vclz_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1 
(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8 (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); 
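+// Illustrative note, not part of the generated listing: vclz/vctz count
+// leading/trailing zero bits per element, and an all-zero element yields SEW.
+// A minimal sketch, assuming `v` (vuint32m1_t) and `vl` are in scope:
+//   vuint32m1_t lz = __riscv_vclz_v_u32m1(v, vl);   // 32 for zero elements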
+vuint16mf4_t __riscv_vctz_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Rotate](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vrol_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl); 
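+// Illustrative usage sketch, not part of the generated listing: the rotate
+// amount uses only its low log2(SEW) bits, so rotating 32-bit elements by 40
+// behaves like rotating by 8. Assuming `v` and `vl` are in scope:
+//   vuint32m1_t r = __riscv_vrol_vx_u32m1(v, 40, vl);   // same result as rotating by 8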
+vuint32m1_t __riscv_vrol_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1 (vuint32m1_t vs2, 
vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); 
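+// Illustrative note, not part of the generated listing: the `_m` variants
+// operate only where the mask bit is set; inactive elements follow the
+// default (agnostic) masked-off policy of the intrinsics. Assuming `mask`,
+// `v`, and `vl` are in scope:
+//   vuint16m1_t r = __riscv_vrol_vx_u16m1_m(mask, v, 3, vl);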
+vuint16m8_t __riscv_vrol_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, 
vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Shift](): + +**Prototypes:** +``` C +vuint16mf4_t __riscv_vwsll_vv_u16mf4 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4 (vuint16m4_t vs2, size_t rs1, 
size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t 
__riscv_vwsll_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md new file mode 100644 index 000000000..d4d9ea35a --- /dev/null +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md @@ -0,0 +1,581 @@ + +## Zvbb - Vector Bit-manipulation used in Cryptography: + +### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn (vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn (vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn (vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn (vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn (vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn (vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn (vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn (vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn 
(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn (vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn (vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn (vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn (vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn (vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn (vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn (vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn (vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn (vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn (vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn (vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn (vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn (vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn (vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn (vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn (vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn (vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn (vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn 
(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn (vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn (vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn (vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn (vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vbrev (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8 
(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8 (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev (vbool8_t mask, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8 (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8 (vbool32_t mask, vuint8mf4_t 
vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8 (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8 (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8 (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8 (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8 (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8 (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8 (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8 (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8 (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8 (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8 (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8 (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8 (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8 (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8 (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8 (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8 (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8 (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8 (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8 (vbool8_t mask, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8 (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8 (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8 (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8 (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8 (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8 (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8 (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8 (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8 (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8 (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8 (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8 (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8 (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8 (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8 (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8 (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8 (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8 (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8 (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8 (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8 (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8 (vbool8_t mask, vuint64m8_t vs2, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Count Bits](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vclz (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz (vuint8m8_t vs2, 
size_t vl); +vuint16mf4_t __riscv_vclz (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t 
__riscv_vclz (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz (vbool8_t mask, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz (vbool8_t mask, vuint64m8_t vs2, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Rotate](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); 
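+// Illustrative note, not part of the generated listing: the overloaded names
+// resolve on operand types, so one spelling covers every SEW/LMUL pair.
+// Assuming `v` (vuint32m1_t) and `vl` are in scope:
+//   vuint32m1_t r = __riscv_vrol(v, (size_t)5, vl);   // dispatches to __riscv_vrol_vx_u32m1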
+vuint16m8_t __riscv_vrol (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol (vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror 
(vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror (vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol (vbool16_t mask, vuint32m2_t vs2, 
vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); 
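+// --- Usage sketch (illustrative; not part of the generated listing) ---
+// Masked vector-scalar rotate-right using the overload declared above: only
+// the elements selected by `mask` are rotated, and this non-policy variant
+// carries no maskedoff operand, so masked-off elements are agnostic.
+// Assumes <riscv_vector.h> and a Zvbb-enabled toolchain; the helper name
+// and its parameters are hypothetical.
+static inline vuint32m1_t ror_u32_masked (vbool32_t mask, vuint32m1_t v, size_t amount, size_t avl) {
+  size_t vl = __riscv_vsetvl_e32m1 (avl);    // request vl for SEW=32, LMUL=1
+  return __riscv_vror (mask, v, amount, vl); // rotate-right active elements by `amount`
+}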
+vuint32m1_t __riscv_vror (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Shift](): + +**Prototypes:** +``` C +vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, vuint32m4_t vs1, size_t 
vl); +vuint64m8_t __riscv_vwsll (vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md new file mode 100644 index 000000000..d4d9ea35a --- /dev/null +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md @@ -0,0 +1,581 @@ + +## Zvbb - Vector Bit-manipulation used in Cryptography: + +### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); 
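+// --- Usage sketch (illustrative; not part of the generated listing) ---
+// Vector-scalar and-not: under Zvbb, vandn computes vs2 & ~rs1, so passing a
+// scalar bit pattern clears those bits in every element. Assumes
+// <riscv_vector.h> and a Zvbb-enabled toolchain; the helper name and the
+// constant are hypothetical.
+static inline vuint8m1_t clear_low_nibble (vuint8m1_t v, size_t avl) {
+  size_t vl = __riscv_vsetvl_e8m1 (avl);        // request vl for SEW=8, LMUL=1
+  return __riscv_vandn (v, (uint8_t)0x0f, vl);  // keep only the high nibble of each byte
+}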
+vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn (vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn (vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn (vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn (vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn (vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn (vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn (vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn (vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn (vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn (vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn (vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn (vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn (vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn (vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn (vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn (vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn (vbool8_t mask, 
vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn (vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn (vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn (vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn (vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn (vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn (vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn (vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn (vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn (vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn (vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn (vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn (vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn (vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn (vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn (vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vbrev (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev (vuint8mf2_t vs2, 
size_t vl); +vuint8m1_t __riscv_vbrev (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8 (vuint64m4_t vs2, 
size_t vl); +vuint64m8_t __riscv_vrev8 (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev (vbool8_t mask, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8 (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8 (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8 (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8 (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8 (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8 (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8 (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8 (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8 (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8 (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8 (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8 (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8 (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8 (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8 (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8 (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8 (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8 (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8 (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8 (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8 (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8 (vbool8_t mask, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8 (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8 (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8 (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8 (vbool8_t 
mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8 (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8 (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8 (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8 (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8 (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8 (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8 (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8 (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8 (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8 (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8 (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8 (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8 (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8 (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8 (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8 (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8 (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8 (vbool8_t mask, vuint64m8_t vs2, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Count Bits](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vclz (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz (vuint32m2_t vs2, 
size_t vl); +vuint32m4_t __riscv_vctz (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz (vbool8_t mask, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz (vbool64_t mask, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz (vbool32_t mask, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz (vbool16_t mask, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz (vbool8_t mask, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz (vbool4_t mask, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz (vbool2_t mask, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz (vbool1_t mask, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz (vbool64_t mask, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz (vbool32_t mask, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz (vbool16_t mask, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz (vbool8_t mask, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz (vbool4_t mask, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz (vbool2_t mask, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz (vbool64_t mask, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz (vbool32_t mask, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz (vbool16_t mask, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz (vbool8_t mask, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz (vbool4_t mask, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz (vbool64_t mask, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz (vbool32_t mask, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz (vbool16_t mask, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz (vbool8_t mask, vuint64m8_t vs2, size_t vl); +``` + +### [Vector Bit-manipulation used 
in Cryptography - Rotate](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol (vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, 
size_t rs1, size_t vl); +vuint8m1_t __riscv_vror (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror (vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol (vbool4_t mask, vuint8m2_t vs2, size_t 
rs1, size_t vl); +vuint8m4_t __riscv_vrol (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); 
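+// --- Usage sketch (illustrative; not part of the generated listing) ---
+// Masked vector-vector rotate: each active element of `v` is rotated right
+// by the amount held in the corresponding element of `amounts` (only the
+// low log2(SEW) bits of each amount take effect). Assumes <riscv_vector.h>
+// and a Zvbb-enabled toolchain; the helper name and parameters are
+// hypothetical.
+static inline vuint64m1_t ror_per_element (vbool64_t mask, vuint64m1_t v, vuint64m1_t amounts, size_t avl) {
+  size_t vl = __riscv_vsetvl_e64m1 (avl);     // request vl for SEW=64, LMUL=1
+  return __riscv_vror (mask, v, amounts, vl); // per-element masked rotate-right
+}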
+vuint8m1_t __riscv_vror (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Shift](): + +**Prototypes:** +``` C +vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vuint16mf2_t vs2, size_t rs1, 
size_t vl); +vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll (vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t 
vl); +vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md new file mode 100644 index 000000000..f5ef93699 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -0,0 +1,953 @@ + +## Zvbb - Vector Bit-manipulation used in Cryptography: + +### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vandn_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t 
__riscv_vandn_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, 
uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tum (vbool64_t mask, 
vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, 
size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t 
__riscv_vandn_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t 
__riscv_vandn_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vandn_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl);
+vuint64m1_t __riscv_vandn_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vandn_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vandn_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vandn_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vandn_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vandn_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vandn_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vandn_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+```
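+
+As an illustrative sketch only (the helper name is an assumption, and the code presumes a toolchain that provides `<riscv_vector.h>` with the `Zvbb`/`Zvkb` bit-manipulation intrinsics), a masked `vandn` call can clear the low nibble of the active elements while keeping the `maskedoff` values in the inactive and tail elements:
+
+``` C
+#include <riscv_vector.h>
+
+// Illustrative helper (name is an assumption): vandn computes
+// vs2 & ~rs1, so rs1 = 0x0f clears the low four bits of each element.
+// The _tum policy keeps maskedoff values in inactive and tail elements.
+vuint8m1_t clear_low_nibble (vbool8_t mask, vuint8m1_t maskedoff,
+                             vuint8m1_t vs2, size_t vl) {
+  return __riscv_vandn_vx_u8m1_tum (mask, maskedoff, vs2, 0x0f, vl);
+}
+```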
+
+### [Vector Bit-manipulation used in Cryptography - Reverse Bits]():
+
+**Prototypes:**
+``` C
+vuint8mf8_t __riscv_vbrev_v_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vbrev_v_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vbrev_v_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vbrev_v_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vbrev_v_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vbrev_v_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vbrev_v_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vbrev_v_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vbrev_v_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vbrev_v_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vbrev_v_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vbrev_v_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vbrev_v_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vbrev_v_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vbrev_v_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vbrev_v_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vbrev_v_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vbrev_v_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vbrev_v_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vbrev_v_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vbrev_v_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vbrev_v_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vbrev8_v_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vbrev8_v_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vbrev8_v_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vbrev8_v_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vbrev8_v_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vbrev8_v_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vbrev8_v_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vbrev8_v_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vbrev8_v_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vbrev8_v_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vbrev8_v_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vbrev8_v_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vbrev8_v_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vbrev8_v_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vbrev8_v_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vbrev8_v_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vbrev8_v_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vbrev8_v_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vbrev8_v_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vbrev8_v_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vbrev8_v_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vbrev8_v_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vrev8_v_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vrev8_v_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vrev8_v_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vrev8_v_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vrev8_v_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vrev8_v_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vrev8_v_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vrev8_v_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vrev8_v_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vrev8_v_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vrev8_v_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vrev8_v_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vrev8_v_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vrev8_v_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vrev8_v_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vrev8_v_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vrev8_v_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vrev8_v_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vrev8_v_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vrev8_v_u64m2_tu
(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t 
maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tum (vbool32_t mask, 
vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t 
__riscv_vbrev8_v_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tumu (vbool64_t mask, vuint64m1_t 
maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, 
vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, 
size_t vl);
+vuint64m2_t __riscv_vrev8_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vrev8_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vrev8_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
+```
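+
+As an illustrative sketch only (the helper name is an assumption, and the code presumes a toolchain that provides `<riscv_vector.h>` with the `Zvbb` intrinsics), `vrev8` acts as a vector byte swap, which is useful for endianness conversion:
+
+``` C
+#include <riscv_vector.h>
+
+// Illustrative helper (name is an assumption): vrev8 reverses the bytes
+// within each element, here each 32-bit element. The _tu policy keeps
+// tail elements from maskedoff undisturbed.
+vuint32m1_t bswap32 (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_tu (maskedoff, vs2, vl);
+}
+```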
+
+### [Vector Bit-manipulation used in Cryptography - Count Bits]():
+
+These operations do not have policy intrinsic functions.
+
+### [Vector Bit-manipulation used in Cryptography - Rotate]():
+
+**Prototypes:**
+``` C
+vuint8mf8_t __riscv_vrol_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vrol_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vrol_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vrol_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vrol_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vrol_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vrol_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vrol_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vrol_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vrol_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint8m4_t __riscv_vrol_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vrol_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vrol_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vrol_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vrol_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint16mf4_t __riscv_vrol_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vrol_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint16mf2_t __riscv_vrol_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vrol_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vrol_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vrol_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vrol_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vrol_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vrol_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vrol_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vrol_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vrol_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32mf2_t __riscv_vrol_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vrol_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vrol_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vrol_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vrol_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vrol_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vrol_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vrol_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vrol_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vrol_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vrol_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vrol_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vrol_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vrol_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vrol_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vrol_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vrol_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl);
+vuint8mf8_t __riscv_vror_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vror_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vror_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vror_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vror_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vror_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vror_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vror_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vror_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vror_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint8m4_t __riscv_vror_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vror_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vror_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vror_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vror_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint16mf4_t __riscv_vror_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vror_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint16mf2_t __riscv_vror_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t
__riscv_vror_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tum (vbool8_t mask, 
vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, 
vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, 
size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, 
vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, 
vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, 
vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_mu 
(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t 
vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t 
__riscv_vror_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vror_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vror_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vror_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vror_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vror_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vror_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vror_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32mf2_t __riscv_vror_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vror_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vror_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vror_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vror_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vror_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vror_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vror_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vror_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vror_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vror_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vror_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vror_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vror_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vror_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vror_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vror_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl);
+```
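+
+A minimal usage sketch (illustration only, not part of the generated listing; the helper name is hypothetical). With the `_tum` policy, tail elements are taken from `maskedoff`, while inactive (masked-off) elements are mask-agnostic; use `_tumu` to preserve them as well.
+
+``` C
+#include <riscv_vector.h>
+
+// Requires a toolchain with the Zvbb extension enabled
+// (e.g. -march=rv64gcv_zvbb).
+// Hypothetical helper: rotate the active lanes of `v` left by `amt` bits,
+// reusing `v` as maskedoff so tail lanes keep their previous contents.
+static inline vuint32m1_t rol_active_lanes(vbool32_t mask, vuint32m1_t v,
+                                           size_t amt, size_t vl) {
+  return __riscv_vrol_vx_u32m1_tum(mask, v, v, amt, vl);
+}
+```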
+
+### [Vector Bit-manipulation used in Cryptography - Shift]():
+
+**Prototypes:**
+``` C
+vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vv_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vv_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vv_u16m4_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vv_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vv_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vv_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vv_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vv_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vv_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vv_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vv_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vv_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
+// masked functions
+vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
+// masked functions
+vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
+// masked functions
+vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
+```
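+
+A hedged usage sketch (illustration only; the helper name is an assumption): `vwsll` widens, so the destination, and the `maskedoff` merge operand of the `_tu` variant, use twice the element width of `vs2`.
+
+``` C
+#include <riscv_vector.h>
+
+// Hypothetical helper: shift u8 elements left into u16 lanes without
+// losing high bits; `dest` supplies the tail elements (_tu policy).
+static inline vuint16m2_t widen_shift_left(vuint16m2_t dest, vuint8m1_t src,
+                                           size_t shamt, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_tu(dest, src, shamt, vl);
+}
+```
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md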
b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md new file mode 100644 index 000000000..f5ef93699 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md @@ -0,0 +1,953 @@ + +## Zvbb - Vector Bit-manipulation used in Cryptography: + +### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vandn_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, 
vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tum (vbool64_t mask, 
vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t 
vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tumu 
(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); 
+vuint8m8_t __riscv_vandn_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, 
vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vandn_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vandn_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+```
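+
+A hedged usage sketch (illustration only; the helper name is an assumption): per the Zvbb definition, `vandn` computes `vs2` AND NOT `vs1` (or NOT `rs1` for the `.vx` form), so the inverted operand below is the scalar flag mask.
+
+``` C
+#include <riscv_vector.h>
+
+// Hypothetical helper: clear a set of flag bits in every element of `v`;
+// `dest` supplies the tail elements (_tu policy).
+static inline vuint32m1_t clear_flags(vuint32m1_t dest, vuint32m1_t v,
+                                      uint32_t flags, size_t vl) {
+  return __riscv_vandn_vx_u32m1_tu(dest, v, flags, vl);
+}
+```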
vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tum 
(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t 
__riscv_vbrev8_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); 
+vuint8m8_t __riscv_vbrev_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tumu (vbool8_t mask, 
vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t 
__riscv_vbrev_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t 
__riscv_vbrev8_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vbrev8_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vbrev8_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vbrev8_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vbrev8_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vbrev8_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vrev8_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vrev8_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vrev8_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vrev8_v_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vrev8_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vrev8_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vrev8_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vrev8_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vrev8_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vrev8_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vrev8_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vrev8_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vrev8_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vrev8_v_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vrev8_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vrev8_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vrev8_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vrev8_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vrev8_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vrev8_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vrev8_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vrev8_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
+```
+
+### [Vector Bit-manipulation used in Cryptography - Count Bits]():
+These operations do not have Policy Intrinsic Functions.
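+
+For the groups above that do provide policy variants, the following minimal sketch illustrates how one such variant composes in practice: it uses the tail-undisturbed, mask-undisturbed (`tumu`) form of `vrev8` from the prototypes above to byte-swap only selected elements of a buffer. The helper name, the mask condition, and the assumption of a Zvbb-enabled toolchain (e.g. building with `-march=rv64gcv_zvbb`) are illustrative, not part of this specification.
+
+``` C
+#include <riscv_vector.h>
+#include <stddef.h>
+#include <stdint.h>
+
+// Byte-swap the elements of `src` that are below `limit`, writing the
+// results to `dst`. With the `tumu` policy, masked-off elements (and any
+// tail elements past `vl`) keep the values of the `maskedoff` operand,
+// which here is loaded from `dst`.
+void bswap32_below(uint32_t *dst, const uint32_t *src, uint32_t limit,
+                   size_t n) {
+  for (size_t i = 0; i < n;) {
+    size_t vl = __riscv_vsetvl_e32m1(n - i);
+    vuint32m1_t vs2 = __riscv_vle32_v_u32m1(src + i, vl);
+    vuint32m1_t off = __riscv_vle32_v_u32m1(dst + i, vl);
+    // Active elements are those strictly below `limit`.
+    vbool32_t mask = __riscv_vmsltu_vx_u32m1_b32(vs2, limit, vl);
+    // vrev8 reverses the bytes within each element.
+    vuint32m1_t vd = __riscv_vrev8_v_u32m1_tumu(mask, off, vs2, vl);
+    __riscv_vse32_v_u32m1(dst + i, vd, vl);
+    i += vl;
+  }
+}
+```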
+ +### [Vector Bit-manipulation used in Cryptography - Rotate](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vrol_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, 
size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, 
vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, 
vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, 
vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, 
size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, 
size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t 
maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t 
maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_mu (vbool4_t 
mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t 
vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t 
__riscv_vror_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Shift](): + +**Prototypes:** +``` C +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tu (vuint16m4_t 
maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t 
vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t 
mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, 
vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md new file mode 100644 index 000000000..c94663c42 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -0,0 +1,953 @@ + +## Zvbb - Vector Bit-manipulation used in Cryptography: + +### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tu 
(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tu (vuint64m2_t 
maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t 
__riscv_vandn_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, 
size_t vl); +vuint8m8_t __riscv_vandn_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu (vbool8_t 
mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_mu (vbool16_t mask, 
vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vbrev_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t 
vl); +vuint8mf4_t __riscv_vbrev8_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tu (vuint64m8_t 
maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); 
+vuint32mf2_t __riscv_vbrev8_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tumu 
(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tumu (vbool4_t mask, 
vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t 
vs2, size_t vl); +vuint16m1_t __riscv_vbrev_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_mu (vbool64_t mask, vuint8mf8_t 
maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Count Bits](): +These operations do not have policy intrinsic functions.
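+Because the count-bits operations come only in plain and masked (non-policy) forms, there is no `_tu`/`_tum`/`_tumu`/`_mu` spelling to reach for. A minimal sketch of what is available, assuming the overloaded non-policy `__riscv_vclz` from the corresponding non-policy Zvbb listing (the wrapper names below are hypothetical):
+``` C
+#include <riscv_vector.h>
+
+// Sketch only: vclz is one of the count-bits operations, so just the plain
+// overload exists; tail elements follow the vta setting, with no _tu guarantee.
+static inline vuint32m1_t clz_u32m1 (vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz (vs2, vl);
+}
+
+// The masked non-policy overload does exist; inactive elements follow vma,
+// and there is no maskedoff operand to merge results into.
+static inline vuint32m1_t clz_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz (mask, vs2, vl);
+}
+```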
+ +### [Vector Bit-manipulation used in Cryptography - Rotate](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t 
vl); +vuint64m1_t __riscv_vrol_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t 
vs1, size_t vl); +vuint32m1_t __riscv_vror_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tum (vbool16_t mask, 
vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t 
vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tum 
(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu (vbool2_t mask, 
vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tumu (vbool2_t mask, vuint8m4_t maskedoff, 
vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tumu (vbool8_t mask, vuint64m8_t maskedoff, 
vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t 
__riscv_vrol_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t 
vl); +vuint16m1_t __riscv_vror_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Shift](): + +**Prototypes:** +``` C +vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tu 
(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, 
size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint8m4_t 
maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu 
(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md new file mode 100644 index 000000000..c94663c42 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md @@ -0,0 +1,953 @@ + +## Zvbb - Vector Bit-manipulation used in Cryptography: + +### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); 
+vuint8m2_t __riscv_vandn_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked 
functions +vuint8mf8_t __riscv_vandn_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, 
size_t vl); +vuint32m2_t __riscv_vandn_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu (vbool32_t mask, vuint16mf2_t maskedoff, 
vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); 
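+// A hedged usage sketch (hypothetical helper): with `_mu`, active elements
+// receive vs2 & ~rs1 while masked-off elements keep their values from
+// `maskedoff`; the tail follows the tail-agnostic policy.
+static inline vuint8m1_t clear_bits_where(vbool8_t mask, vuint8m1_t old,
+                                          vuint8m1_t v, uint8_t bits, size_t vl) {
+  return __riscv_vandn_mu(mask, old, v, bits, vl);  // v & ~bits where mask set
+}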
+vuint8mf2_t __riscv_vandn_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_mu 
(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): + +**Prototypes:** +``` C +vuint8mf8_t __riscv_vbrev_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tu (vuint16mf4_t maskedoff, 
vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); 
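+// A hedged usage sketch (hypothetical helper): `_tum` reverses the bit order
+// inside each active element; masked-off and tail elements keep the values
+// from `maskedoff`.
+static inline vuint8m1_t bit_reverse_masked(vbool8_t mask, vuint8m1_t old,
+                                            vuint8m1_t v, size_t vl) {
+  return __riscv_vbrev_tum(mask, old, v, vl);  // e.g. 0b00000001 -> 0b10000000
+}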
+vuint8m4_t __riscv_vbrev_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tum (vbool64_t mask, 
vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, 
size_t vl); +vuint16m2_t __riscv_vbrev_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); 
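+// A hedged usage sketch (hypothetical helper): vrev8 swaps the bytes within
+// each element (an endianness conversion for 32-bit words here); `_tumu`
+// keeps both masked-off and tail elements from `maskedoff`.
+static inline vuint32m1_t byteswap_masked(vbool32_t mask, vuint32m1_t old,
+                                          vuint32m1_t words, size_t vl) {
+  return __riscv_vrev8_tumu(mask, old, words, vl);  // 0xAABBCCDD -> 0xDDCCBBAA
+}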
+vuint8mf4_t __riscv_vrev8_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_mu (vbool32_t mask, 
vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_mu (vbool1_t 
mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vrev8_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vrev8_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vrev8_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vrev8_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vrev8_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vrev8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vrev8_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vrev8_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vrev8_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vrev8_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vrev8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vrev8_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vrev8_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vrev8_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vrev8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
+```
+
+### [Vector Bit-manipulation used in Cryptography - Count Bits]():
+These operations do not have Policy Intrinsic Functions.
+
+### [Vector Bit-manipulation used in Cryptography - Rotate]():
+
+**Prototypes:**
+``` C
+vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vrol_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vrol_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vrol_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vrol_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint8m4_t __riscv_vrol_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vrol_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vrol_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vrol_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vrol_tu (vuint16m1_t maskedoff, vuint16m1_t
vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tu (vuint8m4_t maskedoff, 
vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum (vbool32_t mask, 
vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, 
size_t vl); +vuint32m8_t __riscv_vrol_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tum (vbool4_t mask, 
vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu (vbool4_t mask, vuint8m2_t 
maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu (vbool16_t mask, vuint64m4_t maskedoff, 
vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t 
rs1, size_t vl); +vuint32m1_t __riscv_vror_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_mu 
(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, 
vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_mu (vbool4_t mask, vuint32m8_t 
maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +``` + +### [Vector Bit-manipulation used in Cryptography - Shift](): + +**Prototypes:** +``` C +vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tu 
(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); 
+vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, 
vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, 
vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +``` From f3a0d678b040776c1ea266fd004d5989e780351e Mon Sep 17 00:00:00 2001 From: eopXD Date: Mon, 17 Jul 2023 10:59:48 -0700 Subject: [PATCH 043/151] [Auto-gen] Update tests under ../auto-generated/vector-crypto. (make git-commit-autogen-test) --- .../vector-crypto/api-testing/vandn.c | 358 +++++++++ .../vector-crypto/api-testing/vbrev.c | 182 +++++ .../vector-crypto/api-testing/vbrev8.c | 182 +++++ .../vector-crypto/api-testing/vclz.c | 182 +++++ .../vector-crypto/api-testing/vctz.c | 182 +++++ .../vector-crypto/api-testing/vrev8.c | 182 +++++ .../vector-crypto/api-testing/vrol.c | 358 +++++++++ .../vector-crypto/api-testing/vror.c | 358 +++++++++ .../vector-crypto/api-testing/vwsll.c | 246 ++++++ .../vector-crypto/llvm-api-tests/vandn.c | 359 +++++++++ .../vector-crypto/llvm-api-tests/vbrev.c | 183 +++++ .../vector-crypto/llvm-api-tests/vbrev8.c | 183 +++++ .../vector-crypto/llvm-api-tests/vclz.c | 183 +++++ .../vector-crypto/llvm-api-tests/vctz.c | 183 +++++ .../vector-crypto/llvm-api-tests/vrev8.c | 183 +++++ .../vector-crypto/llvm-api-tests/vrol.c | 359 +++++++++ .../vector-crypto/llvm-api-tests/vror.c | 359 +++++++++ .../vector-crypto/llvm-api-tests/vwsll.c | 247 ++++++ .../llvm-overloaded-tests/vandn.c | 359 +++++++++ .../llvm-overloaded-tests/vbrev.c | 183 +++++ .../llvm-overloaded-tests/vbrev8.c | 183 +++++ .../llvm-overloaded-tests/vclz.c | 183 +++++ .../llvm-overloaded-tests/vctz.c | 183 +++++ .../llvm-overloaded-tests/vrev8.c | 183 +++++ .../llvm-overloaded-tests/vrol.c | 359 +++++++++ .../llvm-overloaded-tests/vror.c | 359 +++++++++ .../llvm-overloaded-tests/vwsll.c | 247 ++++++ .../overloaded-api-testing/vandn.c | 358 +++++++++ .../overloaded-api-testing/vbrev.c | 182 +++++ .../overloaded-api-testing/vbrev8.c | 182 +++++ .../overloaded-api-testing/vclz.c | 182 +++++ .../overloaded-api-testing/vctz.c | 182 +++++ .../overloaded-api-testing/vrev8.c | 182 +++++ .../overloaded-api-testing/vrol.c | 358 +++++++++ .../overloaded-api-testing/vror.c | 358 +++++++++ .../overloaded-api-testing/vwsll.c | 246 ++++++ .../policy_funcs/api-testing/vandn.c | 710 +++++++++++++++++ .../policy_funcs/api-testing/vbrev.c | 358 +++++++++ .../policy_funcs/api-testing/vbrev8.c | 358 +++++++++ .../policy_funcs/api-testing/vrev8.c | 358 +++++++++ .../policy_funcs/api-testing/vrol.c | 710 +++++++++++++++++ .../policy_funcs/api-testing/vror.c | 710 +++++++++++++++++ .../policy_funcs/api-testing/vwsll.c | 486 ++++++++++++ .../policy_funcs/llvm-api-tests/vandn.c | 711 ++++++++++++++++++ .../policy_funcs/llvm-api-tests/vbrev.c | 359 +++++++++ .../policy_funcs/llvm-api-tests/vbrev8.c | 359 +++++++++ .../policy_funcs/llvm-api-tests/vrev8.c | 359 +++++++++ .../policy_funcs/llvm-api-tests/vrol.c | 711 ++++++++++++++++++ .../policy_funcs/llvm-api-tests/vror.c | 711 ++++++++++++++++++ .../policy_funcs/llvm-api-tests/vwsll.c | 487 ++++++++++++ .../llvm-overloaded-tests/vandn.c | 711 ++++++++++++++++++ .../llvm-overloaded-tests/vbrev.c | 359 +++++++++ .../llvm-overloaded-tests/vbrev8.c | 359 +++++++++ .../llvm-overloaded-tests/vrev8.c | 359 +++++++++ .../policy_funcs/llvm-overloaded-tests/vrol.c | 711 ++++++++++++++++++ .../policy_funcs/llvm-overloaded-tests/vror.c | 711 ++++++++++++++++++ .../llvm-overloaded-tests/vwsll.c | 487 ++++++++++++ 
.../overloaded-api-testing/vandn.c | 710 +++++++++++++++++ .../overloaded-api-testing/vbrev.c | 358 +++++++++ .../overloaded-api-testing/vbrev8.c | 358 +++++++++ .../overloaded-api-testing/vrev8.c | 358 +++++++++ .../overloaded-api-testing/vrol.c | 710 +++++++++++++++++ .../overloaded-api-testing/vror.c | 710 +++++++++++++++++ .../overloaded-api-testing/vwsll.c | 486 ++++++++++++ 64 files changed, 23712 insertions(+) create mode 100644 auto-generated/vector-crypto/api-testing/vandn.c create mode 100644 auto-generated/vector-crypto/api-testing/vbrev.c create mode 100644 auto-generated/vector-crypto/api-testing/vbrev8.c create mode 100644 auto-generated/vector-crypto/api-testing/vclz.c create mode 100644 auto-generated/vector-crypto/api-testing/vctz.c create mode 100644 auto-generated/vector-crypto/api-testing/vrev8.c create mode 100644 auto-generated/vector-crypto/api-testing/vrol.c create mode 100644 auto-generated/vector-crypto/api-testing/vror.c create mode 100644 auto-generated/vector-crypto/api-testing/vwsll.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vandn.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vbrev.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vbrev8.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vclz.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vctz.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vrev8.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vrol.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vror.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vwsll.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vror.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vandn.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vbrev.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vclz.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vctz.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vrev8.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vrol.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vror.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vwsll.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c create mode 100644 
auto-generated/vector-crypto/policy_funcs/api-testing/vror.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vandn.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev8.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrev8.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrol.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vror.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c diff --git a/auto-generated/vector-crypto/api-testing/vandn.c b/auto-generated/vector-crypto/api-testing/vandn.c new file mode 100644 index 000000000..50ca46138 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vandn.c @@ -0,0 +1,358 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8(vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8(vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4(vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4(vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2(vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2(vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1(vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1(vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2(vs2, vs1, vl); +} + +vuint8m2_t
test_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2(vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4(vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4(vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8(vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8(vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4(vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4(vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2(vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2(vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1(vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1(vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2(vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2(vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4(vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4(vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8(vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8(vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2(vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2(vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1(vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1(vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2(vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2(vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4(vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4(vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8(vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8(vs2, rs1, vl); +} + +vuint64m1_t 
test_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1(vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1(vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2(vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2(vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8(vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_m(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_m(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_m(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_m(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_m(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_m(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_m(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_m(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_m(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_m(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_m(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_m(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_m(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_m(mask, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_m(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_m(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, 
vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_m(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_m(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_m(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_m(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_m(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_m(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_m(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_m(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_m(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_m(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_m(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_m(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_m(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_m(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_m(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_m(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_m(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_m(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_m(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_m(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_m(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_m(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_m(mask, vs2, vs1, vl); +} 
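// Usage sketch: how the masked vandn intrinsic exercised by these tests
// might be applied in application code. A minimal strip-mined loop that
// clears a set of flag bits in every element whose low bit is set, assuming
// a toolchain with RVV intrinsics and the Zvbb/Zvkb extension available;
// clear_flags_u64 and FLAG_MASK are illustrative names, not part of the API.
#include <riscv_vector.h>
#include <stdint.h>
#include <stddef.h>

#define FLAG_MASK 0xF0u /* hypothetical bits to clear */

static void clear_flags_u64(uint64_t *buf, size_t n) {
  for (size_t i = 0; i < n;) {
    size_t vl = __riscv_vsetvl_e64m2(n - i);            // elements this pass
    vuint64m2_t v = __riscv_vle64_v_u64m2(buf + i, vl); // unit-stride load
    // Active elements: those with the low bit set.
    vbool32_t m =
        __riscv_vmsne_vx_u64m2_b32(__riscv_vand_vx_u64m2(v, 1, vl), 0, vl);
    // vandn computes vs2 & ~rs1, so this clears FLAG_MASK in active elements.
    vuint64m2_t r = __riscv_vandn_vx_u64m2_m(m, v, FLAG_MASK, vl);
    // A masked store writes back only the active elements; inactive elements
    // of r are agnostic in the plain _m variant, so they must not be stored.
    __riscv_vse64_v_u64m2_m(m, buf + i, r, vl);
    i += vl;
  }
}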
+ +vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_m(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_m(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_m(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_m(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_m(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vbrev.c b/auto-generated/vector-crypto/api-testing/vbrev.c new file mode 100644 index 000000000..97d4855ac --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vbrev.c @@ -0,0 +1,182 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8(vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4(vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2(vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1(vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2(vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4(vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8(vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4(vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2(vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1(vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2(vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4(vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8(vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2(vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1(vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2(vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4(vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8(vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1(vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2(vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4(vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8(vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask,
vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_m(mask, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_m(mask, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_m(mask, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_m(mask, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_m(mask, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_m(mask, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_m(mask, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_m(mask, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_m(mask, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_m(mask, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_m(mask, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_m(mask, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_m(mask, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_m(mask, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_m(mask, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_m(mask, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_m(mask, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_m(mask, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_m(mask, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_m(mask, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_m(mask, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8_m(mask, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vbrev8.c b/auto-generated/vector-crypto/api-testing/vbrev8.c new file mode 100644 index 000000000..323154304 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vbrev8.c @@ -0,0 +1,182 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8(vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4(vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return 
__riscv_vbrev8_v_u8mf2(vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1(vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2(vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4(vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8(vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4(vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2(vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1(vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2(vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4(vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8(vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2(vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1(vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2(vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4(vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8(vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1(vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2(vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4(vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8(vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_m(mask, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_m(mask, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_m(mask, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_m(mask, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_m(mask, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_m(mask, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_m(mask, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_m(mask, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_m(mask, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_m(mask, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + 
return __riscv_vbrev8_v_u16m2_m(mask, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_m(mask, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_m(mask, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_m(mask, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_m(mask, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_m(mask, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_m(mask, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_m(mask, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_m(mask, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_m(mask, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_m(mask, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_m(mask, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vclz.c b/auto-generated/vector-crypto/api-testing/vclz.c new file mode 100644 index 000000000..655af1c63 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vclz.c @@ -0,0 +1,182 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf8(vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4(vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2(vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1(vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2(vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4(vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8(vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4(vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2(vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1(vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2(vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4(vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8(vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2(vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1(vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2(vuint32m2_t vs2, size_t vl) { + return 
__riscv_vclz_v_u32m2(vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4(vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8(vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1(vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2(vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4(vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8(vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf8_m(mask, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4_m(mask, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2_m(mask, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1_m(mask, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2_m(mask, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4_m(mask, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8_m(mask, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4_m(mask, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2_m(mask, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1_m(mask, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2_m(mask, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4_m(mask, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8_m(mask, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2_m(mask, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1_m(mask, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_v_u32m2_m(mask, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4_m(mask, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8_m(mask, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1_m(mask, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2_m(mask, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4_m(mask, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8_m(mask, vs2, 
vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vctz.c b/auto-generated/vector-crypto/api-testing/vctz.c new file mode 100644 index 000000000..262e6be9b --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vctz.c @@ -0,0 +1,182 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8(vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4(vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2(vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1(vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2(vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4(vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8(vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4(vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2(vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1(vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2(vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4(vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8(vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2(vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1(vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2(vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4(vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8(vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1(vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2(vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4(vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8(vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_m(mask, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_m(mask, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_m(mask, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_m(mask, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_m(mask, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4_m(mask, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return 
__riscv_vctz_v_u8m8_m(mask, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_m(mask, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_m(mask, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_m(mask, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_m(mask, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_m(mask, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_m(mask, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_m(mask, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1_m(mask, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_m(mask, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4_m(mask, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_m(mask, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_m(mask, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2_m(mask, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_m(mask, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8_m(mask, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vrev8.c b/auto-generated/vector-crypto/api-testing/vrev8.c new file mode 100644 index 000000000..9d2ea220c --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vrev8.c @@ -0,0 +1,182 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8(vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4(vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2(vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1(vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2(vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4(vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8(vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4(vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2(vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1(vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2(vs2, 
vl); +} + +vuint16m4_t test_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4(vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8(vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2(vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1(vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2(vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4(vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8(vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1(vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2(vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4(vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8(vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_m(mask, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_m(mask, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_m(mask, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_m(mask, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_m(mask, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_m(mask, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_m(mask, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_m(mask, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_m(mask, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_m(mask, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_m(mask, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_m(mask, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_m(mask, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_m(mask, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_m(mask, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_m(mask, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_m(mask, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + 
return __riscv_vrev8_v_u32m8_m(mask, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_m(mask, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_m(mask, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_m(mask, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_m(mask, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vrol.c b/auto-generated/vector-crypto/api-testing/vrol.c new file mode 100644 index 000000000..41fdc7637 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vrol.c @@ -0,0 +1,358 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8(vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8(vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4(vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4(vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2(vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2(vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1(vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1(vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2(vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2(vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4(vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4(vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8(vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8(vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4(vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4(vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2(vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2(vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1(vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1(vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + 
return __riscv_vrol_vv_u16m2(vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2(vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4(vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4(vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8(vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8(vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2(vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2(vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1(vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1(vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2(vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2(vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4(vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4(vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8(vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8(vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1(vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1(vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2(vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2(vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8(vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_m(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_m(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_m(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vrol_vx_u8mf4_m(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_m(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_m(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_m(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_m(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_m(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_m(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_m(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_m(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_m(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_m(mask, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_m(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_m(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_m(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_m(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_m(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_m(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_m(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_m(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_m(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_m(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_m(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_m(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_m(mask, vs2, vs1, vl); +} 
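+
+// Semantics sketch (informal, assuming the Zvbb definition of vrol): each
+// element of vs2 is rotated left by the matching vs1 element (vv form) or by
+// the scalar rs1 (vx form), with the rotate amount reduced modulo SEW. For
+// SEW=8 and a rotate amount of 3, a byte b becomes (uint8_t)((b << 3) | (b >> 5)).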
+ +vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_m(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_m(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_m(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_m(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_m(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_m(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_m(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_m(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_m(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_m(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_m(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_m(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_m(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_m(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_m(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_m(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_m(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vror.c b/auto-generated/vector-crypto/api-testing/vror.c new file mode 100644 index 000000000..c00b0b98e --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vror.c @@ -0,0 +1,358 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8(vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8(vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4(vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4(vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return 
__riscv_vror_vv_u8mf2(vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2(vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1(vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1(vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2(vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2(vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4(vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4(vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8(vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8(vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4(vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4(vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2(vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2(vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1(vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1(vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2(vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2(vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4(vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4(vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8(vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8(vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2(vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2(vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1(vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1(vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2(vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2(vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + 
return __riscv_vror_vv_u32m4(vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4(vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8(vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8(vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1(vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1(vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2(vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2(vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8(vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_m(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_m(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_m(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_m(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_m(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_m(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_m(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_m(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_m(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_m(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_m(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_m(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_m(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_m(mask, vs2, rs1, vl); +} + 
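+// Semantics sketch (informal): vror mirrors vrol but rotates right, so for
+// SEW=64 and a rotate amount of 8 an element x becomes (x >> 8) | (x << 56);
+// the rotate amount is again taken modulo SEW.
+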
+vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_m(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_m(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_m(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_m(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_m(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_m(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_m(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_m(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_m(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_m(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_m(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_m(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_m(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_m(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_m(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_m(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_m(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_m(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_m(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_m(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_m(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_m(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_m(mask, vs2, vs1, vl); +} + 
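+// Informal observation about these test signatures: the scalar rotate amount
+// in the vx forms is passed as size_t rather than as the element type, since
+// only the low log2(SEW) bits of it are used.
+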
+vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_m(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_m(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_m(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_m(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_m(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_m(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_m(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vwsll.c b/auto-generated/vector-crypto/api-testing/vwsll.c new file mode 100644 index 000000000..a36e5a3c6 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vwsll.c @@ -0,0 +1,246 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4(vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4(vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2(vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2(vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1(vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1(vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2(vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2(vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4(vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4(vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8(vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8(vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2(vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2(vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1(vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1(vs2, rs1, vl); +} + +vuint32m2_t 
test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2(vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2(vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4(vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4(vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8(vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8(vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1(vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1(vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2(vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2(vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8(vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_m(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_m(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_m(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_m(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_m(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_m(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_m(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_m(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_m(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_m(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_m(mask, vs2, vs1, vl); +} + +vuint16m8_t 
test_vwsll_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_m(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_m(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_m(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_m(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_m(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_m(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_m(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_m(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_m(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_m(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_m(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_m(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_m(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_m(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_m(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_m(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_m(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_m(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_m(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vandn.c b/auto-generated/vector-crypto/llvm-api-tests/vandn.c new file mode 100644 index 000000000..ac15e471b --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vandn.c @@ -0,0 +1,359 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8(vs2, 
vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8(vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4(vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4(vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2(vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2(vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1(vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1(vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2(vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2(vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4(vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4(vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8(vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8(vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4(vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4(vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2(vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2(vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1(vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1(vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2(vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2(vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4(vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4(vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8(vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8(vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2(vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2(vs2, rs1, vl); +} + +vuint32m1_t 
test_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1(vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1(vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2(vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2(vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4(vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4(vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8(vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8(vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1(vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1(vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2(vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2(vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8(vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_m(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_m(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_m(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_m(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_m(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_m(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_m(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_m(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_m(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) { 
+ return __riscv_vandn_vx_u8m2_m(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_m(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_m(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_m(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_m(mask, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_m(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_m(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_m(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_m(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_m(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_m(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_m(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_m(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_m(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_m(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_m(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_m(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_m(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_m(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_m(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_m(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_m(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_m(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, 
vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_m(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_m(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vbrev.c b/auto-generated/vector-crypto/llvm-api-tests/vbrev.c
new file mode 100644
index 000000000..26c4de404
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-api-tests/vbrev.c
@@ -0,0 +1,183 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m4(vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8(vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2(vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1(vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2(vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4(vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8(vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1(vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2(vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4(vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8(vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_m(mask, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_m(mask, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_m(mask, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_m(mask, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_m(mask, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_m(mask, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_m(mask, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_m(mask, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_m(mask, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_m(mask, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_m(mask, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_m(mask, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_m(mask, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_m(mask, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_m(mask, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_m(mask, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_m(mask, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_m(mask, vs2, vl); +} + +vuint64m1_t 
test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_m(mask, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c b/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c
new file mode 100644
index 000000000..d22110c4f
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c
@@ -0,0 +1,183 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_m(mask, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclz.c b/auto-generated/vector-crypto/llvm-api-tests/vclz.c
new file mode 100644
index 000000000..9ce26f56f
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-api-tests/vclz.c
@@ -0,0 +1,183 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf4(vs2, vl);
+}
+ +vuint8mf2_t test_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2(vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1(vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2(vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4(vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8(vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4(vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2(vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1(vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2(vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4(vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8(vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2(vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1(vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_v_u32m2(vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4(vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8(vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1(vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2(vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4(vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8(vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf8_m(mask, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4_m(mask, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2_m(mask, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1_m(mask, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2_m(mask, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4_m(mask, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8_m(mask, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4_m(mask, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2_m(mask, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1_m(mask, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2_m(mask, vs2, vl); +} 
+
+vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u64m8_m(mask, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vctz.c b/auto-generated/vector-crypto/llvm-api-tests/vctz.c
new file mode 100644
index 000000000..504efd27a
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-api-tests/vctz.c
@@ -0,0 +1,183 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vctz_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vctz_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t
test_vctz_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2(vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4(vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8(vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1(vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2(vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4(vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8(vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_m(mask, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_m(mask, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_m(mask, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_m(mask, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_m(mask, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4_m(mask, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8_m(mask, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_m(mask, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_m(mask, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_m(mask, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_m(mask, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_m(mask, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_m(mask, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_m(mask, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1_m(mask, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_m(mask, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4_m(mask, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_m(mask, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_m(mask, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2_m(mask, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_m(mask, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, 
size_t vl) {
+  return __riscv_vctz_v_u64m8_m(mask, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vrev8.c b/auto-generated/vector-crypto/llvm-api-tests/vrev8.c
new file mode 100644
index 000000000..f5d49ee05
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-api-tests/vrev8.c
@@ -0,0 +1,183 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4(vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2(vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1(vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2(vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4(vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8(vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4(vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2(vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1(vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2(vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4(vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8(vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2(vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1(vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2(vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4(vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8(vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1(vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2(vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4(vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_m(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_m(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_m(mask, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_m(mask, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_m(mask, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_m(mask, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_m(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_m(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_m(mask, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_m(mask, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m2_m(mask, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m4_m(mask, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m8_m(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32mf2_m(mask, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m1_m(mask, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m2_m(mask, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m4_m(mask, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u32m8_m(mask, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m1_m(mask, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m2_m(mask, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m4_m(mask, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u64m8_m(mask, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vrol.c b/auto-generated/vector-crypto/llvm-api-tests/vrol.c
new file mode 100644
index 000000000..1154de852
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-api-tests/vrol.c
@@ -0,0 +1,359 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return
__riscv_vrol_vx_u8mf2(vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1(vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1(vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2(vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2(vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4(vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4(vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8(vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8(vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4(vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4(vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2(vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2(vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1(vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1(vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2(vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2(vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4(vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4(vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8(vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8(vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2(vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2(vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1(vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1(vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2(vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2(vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4(vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { + 
return __riscv_vrol_vx_u32m4(vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8(vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8(vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1(vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1(vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2(vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2(vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8(vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_m(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_m(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_m(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_m(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_m(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_m(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_m(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_m(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_m(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_m(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_m(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_m(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_m(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_m(mask, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return 
__riscv_vrol_vv_u16mf4_m(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_m(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_m(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_m(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_m(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_m(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_m(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_m(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_m(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_m(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_m(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_m(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_m(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_m(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_m(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_m(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_m(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_m(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_m(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_m(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_m(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_m(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_m(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vrol_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vror.c b/auto-generated/vector-crypto/llvm-api-tests/vror.c
new file mode 100644
index 000000000..694b6e0e0
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-api-tests/vror.c
@@ -0,0 +1,359 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8(vs2, vs1, vl);
+}
+
+vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  
return __riscv_vror_vv_u16mf2(vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2(vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1(vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1(vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2(vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2(vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4(vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4(vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8(vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8(vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2(vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2(vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1(vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1(vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2(vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2(vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4(vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4(vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8(vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8(vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1(vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1(vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2(vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2(vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8(vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, 
vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_m(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_m(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_m(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_m(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_m(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_m(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_m(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_m(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_m(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_m(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_m(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_m(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_m(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_m(mask, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_m(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_m(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_m(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_m(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_m(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_m(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_m(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_m(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_m(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vror_vx_u16m4_m(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_m(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_m(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_m(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_m(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_m(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_m(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_m(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_m(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_m(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_m(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_m(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_m(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_m(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_m(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_m(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_m(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_m(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_m(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_m(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_m(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vwsll.c b/auto-generated/vector-crypto/llvm-api-tests/vwsll.c new file mode 100644 index 000000000..ca3fdaa23 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vwsll.c @@ -0,0 +1,247 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// 
RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return 
__riscv_vwsll_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8(vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_m(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_m(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_m(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_m(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_m(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_m(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_m(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_m(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_m(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_m(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_m(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_m(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_m(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_m(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_m(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_m(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_m(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_m(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_m(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_m(mask, vs2, rs1, vl); +} + 
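+// Editorial note (an illustration added for this review, not generator
+// output): vwsll is a widening shift-left, so each vv form in this file takes
+// two narrower sources and produces elements of twice the source EEW. A
+// minimal usage sketch under that assumption, with hypothetical
+// caller-supplied buffers in_a, in_b, out and element count n:
+//   size_t vl = __riscv_vsetvl_e8m1(n);
+//   vuint8m1_t a = __riscv_vle8_v_u8m1(in_a, vl);
+//   vuint8m1_t b = __riscv_vle8_v_u8m1(in_b, vl);
+//   vuint16m2_t w = __riscv_vwsll_vv_u16m2(a, b, vl); // widen a to u16, shift left by b
+//   __riscv_vse16_v_u16m2(out, w, vl);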
+vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_m(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_m(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_m(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_m(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_m(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_m(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_m(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_m(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_m(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_m(mask, vs2, rs1, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c
new file mode 100644
index 000000000..e2894d7e4
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c
@@ -0,0 +1,359 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t 
vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) 
{ + return __riscv_vandn(vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { 
+ return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c new file mode 100644 index 000000000..0c95750c7 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c @@ -0,0 +1,183 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// 
RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev(mask, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c
new file mode 100644
index 000000000..d94465fe5
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c
@@ -0,0 +1,183 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8(vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, 
vl); +} + +vuint32m4_t test_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c new file mode 100644 index 
000000000..f1da0ff12
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c
@@ -0,0 +1,183 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vclz(vs2, vl);
+}
+
+vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vclz(mask, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c
new file mode 100644
index 000000000..2dc00bb3f
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c
@@ -0,0 +1,183 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8mf4_t test_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8mf2_t test_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m1_t test_vctz_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m2_t test_vctz_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz(vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1(vuint32m1_t vs2, 
size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c new file mode 
100644
index 000000000..72738a4c7
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c
@@ -0,0 +1,183 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c
new file mode 100644
index 000000000..51fab3b0c
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c
@@ -0,0 +1,359 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN: FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m4_t 
test_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + 
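+// Editorial note (an illustration added for this review, not generator
+// output): in these overloaded forms the type suffix is dropped, so the call
+// resolves on its operand types, and passing a mask as the first argument
+// selects the masked behavior. A minimal sketch under that assumption, with
+// hypothetical values v (vuint32m1_t), m (vbool32_t) and an active length vl:
+//   vuint32m1_t r  = __riscv_vrol(v, (size_t)8, vl);    // rotate each element left by 8
+//   vuint32m1_t rm = __riscv_vrol(m, v, (size_t)8, vl); // masked variant of the same call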
+vuint64m4_t test_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t 
vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c new file mode 100644 index 000000000..f5439c7ab --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c @@ -0,0 +1,359 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + 
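+// vror: bitwise rotate right. The vv variants take per-element rotate
+// amounts from vs1; the vx variants broadcast the scalar rs1 to every
+// element. As with vrol, only the low log2(SEW) bits of the rotate amount
+// are significant, so rotating by SEW is a no-op.
+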
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + 
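+// Note: a right rotate is equivalent to a left rotate by the complement,
+// i.e. vror(x, r) yields the same result as vrol(x, (SEW - r) % SEW), which
+// is one way to cross-check these tests against the vrol tests above.
+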
+vuint32m2_t test_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t 
rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, 
vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c new file mode 100644 index 000000000..f739b1cd3 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c @@ -0,0 +1,247 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, 
vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll(vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll(mask, 
vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vandn.c b/auto-generated/vector-crypto/overloaded-api-testing/vandn.c new file mode 100644 index 000000000..e744cd9fe --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vandn.c @@ -0,0 +1,358 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint8m2_t 
test_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1(vuint64m1_t 
vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn(vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16m1_t 
test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t 
rs1, size_t vl) { + return __riscv_vandn(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vbrev.c b/auto-generated/vector-crypto/overloaded-api-testing/vbrev.c new file mode 100644 index 000000000..8c82c5496 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vbrev.c @@ -0,0 +1,182 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev(vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, 
vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev(mask, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c b/auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c new file mode 100644 index 000000000..5785a810f --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c @@ -0,0 +1,182 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint32mf2_t 
test_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8(vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + +vuint64m8_t 
test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8(mask, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vclz.c b/auto-generated/vector-crypto/overloaded-api-testing/vclz.c new file mode 100644 index 000000000..8bea51126 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vclz.c @@ -0,0 +1,182 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vclz(vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, 
size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz(mask, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vctz.c b/auto-generated/vector-crypto/overloaded-api-testing/vctz.c new file mode 100644 index 000000000..86090d8aa --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vctz.c @@ -0,0 +1,182 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint32m1_t 
test_vctz_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vctz(vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz(mask, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vrev8.c 
b/auto-generated/vector-crypto/overloaded-api-testing/vrev8.c
new file mode 100644
index 000000000..d013b9218
--- /dev/null
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vrev8.c
@@ -0,0 +1,182 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vs2, vl);
+}
+
+vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8(mask, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vrol.c b/auto-generated/vector-crypto/overloaded-api-testing/vrol.c
new file mode 100644
index 000000000..dda6195ca
--- /dev/null
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vrol.c
@@ -0,0 +1,358 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m1_t test_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m1_t test_vrol_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m2_t test_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m2_t test_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m4_t test_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol(vs2, vs1, vl);
+}
+
+vuint8m4_t test_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vs2, rs1, vl);
+}
+
+vuint8m8_t test_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return
__riscv_vrol(vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, 
size_t vl) { + return __riscv_vrol(vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol(mask, vs2, rs1, 
vl);
+}
+
+vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(mask, vs2, rs1, vl);
+}
+
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vror.c b/auto-generated/vector-crypto/overloaded-api-testing/vror.c
new file mode 100644
index 000000000..600fc1d66
--- /dev/null
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vror.c
@@ -0,0 +1,358 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror(vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl)
{ + return __riscv_vror(vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + 
return __riscv_vror(vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror(vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vror(mask, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t 
vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror(mask, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(mask, vs2, rs1, vl);
+}
+
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c b/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c
new file mode 100644
index 000000000..c0e0521ff
--- /dev/null
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c
@@ -0,0 +1,246 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m4_t test_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint32m8_t test_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m1_t test_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vs2, rs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll(mask, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c new file mode 100644 index 000000000..6cdb97418 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c @@ -0,0 +1,710 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, 
vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t 
maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t 
maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return 
__riscv_vandn_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_tum(mask, maskedoff, vs2, 
rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, 
vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, 
vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + 
return __riscv_vandn_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t 
test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c new file mode 100644 index 000000000..f4a0371a9 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c @@ -0,0 +1,358 @@ +#include 
<riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m4_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u16m8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m1_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m4_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u32m8_tu(maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m1_tu(maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m2_tu(maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m4_tu(maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+}
+
+vuint8mf4_t
test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_tum(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_tum(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_tum(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_tum(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return 
__riscv_vbrev_v_u64m8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t 
test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_mu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_mu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_mu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_mu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_mu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_mu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_mu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_mu(mask, maskedoff, vs2, vl); +} + 
+vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c
new file mode 100644
index 000000000..d9a0c3cc2
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c
@@ -0,0 +1,358 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m4_tu(maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u16m8_tu(maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m1_tu(maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m2_tu(maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m4_tu(maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return
__riscv_vbrev8_v_u32m8_tu(maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_tu(maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_tu(maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_tu(maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_tu(maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_tum(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_tum(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_tum(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return 
__riscv_vbrev8_v_u32m4_tum(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl); +} + 
+vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_mu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_mu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_mu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_mu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) 
{
+ return __riscv_vbrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u32m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vbrev8_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c
new file mode 100644
index 000000000..e5a425b5f
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c
@@ -0,0 +1,358 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf8_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m1_tu(maskedoff, vs2, vl);
+}
+
+vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m2_tu(maskedoff, vs2, vl);
+}
+
+vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m4_tu(maskedoff, vs2, vl);
+}
+
+vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u8m8_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf4_tu(maskedoff, vs2, vl);
+}
+
+vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16mf2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m1_tu(maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m2_tu(maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return
__riscv_vrev8_v_u16m4_tu(maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tu(maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tu(maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tu(maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tu(maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tu(maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tu(maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tu(maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tu(maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tu(maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tu(maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_tum(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return 
__riscv_vrev8_v_u16m4_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tum(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tum(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tum(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t 
mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl); +} 
+
+vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u16m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u32m8_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m1_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m2_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m4_mu(mask, maskedoff, vs2, vl);
+}
+
+vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
+ return __riscv_vrev8_v_u64m8_mu(mask, maskedoff, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c
new file mode 100644
index 000000000..a023644e3
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c
@@ -0,0 +1,710 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u8mf4_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u8mf2_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+ return
__riscv_vrol_vx_u8mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); 
+} + +vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + 
return __riscv_vrol_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t 
maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_tum(mask, 
maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t 
vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vrol_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, 
vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t 
test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vror.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vror.c
new file mode 100644
index 000000000..c94ef3774
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vror.c
@@ -0,0 +1,710 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float
float32_t; +typedef double float64_t; +vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t 
test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t 
test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + 
+vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t 
rs1, size_t vl) { + return __riscv_vror_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t 
test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, 
vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_mu(mask, 
maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vror_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, 
vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c new file mode 100644 index 000000000..a99acee03 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c @@ -0,0 +1,486 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2,
vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + 
return __riscv_vwsll_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + 
+vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t 
mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t 
maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vwsll_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c new file mode 100644 index 000000000..cdd2befb5 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c @@ -0,0 +1,711 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2,
rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_tu(maskedoff, vs2, 
vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, 
uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tum(mask, maskedoff, vs2, rs1, 
vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t 
test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, 
vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, 
vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return 
__riscv_vandn_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t 
maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return 
__riscv_vandn_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c new file mode 100644 index 000000000..fd694cc5a --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c @@ -0,0 +1,359 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_tu(maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_tu(maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_tu(maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_tu(maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_tu(maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_tu(maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_tu(maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_tu(maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_tu(maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_tu(maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_tu(maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_tu(maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_tu(maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_tu(maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_tu(maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_tu(maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_tu(maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff,
vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_tu(maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_tu(maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_tu(maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_tu(maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8_tu(maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_tum(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_tum(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_tum(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return 
__riscv_vbrev_v_u32m4_tum(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t 
test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_mu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_mu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_mu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_mu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_mu(mask, maskedoff, 
vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_mu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_mu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_mu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_mu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_mu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_mu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8_mu(mask, maskedoff, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c new file mode 100644 index 000000000..1f0433554 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c @@ -0,0 +1,359 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_tu(maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_tu(maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_tu(maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_tu(maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_tu(maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_tu(maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_tu(maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_tu(maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_tu(maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_tu(maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_tu(maskedoff, vs2, vl); +} + 
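+// Illustrative sketch (not part of the generated test set): vbrev8
+// reverses the bit order inside each byte of every element, and the _tu
+// (tail-undisturbed) variants keep elements at indices >= vl from the
+// maskedoff operand. The input values below are assumptions chosen for
+// illustration only:
+//
+//   uint8_t in[2] = {0x01, 0x80};               // 0b00000001, 0b10000000
+//   vuint8m1_t v  = __riscv_vle8_v_u8m1(in, 2);
+//   vuint8m1_t r  = __riscv_vbrev8_v_u8m1_tu(v, v, 2);
+//   // r holds {0x80, 0x01}: the bits of each byte are mirrored.
+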
+vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_tu(maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_tu(maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_tu(maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_tu(maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_tu(maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_tu(maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_tu(maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_tu(maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_tu(maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_tu(maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_tu(maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_tum(mask, maskedoff, vs2, vl); +} + 
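+// Note on the policy suffixes exercised throughout these tests, roughly as
+// defined in the main intrinsics specification: _tu keeps tail elements
+// (indices >= vl) from maskedoff and takes no mask; _tum is masked with an
+// undisturbed tail and agnostic inactive elements; _tumu keeps both tail
+// and inactive elements from maskedoff; _mu keeps only inactive elements.
+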
+vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_tum(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_tum(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_tum(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t 
maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_mu(mask, 
maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_mu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_mu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_mu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_mu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_mu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_mu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_mu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_mu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_mu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_mu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_mu(mask, maskedoff, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c new file mode 100644 index 000000000..69737f009 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c @@ -0,0 +1,359 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_tu(maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_tu(maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_tu(maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_tu(maskedoff, vs2, vl); +} + 
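+// Illustrative sketch (not part of the generated test set): vrev8 swaps
+// the byte order within each element, i.e. a per-element byte swap, so it
+// is a no-op for 8-bit elements. The input value below is an assumption
+// chosen for illustration only:
+//
+//   uint32_t in[1] = {0x11223344};
+//   vuint32m1_t v  = __riscv_vle32_v_u32m1(in, 1);
+//   vuint32m1_t r  = __riscv_vrev8_v_u32m1_tu(v, v, 1);
+//   // r holds 0x44332211.
+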
+vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_tu(maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_tu(maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_tu(maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_tu(maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_tu(maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_tu(maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_tu(maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_tu(maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tu(maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tu(maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tu(maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tu(maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tu(maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tu(maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tu(maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tu(maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tu(maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tu(maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t 
mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_tum(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tum(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tum(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tum(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_tumu(mask, maskedoff, 
vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, 
vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_mu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_mu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_mu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_mu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_mu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_mu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_mu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_mu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_mu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_mu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_mu(mask, maskedoff, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c new file mode 100644 index 000000000..088f1363a --- /dev/null +++ 
b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c @@ -0,0 +1,711 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t 
vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t 
vl) { + return __riscv_vrol_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vrol_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t 
test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + 
return __riscv_vrol_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + 
+vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, 
vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, 
vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +} + 
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c new file mode 100644 index 000000000..b7ea078c6 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c @@ -0,0 +1,711 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +
return __riscv_vror_vv_u8m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vror_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + 
return __riscv_vror_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t 
test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { 
+ return __riscv_vror_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t 
test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, 
vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t 
maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_mu(mask, maskedoff, vs2, vs1, 
vl); +} + +vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c new file mode 100644 index 000000000..21b1bc7e8 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c @@ -0,0 +1,487 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, 
vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t 
maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +} + 
+vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, 
vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t 
vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vwsll_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t 
test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c new file mode 100644 index 000000000..298fa71e9 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c @@ -0,0 +1,711 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2,
vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, 
vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, 
vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, 
size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return 
__riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return 
__riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return 
__riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t 
mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, 
size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c new file mode 100644 index 000000000..c0a7edfac --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c @@ -0,0 +1,359 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t
maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) 
{ + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t 
test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + 
+vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c new file mode 100644 index 000000000..fda1416d8 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c @@ -0,0 +1,359 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t
vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, 
maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t 
test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return 
__riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c new file mode 100644 index 000000000..264f15a6b --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c @@ -0,0 +1,359 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint32m1_t
test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} 
+ +vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, 
size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t 
maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c new file mode 100644 index 000000000..a34a5be23 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c @@ -0,0 +1,711 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t
maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) 
{ + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, 
size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + 
+vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, 
vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, 
maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t 
test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, 
vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, 
size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c new file mode 100644 index 000000000..5a7dad772 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c @@ -0,0 +1,711 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t
test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t 
maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} 
+ +vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t 
maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, 
maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t 
test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, 
vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t 
mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, 
rs1, vl);
+}
+
+vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c
new file mode 100644
index 000000000..6f9409182
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c
@@ -0,0 +1,487 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint32m2_t
test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t 
test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint32m2_t 
maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t 
maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t 
vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); 
+}
+
+vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
+vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
+}
+
+vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vandn.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vandn.c
new file mode 100644
index 000000000..1ecbcdfa9
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vandn.c
@@ -0,0 +1,710 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, vs1, vl);
+}
+
+vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_tu(maskedoff, vs2, rs1, vl);
+}
+
+vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return
__riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, 
size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t 
test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t 
test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t 
test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t 
test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +} + 
+vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, 
vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, 
maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev.c new file mode 100644 index 000000000..c0c9eb726 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev.c @@ -0,0 +1,358 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint32m1_t
test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} 
+ +vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, 
size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t 
maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev8.c new file mode 100644 index 000000000..c375826e5 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev8.c @@ -0,0 +1,358 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t
maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + 
+vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, 
vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, 
maskedoff, vs2, vl); +} + +vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrev8.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrev8.c new file mode 100644 index 000000000..55f6bf42e --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrev8.c @@ -0,0 +1,358 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { +
return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, 
maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t 
maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +} + +vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint16m8_t 
test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + +vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrol.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrol.c new file mode 100644 index 000000000..8b7154ede --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrol.c @@ -0,0 +1,710 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return
__riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + 
+vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t 
test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t 
vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + 
+vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, 
vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + 
return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t 
vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t 
test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vror.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vror.c new file mode 100644 index 000000000..b2856896f --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vror.c @@ -0,0 +1,710 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint8m8_t
test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t 
maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tu(maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t 
test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t 
vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, 
vl); +} + +vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t 
maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { 
+ return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t 
vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t 
test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c new file mode 100644 index 000000000..76d9f8828 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c @@ -0,0 +1,486 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m2_t 
test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t 
test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint32m2_t 
maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t 
maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t 
vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); 
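+  // vwsll (from Zvbb) is a widening shift-left-logical: each result element
+  // is twice the source EEW, which is why the vv tests in this file pair
+  // e.g. vuint32m4_t sources with a vuint64m8_t result.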
+} + +vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +} + From 3ae0730ceed8a76d1f3972bea0b0a2798352eb08 Mon Sep 17 00:00:00 2001 From: eopXD Date: Thu, 1 Jun 2023 03:58:50 -0700 Subject: [PATCH 044/151] [vector-crypto] Define intrinsics for the Zvbc extension Signed-off-by: eop Chen --- .../templates/vector_crypto_template.py | 11 +++++++++-- .../rvv_intrinsic_gen/vector_crypto_inst.py | 14 ++++++++++++++ 2 files changed, 23 insertions(+), 2 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py index 9574bbbb8..ffcc66908 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py @@ -22,16 +22,23 @@ operand_mnemonic_dict["vrol"] = ["vv", "vx"] operand_mnemonic_dict["vror"] = ["vv", "vx"] # saving the `vi` variant operand_mnemonic_dict["vwsll"] = ["vv", "vx"] # saving the `vi` variant +# Zvbc: Vector Carryless Multiplication +operand_mnemonic_dict["vclmul"] = ["vv", "vx"] +operand_mnemonic_dict["vclmulh"] = ["vv", "vx"] def has_vs1_input(name): - has_vs1_input_inst_set = {"vandn", "vrol", "vror", "vwsll"} + has_vs1_input_inst_set = { + "vandn", "vrol", "vror", "vwsll", "vclmul", "vclmulh" + } return name in has_vs1_input_inst_set def has_rs1_input(name): - has_rs1_input_inst_set = {"vandn", "vrol", "vror", "vwsll"} + has_rs1_input_inst_set = { + "vandn", "vrol", "vror", "vwsll", "vclmul", "vclmulh" + } return name in has_rs1_input_inst_set diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py index 5cfe60263..e5b2625b9 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py @@ -63,3 +63,17 @@ def gen(g): decorators.has_masking_maskedoff_policy) #################################################################### + + g.start_group("Zvbc - Vector Carryless Multiplication") + + g.function_group( + vector_crypto_template, + "Vector Carryless Multiplication", + "", # FIXME: We probably have a separate document for vector-crypto + ["vclmul", "vclmulh"], + UITYPE, + [64], + LMULS, + decorators.has_masking_maskedoff_policy) + + #################################################################### From a9da2dbd93e59c6b8bbbdbd14f9f6a8ef1762089 Mon Sep 17 00:00:00 
2001 From: eopXD Date: Mon, 17 Jul 2023 10:59:54 -0700 Subject: [PATCH 045/151] [Auto-gen] Update documents under ../auto-generated/vector-crypto. (make git-commit-autogen-doc) --- .../vector-crypto/intrinsic_funcs.md | 41 ++++++++++ ..._zvbc_-_vector_carryless_multiplication.md | 41 ++++++++++ .../overloaded_intrinsic_funcs.md | 41 ++++++++++ ..._zvbc_-_vector_carryless_multiplication.md | 41 ++++++++++ .../policy_funcs/intrinsic_funcs.md | 75 +++++++++++++++++++ ..._zvbc_-_vector_carryless_multiplication.md | 75 +++++++++++++++++++ .../overloaded_intrinsic_funcs.md | 75 +++++++++++++++++++ ..._zvbc_-_vector_carryless_multiplication.md | 75 +++++++++++++++++++ 8 files changed, 464 insertions(+) create mode 100644 auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md create mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md index 80f99bfc5..72aff8e5e 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/intrinsic_funcs.md @@ -579,3 +579,44 @@ vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs vuint64m8_t __riscv_vwsll_vv_u64m8_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); ``` + +## Zvbc - Vector Carryless Multiplication: + +### [Vector Carryless Multiplication](): + +**Prototypes:** +``` C +vuint64m1_t __riscv_vclmul_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_m 
(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md b/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md new file mode 100644 index 000000000..4d41e53cc --- /dev/null +++ b/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md @@ -0,0 +1,41 @@ + +## Zvbc - Vector Carryless Multiplication: + +### [Vector Carryless Multiplication](): + +**Prototypes:** +``` C +vuint64m1_t __riscv_vclmul_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_m 
(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md index d4d9ea35a..ecbd3ea5c 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md @@ -579,3 +579,44 @@ vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t v vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); ``` + +## Zvbc - Vector Carryless Multiplication: + +### [Vector Carryless Multiplication](): + +**Prototypes:** +``` C +vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul 
(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md new file mode 100644 index 000000000..df952e521 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md @@ -0,0 +1,41 @@ + +## Zvbc - Vector Carryless Multiplication: + +### [Vector Carryless Multiplication](): + +**Prototypes:** +``` C +vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vbool64_t mask, vuint64m1_t 
vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md index f5ef93699..a402d3329 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -951,3 +951,78 @@ vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vu vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); ``` + +## Zvbc - Vector Carryless Multiplication: + +### [Vector Carryless Multiplication](): + +**Prototypes:** +``` C +vuint64m1_t __riscv_vclmul_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tum (vbool32_t mask, 
vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tumu (vbool8_t 
mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md new file mode 100644 index 000000000..7e7effc48 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md @@ -0,0 +1,75 @@ + +## Zvbc - Vector Carryless Multiplication: + +### [Vector Carryless Multiplication](): + +**Prototypes:** +``` C +vuint64m1_t __riscv_vclmul_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tu (vuint64m1_t maskedoff, 
vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tumu (vbool16_t 
mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md index c94663c42..e9d67b88e 
100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -951,3 +951,78 @@ vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); ``` + +## Zvbc - Vector Carryless Multiplication: + +### [Vector Carryless Multiplication](): + +**Prototypes:** +``` C +vuint64m1_t __riscv_vclmul_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tum 
(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_mu 
(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md new file mode 100644 index 000000000..6d12267b2 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md @@ -0,0 +1,75 @@ + +## Zvbc - Vector Carryless Multiplication: + +### [Vector Carryless Multiplication](): + +**Prototypes:** +``` C +vuint64m1_t __riscv_vclmul_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); 
+vuint64m4_t __riscv_vclmul_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t 
vl); +vuint64m2_t __riscv_vclmul_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +``` From d9beced79a999cdf3cdcb43f9b74f3e90434973f Mon Sep 17 00:00:00 2001 From: eopXD Date: Mon, 17 Jul 2023 10:59:55 -0700 Subject: [PATCH 046/151] [Auto-gen] Update tests under ../auto-generated/vector-crypto. (make git-commit-autogen-test) --- .../vector-crypto/api-testing/vclmul.c | 70 +++++++++ .../vector-crypto/api-testing/vclmulh.c | 70 +++++++++ .../vector-crypto/llvm-api-tests/vclmul.c | 71 +++++++++ .../vector-crypto/llvm-api-tests/vclmulh.c | 71 +++++++++ .../llvm-overloaded-tests/vclmul.c | 71 +++++++++ .../llvm-overloaded-tests/vclmulh.c | 71 +++++++++ .../overloaded-api-testing/vclmul.c | 70 +++++++++ .../overloaded-api-testing/vclmulh.c | 70 +++++++++ .../policy_funcs/api-testing/vclmul.c | 134 +++++++++++++++++ .../policy_funcs/api-testing/vclmulh.c | 134 +++++++++++++++++ .../policy_funcs/llvm-api-tests/vclmul.c | 135 ++++++++++++++++++ .../policy_funcs/llvm-api-tests/vclmulh.c | 135 ++++++++++++++++++ .../llvm-overloaded-tests/vclmul.c | 135 ++++++++++++++++++ .../llvm-overloaded-tests/vclmulh.c | 135 ++++++++++++++++++ .../overloaded-api-testing/vclmul.c | 134 +++++++++++++++++ .../overloaded-api-testing/vclmulh.c | 134 +++++++++++++++++ 16 files changed, 1640 insertions(+) create mode 100644 auto-generated/vector-crypto/api-testing/vclmul.c create mode 100644 auto-generated/vector-crypto/api-testing/vclmulh.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vclmul.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vclmulh.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vclmul.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c create mode 100644 
auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmul.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmulh.c diff --git a/auto-generated/vector-crypto/api-testing/vclmul.c b/auto-generated/vector-crypto/api-testing/vclmul.c new file mode 100644 index 000000000..615da37c2 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vclmul.c @@ -0,0 +1,70 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1(vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1(vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2(vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2(vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8(vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_m(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_m(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_m(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_m(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_m(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_m(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_m(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_m(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vclmulh.c b/auto-generated/vector-crypto/api-testing/vclmulh.c new file mode 100644 index 000000000..37795dc1a --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vclmulh.c @@ -0,0 +1,70 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double
float64_t; +vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1(vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1(vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2(vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2(vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8(vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_m(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_m(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_m(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_m(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_m(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_m(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_m(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_m(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclmul.c b/auto-generated/vector-crypto/llvm-api-tests/vclmul.c new file mode 100644 index 000000000..a56321bd7 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vclmul.c @@ -0,0 +1,71 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1(vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1(vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2(vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2(vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t
test_vclmul_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8(vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_m(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_m(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_m(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_m(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_m(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_m(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_m(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_m(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c b/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c new file mode 100644 index 000000000..0772acf6d --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c @@ -0,0 +1,71 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1(vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1(vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2(vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2(vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4(vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4(vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8(vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8(vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_m(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_m(mask, vs2, rs1, vl); +} + +vuint64m2_t
test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_m(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_m(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_m(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_m(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_m(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_m(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c new file mode 100644 index 000000000..36cdfb21e --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c @@ -0,0 +1,71 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul(vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul(vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul(vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul(vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2,
uint64_t rs1, size_t vl) { + return __riscv_vclmul(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c new file mode 100644 index 000000000..f5343fa97 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c @@ -0,0 +1,71 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh(vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh(vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh(vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh(vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vclmul.c b/auto-generated/vector-crypto/overloaded-api-testing/vclmul.c new file mode 100644 index 000000000..f751b2175 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vclmul.c @@ -0,0 +1,70 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul(vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul(vs2, vs1, vl); +} +
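// Editorial sketch, not part of the auto-generated tests: vclmul and vclmulh
// are typically used as a pair, since per element vclmul returns the low 64
// bits and vclmulh the high 64 bits of the 128-bit carryless product. The
// helper name below is illustrative only.
static inline void test_vclmul_full_product(vuint64m1_t a, vuint64m1_t b,
                                            size_t vl, vuint64m1_t *lo,
                                            vuint64m1_t *hi) {
  *lo = __riscv_vclmul(a, b, vl);  // low half of each 128-bit product
  *hi = __riscv_vclmulh(a, b, vl); // high half of each 128-bit product
}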
+vuint64m2_t test_vclmul_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul(vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul(vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c b/auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c new file mode 100644 index 000000000..c7a9d9d6d --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c @@ -0,0 +1,70 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh(vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh(vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh(vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh(vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t
vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(mask, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c new file mode 100644 index 000000000..bc3add0ee --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c @@ -0,0 +1,134 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return
__riscv_vclmul_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return 
__riscv_vclmul_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c new file mode 100644 index 000000000..7ca88e340 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c @@ -0,0 +1,134 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t
vl) { + return __riscv_vclmulh_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c new file mode 100644 index 000000000..be7944419 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c @@ -0,0 +1,135 @@ 
+// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_tumu(mask,
maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c new file mode 100644 index 000000000..053782475 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c @@ -0,0 +1,135 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, 
vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +} + 
+vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c new file mode 100644 index 000000000..15bc5f9df --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c @@ -0,0 +1,135 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return 
__riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t 
test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c new file mode 100644 index 000000000..cdcb58c88 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c @@ -0,0 +1,135 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t 
test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return 
__riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmul.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmul.c new file mode 100644 index 000000000..3fe950acd --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmul.c @@ -0,0 +1,134 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, 
vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t 
maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmulh.c new file mode 100644 index 000000000..cb04c9935 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmulh.c @@ -0,0 +1,134 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, 
maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +} + +vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +} + +vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +} + From 0ef6f67f41765084181da1b647b9ad4c12e09aab Mon Sep 17 00:00:00 2001 From: eopXD Date: Thu, 1 Jun 2023 04:15:19 -0700 Subject: [PATCH 047/151] [vector-crypto] Define intrinsics for the Zvkg extension Signed-off-by: eop Chen --- .../templates/vector_crypto_template.py | 22 +++++++++++++++---- .../rvv_intrinsic_gen/vector_crypto_inst.py | 14 ++++++++++++ 2 files changed, 32 insertions(+), 4 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py index ffcc66908..638964d3d 100644 --- 
a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
@@ -25,11 +25,20 @@
 # Zvbc: Vector Carryless Multiplication
 operand_mnemonic_dict["vclmul"] = ["vv", "vx"]
 operand_mnemonic_dict["vclmulh"] = ["vv", "vx"]
+# Zvkg: Vector GCM/GMAC
+operand_mnemonic_dict["vghsh"] = ["vv"]
+operand_mnemonic_dict["vgmul"] = ["vv"]
+
+
+def has_vd_input(name):
+  has_vd_input_inst_set = {"vghsh", "vgmul"}
+
+  return name in has_vd_input_inst_set
 
 
 def has_vs1_input(name):
   has_vs1_input_inst_set = {
-      "vandn", "vrol", "vror", "vwsll", "vclmul", "vclmulh"
+      "vandn", "vrol", "vror", "vwsll", "vclmul", "vclmulh", "vghsh"
   }
 
   return name in has_vs1_input_inst_set
@@ -81,10 +90,15 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list):
     else:
       kwargs["return_type"] = type_helper.v
     kwargs = {**kwargs, **decorator.mask_args(type_helper.m, type_helper.v)}
-    if op == "vwsll":
-      kwargs = {**kwargs, **decorator.tu_dest_args(type_helper.wv)}
+    # If vd is already an input parameter, we don't need to emit another
+    # destination parameter when the tail policy is TU.
+    if has_vd_input(op):
+      kwargs["vd"] = type_helper.v
     else:
-      kwargs = {**kwargs, **decorator.tu_dest_args(type_helper.v)}
+      if op == "vwsll":
+        kwargs = {**kwargs, **decorator.tu_dest_args(type_helper.wv)}
+      else:
+        kwargs = {**kwargs, **decorator.tu_dest_args(type_helper.v)}
 
     kwargs["vs2"] = type_helper.v
 
diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py
index e5b2625b9..651e54b3e 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py
@@ -77,3 +77,17 @@ def gen(g):
       decorators.has_masking_maskedoff_policy)
 
   ####################################################################
+
+  g.start_group("Zvkg - Vector GCM/GMAC")
+
+  g.function_group(
+      vector_crypto_template,
+      "Vector GCM/GMAC",
+      "",  # FIXME: We probably have a separate document for vector-crypto
+      ["vghsh", "vgmul"],
+      UITYPE,
+      [32],
+      LMULS,
+      decorators.has_no_masking_policy)
+
+  ####################################################################

From 1abd92687346283378fe6fc04651d3639c800090 Mon Sep 17 00:00:00 2001
From: eopXD
Date: Mon, 17 Jul 2023 10:59:59 -0700
Subject: [PATCH 048/151] [Auto-gen] Update documents under
 ../auto-generated/vector-crypto.
(make git-commit-autogen-doc) --- .../vector-crypto/intrinsic_funcs.md | 18 ++++++++++++++++++ .../02_zvkg_-_vector_gcm_gmac.md | 18 ++++++++++++++++++ .../overloaded_intrinsic_funcs.md | 18 ++++++++++++++++++ .../02_zvkg_-_vector_gcm_gmac.md | 18 ++++++++++++++++++ .../policy_funcs/intrinsic_funcs.md | 18 ++++++++++++++++++ .../02_zvkg_-_vector_gcm_gmac.md | 18 ++++++++++++++++++ .../policy_funcs/overloaded_intrinsic_funcs.md | 18 ++++++++++++++++++ .../02_zvkg_-_vector_gcm_gmac.md | 18 ++++++++++++++++++ 8 files changed, 144 insertions(+) create mode 100644 auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md create mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md index 72aff8e5e..1bc5b01bf 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/intrinsic_funcs.md @@ -620,3 +620,21 @@ vuint64m4_t __riscv_vclmulh_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_ vuint64m8_t __riscv_vclmulh_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); vuint64m8_t __riscv_vclmulh_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); ``` + +## Zvkg - Vector GCM/GMAC: + +### [Vector GCM/GMAC](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vghsh_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vghsh_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vghsh_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vghsh_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vgmul_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vgmul_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md b/auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md new file mode 100644 index 000000000..5e3e8fcf8 --- /dev/null +++ b/auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md @@ -0,0 +1,18 @@ + +## Zvkg - Vector GCM/GMAC: + +### [Vector GCM/GMAC](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vghsh_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vghsh_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vghsh_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vghsh_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vgmul_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t 
__riscv_vgmul_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md index ecbd3ea5c..769cd3dd4 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md @@ -620,3 +620,21 @@ vuint64m4_t __riscv_vclmulh (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size vuint64m8_t __riscv_vclmulh (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); vuint64m8_t __riscv_vclmulh (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); ``` + +## Zvkg - Vector GCM/GMAC: + +### [Vector GCM/GMAC](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vghsh (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vghsh (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vghsh (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vghsh (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vgmul (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vgmul (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md new file mode 100644 index 000000000..0b3bf1254 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md @@ -0,0 +1,18 @@ + +## Zvkg - Vector GCM/GMAC: + +### [Vector GCM/GMAC](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vghsh (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vghsh (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vghsh (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vghsh (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vgmul (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vgmul (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md index a402d3329..284440cf5 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -1026,3 +1026,21 @@ vuint64m4_t __riscv_vclmulh_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m8_t __riscv_vclmulh_vv_u64m8_mu (vbool8_t mask, vuint64m8_t 
maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); vuint64m8_t __riscv_vclmulh_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); ``` + +## Zvkg - Vector GCM/GMAC: + +### [Vector GCM/GMAC](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vghsh_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vghsh_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vghsh_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vghsh_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vgmul_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vgmul_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md new file mode 100644 index 000000000..0cd0c65e3 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md @@ -0,0 +1,18 @@ + +## Zvkg - Vector GCM/GMAC: + +### [Vector GCM/GMAC](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vghsh_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vghsh_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vghsh_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vghsh_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vgmul_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vgmul_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md index e9d67b88e..05ec13669 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -1026,3 +1026,21 @@ vuint64m4_t __riscv_vclmulh_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4 vuint64m8_t __riscv_vclmulh_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); vuint64m8_t __riscv_vclmulh_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); ``` + +## Zvkg - Vector GCM/GMAC: + +### [Vector GCM/GMAC](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vghsh_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t 
vs1, size_t vl); +vuint32m2_t __riscv_vghsh_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vghsh_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vghsh_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vgmul_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vgmul_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md new file mode 100644 index 000000000..0f44b8ea2 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md @@ -0,0 +1,18 @@ + +## Zvkg - Vector GCM/GMAC: + +### [Vector GCM/GMAC](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vghsh_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vghsh_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vghsh_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vghsh_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vgmul_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vgmul_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` From 1ff15a28fec8a0f6ea46b13d353c0e023d540556 Mon Sep 17 00:00:00 2001 From: eopXD Date: Mon, 17 Jul 2023 11:00:00 -0700 Subject: [PATCH 049/151] [Auto-gen] Update tests under ../auto-generated/vector-crypto. 
(make git-commit-autogen-test)
---
 .../vector-crypto/api-testing/vghsh.c         | 26 ++++++++++++++++++
 .../vector-crypto/api-testing/vgmul.c         | 26 ++++++++++++++++++
 .../vector-crypto/llvm-api-tests/vghsh.c      | 27 +++++++++++++++++++
 .../vector-crypto/llvm-api-tests/vgmul.c      | 27 +++++++++++++++++++
 .../llvm-overloaded-tests/vghsh.c             | 27 +++++++++++++++++++
 .../llvm-overloaded-tests/vgmul.c             | 27 +++++++++++++++++++
 .../overloaded-api-testing/vghsh.c            | 26 ++++++++++++++++++
 .../overloaded-api-testing/vgmul.c            | 26 ++++++++++++++++++
 .../policy_funcs/api-testing/vghsh.c          | 26 ++++++++++++++++++
 .../policy_funcs/api-testing/vgmul.c          | 26 ++++++++++++++++++
 .../policy_funcs/llvm-api-tests/vghsh.c       | 27 +++++++++++++++++++
 .../policy_funcs/llvm-api-tests/vgmul.c       | 27 +++++++++++++++++++
 .../llvm-overloaded-tests/vghsh.c             | 27 +++++++++++++++++++
 .../llvm-overloaded-tests/vgmul.c             | 27 +++++++++++++++++++
 .../overloaded-api-testing/vghsh.c            | 26 ++++++++++++++++++
 .../overloaded-api-testing/vgmul.c            | 26 ++++++++++++++++++
 16 files changed, 424 insertions(+)
 create mode 100644 auto-generated/vector-crypto/api-testing/vghsh.c
 create mode 100644 auto-generated/vector-crypto/api-testing/vgmul.c
 create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vghsh.c
 create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vgmul.c
 create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c
 create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c
 create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vghsh.c
 create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vgmul.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vghsh.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vgmul.c

diff --git a/auto-generated/vector-crypto/api-testing/vghsh.c b/auto-generated/vector-crypto/api-testing/vghsh.c
new file mode 100644
index 000000000..b93ebfa2f
--- /dev/null
+++ b/auto-generated/vector-crypto/api-testing/vghsh.c
@@ -0,0 +1,26 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m8(vd, vs2, vs1, vl);
+}
+
diff --git a/auto-generated/vector-crypto/api-testing/vgmul.c
b/auto-generated/vector-crypto/api-testing/vgmul.c
new file mode 100644
index 000000000..09521d4d0
--- /dev/null
+++ b/auto-generated/vector-crypto/api-testing/vgmul.c
@@ -0,0 +1,26 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vgmul_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vgmul_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m8(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vghsh.c b/auto-generated/vector-crypto/llvm-api-tests/vghsh.c
new file mode 100644
index 000000000..71dcf52e5
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-api-tests/vghsh.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32mf2(vd, vs2, vs1, vl);
+}
+
+vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m1(vd, vs2, vs1, vl);
+}
+
+vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m2(vd, vs2, vs1, vl);
+}
+
+vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m4(vd, vs2, vs1, vl);
+}
+
+vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vghsh_vv_u32m8(vd, vs2, vs1, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vgmul.c b/auto-generated/vector-crypto/llvm-api-tests/vgmul.c
new file mode 100644
index 000000000..a39f3c8c0
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-api-tests/vgmul.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vgmul_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vgmul_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vgmul_vv_u32m8(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c
new file mode 100644
index 000000000..5940884a9
--- /dev/null
+++
b/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vghsh(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vghsh(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vghsh(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vghsh(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vghsh(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c new file mode 100644 index 000000000..4d254ff6c --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vgmul(vd, vs2, vl); +} + +vuint32m1_t test_vgmul_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vgmul(vd, vs2, vl); +} + +vuint32m2_t test_vgmul_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vgmul(vd, vs2, vl); +} + +vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vgmul(vd, vs2, vl); +} + +vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vgmul(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vghsh.c b/auto-generated/vector-crypto/overloaded-api-testing/vghsh.c new file mode 100644 index 000000000..8a4eb46a5 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vghsh.c @@ -0,0 +1,26 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vghsh(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vghsh(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vghsh(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vghsh(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vghsh(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vgmul.c b/auto-generated/vector-crypto/overloaded-api-testing/vgmul.c new file mode 100644 index 000000000..48c480933 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vgmul.c @@ -0,0 +1,26 @@ +#include 
+#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vgmul(vd, vs2, vl); +} + +vuint32m1_t test_vgmul_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vgmul(vd, vs2, vl); +} + +vuint32m2_t test_vgmul_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vgmul(vd, vs2, vl); +} + +vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vgmul(vd, vs2, vl); +} + +vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vgmul(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c new file mode 100644 index 000000000..48ec0cb4b --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c @@ -0,0 +1,26 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vghsh_vv_u32mf2_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vghsh_vv_u32m1_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vghsh_vv_u32m2_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vghsh_vv_u32m4_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vghsh_vv_u32m8_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c new file mode 100644 index 000000000..13b28496d --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c @@ -0,0 +1,26 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vgmul_vv_u32mf2_tu(vd, vs2, vl); +} + +vuint32m1_t test_vgmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vgmul_vv_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vgmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vgmul_vv_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vgmul_vv_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vgmul_vv_u32m8_tu(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c new file mode 100644 index 000000000..e8271d882 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, 
vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vghsh_vv_u32mf2_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vghsh_vv_u32m1_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vghsh_vv_u32m2_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vghsh_vv_u32m4_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vghsh_vv_u32m8_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c new file mode 100644 index 000000000..9f725f34a --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vgmul_vv_u32mf2_tu(vd, vs2, vl); +} + +vuint32m1_t test_vgmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vgmul_vv_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vgmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vgmul_vv_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vgmul_vv_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vgmul_vv_u32m8_tu(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c new file mode 100644 index 000000000..4c246ad78 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vghsh_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vghsh_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vghsh_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vghsh_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vghsh_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c new file mode 100644 index 000000000..5bad9f0f6 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c 
@@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vgmul_tu(vd, vs2, vl); +} + +vuint32m1_t test_vgmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vgmul_tu(vd, vs2, vl); +} + +vuint32m2_t test_vgmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vgmul_tu(vd, vs2, vl); +} + +vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vgmul_tu(vd, vs2, vl); +} + +vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vgmul_tu(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vghsh.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vghsh.c new file mode 100644 index 000000000..eeb1718a4 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vghsh.c @@ -0,0 +1,26 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vghsh_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vghsh_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vghsh_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vghsh_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vghsh_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vgmul.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vgmul.c new file mode 100644 index 000000000..a50b7e4a9 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vgmul.c @@ -0,0 +1,26 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vgmul_tu(vd, vs2, vl); +} + +vuint32m1_t test_vgmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vgmul_tu(vd, vs2, vl); +} + +vuint32m2_t test_vgmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vgmul_tu(vd, vs2, vl); +} + +vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vgmul_tu(vd, vs2, vl); +} + +vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vgmul_tu(vd, vs2, vl); +} + From 4b5a8be952845ba32cae8ff159670de658746690 Mon Sep 17 00:00:00 2001 From: eopXD Date: Thu, 1 Jun 2023 04:45:56 -0700 Subject: [PATCH 050/151] [vector-crypto] Define intrinsics for the Zvkned extension Signed-off-by: eop Chen --- .../rvv_intrinsic_gen/generator.py | 3 ++ .../templates/vector_crypto_template.py | 17 ++++++- .../rvv_intrinsic_gen/vector_crypto_inst.py | 44 +++++++++++++++++++ 3 files changed, 63 
insertions(+), 1 deletion(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 1f4f9ada9..97a92f6b9 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -580,6 +580,9 @@ def output_call_arg(arg_name, type_name): if arg_name == "frm": return "__RISCV_FRM_RNE" + if arg_name == "uimm": + return "0" + return arg_name # Write test func body. diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py index 638964d3d..c7201db9c 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py @@ -28,10 +28,20 @@ # Zvkg: Vector GCM/GMAC operand_mnemonic_dict["vghsh"] = ["vv"] operand_mnemonic_dict["vgmul"] = ["vv"] +# Zvkned: NIST Suite: Vector AES Block Cipher +operand_mnemonic_dict["vaesef"] = ["vv", "vs"] +operand_mnemonic_dict["vaesem"] = ["vv", "vs"] +operand_mnemonic_dict["vaesdf"] = ["vv", "vs"] +operand_mnemonic_dict["vaesdm"] = ["vv", "vs"] +operand_mnemonic_dict["vaeskf1"] = ["vi"] +operand_mnemonic_dict["vaeskf2"] = ["vi"] +operand_mnemonic_dict["vaesz"] = ["vs"] def has_vd_input(name): - has_vd_input_inst_set = {"vghsh", "vgmul"} + has_vd_input_inst_set = { + "vghsh", "vgmul", "vaesef", "vaesem", "vaesdf", "vaesdm", "vaesz" + } return name in has_vd_input_inst_set @@ -76,6 +86,9 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): else: inst_info = InstInfo.get(args, decorator, InstType.VX, ExtraAttr.NO_ATTR) + elif operand_mnemonic == "vi": + inst_info = InstInfo.get(args, decorator, InstType.VI, + ExtraAttr.NO_ATTR) elif operand_mnemonic == "v": inst_info = InstInfo.get(args, decorator, InstType.V, ExtraAttr.NO_ATTR) @@ -109,6 +122,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): kwargs["rs1"] = type_helper.size_t else: kwargs["rs1"] = type_helper.s + if "vi" in operand_mnemonic_dict[op]: + kwargs["uimm"] = type_helper.size_t kwargs["vl"] = type_helper.size_t diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py index 651e54b3e..f25cc6562 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py @@ -91,3 +91,47 @@ def gen(g): decorators.has_no_masking_policy) #################################################################### + + g.start_group("Zvkned - NIST Suite: Vector AES Block Cipher") + + g.function_group( + vector_crypto_template, + "Vector AES Encryption", + "", # FIXME: We probably have a separate document for vector-crypto + ["vaesef", "vaesem"], + UITYPE, + [32], + LMULS, + decorators.has_no_masking_policy) + + g.function_group( + vector_crypto_template, + "Vector AES Decryption", + "", # FIXME: We probably have a separate document for vector-crypto + ["vaesdf", "vaesdm"], + UITYPE, + [32], + LMULS, + decorators.has_no_masking_policy) + + g.function_group( + vector_crypto_template, + "Vector AES-128 Forward KeySchedule generation", + "", # FIXME: We probably have a separate document for vector-crypto + ["vaeskf1", "vaeskf2"], + UITYPE, + [32], + LMULS, + decorators.has_no_masking_policy) + + g.function_group( + vector_crypto_template, + "Vector AES round zero", + "", # 
FIXME: We probably have a separate document for vector-crypto + ["vaesz"], + UITYPE, + [32], + LMULS, + decorators.has_no_masking_policy) + + #################################################################### From b3eca6fedfe5c86788ce1c861a045e4edb4acbc6 Mon Sep 17 00:00:00 2001 From: eopXD Date: Mon, 17 Jul 2023 11:00:03 -0700 Subject: [PATCH 051/151] [Auto-gen] Update documents under ../auto-generated/vector-crypto. (make git-commit-autogen-doc) --- .../vector-crypto/intrinsic_funcs.md | 81 +++++++++++++++++++ ...d_-_nist_suite:_vector_aes_block_cipher.md | 81 +++++++++++++++++++ .../overloaded_intrinsic_funcs.md | 81 +++++++++++++++++++ ...d_-_nist_suite:_vector_aes_block_cipher.md | 81 +++++++++++++++++++ .../policy_funcs/intrinsic_funcs.md | 81 +++++++++++++++++++ ...d_-_nist_suite:_vector_aes_block_cipher.md | 81 +++++++++++++++++++ .../overloaded_intrinsic_funcs.md | 81 +++++++++++++++++++ ...d_-_nist_suite:_vector_aes_block_cipher.md | 81 +++++++++++++++++++ 8 files changed, 648 insertions(+) create mode 100644 auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md create mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md index 1bc5b01bf..f5dd142d2 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/intrinsic_funcs.md @@ -638,3 +638,84 @@ vuint32m2_t __riscv_vgmul_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vgmul_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vgmul_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` + +## Zvkned - NIST Suite: Vector AES Block Cipher: + +### [Vector AES Encryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesef_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vv_u32m4 (vuint32m4_t 
vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES Decryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesdf_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES-128 Forward KeySchedule generation](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaeskf1_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector AES round zero](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesz_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md 
b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md new file mode 100644 index 000000000..c059b3516 --- /dev/null +++ b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -0,0 +1,81 @@ + +## Zvkned - NIST Suite: Vector AES Block Cipher: + +### [Vector AES Encryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesef_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES Decryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesdf_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t 
vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES-128 Forward KeySchedule generation](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaeskf1_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector AES round zero](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesz_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md index 769cd3dd4..d4ed46f48 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md @@ -638,3 +638,84 @@ vuint32m2_t __riscv_vgmul (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vgmul (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vgmul (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` + +## Zvkned - NIST Suite: Vector AES Block Cipher: + +### [Vector AES Encryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, 
size_t vl); +vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES Decryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES-128 Forward KeySchedule generation](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaeskf1 (vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1 (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1 (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1 (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1 (vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2 (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2 (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2 (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector AES round zero](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesz (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md new file mode 100644 index 000000000..3ff935a8c --- /dev/null +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -0,0 +1,81 @@ + +## Zvkned - NIST Suite: Vector AES Block Cipher: + +### [Vector AES Encryption](): + 
+**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES Decryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES-128 Forward KeySchedule generation](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaeskf1 (vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1 (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1 (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1 (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1 (vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2 
(vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2 (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2 (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2 (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector AES round zero](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesz (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md index 284440cf5..673d81711 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -1044,3 +1044,84 @@ vuint32m2_t __riscv_vgmul_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t v vuint32m4_t __riscv_vgmul_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vgmul_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` + +## Zvkned - NIST Suite: Vector AES Block Cipher: + +### [Vector AES Encryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES Decryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t 
__riscv_vaesdf_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES-128 Forward KeySchedule generation](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaeskf1_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector AES round zero](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesz_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md new file mode 100644 index 000000000..815e0f4ea --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -0,0 +1,81 @@ + 
+## Zvkned - NIST Suite: Vector AES Block Cipher: + +### [Vector AES Encryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES Decryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, 
size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES-128 Forward KeySchedule generation](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaeskf1_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector AES round zero](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesz_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md index 05ec13669..3d0d018ff 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -1044,3 +1044,84 @@ vuint32m2_t __riscv_vgmul_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vgmul_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vgmul_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` + +## Zvkned - NIST Suite: Vector AES Block Cipher: + +### [Vector AES Encryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, 
size_t vl); +vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES Decryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES-128 Forward KeySchedule generation](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaeskf1_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector AES round zero](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t 
__riscv_vaesz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md new file mode 100644 index 000000000..d91cc9aee --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -0,0 +1,81 @@ + +## Zvkned - NIST Suite: Vector AES Block Cipher: + +### [Vector AES Encryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES Decryption](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, 
vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` + +### [Vector AES-128 Forward KeySchedule generation](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaeskf1_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector AES round zero](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vaesz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` From 6128af5a8944d17a887b24758cf08d68e28e4f7f Mon Sep 17 00:00:00 2001 From: eopXD Date: Mon, 17 Jul 2023 11:00:04 -0700 Subject: [PATCH 052/151] [Auto-gen] Update tests under ../auto-generated/vector-crypto. 
(make git-commit-autogen-test) --- .../vector-crypto/api-testing/vaesdf.c | 46 ++++++++++++++++++ .../vector-crypto/api-testing/vaesdm.c | 46 ++++++++++++++++++ .../vector-crypto/api-testing/vaesef.c | 46 ++++++++++++++++++ .../vector-crypto/api-testing/vaesem.c | 46 ++++++++++++++++++ .../vector-crypto/api-testing/vaeskf1.c | 26 ++++++++++ .../vector-crypto/api-testing/vaeskf2.c | 26 ++++++++++ .../vector-crypto/api-testing/vaesz.c | 26 ++++++++++ .../vector-crypto/llvm-api-tests/vaesdf.c | 47 +++++++++++++++++++ .../vector-crypto/llvm-api-tests/vaesdm.c | 47 +++++++++++++++++++ .../vector-crypto/llvm-api-tests/vaesef.c | 47 +++++++++++++++++++ .../vector-crypto/llvm-api-tests/vaesem.c | 47 +++++++++++++++++++ .../vector-crypto/llvm-api-tests/vaeskf1.c | 27 +++++++++++ .../vector-crypto/llvm-api-tests/vaeskf2.c | 27 +++++++++++ .../vector-crypto/llvm-api-tests/vaesz.c | 27 +++++++++++ .../llvm-overloaded-tests/vaesdf.c | 47 +++++++++++++++++++ .../llvm-overloaded-tests/vaesdm.c | 47 +++++++++++++++++++ .../llvm-overloaded-tests/vaesef.c | 47 +++++++++++++++++++ .../llvm-overloaded-tests/vaesem.c | 47 +++++++++++++++++++ .../llvm-overloaded-tests/vaeskf1.c | 27 +++++++++++ .../llvm-overloaded-tests/vaeskf2.c | 27 +++++++++++ .../llvm-overloaded-tests/vaesz.c | 27 +++++++++++ .../overloaded-api-testing/vaesdf.c | 46 ++++++++++++++++++ .../overloaded-api-testing/vaesdm.c | 46 ++++++++++++++++++ .../overloaded-api-testing/vaesef.c | 46 ++++++++++++++++++ .../overloaded-api-testing/vaesem.c | 46 ++++++++++++++++++ .../overloaded-api-testing/vaeskf1.c | 26 ++++++++++ .../overloaded-api-testing/vaeskf2.c | 26 ++++++++++ .../overloaded-api-testing/vaesz.c | 26 ++++++++++ .../policy_funcs/api-testing/vaesdf.c | 46 ++++++++++++++++++ .../policy_funcs/api-testing/vaesdm.c | 46 ++++++++++++++++++ .../policy_funcs/api-testing/vaesef.c | 46 ++++++++++++++++++ .../policy_funcs/api-testing/vaesem.c | 46 ++++++++++++++++++ .../policy_funcs/api-testing/vaeskf1.c | 26 ++++++++++ .../policy_funcs/api-testing/vaeskf2.c | 26 ++++++++++ .../policy_funcs/api-testing/vaesz.c | 26 ++++++++++ .../policy_funcs/llvm-api-tests/vaesdf.c | 47 +++++++++++++++++++ .../policy_funcs/llvm-api-tests/vaesdm.c | 47 +++++++++++++++++++ .../policy_funcs/llvm-api-tests/vaesef.c | 47 +++++++++++++++++++ .../policy_funcs/llvm-api-tests/vaesem.c | 47 +++++++++++++++++++ .../policy_funcs/llvm-api-tests/vaeskf1.c | 27 +++++++++++ .../policy_funcs/llvm-api-tests/vaeskf2.c | 27 +++++++++++ .../policy_funcs/llvm-api-tests/vaesz.c | 27 +++++++++++ .../llvm-overloaded-tests/vaesdf.c | 47 +++++++++++++++++++ .../llvm-overloaded-tests/vaesdm.c | 47 +++++++++++++++++++ .../llvm-overloaded-tests/vaesef.c | 47 +++++++++++++++++++ .../llvm-overloaded-tests/vaesem.c | 47 +++++++++++++++++++ .../llvm-overloaded-tests/vaeskf1.c | 27 +++++++++++ .../llvm-overloaded-tests/vaeskf2.c | 27 +++++++++++ .../llvm-overloaded-tests/vaesz.c | 27 +++++++++++ .../overloaded-api-testing/vaesdf.c | 46 ++++++++++++++++++ .../overloaded-api-testing/vaesdm.c | 46 ++++++++++++++++++ .../overloaded-api-testing/vaesef.c | 46 ++++++++++++++++++ .../overloaded-api-testing/vaesem.c | 46 ++++++++++++++++++ .../overloaded-api-testing/vaeskf1.c | 26 ++++++++++ .../overloaded-api-testing/vaeskf2.c | 26 ++++++++++ .../overloaded-api-testing/vaesz.c | 26 ++++++++++ 56 files changed, 2124 insertions(+) create mode 100644 auto-generated/vector-crypto/api-testing/vaesdf.c create mode 100644 auto-generated/vector-crypto/api-testing/vaesdm.c create mode 100644 
auto-generated/vector-crypto/api-testing/vaesef.c create mode 100644 auto-generated/vector-crypto/api-testing/vaesem.c create mode 100644 auto-generated/vector-crypto/api-testing/vaeskf1.c create mode 100644 auto-generated/vector-crypto/api-testing/vaeskf2.c create mode 100644 auto-generated/vector-crypto/api-testing/vaesz.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vaesdf.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vaesdm.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vaesef.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vaesem.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vaesz.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vaesef.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vaesem.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vaesz.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c create mode 100644 
auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c diff --git a/auto-generated/vector-crypto/api-testing/vaesdf.c b/auto-generated/vector-crypto/api-testing/vaesdf.c new file mode 100644 index 000000000..17cb54972 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vaesdf.c @@ -0,0 +1,46 @@ +#include <riscv_vector.h> +#include <stdint.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32mf2(vd, vs2, vl); +} + +vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2(vd, vs2, vl); +} + +vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32m1(vd, vs2, vl); +} + +vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32m2(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32m4(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32m8(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vaesdm.c b/auto-generated/vector-crypto/api-testing/vaesdm.c new file mode 100644 index 000000000..057d8afd8 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vaesdm.c @@ -0,0 +1,46 @@ +#include <riscv_vector.h> +#include <stdint.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32mf2(vd, vs2, vl); +} + +vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2(vd, vs2, vl); +} + +vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32m1(vd, vs2, vl); +} + +vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
size_t vl) { + return __riscv_vaesdm_vv_u32m2(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32m4(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32m8(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vaesef.c b/auto-generated/vector-crypto/api-testing/vaesef.c new file mode 100644 index 000000000..3576a1511 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vaesef.c @@ -0,0 +1,46 @@ +#include <riscv_vector.h> +#include <stdint.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32mf2(vd, vs2, vl); +} + +vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2(vd, vs2, vl); +} + +vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32m1(vd, vs2, vl); +} + +vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32m2(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32m4(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32m8(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vaesem.c b/auto-generated/vector-crypto/api-testing/vaesem.c new file mode 100644 index 000000000..11a17faa0 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vaesem.c @@ -0,0 +1,46 @@ +#include <riscv_vector.h> +#include <stdint.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32mf2(vd, vs2, vl); +} + +vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2(vd, vs2, vl); +} + +vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m1(vd, vs2, vl); +} + +vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m2(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2(vd, vs2,
vl); +} + +vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m4(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m8(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vaeskf1.c b/auto-generated/vector-crypto/api-testing/vaeskf1.c new file mode 100644 index 000000000..3e31056e0 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vaeskf1.c @@ -0,0 +1,26 @@ +#include <riscv_vector.h> +#include <stdint.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32mf2(vs2, 0, vl); +} + +vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m1(vs2, 0, vl); +} + +vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m2(vs2, 0, vl); +} + +vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m4(vs2, 0, vl); +} + +vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m8(vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vaeskf2.c b/auto-generated/vector-crypto/api-testing/vaeskf2.c new file mode 100644 index 000000000..8efafda00 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vaeskf2.c @@ -0,0 +1,26 @@ +#include <riscv_vector.h> +#include <stdint.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2_vi_u32mf2(vs2, 0, vl); +} + +vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2_vi_u32m1(vs2, 0, vl); +} + +vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2_vi_u32m2(vs2, 0, vl); +} + +vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2_vi_u32m4(vs2, 0, vl); +} + +vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2_vi_u32m8(vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vaesz.c b/auto-generated/vector-crypto/api-testing/vaesz.c new file mode 100644 index 000000000..1d1e99eb2 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vaesz.c @@ -0,0 +1,26 @@ +#include <riscv_vector.h> +#include <stdint.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2(vd, vs2, vl); +} + +vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8(vd,
vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c new file mode 100644 index 000000000..f2a136f0e --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c @@ -0,0 +1,47 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32mf2(vd, vs2, vl); +} + +vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2(vd, vs2, vl); +} + +vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32m1(vd, vs2, vl); +} + +vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32m2(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32m4(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32m8(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c new file mode 100644 index 000000000..22bc1f416 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c @@ -0,0 +1,47 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32mf2(vd, vs2, vl); +} + +vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2(vd, vs2, vl); +} + +vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32m1(vd, vs2, vl); +} + +vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32m2(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32m4(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32m8(vd, vs2, vl); +} + +vuint32m8_t
test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c new file mode 100644 index 000000000..e9a5fd0cb --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c @@ -0,0 +1,47 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32mf2(vd, vs2, vl); +} + +vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2(vd, vs2, vl); +} + +vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32m1(vd, vs2, vl); +} + +vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32m2(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32m4(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32m8(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c new file mode 100644 index 000000000..707d6ef10 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c @@ -0,0 +1,47 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32mf2(vd, vs2, vl); +} + +vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2(vd, vs2, vl); +} + +vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m1(vd, vs2, vl); +} + +vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m2(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m4(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd,
vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m8(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c b/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c new file mode 100644 index 000000000..bd2625c66 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32mf2(vs2, 0, vl); +} + +vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m1(vs2, 0, vl); +} + +vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m2(vs2, 0, vl); +} + +vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m4(vs2, 0, vl); +} + +vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m8(vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c new file mode 100644 index 000000000..c83da63c1 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2_vi_u32mf2(vs2, 0, vl); +} + +vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2_vi_u32m1(vs2, 0, vl); +} + +vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2_vi_u32m2(vs2, 0, vl); +} + +vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2_vi_u32m4(vs2, 0, vl); +} + +vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2_vi_u32m8(vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c new file mode 100644 index 000000000..f219721ca --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2(vd, vs2, vl); +} + +vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t
test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c new file mode 100644 index 000000000..23d4151c5 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c @@ -0,0 +1,47 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c new file mode 100644 index 000000000..6769c8d21 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c @@ -0,0 +1,47 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, 
vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c new file mode 100644 index 000000000..51a65a3aa --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c @@ -0,0 +1,47 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c new file mode 100644 index 000000000..4c25db1cc --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c @@ -0,0 +1,47 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return 
__riscv_vaesem(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c new file mode 100644 index 000000000..3fa9b9126 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1(vs2, 0, vl); +} + +vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1(vs2, 0, vl); +} + +vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1(vs2, 0, vl); +} + +vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1(vs2, 0, vl); +} + +vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1(vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c new file mode 100644 index 000000000..7060fdf6e --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2(vs2, 0, vl); +} + +vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2(vs2, 0, vl); +} + +vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2(vs2, 0, vl); +} + +vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2(vs2, 0, vl); +} + +vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2(vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c new file mode 100644 index 000000000..7f64b61e4 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz(vd, vs2, vl); +} + +vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz(vd, vs2, vl); +} + +vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesz(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c new file mode 
100644 index 000000000..968f0f6d0 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c @@ -0,0 +1,46 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c new file mode 100644 index 000000000..070daf1cb --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c @@ -0,0 +1,46 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c new file mode 100644 index 000000000..33b2f940a --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c @@ -0,0 +1,46 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t 
vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c new file mode 100644 index 000000000..33eb27e22 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c @@ -0,0 +1,46 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c new file mode 100644 index 000000000..595213fe1 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c @@ -0,0 +1,26 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1(vs2, 0, vl); +} + +vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1(vs2, 0, vl); +} + +vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1(vs2, 0, vl); +} + 
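+// NOTE (editorial sketch, not part of the generated output): a literal 0 is
+// passed instead of the `uimm` parameter because vaeskf1 encodes the AES-128
+// round number as an immediate, so the intrinsic only accepts a compile-time
+// constant. A real key expansion would step the immediate through the rounds,
+// e.g. with hypothetical round keys rk0/rk1:
+//   vuint32m1_t rk1 = __riscv_vaeskf1(rk0, 1, vl);
+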
+vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1(vs2, 0, vl); +} + +vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1(vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c new file mode 100644 index 000000000..9c6fe0e9b --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c @@ -0,0 +1,26 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2(vs2, 0, vl); +} + +vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2(vs2, 0, vl); +} + +vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2(vs2, 0, vl); +} + +vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2(vs2, 0, vl); +} + +vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf2(vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c new file mode 100644 index 000000000..21b840e2a --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c @@ -0,0 +1,26 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz(vd, vs2, vl); +} + +vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz(vd, vs2, vl); +} + +vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesz(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c new file mode 100644 index 000000000..296d2e28d --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c @@ -0,0 +1,46 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32mf2_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_tu(vd, vs2, vl); +} + +vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32m1_tu(vd, vs2, vl); +} + +vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32m2_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + 
return __riscv_vaesdf_vv_u32m4_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c new file mode 100644 index 000000000..227aa0c7d --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c @@ -0,0 +1,46 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32mf2_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_tu(vd, vs2, vl); +} + +vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32m1_tu(vd, vs2, vl); +} + +vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32m2_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32m4_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c new file mode 100644 index 000000000..74edec47c --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c @@ -0,0 +1,46 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32mf2_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_tu(vd, vs2, vl); +} + +vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32m1_tu(vd, vs2, vl); +} + +vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32m2_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, 
size_t vl) { + return __riscv_vaesef_vv_u32m4_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c new file mode 100644 index 000000000..838abfc41 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c @@ -0,0 +1,46 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32mf2_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_tu(vd, vs2, vl); +} + +vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m1_tu(vd, vs2, vl); +} + +vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m2_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m4_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c new file mode 100644 index 000000000..31bae4be0 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c @@ -0,0 +1,26 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32mf2_tu(maskedoff, vs2, 0, vl); +} + +vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m1_tu(maskedoff, vs2, 0, vl); +} + +vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m2_tu(maskedoff, vs2, 0, vl); +} + +vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m4_tu(maskedoff, vs2, 0, vl); +} + +vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vaeskf1_vi_u32m8_tu(maskedoff, vs2, 0, vl); +} + diff --git 
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c
new file mode 100644
index 000000000..da2024633
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c
@@ -0,0 +1,26 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c
new file mode 100644
index 000000000..d0b5008ff
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c
@@ -0,0 +1,26 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c
new file mode 100644
index 000000000..6eae5528a
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c
new file mode 100644
index 000000000..39900c92e
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c
new file mode 100644
index 000000000..29c44c80d
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c
new file mode 100644
index 000000000..48a92787a
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c
new file mode 100644
index 000000000..6dc0b6dba
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c
new file mode 100644
index 000000000..17b588d02
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c
new file mode 100644
index 000000000..bfc56949d
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c
new file mode 100644
index 000000000..3caa6a027
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c
new file mode 100644
index 000000000..8c6cca9f9
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c
new file mode 100644
index 000000000..90a6e891e
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c
new file mode 100644
index 000000000..6eef057b9
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c
new file mode 100644
index 000000000..a503020a9
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c
new file mode 100644
index 000000000..2c459fd63
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c
new file mode 100644
index 000000000..cbe231c57
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c
new file mode 100644
index 000000000..bbd18b8f9
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c
@@ -0,0 +1,46 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdf_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c
new file mode 100644
index 000000000..9c8089587
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c
@@ -0,0 +1,46 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesdm_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c
new file mode 100644
index 000000000..0afd0df05
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c
@@ -0,0 +1,46 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesef_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c
new file mode 100644
index 000000000..91d2cb885
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c
@@ -0,0 +1,46 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesem_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c
new file mode 100644
index 000000000..c37bb0a86
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c
@@ -0,0 +1,26 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c
new file mode 100644
index 000000000..4f0e78cf9
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c
@@ -0,0 +1,26 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c
new file mode 100644
index 000000000..7cbbb2e4f
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c
@@ -0,0 +1,26 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaesz_tu(vd, vs2, vl);
+}
+
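The `_vs` forms tested above take their round key from element group 0 of `vs2`, so one key schedule can drive all element groups of a wider operand. For reference, a minimal single-block AES-128 encryption sketch could look like the following (assuming VLEN >= 128 and the non-policy Zvkned prototypes defined earlier in this series):

``` C
#include <stdint.h>
#include <riscv_vector.h>

// Hypothetical sketch, not generated code: encrypt one 128-bit block given
// the expanded round keys rk[0..10] (e.g. produced with vaeskf1.vi).
static inline vuint32m1_t aes128_encrypt_block(vuint32m1_t block,
                                               const vuint32m1_t rk[11]) {
  size_t vl = __riscv_vsetvl_e32m1(4);  // one 128-bit element group
  // Initial round-key addition, nine middle rounds, then the final round
  // (which omits MixColumns).
  vuint32m1_t state = __riscv_vaesz_vs_u32m1(block, rk[0], vl);
  for (int r = 1; r <= 9; ++r)
    state = __riscv_vaesem_vv_u32m1(state, rk[r], vl);
  return __riscv_vaesef_vv_u32m1(state, rk[10], vl);
}
```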
From 3afcd16ae903f74c584d28c05dc27ee3aa0ed564 Mon Sep 17 00:00:00 2001
From: eopXD
Date: Thu, 1 Jun 2023 04:53:47 -0700
Subject: [PATCH 053/151] [vector-crypto] Define intrinsics for the Zvknh[ab]
 extension

Signed-off-by: eop Chen
---
 .../templates/vector_crypto_template.py      | 10 ++++++--
 .../rvv_intrinsic_gen/vector_crypto_inst.py  | 25 +++++++++++++++++++
 2 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
index c7201db9c..eed16e39e 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
@@ -36,11 +36,16 @@
 operand_mnemonic_dict["vaeskf1"] = ["vi"]
 operand_mnemonic_dict["vaeskf2"] = ["vi"]
 operand_mnemonic_dict["vaesz"] = ["vs"]
+# Zvknh: NIST Suite: Vector SHA-2 Secure Hash
+operand_mnemonic_dict["vsha2ms"] = ["vv"]
+operand_mnemonic_dict["vsha2ch"] = ["vv"]
+operand_mnemonic_dict["vsha2cl"] = ["vv"]
 
 
 def has_vd_input(name):
   has_vd_input_inst_set = {
-      "vghsh", "vgmul", "vaesef", "vaesem", "vaesdf", "vaesdm", "vaesz"
+      "vghsh", "vgmul", "vaesef", "vaesem", "vaesdf", "vaesdm", "vaesz",
+      "vsha2ms", "vsha2ch", "vsha2cl"
   }
   return name in has_vd_input_inst_set
 
 
 def has_vs1_input(name):
   has_vs1_input_inst_set = {
-      "vandn", "vrol", "vror", "vwsll", "vclmul", "vclmulh", "vghsh"
+      "vandn", "vrol", "vror", "vwsll", "vclmul", "vclmulh", "vghsh", "vsha2ms",
+      "vsha2ch", "vsha2cl"
   }
   return name in has_vs1_input_inst_set
diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py
index f25cc6562..f6b9d63c5 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py
@@ -135,3 +135,28 @@ def gen(g):
       decorators.has_no_masking_policy)
 
   ####################################################################
+
+  g.start_group("Zvknh - NIST Suite: Vector SHA-2 Secure Hash")
+
+  g.function_group(
+      vector_crypto_template,
+      "Vector SHA-2 message schedule",
+      "",  # FIXME: We probably have a separate document for vector-crypto
+      ["vsha2ms"],
+      UITYPE,
+      [32, 64],
+      LMULS,
+      decorators.has_no_masking_policy)
+
+  g.function_group(
+      vector_crypto_template,
+      "Vector SHA-2 two rounds of compression",
+      "",  # FIXME: We probably have a separate document for vector-crypto
+      ["vsha2ch", "vsha2cl"],
+      UITYPE,
+      [32, 64],
+      LMULS,
+      decorators.has_no_masking_policy)
+
+
+####################################################################

From befc490d07bfae2e2d60ac5c882009571e09fe2a Mon Sep 17 00:00:00 2001
From: eopXD
Date: Mon, 17 Jul 2023 11:00:06 -0700
Subject: [PATCH 054/151] [Auto-gen] Update documents under
 ../auto-generated/vector-crypto.
(make git-commit-autogen-doc) --- .../vector-crypto/intrinsic_funcs.md | 41 +++++++++++++++++++ ..._-_nist_suite:_vector_sha-2_secure_hash.md | 41 +++++++++++++++++++ .../overloaded_intrinsic_funcs.md | 41 +++++++++++++++++++ ..._-_nist_suite:_vector_sha-2_secure_hash.md | 41 +++++++++++++++++++ .../policy_funcs/intrinsic_funcs.md | 41 +++++++++++++++++++ ..._-_nist_suite:_vector_sha-2_secure_hash.md | 41 +++++++++++++++++++ .../overloaded_intrinsic_funcs.md | 41 +++++++++++++++++++ ..._-_nist_suite:_vector_sha-2_secure_hash.md | 41 +++++++++++++++++++ 8 files changed, 328 insertions(+) create mode 100644 auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md create mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md index f5dd142d2..335b10f14 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/intrinsic_funcs.md @@ -719,3 +719,44 @@ vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` + +## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: + +### [Vector SHA-2 message schedule](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ms_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` + +### [Vector SHA-2 two rounds of compression](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ch_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ch_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); 
+vuint64m8_t __riscv_vsha2ch_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md b/auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md new file mode 100644 index 000000000..90db92cd4 --- /dev/null +++ b/auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md @@ -0,0 +1,41 @@ + +## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: + +### [Vector SHA-2 message schedule](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ms_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` + +### [Vector SHA-2 two rounds of compression](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ch_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ch_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ch_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); 
+vuint32m1_t __riscv_vsha2cl_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md index d4ed46f48..15ed599bd 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md @@ -719,3 +719,44 @@ vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` + +## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: + +### [Vector SHA-2 message schedule](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ms (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` + +### [Vector SHA-2 two rounds of compression](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ch (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ch (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ch (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl (vuint32m4_t vd, vuint32m4_t 
vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md new file mode 100644 index 000000000..2b8a36920 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md @@ -0,0 +1,41 @@ + +## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: + +### [Vector SHA-2 message schedule](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ms (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` + +### [Vector SHA-2 two rounds of compression](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ch (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ch (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ch (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl 
(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md index 673d81711..fb6b81b42 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -1125,3 +1125,44 @@ vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t v vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` + +## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: + +### [Vector SHA-2 message schedule](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ms_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` + +### [Vector SHA-2 two rounds of compression](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ch_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ch_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ch_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t 
vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md new file mode 100644 index 000000000..c6a2a611f --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md @@ -0,0 +1,41 @@ + +## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: + +### [Vector SHA-2 message schedule](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ms_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` + +### [Vector SHA-2 two rounds of compression](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ch_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ch_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ch_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, 
vuint64m8_t vs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md index 3d0d018ff..ff694aef4 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -1125,3 +1125,44 @@ vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` + +## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: + +### [Vector SHA-2 message schedule](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ms_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` + +### [Vector SHA-2 two rounds of compression](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ch_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ch_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ch_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md 
b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md new file mode 100644 index 000000000..7f060208e --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md @@ -0,0 +1,41 @@ + +## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: + +### [Vector SHA-2 message schedule](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ms_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` + +### [Vector SHA-2 two rounds of compression](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsha2ch_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ch_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ch_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +``` From 9734e5c4da8eba817e9beb51c2a5e6fe6188a3c6 Mon Sep 17 00:00:00 2001 From: eopXD Date: Mon, 17 Jul 2023 11:00:07 -0700 Subject: [PATCH 055/151] [Auto-gen] Update tests under ../auto-generated/vector-crypto. 
(make git-commit-autogen-test) --- .../vector-crypto/api-testing/vsha2ch.c | 42 ++++++++++++++++++ .../vector-crypto/api-testing/vsha2cl.c | 42 ++++++++++++++++++ .../vector-crypto/api-testing/vsha2ms.c | 42 ++++++++++++++++++ .../vector-crypto/llvm-api-tests/vsha2ch.c | 43 +++++++++++++++++++ .../vector-crypto/llvm-api-tests/vsha2cl.c | 43 +++++++++++++++++++ .../vector-crypto/llvm-api-tests/vsha2ms.c | 43 +++++++++++++++++++ .../llvm-overloaded-tests/vsha2ch.c | 43 +++++++++++++++++++ .../llvm-overloaded-tests/vsha2cl.c | 43 +++++++++++++++++++ .../llvm-overloaded-tests/vsha2ms.c | 43 +++++++++++++++++++ .../overloaded-api-testing/vsha2ch.c | 42 ++++++++++++++++++ .../overloaded-api-testing/vsha2cl.c | 42 ++++++++++++++++++ .../overloaded-api-testing/vsha2ms.c | 42 ++++++++++++++++++ .../policy_funcs/api-testing/vsha2ch.c | 42 ++++++++++++++++++ .../policy_funcs/api-testing/vsha2cl.c | 42 ++++++++++++++++++ .../policy_funcs/api-testing/vsha2ms.c | 42 ++++++++++++++++++ .../policy_funcs/llvm-api-tests/vsha2ch.c | 43 +++++++++++++++++++ .../policy_funcs/llvm-api-tests/vsha2cl.c | 43 +++++++++++++++++++ .../policy_funcs/llvm-api-tests/vsha2ms.c | 43 +++++++++++++++++++ .../llvm-overloaded-tests/vsha2ch.c | 43 +++++++++++++++++++ .../llvm-overloaded-tests/vsha2cl.c | 43 +++++++++++++++++++ .../llvm-overloaded-tests/vsha2ms.c | 43 +++++++++++++++++++ .../overloaded-api-testing/vsha2ch.c | 42 ++++++++++++++++++ .../overloaded-api-testing/vsha2cl.c | 42 ++++++++++++++++++ .../overloaded-api-testing/vsha2ms.c | 42 ++++++++++++++++++ 24 files changed, 1020 insertions(+) create mode 100644 auto-generated/vector-crypto/api-testing/vsha2ch.c create mode 100644 auto-generated/vector-crypto/api-testing/vsha2cl.c create mode 100644 auto-generated/vector-crypto/api-testing/vsha2ms.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ch.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2cl.c create mode 100644 
auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ms.c diff --git a/auto-generated/vector-crypto/api-testing/vsha2ch.c b/auto-generated/vector-crypto/api-testing/vsha2ch.c new file mode 100644 index 000000000..8407a75e1 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vsha2ch.c @@ -0,0 +1,42 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32mf2(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m1(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m2(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m4(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m8(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m1(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m2(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m4(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m8(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vsha2cl.c b/auto-generated/vector-crypto/api-testing/vsha2cl.c new file mode 100644 index 000000000..e7a37c2e7 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vsha2cl.c @@ -0,0 +1,42 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32mf2(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m1(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m2(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m4(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m8(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m1(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m2(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m4(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +
return __riscv_vsha2cl_vv_u64m8(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vsha2ms.c b/auto-generated/vector-crypto/api-testing/vsha2ms.c new file mode 100644 index 000000000..65b6fc728 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vsha2ms.c @@ -0,0 +1,42 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32mf2(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m1(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m2(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m4(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m8(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m1(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m2(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m4(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m8(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c new file mode 100644 index 000000000..046495c35 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32mf2(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m1(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m2(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m4(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m8(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m1(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m2(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m4(vd, 
vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m8(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c new file mode 100644 index 000000000..442946790 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32mf2(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m1(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m2(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m4(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m8(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m1(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m2(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m4(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m8(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c new file mode 100644 index 000000000..76cf625eb --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32mf2(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m1(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m2(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m4(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m8(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m1(vd, vs2, vs1, vl); +} + +vuint64m2_t 
test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m2(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m4(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m8(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c new file mode 100644 index 000000000..63d6c5aea --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c new file mode 100644 index 000000000..c16a3b774 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} 
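+// The overloaded spellings above drop the explicit `_vv_u32m1`-style
+// suffix; the compiler infers the instantiation from the operand types,
+// so one name covers every SEW/LMUL combination. A hedged sketch of how
+// the three Zvknh intrinsics compose (`w0..w2`, `k`, `state`, `state2`,
+// and `kw` are illustrative names, not part of these generated tests):
+//
+//   vuint32m1_t w_next = __riscv_vsha2ms(w0, w1, w2, vl); // message schedule
+//   vuint32m1_t kw = __riscv_vadd(w_next, k, vl);         // add round constants
+//   state = __riscv_vsha2ch(state, state2, kw, vl);       // two rounds, high
+//   state = __riscv_vsha2cl(state, state2, kw, vl);       // two rounds, low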
+ +vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c new file mode 100644 index 000000000..c795ac036 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c new file mode 100644 index 000000000..e581f6f43 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c @@ -0,0 +1,42 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, 
vl); +} + +vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ch(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c b/auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c new file mode 100644 index 000000000..9a839357b --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c @@ -0,0 +1,42 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2cl(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c new file mode 100644 index 000000000..c6d912d62 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c @@ -0,0 +1,42 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return 
__riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ms(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c new file mode 100644 index 000000000..9940b82c2 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c @@ -0,0 +1,42 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32mf2_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m1_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m2_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m4_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m8_tu(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m1_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m2_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m4_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m8_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c new file mode 100644 index 000000000..11360869d --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c @@ -0,0 +1,42 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32mf2_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m1_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m2_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m4_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m8_tu(vd, vs2, vs1, vl); +} + 
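+// The `_tu` suffix selects the tail-undisturbed policy: elements at
+// indices >= vl are preserved from the destination operand rather than
+// left agnostic. A minimal sketch, assuming `prev` still holds live data
+// past `vl` (`prev` is an illustrative name, not part of these tests):
+//
+//   vuint32m1_t r = __riscv_vsha2cl_vv_u32m1_tu(prev, vs2, vs1, vl);
+//   // r[i] is freshly computed for i < vl; r[i] == prev[i] for i >= vl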
+vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m1_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m2_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m4_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m8_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c new file mode 100644 index 000000000..b9e9f83b2 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c @@ -0,0 +1,42 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32mf2_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m1_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m2_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m4_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m8_tu(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m1_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m2_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m4_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m8_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c new file mode 100644 index 000000000..42785c045 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32mf2_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m1_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m2_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t 
test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m4_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u32m8_tu(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m1_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m2_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m4_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ch_vv_u64m8_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c new file mode 100644 index 000000000..d3aa58e49 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32mf2_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m1_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m2_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m4_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u32m8_tu(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m1_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m2_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m4_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2cl_vv_u64m8_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c new file mode 100644 index 000000000..1641cffed --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, 
vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32mf2_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m1_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m2_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m4_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u32m8_tu(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m1_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m2_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m4_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ms_vv_u64m8_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c new file mode 100644 index 000000000..d83ec593f --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c new file mode 100644 index 
000000000..7f9c2327b --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c new file mode 100644 index 000000000..6648d4381 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c @@ -0,0 +1,43 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t 
test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ch.c new file mode 100644 index 000000000..cf1afc07f --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ch.c @@ -0,0 +1,42 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2cl.c new file mode 100644 index 000000000..a385bfd49 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2cl.c @@ -0,0 +1,42 @@ +#include +#include + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) 
{ + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ms.c new file mode 100644 index 000000000..ae5e74fff --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ms.c @@ -0,0 +1,42 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + +vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); +} + From b8cde69395e2560d9c5daf121d6b43570082dc92 Mon Sep 17 00:00:00 2001 From: eopXD Date: Thu, 1 Jun 2023 05:02:45 -0700 Subject: [PATCH 056/151] [vector-crypto] Define intrinsics for the Zvksed extension Signed-off-by: eop Chen --- .../rvv_intrinsic_gen/generator.py | 2 +- .../templates/vector_crypto_template.py | 5 +++- .../rvv_intrinsic_gen/vector_crypto_inst.py | 23 +++++++++++++++++++ 3 files changed, 28 insertions(+), 2 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 97a92f6b9..8f1653ccb 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -257,7 +257,7 @@ def get_overloaded_op_name(name): overloaded_name = "_".join([sn[0], sn[1], sn[-1]]) elif any(op in name for op in [ "vzext", "vsext", "vwadd", "vwsub", "vfwadd", "vfwsub", "vwadd", - "vwsub", "vfwadd", "vfwsub", "vmv", "vfmv" + "vwsub", "vfwadd", "vfwsub", "vmv", "vfmv", "vsm4r" ]): # 2.
compiler can not distinguish *.wx and *.vx, need encode them in # suffix, for example: diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py index eed16e39e..da21ae67f 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py @@ -40,12 +40,15 @@ operand_mnemonic_dict["vsha2ms"] = ["vv"] operand_mnemonic_dict["vsha2ch"] = ["vv"] operand_mnemonic_dict["vsha2cl"] = ["vv"] +# Zvksed: ShangMi Suite: SM4 Block Cipher +operand_mnemonic_dict["vsm4k"] = ["vi"] +operand_mnemonic_dict["vsm4r"] = ["vv", "vs"] def has_vd_input(name): has_vd_input_inst_set = { "vghsh", "vgmul", "vaesef", "vaesem", "vaesdf", "vaesdm", "vaesz", - "vsha2ms", "vsha2ch", "vsha2cl" + "vsha2ms", "vsha2ch", "vsha2cl", "vsm4r" } return name in has_vd_input_inst_set diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py index f6b9d63c5..47c40eb59 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py @@ -158,5 +158,28 @@ def gen(g): LMULS, decorators.has_no_masking_policy) + #################################################################### + + g.start_group("Zvksed - ShangMi Suite: SM4 Block Cipher") + + g.function_group( + vector_crypto_template, + "Vector SM4 KeyExpansion", + "", # FIXME: We probably have a separate document for vector-crypto + ["vsm4k"], + UITYPE, + [32], + LMULS, + decorators.has_no_masking_policy) + + g.function_group( + vector_crypto_template, + "Vector SM4 Rounds", + "", # FIXME: We probably have a separate document for vector-crypto + ["vsm4r"], + UITYPE, + [32], + LMULS, + decorators.has_no_masking_policy) #################################################################### From 2e098210585e8f1c600973d3f08c5c1db3de1fc1 Mon Sep 17 00:00:00 2001 From: eopXD Date: Mon, 17 Jul 2023 11:00:11 -0700 Subject: [PATCH 057/151] [Auto-gen] Update documents under ../auto-generated/vector-crypto.
(make git-commit-autogen-doc) --- .../vector-crypto/intrinsic_funcs.md | 29 +++++++++++++++++++ ...vksed_-_shangmi_suite:_sm4_block_cipher.md | 29 +++++++++++++++++++ .../overloaded_intrinsic_funcs.md | 29 +++++++++++++++++++ ...vksed_-_shangmi_suite:_sm4_block_cipher.md | 29 +++++++++++++++++++ .../policy_funcs/intrinsic_funcs.md | 29 +++++++++++++++++++ ...vksed_-_shangmi_suite:_sm4_block_cipher.md | 29 +++++++++++++++++++ .../overloaded_intrinsic_funcs.md | 29 +++++++++++++++++++ ...vksed_-_shangmi_suite:_sm4_block_cipher.md | 29 +++++++++++++++++++ 8 files changed, 232 insertions(+) create mode 100644 auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md create mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md index 335b10f14..3a9b4076b 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/intrinsic_funcs.md @@ -760,3 +760,32 @@ vuint64m2_t __riscv_vsha2cl_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2 vuint64m4_t __riscv_vsha2cl_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2cl_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); ``` + +## Zvksed - ShangMi Suite: SM4 Block Cipher: + +### [Vector SM4 KeyExpansion](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4k_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector SM4 Rounds](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4r_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md new file mode 100644 index 000000000..c78e8cbf2 --- /dev/null +++ b/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md @@ -0,0 +1,29 @@ + +## Zvksed - ShangMi Suite: SM4 Block Cipher: + +### [Vector SM4 KeyExpansion](): + +**Prototypes:** +``` C +vuint32mf2_t 
__riscv_vsm4k_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector SM4 Rounds](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4r_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md index 15ed599bd..1eb2b3105 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md @@ -760,3 +760,32 @@ vuint64m2_t __riscv_vsha2cl (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, s vuint64m4_t __riscv_vsha2cl (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2cl (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); ``` + +## Zvksed - ShangMi Suite: SM4 Block Cipher: + +### [Vector SM4 KeyExpansion](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4k (vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k (vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector SM4 Rounds](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4r_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md new file mode 100644 index 000000000..5e8da0f1a --- /dev/null +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md @@ 
-0,0 +1,29 @@ + +## Zvksed - ShangMi Suite: SM4 Block Cipher: + +### [Vector SM4 KeyExpansion](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4k (vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k (vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k (vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k (vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k (vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector SM4 Rounds](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4r_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md index fb6b81b42..1ea845159 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -1166,3 +1166,32 @@ vuint64m2_t __riscv_vsha2cl_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint6 vuint64m4_t __riscv_vsha2cl_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2cl_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); ``` + +## Zvksed - ShangMi Suite: SM4 Block Cipher: + +### [Vector SM4 KeyExpansion](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4k_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector SM4 Rounds](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git 
a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md new file mode 100644 index 000000000..7098d3485 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md @@ -0,0 +1,29 @@ + +## Zvksed - ShangMi Suite: SM4 Block Cipher: + +### [Vector SM4 KeyExpansion](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4k_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector SM4 Rounds](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md index ff694aef4..a612e8d27 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -1166,3 +1166,32 @@ vuint64m2_t __riscv_vsha2cl_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1 vuint64m4_t __riscv_vsha2cl_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2cl_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); ``` + +## Zvksed - ShangMi Suite: SM4 Block Cipher: + +### [Vector SM4 KeyExpansion](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4k_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector SM4 Rounds](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4r_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t 
vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md new file mode 100644 index 000000000..4f1dce398 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md @@ -0,0 +1,29 @@ + +## Zvksed - ShangMi Suite: SM4 Block Cipher: + +### [Vector SM4 KeyExpansion](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4k_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +``` + +### [Vector SM4 Rounds](): + +**Prototypes:** +``` C +vuint32mf2_t __riscv_vsm4r_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +``` From aac6ce20f4652c9c8442badf0e0034c1c4964911 Mon Sep 17 00:00:00 2001 From: eopXD Date: Mon, 17 Jul 2023 11:00:12 -0700 Subject: [PATCH 058/151] [Auto-gen] Update tests under ../auto-generated/vector-crypto. 
(make git-commit-autogen-test)
---
 .../vector-crypto/api-testing/vsm4k.c         | 26 ++++++++++++++++++
 .../vector-crypto/api-testing/vsm4r.c         | 46 ++++++++++++++++++
 .../vector-crypto/llvm-api-tests/vsm4k.c      | 27 +++++++++++
 .../vector-crypto/llvm-api-tests/vsm4r.c      | 47 +++++++++++++++++++
 .../llvm-overloaded-tests/vsm4k.c             | 27 +++++++++++
 .../llvm-overloaded-tests/vsm4r.c             | 47 +++++++++++++++++++
 .../overloaded-api-testing/vsm4k.c            | 26 ++++++++++
 .../overloaded-api-testing/vsm4r.c            | 46 ++++++++++++++++++
 .../policy_funcs/api-testing/vsm4k.c          | 26 ++++++++++
 .../policy_funcs/api-testing/vsm4r.c          | 46 ++++++++++++++++++
 .../policy_funcs/llvm-api-tests/vsm4k.c       | 27 +++++++++++
 .../policy_funcs/llvm-api-tests/vsm4r.c       | 47 +++++++++++++++++++
 .../llvm-overloaded-tests/vsm4k.c             | 27 +++++++++++
 .../llvm-overloaded-tests/vsm4r.c             | 47 +++++++++++++++++++
 .../overloaded-api-testing/vsm4k.c            | 26 ++++++++++
 .../overloaded-api-testing/vsm4r.c            | 46 ++++++++++++++++++
 16 files changed, 584 insertions(+)
 create mode 100644 auto-generated/vector-crypto/api-testing/vsm4k.c
 create mode 100644 auto-generated/vector-crypto/api-testing/vsm4r.c
 create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vsm4k.c
 create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vsm4r.c
 create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c
 create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
 create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c
 create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c
 create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c

diff --git a/auto-generated/vector-crypto/api-testing/vsm4k.c b/auto-generated/vector-crypto/api-testing/vsm4k.c
new file mode 100644
index 000000000..af05ac455
--- /dev/null
+++ b/auto-generated/vector-crypto/api-testing/vsm4k.c
@@ -0,0 +1,26 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32mf2(vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m1(vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m2(vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m4(vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m8(vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/api-testing/vsm4r.c b/auto-generated/vector-crypto/api-testing/vsm4r.c
new file mode 100644
index 000000000..d2c7e2dc5
--- /dev/null
+++ b/auto-generated/vector-crypto/api-testing/vsm4r.c
@@ -0,0 +1,46 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c b/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c
new file mode 100644
index 000000000..ed2010cbe
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32mf2(vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m1(vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m2(vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m4(vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m8(vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c
new file mode 100644
index 000000000..ace127f46
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32mf2(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m1(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m2(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m4(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m8(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c
new file mode 100644
index 000000000..831212f2f
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
new file mode 100644
index 000000000..c081ecfc5
--- /dev/null
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c
new file mode 100644
index 000000000..4f0f38cd7
--- /dev/null
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c
@@ -0,0 +1,26 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k(vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c
new file mode 100644
index 000000000..ef90532c9
--- /dev/null
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c
@@ -0,0 +1,46 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c
new file mode 100644
index 000000000..b5ac42cfc
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c
@@ -0,0 +1,26 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c
new file mode 100644
index 000000000..4f8e84562
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c
@@ -0,0 +1,46 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c
new file mode 100644
index 000000000..305cb8b84
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c
new file mode 100644
index 000000000..66f972add
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_u32m8_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c
new file mode 100644
index 000000000..b7587ab32
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c
@@ -0,0 +1,27 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c
new file mode 100644
index 000000000..a8071d89c
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c
@@ -0,0 +1,47 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c
new file mode 100644
index 000000000..bff1c2f7f
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c
@@ -0,0 +1,26 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
+vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+}
+
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c
new file mode 100644
index 000000000..14dde00cf
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c
@@ -0,0 +1,46 @@
+#include <riscv_vector.h>
+#include <stdint.h>
+
+typedef _Float16 float16_t;
+typedef float float32_t;
+typedef double float64_t;
+vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vv_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
From 925f6228e40dc2e2c2f5e4522bdbded7badf1076 Mon Sep 17 00:00:00 2001
From: eopXD
Date: Thu, 1 Jun 2023 05:10:27 -0700
Subject: [PATCH 059/151] [vector-crypto] Define intrinsics for the Zvksh
 extension

Signed-off-by: eop Chen
---
 .../templates/vector_crypto_template.py       |  7 ++++--
 .../rvv_intrinsic_gen/vector_crypto_inst.py   | 25 +++++++++++++++++++
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
index da21ae67f..6cab56159 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
@@ -43,12 +43,15 @@
 operand_mnemonic_dict["vsha2ms"] = ["vv"]
 operand_mnemonic_dict["vsha2ch"] = ["vv"]
 operand_mnemonic_dict["vsha2cl"] = ["vv"]
+# Zvksh: ShangMi Suite: SM3 Secure Hash
+operand_mnemonic_dict["vsm3me"] = ["vv"]
+operand_mnemonic_dict["vsm3c"] = ["vi"]
 
 
 def has_vd_input(name):
   has_vd_input_inst_set = {
       "vghsh", "vgmul", "vaesef", "vaesem", "vaesdf", "vaesdm", "vaesz",
-      "vsha2ms", "vsha2ch", "vsha2cl", "vsm4r"
+      "vsha2ms", "vsha2ch", "vsha2cl", "vsm4r", "vsm3c"
   }
   return name in has_vd_input_inst_set
 
@@ -57,7 +60,7 @@ def has_vs1_input(name):
   has_vs1_input_inst_set = {
       "vandn", "vrol", "vror", "vwsll", "vclmul", "vclmulh", "vghsh", "vsha2ms",
-      "vsha2ch", "vsha2cl"
+      "vsha2ch", "vsha2cl", "vsm3me"
   }
   return name in has_vs1_input_inst_set
 
diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py
index 47c40eb59..d02482f37 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py
@@ -182,4 +182,29 @@ def gen(g):
       LMULS,
       decorators.has_no_masking_policy)
 
+  ####################################################################
+
+  g.start_group("Zvksh - ShangMi Suite: SM3 Secure Hash")
+
+  g.function_group(
+      vector_crypto_template,
+      "Vector SM3 Message Expansion",
+      "",  # FIXME: We probably have a separate document for vector-crypto
+      ["vsm3me"],
+      UITYPE,
+      [32],
+      LMULS,
+      decorators.has_no_masking_policy)
+
+  g.function_group(
+      vector_crypto_template,
+      "Vector SM3 Compression",
+      "",  # FIXME: We probably have a separate document for vector-crypto
+      ["vsm3c"],
+      UITYPE,
+      [32],
+      LMULS,
+      decorators.has_no_masking_policy)
+
+  ####################################################################
From e8bcfea4337172c31734926882a2dc19653ac75d Mon Sep 17 00:00:00 2001
From: eopXD
Date: Mon, 17 Jul 2023 11:00:15 -0700
Subject: [PATCH 060/151] [Auto-gen] Update documents under
 ../auto-generated/vector-crypto.
(make git-commit-autogen-doc)
---
 .../vector-crypto/intrinsic_funcs.md          | 24 +++++++++++++++++++
 ..._zvksh_-_shangmi_suite:_sm3_secure_hash.md | 24 +++++++++++++++++++
 .../overloaded_intrinsic_funcs.md             | 24 +++++++++++++++++++
 ..._zvksh_-_shangmi_suite:_sm3_secure_hash.md | 24 +++++++++++++++++++
 .../policy_funcs/intrinsic_funcs.md           | 24 +++++++++++++++++++
 ..._zvksh_-_shangmi_suite:_sm3_secure_hash.md | 24 +++++++++++++++++++
 .../overloaded_intrinsic_funcs.md             | 24 +++++++++++++++++++
 ..._zvksh_-_shangmi_suite:_sm3_secure_hash.md | 24 +++++++++++++++++++
 8 files changed, 192 insertions(+)
 create mode 100644 auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
 create mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
 create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
 create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md

diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md
index 3a9b4076b..8d3b239fb 100644
--- a/auto-generated/vector-crypto/intrinsic_funcs.md
+++ b/auto-generated/vector-crypto/intrinsic_funcs.md
@@ -789,3 +789,27 @@ vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vsm4r_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 ```
+
+## Zvksh - ShangMi Suite: SM3 Secure Hash:
+
+### [Vector SM3 Message Expansion]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3me_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vsm3me_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vsm3me_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vsm3me_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vsm3me_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+```
+
+### [Vector SM3 Compression]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3c_vi_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm3c_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vsm3c_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vsm3c_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vsm3c_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+```
diff --git a/auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md b/auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
new file mode 100644
index 000000000..621c42e24
--- /dev/null
+++ b/auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
@@ -0,0 +1,24 @@
+
+## Zvksh - ShangMi Suite: SM3 Secure Hash:
+
+### [Vector SM3 Message Expansion]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3me_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vsm3me_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vsm3me_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vsm3me_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vsm3me_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+```
+
+### [Vector SM3 Compression]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3c_vi_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm3c_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vsm3c_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vsm3c_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vsm3c_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+```
diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md
index 1eb2b3105..7db91b67b 100644
--- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md
+++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md
@@ -789,3 +789,27 @@ vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vsm4r_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 ```
+
+## Zvksh - ShangMi Suite: SM3 Secure Hash:
+
+### [Vector SM3 Message Expansion]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3me (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vsm3me (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vsm3me (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vsm3me (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vsm3me (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+```
+
+### [Vector SM3 Compression]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3c (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm3c (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vsm3c (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vsm3c (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vsm3c (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+```
diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
new file mode 100644
index 000000000..a904879b0
--- /dev/null
+++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
@@ -0,0 +1,24 @@
+
+## Zvksh - ShangMi Suite: SM3 Secure Hash:
+
+### [Vector SM3 Message Expansion]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3me (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vsm3me (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vsm3me (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vsm3me (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vsm3me (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+```
+
+### [Vector SM3 Compression]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3c (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm3c (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vsm3c (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vsm3c (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vsm3c (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+```
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md
index 1ea845159..1c5b4a77f 100644
--- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md
+++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md
@@ -1195,3 +1195,27 @@ vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t v
 vuint32m8_t __riscv_vsm4r_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 ```
+
+## Zvksh - ShangMi Suite: SM3 Secure Hash:
+
+### [Vector SM3 Message Expansion]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3me_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vsm3me_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vsm3me_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vsm3me_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vsm3me_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+```
+
+### [Vector SM3 Compression]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3c_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm3c_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vsm3c_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vsm3c_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vsm3c_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+```
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
new file mode 100644
index 000000000..afc57afff
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
@@ -0,0 +1,24 @@
+
+## Zvksh - ShangMi Suite: SM3 Secure Hash:
+
+### [Vector SM3 Message Expansion]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3me_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vsm3me_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vsm3me_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vsm3me_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vsm3me_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+```
+
+### [Vector SM3 Compression]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3c_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm3c_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vsm3c_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vsm3c_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vsm3c_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+```
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md
index a612e8d27..e0fd11c20 100644
--- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md
@@ -1195,3 +1195,27 @@ vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vsm4r_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 ```
+
+## Zvksh - ShangMi Suite: SM3 Secure Hash:
+
+### [Vector SM3 Message Expansion]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3me_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vsm3me_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vsm3me_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vsm3me_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vsm3me_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+```
+
+### [Vector SM3 Compression]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3c_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm3c_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vsm3c_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vsm3c_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vsm3c_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+```
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
new file mode 100644
index 000000000..cb93f408d
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
@@ -0,0 +1,24 @@
+
+## Zvksh - ShangMi Suite: SM3 Secure Hash:
+
+### [Vector SM3 Message Expansion]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3me_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vsm3me_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vsm3me_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vsm3me_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vsm3me_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+```
+
+### [Vector SM3 Compression]():
+
+**Prototypes:**
+``` C
+vuint32mf2_t __riscv_vsm3c_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm3c_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vsm3c_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vsm3c_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vsm3c_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+```
From 7aa2ea1be5533e4a1eccb40722dc036c0c6dbb47 Mon Sep 17 00:00:00 2001
From: eopXD
Date: Mon, 17 Jul 2023 11:00:16 -0700
Subject: [PATCH 061/151] [Auto-gen]
Update tests under ../auto-generated/vector-crypto. (make git-commit-autogen-test) --- .../vector-crypto/api-testing/vsm3c.c | 26 ++++++++++++++++++ .../vector-crypto/api-testing/vsm3me.c | 26 ++++++++++++++++++ .../vector-crypto/llvm-api-tests/vsm3c.c | 27 +++++++++++++++++++ .../vector-crypto/llvm-api-tests/vsm3me.c | 27 +++++++++++++++++++ .../llvm-overloaded-tests/vsm3c.c | 27 +++++++++++++++++++ .../llvm-overloaded-tests/vsm3me.c | 27 +++++++++++++++++++ .../overloaded-api-testing/vsm3c.c | 26 ++++++++++++++++++ .../overloaded-api-testing/vsm3me.c | 26 ++++++++++++++++++ .../policy_funcs/api-testing/vsm3c.c | 26 ++++++++++++++++++ .../policy_funcs/api-testing/vsm3me.c | 26 ++++++++++++++++++ .../policy_funcs/llvm-api-tests/vsm3c.c | 27 +++++++++++++++++++ .../policy_funcs/llvm-api-tests/vsm3me.c | 27 +++++++++++++++++++ .../llvm-overloaded-tests/vsm3c.c | 27 +++++++++++++++++++ .../llvm-overloaded-tests/vsm3me.c | 27 +++++++++++++++++++ .../overloaded-api-testing/vsm3c.c | 26 ++++++++++++++++++ .../overloaded-api-testing/vsm3me.c | 26 ++++++++++++++++++ 16 files changed, 424 insertions(+) create mode 100644 auto-generated/vector-crypto/api-testing/vsm3c.c create mode 100644 auto-generated/vector-crypto/api-testing/vsm3me.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vsm3c.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vsm3me.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3me.c diff --git a/auto-generated/vector-crypto/api-testing/vsm3c.c b/auto-generated/vector-crypto/api-testing/vsm3c.c new file mode 100644 index 000000000..6c82dfe7c --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vsm3c.c @@ -0,0 +1,26 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32mf2(vd, vs2, 0, vl); +} + +vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m1(vd, vs2, 0, vl); +} + +vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m2(vd, vs2, 0, vl); +} + +vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m4(vd, vs2, 0, vl); +} + +vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m8(vd, vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/api-testing/vsm3me.c b/auto-generated/vector-crypto/api-testing/vsm3me.c new file mode 100644 index 000000000..5dd3d4007 --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vsm3me.c @@ -0,0 +1,26 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32mf2(vs2, vs1, vl); +} + +vuint32m1_t test_vsm3me_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m1(vs2, vs1, vl); +} + +vuint32m2_t test_vsm3me_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m2(vs2, vs1, vl); +} + +vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m4(vs2, vs1, vl); +} + +vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m8(vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c b/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c new file mode 100644 index 000000000..1f304271f --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32mf2(vd, vs2, 0, vl); +} + +vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m1(vd, vs2, 0, vl); +} + +vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m2(vd, vs2, 0, vl); +} + +vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m4(vd, vs2, 0, vl); +} + +vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m8(vd, vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c b/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c new file mode 100644 index 000000000..ce4673c23 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32mf2(vs2, vs1, vl); +} + +vuint32m1_t test_vsm3me_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m1(vs2, vs1, vl); +} + +vuint32m2_t test_vsm3me_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m2(vs2, vs1, vl); +} + +vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m4(vs2, vs1, vl); +} + +vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m8(vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c new file mode 100644 index 000000000..6bd539576 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c(vd, vs2, 0, vl); +} + +vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c(vd, vs2, 0, vl); +} + +vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c(vd, vs2, 0, vl); +} + +vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c(vd, vs2, 0, vl); +} + +vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c(vd, vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c new file mode 100644 index 000000000..1c1ad44b4 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsm3me(vs2, vs1, vl); +} + +vuint32m1_t test_vsm3me_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsm3me(vs2, vs1, vl); +} + +vuint32m2_t test_vsm3me_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsm3me(vs2, vs1, vl); +} + +vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsm3me(vs2, vs1, vl); +} + +vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsm3me(vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c new file mode 100644 index 000000000..7b204cdfc --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c @@ -0,0 +1,26 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c(vd, vs2, 0, vl); +} + +vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c(vd, vs2, 0, vl); +} + +vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c(vd, vs2, 0, vl); +} + +vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c(vd, vs2, 0, vl); +} + +vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c(vd, vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c new file mode 100644 index 000000000..60d967f88 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c @@ -0,0 +1,26 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsm3me(vs2, vs1, vl); +} + +vuint32m1_t test_vsm3me_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsm3me(vs2, vs1, vl); +} + +vuint32m2_t test_vsm3me_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsm3me(vs2, vs1, vl); +} + +vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsm3me(vs2, vs1, vl); +} + +vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsm3me(vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c new file mode 100644 index 000000000..6551ba803 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c @@ -0,0 +1,26 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32mf2_tu(vd, vs2, 0, vl); +} + +vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m1_tu(vd, vs2, 0, vl); +} + +vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m2_tu(vd, vs2, 0, vl); +} + +vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m4_tu(vd, vs2, 0, vl); +} + +vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m8_tu(vd, vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c new file mode 100644 index 000000000..3df7ce142 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c @@ -0,0 +1,26 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c new file mode 100644 index 000000000..61854b2d6 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32mf2_tu(vd, vs2, 0, vl); +} + +vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m1_tu(vd, vs2, 0, vl); +} + +vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m2_tu(vd, vs2, 0, vl); +} + +vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m4_tu(vd, vs2, 0, vl); +} + +vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_vi_u32m8_tu(vd, vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c new file mode 100644 index 000000000..f161c3a7c --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c new file mode 100644 index 000000000..15fd70fab --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_tu(vd, vs2, 0, vl); +} + +vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_tu(vd, vs2, 0, vl); +} + +vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_tu(vd, vs2, 0, vl); +} + +vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_tu(vd, vs2, 0, vl); +} + +vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_tu(vd, vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c new file mode 100644 index 000000000..639a153fc --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c @@ -0,0 +1,27 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c new file mode 100644 index 000000000..070583ec5 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c @@ -0,0 +1,26 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_tu(vd, vs2, 0, vl); +} + +vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_tu(vd, vs2, 0, vl); +} + +vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_tu(vd, vs2, 0, vl); +} + +vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_tu(vd, vs2, 0, vl); +} + +vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { + return __riscv_vsm3c_tu(vd, vs2, 0, vl); +} + diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3me.c new file mode 100644 index 000000000..46ddc2d66 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3me.c @@ -0,0 +1,26 @@ +#include <stdint.h> +#include <riscv_vector.h> + +typedef _Float16 float16_t; +typedef float float32_t; +typedef double float64_t; +vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, 
vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +} + +vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +} + From 9ce8f69bb6f4a090956c297acbf66d9a08449927 Mon Sep 17 00:00:00 2001 From: eopXD Date: Thu, 1 Jun 2023 05:38:11 -0700 Subject: [PATCH 062/151] [Makefile] Add the vector crypto generation to golden check in CI Signed-off-by: eop Chen --- rvv-intrinsic-generator/Makefile | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/rvv-intrinsic-generator/Makefile b/rvv-intrinsic-generator/Makefile index 31de080e5..c789e419a 100644 --- a/rvv-intrinsic-generator/Makefile +++ b/rvv-intrinsic-generator/Makefile @@ -346,6 +346,15 @@ diff-autogen: $(call check_defined, TEST_DIR, output directory for documents/tests generation) rm -rf ${abspath ${TEST_DIR}} make OUTPUT_DIR=${TEST_DIR} + make EXTRA_FLAG=--gen-vector-crypto OUTPUT_DIR=${TEST_DIR}/vector-crypto + +# Remove the redundant folder created for vector crypto. This line is needed +# because the targets in this Makefile that generate the compatible headers +# create a folder before running the script. Vector crypto, however, needs no +# compatible headers because the extension did not exist before v0.10. + rm -rf ${TEST_DIR}/vector-crypto/rvv-v0p10-compatible-headers + diff -qr ${TEST_DIR} ${GOLDEN_DIR} ############################################################################### From 109cb6149d828b53c4eaf9f3eb5ea4df7d917d33 Mon Sep 17 00:00:00 2001 From: eopXD Date: Mon, 31 Jul 2023 23:46:08 -0700 Subject: [PATCH 063/151] [vector-crypto] Add more variants for 'vs' instructions The 'vs' instructions take the first element group from `vs2`, while `vd` may be a different register group setting. This commit adds extra variants so that users can choose whichever suits their needs. 
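(Editorial sketch of what the new variants permit, not part of the commit: per the prototypes generated below, e.g. `vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);`, the destination may now use a wider register group than the source that supplies the element group; the helper name here is hypothetical.)

``` C
#include <riscv_vector.h>

// vaesz.vs XORs the first element group of the round key in vs2 into
// each element group of the wider state in vd; with the new variants the
// two operands no longer need to share the same LMUL setting.
static inline vuint32m4_t aes_add_round_key(vuint32m4_t state,
                                            vuint32m2_t round_key,
                                            size_t vl) {
  return __riscv_vaesz_vs_u32m4(state, round_key, vl);
}
```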
Signed-off-by: eop Chen --- .../templates/vector_crypto_template.py | 25 +++++++++++++++---- 1 file changed, 20 insertions(+), 5 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py index 6cab56159..e11704c7a 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py @@ -143,10 +143,25 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): args["SEW"] = args["WSEW"] args["LMUL"] = args["WLMUL"] - G.func( - inst_info, - name="{OP}_{MNEMONIC}_{TYPE}{SEW}m{LMUL}".format_map(args) + - decorator.func_suffix, - **kwargs) + if operand_mnemonic == "vs": + starting_from_lmul_index = lmul_list.index(args["LMUL"]) + # print(starting_from_lmul_index) + for i in range(starting_from_lmul_index, len(lmul_list)): + kwargs["return_type"] =\ + f"v{args['TYPE']}{args['SEW']}m{lmul_list[i]}_t" + kwargs["vd"] = f"v{args['TYPE']}{args['SEW']}m{lmul_list[i]}_t" + kwargs["vs2"] = f"v{args['TYPE']}{args['SEW']}m{args['LMUL']}_t" + args["LMUL"] = lmul_list[i] + G.func( + inst_info, + name="{OP}_{MNEMONIC}_{TYPE}{SEW}m{LMUL}".format_map(args) + + decorator.func_suffix, + **kwargs) + else: + G.func( + inst_info, + name="{OP}_{MNEMONIC}_{TYPE}{SEW}m{LMUL}".format_map(args) + + decorator.func_suffix, + **kwargs) G.inst_group_epilogue() From 95b9f6fa6577a3b8177bbaf14b3febc10e489a89 Mon Sep 17 00:00:00 2001 From: eopXD Date: Tue, 1 Aug 2023 00:36:09 -0700 Subject: [PATCH 064/151] [Auto-gen] Update documents under ../auto-generated/vector-crypto. (make git-commit-autogen-doc) --- .../vector-crypto/intrinsic_funcs.md | 60 +++++++++++++++++++ ...d_-_nist_suite:_vector_aes_block_cipher.md | 50 ++++++++++++++++ ...vksed_-_shangmi_suite:_sm4_block_cipher.md | 10 ++++ .../overloaded_intrinsic_funcs.md | 60 +++++++++++++++++++ ...d_-_nist_suite:_vector_aes_block_cipher.md | 50 ++++++++++++++++ ...vksed_-_shangmi_suite:_sm4_block_cipher.md | 10 ++++ .../policy_funcs/intrinsic_funcs.md | 60 +++++++++++++++++++ ...d_-_nist_suite:_vector_aes_block_cipher.md | 50 ++++++++++++++++ ...vksed_-_shangmi_suite:_sm4_block_cipher.md | 10 ++++ .../overloaded_intrinsic_funcs.md | 60 +++++++++++++++++++ ...d_-_nist_suite:_vector_aes_block_cipher.md | 50 ++++++++++++++++ ...vksed_-_shangmi_suite:_sm4_block_cipher.md | 10 ++++ 12 files changed, 480 insertions(+) diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md index 8d3b239fb..351cd0e14 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/intrinsic_funcs.md @@ -647,22 +647,42 @@ vuint32m8_t __riscv_vgmul_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` C vuint32mf2_t __riscv_vaesef_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); 
+vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -673,22 +693,42 @@ vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl) ``` C vuint32mf2_t __riscv_vaesdf_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4 
(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -714,9 +754,19 @@ vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); **Prototypes:** ``` C vuint32mf2_t __riscv_vaesz_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t 
vl); +vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -780,12 +830,22 @@ vuint32m8_t __riscv_vsm4k_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); ``` C vuint32mf2_t __riscv_vsm4r_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git a/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index c059b3516..e38485a6e 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -7,22 +7,42 @@ ``` C vuint32mf2_t __riscv_vaesef_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); 
vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -33,22 +53,42 @@ vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl) ``` C vuint32mf2_t __riscv_vaesdf_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4 
(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -74,8 +114,18 @@ vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); **Prototypes:** ``` C vuint32mf2_t __riscv_vaesz_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); 
``` diff --git a/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md index c78e8cbf2..e2991d231 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md +++ b/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md @@ -18,12 +18,22 @@ vuint32m8_t __riscv_vsm4k_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); ``` C vuint32mf2_t __riscv_vsm4r_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md index 7db91b67b..7ad662f5a 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md @@ -647,22 +647,42 @@ vuint32m8_t __riscv_vgmul (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` C vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t 
__riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -673,22 +693,42 @@ vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` C vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t 
vl); vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -714,9 +754,19 @@ vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vs2, size_t uimm, size_t vl); **Prototypes:** ``` C vuint32mf2_t __riscv_vaesz (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -780,12 +830,22 @@ vuint32m8_t __riscv_vsm4k (vuint32m8_t vs2, size_t uimm, size_t vl); ``` C vuint32mf2_t __riscv_vsm4r_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t 
vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index 3ff935a8c..23825cb8e 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -7,22 +7,42 @@ ``` C vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t 
vs2, size_t vl); vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -33,22 +53,42 @@ vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` C vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm (vuint32m8_t 
vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -74,8 +114,18 @@ vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vs2, size_t uimm, size_t vl); **Prototypes:** ``` C vuint32mf2_t __riscv_vaesz (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md index 5e8da0f1a..cd2448263 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md @@ -18,12 +18,22 @@ vuint32m8_t __riscv_vsm4k (vuint32m8_t vs2, size_t uimm, size_t vl); ``` C vuint32mf2_t __riscv_vsm4r_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git 
a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md index 1c5b4a77f..4134b5604 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -1053,22 +1053,42 @@ vuint32m8_t __riscv_vgmul_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t v ``` C vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t 
vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -1079,22 +1099,42 @@ vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t ``` C vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t 
vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -1120,9 +1160,19 @@ vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, **Prototypes:** ``` C vuint32mf2_t __riscv_vaesz_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -1186,12 +1236,22 @@ vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, s ``` C vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git 
a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index 815e0f4ea..5d13a96cf 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -7,22 +7,42 @@ ``` C vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t 
__riscv_vaesem_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -33,22 +53,42 @@ vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t ``` C vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t 
__riscv_vaesdm_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -74,8 +114,18 @@ vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, **Prototypes:** ``` C vuint32mf2_t __riscv_vaesz_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md index 7098d3485..2de962e21 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md @@ -18,12 +18,22 @@ vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, s ``` C vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_tu 
(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md index e0fd11c20..f68f11744 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -1053,22 +1053,42 @@ vuint32m8_t __riscv_vgmul_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` C vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); 
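// Editorial sketch (not part of the auto-generated listing): the `_vs`
// overloads interleaved below take a `vs2` operand in a narrower register
// group than `vd`; the single element group in `vs2` (e.g. a 128-bit AES
// round key) is applied to every element group of `vd`. A hypothetical
// caller, with `state`, `rkey`, and `vl` assumed initialized elsewhere:
//
//   vuint32m4_t state;                          // AES round state, LMUL=4
//   vuint32m2_t rkey;                           // round-key group, LMUL=2
//   state = __riscv_vaesem_tu(state, rkey, vl); // matches the m4/m2 overload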
+vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -1079,22 +1099,42 @@ vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` C vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, 
size_t vl); vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -1120,9 +1160,19 @@ vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t u **Prototypes:** ``` C vuint32mf2_t __riscv_vaesz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -1186,12 +1236,22 @@ vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uim ``` C vuint32mf2_t __riscv_vsm4r_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index d91cc9aee..9fc84bc20 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ 
b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -7,22 +7,42 @@ ``` C vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -33,22 +53,42 @@ vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` C vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32mf2_t vs2, 
size_t vl); +vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` @@ -74,8 +114,18 @@ vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t u **Prototypes:** ``` C vuint32mf2_t __riscv_vaesz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu (vuint32m4_t 
vd, vuint32m2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 ```
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md
index 4f1dce398..7487356cb 100644
--- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md
@@ -18,12 +18,22 @@ vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uim
 ``` C
 vuint32mf2_t __riscv_vsm4r_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32mf2_t __riscv_vsm4r_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m1_t __riscv_vsm4r_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
 vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m2_t __riscv_vsm4r_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
 vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m4_t __riscv_vsm4r_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vsm4r_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 ```

From 07f2dadfec62f4d3091263ce68c06de3c6a23ec2 Mon Sep 17 00:00:00 2001
From: eopXD
Date: Tue, 1 Aug 2023 00:36:12 -0700
Subject: [PATCH 065/151] [Auto-gen] Update tests under ../auto-generated/vector-crypto.
 (make git-commit-autogen-test)
---
 .../vector-crypto/api-testing/vaesdf.c | 40 +++++++++++++++++++
 .../vector-crypto/api-testing/vaesdm.c | 40 +++++++++++++++++++
 .../vector-crypto/api-testing/vaesef.c | 40 +++++++++++++++++++
 .../vector-crypto/api-testing/vaesem.c | 40 +++++++++++++++++++
 .../vector-crypto/api-testing/vaesz.c | 40 +++++++++++++++++++
 .../vector-crypto/api-testing/vsm4r.c | 40 +++++++++++++++++++
 .../vector-crypto/llvm-api-tests/vaesdf.c | 40 +++++++++++++++++++
 .../vector-crypto/llvm-api-tests/vaesdm.c | 40 +++++++++++++++++++
 .../vector-crypto/llvm-api-tests/vaesef.c | 40 +++++++++++++++++++
 .../vector-crypto/llvm-api-tests/vaesem.c | 40 +++++++++++++++++++
 .../vector-crypto/llvm-api-tests/vaesz.c | 40 +++++++++++++++++++
 .../vector-crypto/llvm-api-tests/vsm4r.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vaesdf.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vaesdm.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vaesef.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vaesem.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vaesz.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vsm4r.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vaesdf.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vaesdm.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vaesef.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vaesem.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vaesz.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vsm4r.c | 40 +++++++++++++++++++
 .../policy_funcs/api-testing/vaesdf.c | 40 +++++++++++++++++++
 .../policy_funcs/api-testing/vaesdm.c | 40 +++++++++++++++++++
 .../policy_funcs/api-testing/vaesef.c | 40 +++++++++++++++++++
 .../policy_funcs/api-testing/vaesem.c | 40 +++++++++++++++++++
 .../policy_funcs/api-testing/vaesz.c | 40 +++++++++++++++++++
 .../policy_funcs/api-testing/vsm4r.c | 40 +++++++++++++++++++
 .../policy_funcs/llvm-api-tests/vaesdf.c | 40 +++++++++++++++++++
 .../policy_funcs/llvm-api-tests/vaesdm.c | 40 +++++++++++++++++++
 .../policy_funcs/llvm-api-tests/vaesef.c | 40 +++++++++++++++++++
 .../policy_funcs/llvm-api-tests/vaesem.c | 40 +++++++++++++++++++
 .../policy_funcs/llvm-api-tests/vaesz.c | 40 +++++++++++++++++++
 .../policy_funcs/llvm-api-tests/vsm4r.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vaesdf.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vaesdm.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vaesef.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vaesem.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vaesz.c | 40 +++++++++++++++++++
 .../llvm-overloaded-tests/vsm4r.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vaesdf.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vaesdm.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vaesef.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vaesem.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vaesz.c | 40 +++++++++++++++++++
 .../overloaded-api-testing/vsm4r.c | 40 +++++++++++++++++++
 48 files changed, 1920 insertions(+)

diff --git a/auto-generated/vector-crypto/api-testing/vaesdf.c b/auto-generated/vector-crypto/api-testing/vaesdf.c
index 17cb54972..ec6e8b067 100644
--- a/auto-generated/vector-crypto/api-testing/vaesdf.c
+++ b/auto-generated/vector-crypto/api-testing/vaesdf.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesdf_vs_u32mf2(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t
vl) {
+  return __riscv_vaesdf_vs_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m1(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdf_vs_u32m1(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m2(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdf_vs_u32m2(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m4(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdf_vs_u32m4(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/api-testing/vaesdm.c b/auto-generated/vector-crypto/api-testing/vaesdm.c
index 057d8afd8..dd7a8ab52 100644
--- a/auto-generated/vector-crypto/api-testing/vaesdm.c
+++ b/auto-generated/vector-crypto/api-testing/vaesdm.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesdm_vs_u32mf2(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m1(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdm_vs_u32m1(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t
test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m2(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdm_vs_u32m2(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m4(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdm_vs_u32m4(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/api-testing/vaesef.c b/auto-generated/vector-crypto/api-testing/vaesef.c
index 3576a1511..3e26be98e 100644
--- a/auto-generated/vector-crypto/api-testing/vaesef.c
+++ b/auto-generated/vector-crypto/api-testing/vaesef.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesef_vs_u32mf2(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesef_vv_u32m1(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesef_vs_u32m1(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesef_vv_u32m2(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesef_vs_u32m2(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return
__riscv_vaesef_vv_u32m4(vd, vs2, vl); } @@ -36,6 +72,10 @@ vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vs_u32m4(vd, vs2, vl); } +vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +} + vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vaesem.c b/auto-generated/vector-crypto/api-testing/vaesem.c index 11a17faa0..b47a15900 100644 --- a/auto-generated/vector-crypto/api-testing/vaesem.c +++ b/auto-generated/vector-crypto/api-testing/vaesem.c @@ -12,6 +12,22 @@ vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesem_vs_u32mf2(vd, vs2, vl); } +vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +} + vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m1(vd, vs2, vl); } @@ -20,6 +36,18 @@ vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs_u32m1(vd, vs2, vl); } +vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +} + vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m2(vd, vs2, vl); } @@ -28,6 +56,14 @@ vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs_u32m2(vd, vs2, vl); } +vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +} + vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m4(vd, vs2, vl); } @@ -36,6 +72,10 @@ vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vs_u32m4(vd, vs2, vl); } +vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +} + vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vaesz.c b/auto-generated/vector-crypto/api-testing/vaesz.c index 1d1e99eb2..cc4349b45 100644 --- a/auto-generated/vector-crypto/api-testing/vaesz.c +++ b/auto-generated/vector-crypto/api-testing/vaesz.c @@ -8,18 +8,58 @@ vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesz_vs_u32mf2(vd, vs2, vl); } +vuint32m1_t 
test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +} + vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m1(vd, vs2, vl); } +vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +} + vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m2(vd, vs2, vl); } +vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +} + vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m4(vd, vs2, vl); } +vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +} + vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vsm4r.c b/auto-generated/vector-crypto/api-testing/vsm4r.c index d2c7e2dc5..7c5ff7a51 100644 --- a/auto-generated/vector-crypto/api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/api-testing/vsm4r.c @@ -12,6 +12,22 @@ vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vsm4r_vs_u32mf2(vd, vs2, vl); } +vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1(vd, vs2, vl); +} + +vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8(vd, vs2, vl); +} + vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m1(vd, vs2, vl); } @@ -20,6 +36,18 @@ vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_u32m1(vd, vs2, vl); } +vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2(vd, vs2, vl); +} + +vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8(vd, vs2, vl); +} + vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m2(vd, vs2, vl); } @@ -28,6 +56,14 @@ vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t 
   return __riscv_vsm4r_vs_u32m2(vd, vs2, vl);
 }
 
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm4r_vv_u32m4(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm4r_vs_u32m4(vd, vs2, vl);
 }
 
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm4r_vv_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c
index f2a136f0e..4c9faed7b 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c
@@ -13,6 +13,22 @@ vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesdf_vs_u32mf2(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m1(vd, vs2, vl);
 }
@@ -21,6 +37,18 @@ vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdf_vs_u32m1(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m2(vd, vs2, vl);
 }
@@ -29,6 +57,14 @@ vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdf_vs_u32m2(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m4(vd, vs2, vl);
 }
@@ -37,6 +73,10 @@ vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdf_vs_u32m4(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c
index 22bc1f416..9cff36983 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c
@@ -13,6 +13,22 @@ vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesdm_vs_u32mf2(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m1(vd, vs2, vl);
 }
@@ -21,6 +37,18 @@ vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdm_vs_u32m1(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m2(vd, vs2, vl);
 }
@@ -29,6 +57,14 @@ vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdm_vs_u32m2(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m4(vd, vs2, vl);
 }
@@ -37,6 +73,10 @@ vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdm_vs_u32m4(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c
index e9a5fd0cb..8c7ab8abf 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vaesef.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c
@@ -13,6 +13,22 @@ vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesef_vs_u32mf2(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesef_vv_u32m1(vd, vs2, vl);
 }
@@ -21,6 +37,18 @@ vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesef_vs_u32m1(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesef_vv_u32m2(vd, vs2, vl);
 }
@@ -29,6 +57,14 @@ vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesef_vs_u32m2(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesef_vv_u32m4(vd, vs2, vl);
 }
@@ -37,6 +73,10 @@ vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesef_vs_u32m4(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesef_vv_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c
index 707d6ef10..d01b30f7c 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vaesem.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c
@@ -13,6 +13,22 @@ vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesem_vs_u32mf2(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesem_vv_u32m1(vd, vs2, vl);
 }
@@ -21,6 +37,18 @@ vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesem_vs_u32m1(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesem_vv_u32m2(vd, vs2, vl);
 }
@@ -29,6 +57,14 @@ vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesem_vs_u32m2(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesem_vv_u32m4(vd, vs2, vl);
 }
@@ -37,6 +73,10 @@ vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesem_vs_u32m4(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesem_vv_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c
index f219721ca..aad378dba 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vaesz.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c
@@ -9,18 +9,58 @@ vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesz_vs_u32mf2(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesz_vs_u32m1(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesz_vs_u32m2(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesz_vs_u32m4(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesz_vs_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c
index ace127f46..a37d743e7 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c
@@ -13,6 +13,22 @@ vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vsm4r_vs_u32mf2(vd, vs2, vl);
 }
 
+vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m1(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vsm4r_vv_u32m1(vd, vs2, vl);
 }
@@ -21,6 +37,18 @@ vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vsm4r_vs_u32m1(vd, vs2, vl);
 }
 
+vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m2(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vsm4r_vv_u32m2(vd, vs2, vl);
 }
@@ -29,6 +57,14 @@ vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vsm4r_vs_u32m2(vd, vs2, vl);
 }
 
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m4(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm4r_vv_u32m4(vd, vs2, vl);
 }
@@ -37,6 +73,10 @@ vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm4r_vs_u32m4(vd, vs2, vl);
 }
 
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_u32m8(vd, vs2, vl);
+}
+
 vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm4r_vv_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c
index 23d4151c5..b7bd2a7b8 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c
@@ -13,6 +13,22 @@ vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesdf(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
@@ -21,6 +37,18 @@ vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
@@ -29,6 +57,14 @@ vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
@@ -37,6 +73,10 @@ vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c
index 6769c8d21..c23154f3a 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c
@@ -13,6 +13,22 @@ vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesdm(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
@@ -21,6 +37,18 @@ vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
@@ -29,6 +57,14 @@ vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
@@ -37,6 +73,10 @@ vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c
index 51a65a3aa..fe2d7fee1 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c
@@ -13,6 +13,22 @@ vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesef(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
@@ -21,6 +37,18 @@ vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
@@ -29,6 +57,14 @@ vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
@@ -37,6 +73,10 @@ vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c
index 4c25db1cc..abedf1d40 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c
@@ -13,6 +13,22 @@ vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesem(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
@@ -21,6 +37,18 @@ vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
@@ -29,6 +57,14 @@ vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
@@ -37,6 +73,10 @@ vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c
index 7f64b61e4..f459e124a 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c
@@ -9,18 +9,58 @@ vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesz(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
index c081ecfc5..7a1d28756 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
@@ -13,6 +13,22 @@ vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vsm4r_vs(vd, vs2, vl);
 }
 
+vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
 vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vsm4r_vv(vd, vs2, vl);
 }
@@ -21,6 +37,18 @@ vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vsm4r_vs(vd, vs2, vl);
 }
 
+vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
 vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vsm4r_vv(vd, vs2, vl);
 }
@@ -29,6 +57,14 @@ vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vsm4r_vs(vd, vs2, vl);
 }
 
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
 vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm4r_vv(vd, vs2, vl);
 }
@@ -37,6 +73,10 @@ vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm4r_vs(vd, vs2, vl);
 }
 
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
 vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm4r_vv(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c
index 968f0f6d0..5dfd28986 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesdf(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdf(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c
index 070daf1cb..6a427cc9a 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesdm(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdm(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c
index 33b2f940a..dca8acbbc 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesef(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesef(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c
index 33eb27e22..17d8de48b 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesem(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesem(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c
index 21b840e2a..92f09192f 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c
@@ -8,18 +8,58 @@ vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vaesz(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesz(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c
index ef90532c9..95bd0716a 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl)
   return __riscv_vsm4r_vs(vd, vs2, vl);
 }
 
+vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
 vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vsm4r_vv(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vsm4r_vs(vd, vs2, vl);
 }
 
+vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
 vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vsm4r_vv(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vsm4r_vs(vd, vs2, vl);
 }
 
+vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
 vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm4r_vv(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm4r_vs(vd, vs2, vl);
 }
 
+vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs(vd, vs2, vl);
+}
+
 vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm4r_vv(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c
index 296d2e28d..2eefcbc01 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t
   return __riscv_vaesdf_vs_u32mf2_tu(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m1_tu(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl)
   return __riscv_vaesdf_vs_u32m1_tu(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m2_tu(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl)
   return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m4_tu(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl)
   return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c
index 227aa0c7d..97ab441f7 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t
   return __riscv_vaesdm_vs_u32mf2_tu(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m1_tu(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl)
   return __riscv_vaesdm_vs_u32m1_tu(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m2_tu(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl)
   return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m4_tu(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl)
   return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c
index 74edec47c..2bcdbc400 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t
   return __riscv_vaesef_vs_u32mf2_tu(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesef_vv_u32m1_tu(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl)
   return __riscv_vaesef_vs_u32m1_tu(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesef_vv_u32m2_tu(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl)
   return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesef_vv_u32m4_tu(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl)
   return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl);
 }
 
+vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c
index 838abfc41..0f179040e 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c
@@ -12,6 +12,22 @@ vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t
   return __riscv_vaesem_vs_u32mf2_tu(vd, vs2, vl);
 }
 
+vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesem_vv_u32m1_tu(vd, vs2, vl);
 }
@@ -20,6 +36,18 @@ vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl)
   return __riscv_vaesem_vs_u32m1_tu(vd, vs2, vl);
 }
 
+vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl);
+}
+
 vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesem_vv_u32m2_tu(vd, vs2, vl);
 }
@@ -28,6 +56,14 @@ vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl)
   return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl);
 }
 
+vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl);
+} + +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m4_tu(vd, vs2, vl); } @@ -36,6 +72,10 @@ vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); } +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c index d0b5008ff..4548ca4e0 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c @@ -8,18 +8,58 @@ vuint32mf2_t test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vaesz_vs_u32mf2_tu(vd, vs2, vl); } +vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m1_tu(vd, vs2, vl); } +vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); } +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); } +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c index 4f8e84562..e12f9028d 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c @@ -12,6 +12,22 @@ vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vsm4r_vs_u32mf2_tu(vd, vs2, vl); } +vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, 
vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m1_tu(vd, vs2, vl); } @@ -20,6 +36,18 @@ vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_u32m1_tu(vd, vs2, vl); } +vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m2_tu(vd, vs2, vl); } @@ -28,6 +56,14 @@ vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); } +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m4_tu(vd, vs2, vl); } @@ -36,6 +72,10 @@ vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); } +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c index 6eae5528a..7b77ff31d 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c @@ -13,6 +13,22 @@ vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdf_vs_u32mf2_tu(vd, vs2, vl); } +vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m1_tu(vd, vs2, vl); } @@ -21,6 +37,18 @@ vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return 
__riscv_vaesdf_vs_u32m1_tu(vd, vs2, vl); } +vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m2_tu(vd, vs2, vl); } @@ -29,6 +57,14 @@ vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl); } +vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m4_tu(vd, vs2, vl); } @@ -37,6 +73,10 @@ vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); } +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c index 39900c92e..b7f84c4b7 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c @@ -13,6 +13,22 @@ vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdm_vs_u32mf2_tu(vd, vs2, vl); } +vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m1_tu(vd, vs2, vl); } @@ -21,6 +37,18 @@ vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesdm_vs_u32m1_tu(vd, vs2, vl); } +vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m2_tu(vd, vs2, vl); } @@ -29,6 +57,14 @@ vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return 
__riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl); } +vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m4_tu(vd, vs2, vl); } @@ -37,6 +73,10 @@ vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); } +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c index 29c44c80d..c1debf192 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c @@ -13,6 +13,22 @@ vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesef_vs_u32mf2_tu(vd, vs2, vl); } +vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m1_tu(vd, vs2, vl); } @@ -21,6 +37,18 @@ vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesef_vs_u32m1_tu(vd, vs2, vl); } +vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m2_tu(vd, vs2, vl); } @@ -29,6 +57,14 @@ vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl); } +vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m4_tu(vd, vs2, vl); } @@ -37,6 +73,10 @@ vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); } +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8_tu(vd, 
vs2, vl); +} + vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c index 48a92787a..e5dc8630c 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c @@ -13,6 +13,22 @@ vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesem_vs_u32mf2_tu(vd, vs2, vl); } +vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m1_tu(vd, vs2, vl); } @@ -21,6 +37,18 @@ vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesem_vs_u32m1_tu(vd, vs2, vl); } +vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m2_tu(vd, vs2, vl); } @@ -29,6 +57,14 @@ vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl); } +vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m4_tu(vd, vs2, vl); } @@ -37,6 +73,10 @@ vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); } +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c index bfc56949d..8d48a65f2 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c @@ -9,18 +9,58 @@ vuint32mf2_t test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vaesz_vs_u32mf2_tu(vd, vs2, vl); } +vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return 
__riscv_vaesz_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m1_tu(vd, vs2, vl); } +vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); } +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); } +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c index 66f972add..245a6ae12 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c @@ -13,6 +13,22 @@ vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vsm4r_vs_u32mf2_tu(vd, vs2, vl); } +vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m1_tu(vd, vs2, vl); } @@ -21,6 +37,18 @@ vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_u32m1_tu(vd, vs2, vl); } +vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t 
vs2, size_t vl) { return __riscv_vsm4r_vv_u32m2_tu(vd, vs2, vl); } @@ -29,6 +57,14 @@ vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); } +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m4_tu(vd, vs2, vl); } @@ -37,6 +73,10 @@ vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); } +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +} + vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c index 3caa6a027..b15017b19 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c @@ -13,6 +13,22 @@ vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdf_tu(vd, vs2, vl); } +vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_tu(vd, vs2, vl); } @@ -21,6 +37,18 @@ vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesdf_tu(vd, vs2, vl); } +vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_tu(vd, vs2, vl); } @@ -29,6 +57,14 @@ vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesdf_tu(vd, vs2, vl); } +vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_tu(vd, vs2, vl); } @@ -37,6 +73,10 @@ vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesdf_tu(vd, vs2, vl); } +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t 
vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c index 8c6cca9f9..b9933247b 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c @@ -13,6 +13,22 @@ vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdm_tu(vd, vs2, vl); } +vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_tu(vd, vs2, vl); } @@ -21,6 +37,18 @@ vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesdm_tu(vd, vs2, vl); } +vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_tu(vd, vs2, vl); } @@ -29,6 +57,14 @@ vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesdm_tu(vd, vs2, vl); } +vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_tu(vd, vs2, vl); } @@ -37,6 +73,10 @@ vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesdm_tu(vd, vs2, vl); } +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c index 90a6e891e..a0c60bc29 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c @@ -13,6 +13,22 @@ vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesef_tu(vd, vs2, vl); } +vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, 
vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_tu(vd, vs2, vl); } @@ -21,6 +37,18 @@ vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesef_tu(vd, vs2, vl); } +vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_tu(vd, vs2, vl); } @@ -29,6 +57,14 @@ vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesef_tu(vd, vs2, vl); } +vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_tu(vd, vs2, vl); } @@ -37,6 +73,10 @@ vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesef_tu(vd, vs2, vl); } +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c index 6eef057b9..1cd5624ca 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c @@ -13,6 +13,22 @@ vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesem_tu(vd, vs2, vl); } +vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_tu(vd, vs2, vl); } @@ -21,6 +37,18 @@ vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesem_tu(vd, vs2, vl); } +vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + 
+vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_tu(vd, vs2, vl); } @@ -29,6 +57,14 @@ vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesem_tu(vd, vs2, vl); } +vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_tu(vd, vs2, vl); } @@ -37,6 +73,10 @@ vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesem_tu(vd, vs2, vl); } +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c index cbe231c57..e9cd85400 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c @@ -9,18 +9,58 @@ vuint32mf2_t test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vaesz_tu(vd, vs2, vl); } +vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } +vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c 
b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c index a8071d89c..f8612d784 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c @@ -13,6 +13,22 @@ vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vsm4r_vs_tu(vd, vs2, vl); } +vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + +vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } @@ -21,6 +37,18 @@ vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } +vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } @@ -29,6 +57,14 @@ vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } @@ -37,6 +73,10 @@ vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c index bbd18b8f9..4e39e8e29 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c @@ -12,6 +12,22 @@ vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdf_tu(vd, vs2, vl); } +vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m8_t 
test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_tu(vd, vs2, vl); } @@ -20,6 +36,18 @@ vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesdf_tu(vd, vs2, vl); } +vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_tu(vd, vs2, vl); } @@ -28,6 +56,14 @@ vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesdf_tu(vd, vs2, vl); } +vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_tu(vd, vs2, vl); } @@ -36,6 +72,10 @@ vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesdf_tu(vd, vs2, vl); } +vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c index 9c8089587..2e1a59bb6 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c @@ -12,6 +12,22 @@ vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdm_tu(vd, vs2, vl); } +vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_tu(vd, vs2, vl); } @@ -20,6 +36,18 @@ vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesdm_tu(vd, vs2, vl); } +vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return 
__riscv_vaesdm_tu(vd, vs2, vl); } @@ -28,6 +56,14 @@ vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesdm_tu(vd, vs2, vl); } +vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_tu(vd, vs2, vl); } @@ -36,6 +72,10 @@ vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesdm_tu(vd, vs2, vl); } +vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c index 0afd0df05..849fc43e6 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c @@ -12,6 +12,22 @@ vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesef_tu(vd, vs2, vl); } +vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_tu(vd, vs2, vl); } @@ -20,6 +36,18 @@ vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesef_tu(vd, vs2, vl); } +vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_tu(vd, vs2, vl); } @@ -28,6 +56,14 @@ vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesef_tu(vd, vs2, vl); } +vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_tu(vd, vs2, vl); } @@ -36,6 +72,10 @@ vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesef_tu(vd, vs2, vl); } +vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_tu(vd, vs2, vl); +} + vuint32m8_t 
test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c index 91d2cb885..ff158a365 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c @@ -12,6 +12,22 @@ vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesem_tu(vd, vs2, vl); } +vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_tu(vd, vs2, vl); } @@ -20,6 +36,18 @@ vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesem_tu(vd, vs2, vl); } +vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_tu(vd, vs2, vl); } @@ -28,6 +56,14 @@ vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesem_tu(vd, vs2, vl); } +vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_tu(vd, vs2, vl); } @@ -36,6 +72,10 @@ vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesem_tu(vd, vs2, vl); } +vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c index 7cbbb2e4f..40c8e9cb3 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c @@ -8,18 +8,58 @@ vuint32mf2_t test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vaesz_tu(vd, vs2, vl); } +vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + +vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); 
+} + +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } +vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } +vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } +vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_tu(vd, vs2, vl); +} + vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c index 14dde00cf..abf418bb9 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c @@ -12,6 +12,22 @@ vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vsm4r_vs_tu(vd, vs2, vl); } +vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + +vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } @@ -20,6 +36,18 @@ vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } +vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + +vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_tu(vd, vs2, vl); +} + vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } @@ -28,6 +56,14 @@ vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } +vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return 
__riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
 vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm4r_vv_tu(vd, vs2, vl);
 }
@@ -36,6 +72,10 @@ vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm4r_vs_tu(vd, vs2, vl);
 }
+vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4r_vs_tu(vd, vs2, vl);
+}
+
 vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm4r_vv_tu(vd, vs2, vl);
 }

From b2bfeb39b07c5c7c109e5f5f7e19ec92b61fdca6 Mon Sep 17 00:00:00 2001
From: eopXD
Date: Tue, 1 Aug 2023 00:30:56 -0700
Subject: [PATCH 066/151] [vector-crypto] Document the availability for vector crypto intrinsics regarding zvl extensions

Signed-off-by: eop Chen
---
 vector_crypto_notes.adoc | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
 create mode 100644 vector_crypto_notes.adoc

diff --git a/vector_crypto_notes.adoc b/vector_crypto_notes.adoc
new file mode 100644
index 000000000..e9c60396e
--- /dev/null
+++ b/vector_crypto_notes.adoc
@@ -0,0 +1,15 @@
+= Note for vector crypto intrinsics
+
+== Availability of vector crypto intrinsics
+
+Availability of the vector crypto instruction intrinsics depends on the minimum vector length specified in the architecture via the `Zvl*b` ^0^ sub-extension. The vector length is required to be at least one EGW (element group width ^1^) long.
+
+Take the intrinsic for `vaesdf.vs` as an example. Given that the instruction computes with a single element group provided from `vs2`, the `vuint32mf2_t` operand must be at least 128 bits long; since an LMUL=1/2 type spans half of VLEN, this requires VLEN to be at least 256. Therefore the intrinsic requires `zvl256b` to be available.
+
+```
+vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
+```
+
+^0^ https://github.com/riscv/riscv-v-spec/blob/master/v-spec.adoc#181-zvl-minimum-vector-length-standard-extensions[v-spec 18.1. Zvl*: Minimum Vector Length Standard Extensions]
+
+^1^ https://github.com/riscv/riscv-crypto/blob/master/doc/vector/riscv-crypto-vector-element-groups.adoc[Vector Crypto Specification: Element Groups]
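To make the `zvl256b` arithmetic above concrete: an operand type with fractional LMUL = 1/d spans VLEN/d bits, and it must be able to hold one element group of EGW bits, so the implementation needs VLEN >= EGW * d. The following is a minimal C sketch of that rule; the helper name `min_vlen_for_egw` is hypothetical, for illustration only, and not part of this repository or the intrinsics API.

```
#include <stdio.h>

/* Minimum VLEN implied by requiring an operand of fractional LMUL = 1/lmul_den
   to hold one element group of egw_bits: the operand spans VLEN / lmul_den
   bits, so VLEN must be at least egw_bits * lmul_den. */
static unsigned min_vlen_for_egw(unsigned egw_bits, unsigned lmul_den) {
  return egw_bits * lmul_den;
}

int main(void) {
  /* vaesdf.vs with a vuint32mf2_t vs2 operand: EGW = 128, LMUL = 1/2,
     so the intrinsic needs VLEN >= 256, i.e. the Zvl256b extension. */
  printf("zvl%ub\n", min_vlen_for_egw(128, 2));
  return 0;
}
```

Compiled and run, this prints `zvl256b`, matching the requirement stated above.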
(make git-commit-autogen-test) --- auto-generated/vector-crypto/api-testing/vaeskf1.c | 10 +++++----- auto-generated/vector-crypto/api-testing/vaeskf2.c | 10 +++++----- auto-generated/vector-crypto/api-testing/vsm3c.c | 10 +++++----- auto-generated/vector-crypto/api-testing/vsm4k.c | 10 +++++----- auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c | 10 +++++----- auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c | 10 +++++----- auto-generated/vector-crypto/llvm-api-tests/vsm3c.c | 10 +++++----- auto-generated/vector-crypto/llvm-api-tests/vsm4k.c | 10 +++++----- .../vector-crypto/llvm-overloaded-tests/vaeskf1.c | 10 +++++----- .../vector-crypto/llvm-overloaded-tests/vaeskf2.c | 10 +++++----- .../vector-crypto/llvm-overloaded-tests/vsm3c.c | 10 +++++----- .../vector-crypto/llvm-overloaded-tests/vsm4k.c | 10 +++++----- .../vector-crypto/overloaded-api-testing/vaeskf1.c | 10 +++++----- .../vector-crypto/overloaded-api-testing/vaeskf2.c | 10 +++++----- .../vector-crypto/overloaded-api-testing/vsm3c.c | 10 +++++----- .../vector-crypto/overloaded-api-testing/vsm4k.c | 10 +++++----- .../vector-crypto/policy_funcs/api-testing/vaeskf1.c | 10 +++++----- .../vector-crypto/policy_funcs/api-testing/vaeskf2.c | 10 +++++----- .../vector-crypto/policy_funcs/api-testing/vsm3c.c | 10 +++++----- .../vector-crypto/policy_funcs/api-testing/vsm4k.c | 10 +++++----- .../policy_funcs/llvm-api-tests/vaeskf1.c | 10 +++++----- .../policy_funcs/llvm-api-tests/vaeskf2.c | 10 +++++----- .../vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c | 10 +++++----- .../vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c | 10 +++++----- .../policy_funcs/llvm-overloaded-tests/vaeskf1.c | 10 +++++----- .../policy_funcs/llvm-overloaded-tests/vaeskf2.c | 10 +++++----- .../policy_funcs/llvm-overloaded-tests/vsm3c.c | 10 +++++----- .../policy_funcs/llvm-overloaded-tests/vsm4k.c | 10 +++++----- .../policy_funcs/overloaded-api-testing/vaeskf1.c | 10 +++++----- .../policy_funcs/overloaded-api-testing/vaeskf2.c | 10 +++++----- .../policy_funcs/overloaded-api-testing/vsm3c.c | 10 +++++----- .../policy_funcs/overloaded-api-testing/vsm4k.c | 10 +++++----- 32 files changed, 160 insertions(+), 160 deletions(-) diff --git a/auto-generated/vector-crypto/api-testing/vaeskf1.c b/auto-generated/vector-crypto/api-testing/vaeskf1.c index 3e31056e0..0d55e93ac 100644 --- a/auto-generated/vector-crypto/api-testing/vaeskf1.c +++ b/auto-generated/vector-crypto/api-testing/vaeskf1.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32mf2(vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m1(vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m2(vs2, 0, vl); } -vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m4(vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m8(vs2, 0, vl); } diff --git 
a/auto-generated/vector-crypto/api-testing/vaeskf2.c b/auto-generated/vector-crypto/api-testing/vaeskf2.c index 8efafda00..50cf20d1b 100644 --- a/auto-generated/vector-crypto/api-testing/vaeskf2.c +++ b/auto-generated/vector-crypto/api-testing/vaeskf2.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32mf2(vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m1(vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m2(vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m4(vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m8(vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vsm3c.c b/auto-generated/vector-crypto/api-testing/vsm3c.c index 6c82dfe7c..355f4a519 100644 --- a/auto-generated/vector-crypto/api-testing/vsm3c.c +++ b/auto-generated/vector-crypto/api-testing/vsm3c.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32mf2(vd, vs2, 0, vl); } -vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m1(vd, vs2, 0, vl); } -vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m2(vd, vs2, 0, vl); } -vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m4(vd, vs2, 0, vl); } -vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m8(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vsm4k.c b/auto-generated/vector-crypto/api-testing/vsm4k.c index af05ac455..d038e7157 100644 --- a/auto-generated/vector-crypto/api-testing/vsm4k.c +++ b/auto-generated/vector-crypto/api-testing/vsm4k.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32mf2(vs2, 0, vl); } -vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m1(vs2, 0, vl); } -vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, 
size_t uimm, size_t vl) { +vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m2(vs2, 0, vl); } -vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m4(vs2, 0, vl); } -vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m8(vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c b/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c index bd2625c66..f35c4a3b2 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32mf2(vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m1(vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m2(vs2, 0, vl); } -vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m4(vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m8(vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c index c83da63c1..036a12b52 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32mf2(vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m1(vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m2(vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m4(vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m8(vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c b/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c index 1f304271f..87af91e5c 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32mf2(vd, vs2, 0, vl); } 
-vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m1(vd, vs2, 0, vl); } -vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m2(vd, vs2, 0, vl); } -vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m4(vd, vs2, 0, vl); } -vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m8(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c b/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c index ed2010cbe..0911bd722 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32mf2(vs2, 0, vl); } -vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m1(vs2, 0, vl); } -vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m2(vs2, 0, vl); } -vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m4(vs2, 0, vl); } -vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m8(vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c index 3fa9b9126..e15daf77d 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf1(vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf1(vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf1(vs2, 0, vl); } -vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf1(vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf1(vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c index 7060fdf6e..544e99fef 100644 --- 
a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf2(vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf2(vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf2(vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf2(vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf2(vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c index 6bd539576..416f7a64f 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); } -vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); } -vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); } -vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); } -vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c index 831212f2f..319a815f8 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); } -vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); } -vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); } -vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); } -vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t 
vl) { +vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c index 595213fe1..8ec38cde4 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf1(vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf1(vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf1(vs2, 0, vl); } -vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf1(vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf1(vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c index 9c6fe0e9b..660b0ba39 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf2(vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf2(vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf2(vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf2(vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf2(vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c index 7b204cdfc..a5bdb447f 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); } -vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm3c(vd, 
vs2, 0, vl); } -vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); } -vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); } -vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c index 4f0f38cd7..06728e8dd 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); } -vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); } -vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); } -vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); } -vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c index 31bae4be0..97339218d 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32mf2_tu(maskedoff, vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m1_tu(maskedoff, vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m2_tu(maskedoff, vs2, 0, vl); } -vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m4_tu(maskedoff, vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t 
vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m8_tu(maskedoff, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c index da2024633..3fcb9e9b4 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32mf2_tu(maskedoff, vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m1_tu(maskedoff, vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m2_tu(maskedoff, vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m4_tu(maskedoff, vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m8_tu(maskedoff, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c index 6551ba803..b0b2246a3 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32mf2_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m1_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m2_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m4_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m8_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c index b5ac42cfc..05dc6da60 100644 --- 
a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32mf2_tu(maskedoff, vs2, 0, vl); } -vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m1_tu(maskedoff, vs2, 0, vl); } -vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m2_tu(maskedoff, vs2, 0, vl); } -vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m4_tu(maskedoff, vs2, 0, vl); } -vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m8_tu(maskedoff, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c index 6dc0b6dba..36bc372bc 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32mf2_tu(maskedoff, vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m1_tu(maskedoff, vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m2_tu(maskedoff, vs2, 0, vl); } -vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m4_tu(maskedoff, vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m8_tu(maskedoff, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c index 17b588d02..448191a2f 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, 
vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32mf2_tu(maskedoff, vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m1_tu(maskedoff, vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m2_tu(maskedoff, vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m4_tu(maskedoff, vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m8_tu(maskedoff, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c index 61854b2d6..e64ed6ab7 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32mf2_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m1_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m2_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m4_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m8_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c index 305cb8b84..a702fee72 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32mf2_tu(maskedoff, vs2, 0, vl); } -vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { return 
__riscv_vsm4k_vi_u32m1_tu(maskedoff, vs2, 0, vl); } -vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m2_tu(maskedoff, vs2, 0, vl); } -vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m4_tu(maskedoff, vs2, 0, vl); } -vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m8_tu(maskedoff, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c index a503020a9..32fb898ca 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); } -vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c index 2c459fd63..605ff4a98 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf2_tu(maskedoff, vs2, 0, 
vl); } -vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c index 15fd70fab..9cbfd29cf 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm3c_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm3c_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm3c_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm3c_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c index b7587ab32..bd902bfba 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl); } -vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl); } -vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl); } -vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl); } -vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { return 
__riscv_vsm4k_tu(maskedoff, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c index c37bb0a86..f531dd6af 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); } -vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c index 4f0e78cf9..c43fbbb09 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) { +vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) { +vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) { +vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) { +vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c 
b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c
index 070583ec5..b784b6537 100644
--- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c
@@ -4,23 +4,23 @@ typedef _Float16 float16_t;
 typedef float float32_t;
 typedef double float64_t;
 
-vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
   return __riscv_vsm3c_tu(vd, vs2, 0, vl);
 }
 
-vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl) {
+vuint32m1_t test_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vsm3c_tu(vd, vs2, 0, vl);
 }
 
-vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl) {
+vuint32m2_t test_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vsm3c_tu(vd, vs2, 0, vl);
 }
 
-vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl) {
+vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm3c_tu(vd, vs2, 0, vl);
 }
 
-vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl) {
+vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm3c_tu(vd, vs2, 0, vl);
 }
 
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c
index bff1c2f7f..e1f938477 100644
--- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c
@@ -4,23 +4,23 @@ typedef _Float16 float16_t;
 typedef float float32_t;
 typedef double float64_t;
 
-vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl) {
+vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
   return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
 }
 
-vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl) {
+vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
   return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
 }
 
-vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl) {
+vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
   return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
 }
 
-vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl) {
+vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
   return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
 }
 
-vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl) {
+vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
 }
 
From f5c1c60e38bbfdaccdd45630a09787be60c8376d Mon Sep 17 00:00:00 2001
From: eopXD
Date: Tue, 1 Aug 2023 00:52:32 -0700
Subject: [PATCH 069/151] [vector-crypto] Bug fixes on intrinsic definitions

- Add operand mnemonics for overloaded intrinsics of vaesef/vaesem/vaesdf/vaesdm
- Add vs2 operand for vaeskf2
- Fix vs2 data type for vwsll
---
 .../rvv_intrinsic_gen/generator.py | 3 +-
.../templates/vector_crypto_template.py | 34 +++++++++++++------ 2 files changed, 25 insertions(+), 12 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 09f6a86b3..4c1d4e117 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -257,7 +257,8 @@ def get_overloaded_op_name(name): overloaded_name = "_".join([sn[0], sn[1], sn[-1]]) elif any(op in name for op in [ "vzext", "vsext", "vwadd", "vwsub", "vfwadd", "vfwsub", "vwadd", - "vwsub", "vfwadd", "vfwsub", "vmv", "vfmv", "vsm4r" + "vwsub", "vfwadd", "vfwsub", "vmv", "vfmv", "vsm4r", "vaesef", "vaesem", + "vaesdf", "vaesdm" ]): # 2. compiler can not distinguish *.wx and *.vx, need encode them in # suffix, for example: diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py index e11704c7a..766ab20f2 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py @@ -51,7 +51,7 @@ def has_vd_input(name): has_vd_input_inst_set = { "vghsh", "vgmul", "vaesef", "vaesem", "vaesdf", "vaesdm", "vaesz", - "vsha2ms", "vsha2ch", "vsha2cl", "vsm4r", "vsm3c" + "vsha2ms", "vsha2ch", "vsha2cl", "vsm4r", "vsm3c", "vaeskf2" } return name in has_vd_input_inst_set @@ -114,7 +114,16 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): kwargs["return_type"] = type_helper.wv else: kwargs["return_type"] = type_helper.v - kwargs = {**kwargs, **decorator.mask_args(type_helper.m, type_helper.v)} + if op == "vwsll": + kwargs = { + **kwargs, + **decorator.mask_args(type_helper.m, type_helper.wv) + } + else: + kwargs = { + **kwargs, + **decorator.mask_args(type_helper.m, type_helper.v) + } # If vd is already in the input parameter, we don't need to emit another # parameter when tail policy is TU. if has_vd_input(op): @@ -139,10 +148,6 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): kwargs["vl"] = type_helper.size_t - if op == "vwsll": - args["SEW"] = args["WSEW"] - args["LMUL"] = args["WLMUL"] - if operand_mnemonic == "vs": starting_from_lmul_index = lmul_list.index(args["LMUL"]) # print(starting_from_lmul_index) @@ -158,10 +163,17 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): decorator.func_suffix, **kwargs) else: - G.func( - inst_info, - name="{OP}_{MNEMONIC}_{TYPE}{SEW}m{LMUL}".format_map(args) + - decorator.func_suffix, - **kwargs) + if op == "vwsll": + G.func( + inst_info, + name="{OP}_{MNEMONIC}_{TYPE}{WSEW}m{WLMUL}".format_map(args) + + decorator.func_suffix, + **kwargs) + else: + G.func( + inst_info, + name="{OP}_{MNEMONIC}_{TYPE}{SEW}m{LMUL}".format_map(args) + + decorator.func_suffix, + **kwargs) G.inst_group_epilogue() From 355e4726a8a028909fdb0ad1eaad9205b6f4d9d3 Mon Sep 17 00:00:00 2001 From: eopXD Date: Tue, 1 Aug 2023 01:00:23 -0700 Subject: [PATCH 070/151] [Auto-gen] Update documents under ../auto-generated/vector-crypto. 
(make git-commit-autogen-doc) --- .../vector-crypto/intrinsic_funcs.md | 70 ++-- ...r_bit-manipulation_used_in_cryptography.md | 60 +-- ...d_-_nist_suite:_vector_aes_block_cipher.md | 10 +- .../overloaded_intrinsic_funcs.md | 230 +++++------ ...r_bit-manipulation_used_in_cryptography.md | 60 +-- ...d_-_nist_suite:_vector_aes_block_cipher.md | 170 ++++---- .../policy_funcs/intrinsic_funcs.md | 220 +++++----- ...r_bit-manipulation_used_in_cryptography.md | 210 +++++----- ...d_-_nist_suite:_vector_aes_block_cipher.md | 10 +- .../overloaded_intrinsic_funcs.md | 380 +++++++++--------- ...r_bit-manipulation_used_in_cryptography.md | 210 +++++----- ...d_-_nist_suite:_vector_aes_block_cipher.md | 170 ++++---- 12 files changed, 900 insertions(+), 900 deletions(-) diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md index 351cd0e14..92c87a657 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/intrinsic_funcs.md @@ -518,66 +518,66 @@ vuint64m8_t __riscv_vror_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, **Prototypes:** ``` C vuint16mf4_t __riscv_vwsll_vv_u16mf4 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4 (vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll_vv_u16mf2 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2 (vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll_vv_u16m1 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1 (vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll_vv_u16m2 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2 (vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll_vv_u16m4 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4 (vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll_vv_u16m8 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8 (vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll_vv_u32mf2 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2 (vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll_vv_u32m1 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1 (vuint16mf2_t vs2, size_t rs1, size_t vl); vuint32m2_t __riscv_vwsll_vv_u32m2 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2 (vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll_vv_u32m4 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4 (vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t 
__riscv_vwsll_vv_u32m8 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8 (vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll_vv_u64m1 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1 (vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll_vv_u64m2 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2 (vuint32m1_t vs2, size_t rs1, size_t vl); vuint64m4_t __riscv_vwsll_vv_u64m4 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4 (vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll_vv_u64m8 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8 (vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions vuint16mf4_t __riscv_vwsll_vv_u16mf4_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll_vv_u16mf2_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll_vv_u16m1_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll_vv_u16m2_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll_vv_u16m4_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll_vv_u16m8_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll_vv_u32mf2_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll_vv_u32m1_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); 
vuint32m2_t __riscv_vwsll_vv_u32m2_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll_vv_u32m4_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t __riscv_vwsll_vv_u32m8_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll_vv_u64m1_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll_vv_u64m2_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); vuint64m4_t __riscv_vwsll_vv_u64m4_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll_vv_u64m8_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); ``` ## Zvbc - Vector Carryless Multiplication: @@ -742,11 +742,11 @@ vuint32m1_t __riscv_vaeskf1_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vaeskf1_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf1_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf1_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ``` ### [Vector AES round zero](): diff --git a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md 
b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md index 80f99bfc5..1778ca313 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md +++ b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md @@ -518,64 +518,64 @@ vuint64m8_t __riscv_vror_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, **Prototypes:** ``` C vuint16mf4_t __riscv_vwsll_vv_u16mf4 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4 (vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll_vv_u16mf2 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2 (vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll_vv_u16m1 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1 (vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll_vv_u16m2 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2 (vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll_vv_u16m4 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4 (vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll_vv_u16m8 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8 (vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll_vv_u32mf2 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2 (vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll_vv_u32m1 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1 (vuint16mf2_t vs2, size_t rs1, size_t vl); vuint32m2_t __riscv_vwsll_vv_u32m2 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2 (vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll_vv_u32m4 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4 (vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t __riscv_vwsll_vv_u32m8 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8 (vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll_vv_u64m1 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1 (vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll_vv_u64m2 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2 
(vuint32m1_t vs2, size_t rs1, size_t vl); vuint64m4_t __riscv_vwsll_vv_u64m4 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4 (vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll_vv_u64m8 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8 (vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions vuint16mf4_t __riscv_vwsll_vv_u16mf4_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll_vv_u16mf2_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll_vv_u16m1_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll_vv_u16m2_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll_vv_u16m4_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll_vv_u16m8_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll_vv_u32mf2_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll_vv_u32m1_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); vuint32m2_t __riscv_vwsll_vv_u32m2_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll_vv_u32m4_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t __riscv_vwsll_vv_u32m8_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, 
size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll_vv_u64m1_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll_vv_u64m2_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); vuint64m4_t __riscv_vwsll_vv_u64m4_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll_vv_u64m8_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); ``` diff --git a/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index e38485a6e..d4b9bff68 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -102,11 +102,11 @@ vuint32m1_t __riscv_vaeskf1_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vaeskf1_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf1_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf1_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ``` ### [Vector AES round zero](): diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md index 7ad662f5a..95d6da89f 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md @@ -518,66 +518,66 @@ vuint64m8_t __riscv_vror (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) **Prototypes:** ``` C vuint16mf4_t __riscv_vwsll (vuint8mf8_t 
vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, size_t rs1, size_t vl); vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, size_t rs1, size_t vl); vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll (vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vbool32_t mask, 
vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); ``` ## Zvbc - Vector Carryless Multiplication: @@ -645,92 +645,92 @@ 
vuint32m8_t __riscv_vgmul (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t 
__riscv_vaesef_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES Decryption](): **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, 
size_t vl); -vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vv (vuint32m8_t vd, vuint32m8_t vs2, 
size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES-128 Forward KeySchedule generation](): @@ -742,11 +742,11 @@ vuint32m1_t __riscv_vaeskf1 (vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vaeskf1 (vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf1 (vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf1 (vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2 (vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2 (vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2 (vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ``` ### [Vector AES round zero](): diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md index d4d9ea35a..dfe321e52 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md @@ -518,64 +518,64 @@ vuint64m8_t __riscv_vror (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) **Prototypes:** ``` C vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t 
vl); -vuint16mf4_t __riscv_vwsll (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, size_t rs1, size_t vl); vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, size_t rs1, size_t vl); vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll (vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, size_t 
rs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); ``` diff --git 
a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index 23825cb8e..53179ca9f 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -5,92 +5,92 @@ **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vv (vuint32mf2_t vd, vuint32mf2_t 
vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES Decryption](): **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t 
__riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs 
(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES-128 Forward KeySchedule generation](): @@ -102,11 +102,11 @@ vuint32m1_t __riscv_vaeskf1 (vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vaeskf1 (vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf1 (vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf1 (vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2 (vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2 (vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2 (vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ``` ### [Vector AES round zero](): diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md index 4134b5604..f5d8ad3df 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ 
b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -828,128 +828,128 @@ vuint64m8_t __riscv_vror_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuin **Prototypes:** ``` C vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll_vv_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll_vv_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll_vv_u16m4_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll_vv_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll_vv_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); vuint32m2_t __riscv_vwsll_vv_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll_vv_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t __riscv_vwsll_vv_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t maskedoff, 
vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll_vv_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll_vv_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); vuint64m4_t __riscv_vwsll_vv_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll_vv_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); 
-vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, 
vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t 
__riscv_vwsll_vv_u32m1_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, 
vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t 
__riscv_vwsll_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); 
+vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); ``` ## Zvbc - Vector Carryless Multiplication: @@ -1148,11 +1148,11 @@ vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, 
size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ``` ### [Vector AES round zero](): diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md index f5ef93699..0031d9a2d 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md @@ -828,126 +828,126 @@ vuint64m8_t __riscv_vror_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuin **Prototypes:** ``` C vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll_vv_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll_vv_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll_vv_u16m4_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll_vv_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll_vv_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); 
+vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); vuint32m2_t __riscv_vwsll_vv_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll_vv_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t __riscv_vwsll_vv_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll_vv_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll_vv_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); vuint64m4_t __riscv_vwsll_vv_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll_vv_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, 
size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t mask, 
vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, 
size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tumu 
(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, 
vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, 
vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); ``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md 
b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index 5d13a96cf..32e41b5ce 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -102,11 +102,11 @@ vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ``` ### [Vector AES round zero](): diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md index f68f11744..b84966340 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -828,128 +828,128 @@ vuint64m8_t __riscv_vror_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t v **Prototypes:** ``` C vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll_tu 
(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t 
__riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t 
__riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, 
size_t vl); -vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, 
vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_mu (vbool8_t 
mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, 
vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); ``` ## Zvbc - Vector Carryless Multiplication: @@ -1051,92 +1051,92 @@ vuint32m8_t __riscv_vgmul_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t 
vl); -vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t 
__riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES Decryption](): **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_tu 
(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t 
__riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES-128 Forward KeySchedule generation](): @@ -1148,11 +1148,11 @@ vuint32m1_t __riscv_vaeskf1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t u vuint32m2_t __riscv_vaeskf1_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf1_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf1_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ``` ### [Vector AES round zero](): diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md index c94663c42..4bcf7ffbd 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md @@ -828,126 +828,126 @@ vuint64m8_t __riscv_vror_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t v **Prototypes:** ``` C vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tu 
(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); vuint16m1_t __riscv_vwsll_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); vuint16m4_t __riscv_vwsll_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); 
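// The corrected vector-scalar overloads above take the *narrow* source type for
// vs2: vwsll is a widening shift, so the destination/maskedoff operand has twice
// the element width of vs2 in both the vector-vector and vector-scalar forms.
// A minimal usage sketch (hypothetical variable names `dest`, `src8`, `n`;
// assumes the Zvbb extension is available and `n` elements are active):
//   vuint16m1_t r = __riscv_vwsll_tu(dest /* vuint16m1_t */, src8 /* vuint8mf2_t */, 4, n);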
vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, 
size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, 
vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tumu (vbool16_t 
mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t 
__riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t 
__riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); ``` diff --git 
a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index 9fc84bc20..9a64c8e5d 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -5,92 +5,92 @@ **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t 
vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES Decryption](): **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t 
__riscv_vaesdf_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32m1_t 
vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES-128 Forward KeySchedule generation](): @@ -102,11 +102,11 @@ vuint32m1_t __riscv_vaeskf1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t u vuint32m2_t __riscv_vaeskf1_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf1_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf1_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t 
__riscv_vaeskf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ``` ### [Vector AES round zero](): From 026a53db1dd2f67b11988fd9046a5491f3438070 Mon Sep 17 00:00:00 2001 From: eopXD Date: Tue, 1 Aug 2023 01:00:26 -0700 Subject: [PATCH 071/151] [Auto-gen] Update tests under ../auto-generated/vector-crypto. (make git-commit-autogen-test) --- .../vector-crypto/api-testing/vaeskf2.c | 20 +- .../vector-crypto/api-testing/vwsll.c | 60 ++--- .../vector-crypto/llvm-api-tests/vaeskf2.c | 20 +- .../vector-crypto/llvm-api-tests/vwsll.c | 60 ++--- .../llvm-overloaded-tests/vaesdf.c | 40 ++-- .../llvm-overloaded-tests/vaesdm.c | 40 ++-- .../llvm-overloaded-tests/vaesef.c | 40 ++-- .../llvm-overloaded-tests/vaesem.c | 40 ++-- .../llvm-overloaded-tests/vaeskf2.c | 20 +- .../llvm-overloaded-tests/vwsll.c | 60 ++--- .../overloaded-api-testing/vaesdf.c | 40 ++-- .../overloaded-api-testing/vaesdm.c | 40 ++-- .../overloaded-api-testing/vaesef.c | 40 ++-- .../overloaded-api-testing/vaesem.c | 40 ++-- .../overloaded-api-testing/vaeskf2.c | 20 +- .../overloaded-api-testing/vwsll.c | 60 ++--- .../policy_funcs/api-testing/vaeskf2.c | 20 +- .../policy_funcs/api-testing/vwsll.c | 210 +++++++++--------- .../policy_funcs/llvm-api-tests/vaeskf2.c | 20 +- .../policy_funcs/llvm-api-tests/vwsll.c | 210 +++++++++--------- .../llvm-overloaded-tests/vaesdf.c | 40 ++-- .../llvm-overloaded-tests/vaesdm.c | 40 ++-- .../llvm-overloaded-tests/vaesef.c | 40 ++-- .../llvm-overloaded-tests/vaesem.c | 40 ++-- .../llvm-overloaded-tests/vaeskf2.c | 20 +- .../llvm-overloaded-tests/vwsll.c | 210 +++++++++--------- .../overloaded-api-testing/vaesdf.c | 40 ++-- .../overloaded-api-testing/vaesdm.c | 40 ++-- .../overloaded-api-testing/vaesef.c | 40 ++-- .../overloaded-api-testing/vaesem.c | 40 ++-- .../overloaded-api-testing/vaeskf2.c | 20 +- .../overloaded-api-testing/vwsll.c | 210 +++++++++--------- 32 files changed, 940 insertions(+), 940 deletions(-) diff --git a/auto-generated/vector-crypto/api-testing/vaeskf2.c b/auto-generated/vector-crypto/api-testing/vaeskf2.c index 50cf20d1b..7509d6775 100644 --- a/auto-generated/vector-crypto/api-testing/vaeskf2.c +++ b/auto-generated/vector-crypto/api-testing/vaeskf2.c @@ -4,23 +4,23 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32mf2(vs2, 0, vl); +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32mf2(vd, vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m1(vs2, 0, vl); +vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m1(vd, vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m2(vs2, 0, vl); +vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m2(vd, vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m4(vs2, 0, vl); +vuint32m4_t 
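The `vaeskf2` updates above add the destination operand `vd` as an explicit input, which matches how the AES-256 forward key schedule consumes it: as I read the `Zvkned` specification, `vs2` supplies the previous round key and `vd` the round key from two rounds back. A hedged sketch of one expansion step, assuming `Zvkned` (the round number 2 and the helper name are illustrative only):

``` C
#include <riscv_vector.h>

// Produce round key i from round keys i-1 (vs2) and i-2 (vd); the
// immediate selects the round and thereby its round constant.
vuint32m1_t next_round_key(vuint32m1_t rk_two_back, vuint32m1_t rk_prev,
                           size_t vl) {
  return __riscv_vaeskf2_vi_u32m1(rk_two_back, rk_prev, 2, vl);
}
```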
test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m4(vd, vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m8(vs2, 0, vl); +vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m8(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vwsll.c b/auto-generated/vector-crypto/api-testing/vwsll.c index a36e5a3c6..270591974 100644 --- a/auto-generated/vector-crypto/api-testing/vwsll.c +++ b/auto-generated/vector-crypto/api-testing/vwsll.c @@ -8,7 +8,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf4(vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4(vs2, rs1, vl); } @@ -16,7 +16,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf2(vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2(vs2, rs1, vl); } @@ -24,7 +24,7 @@ vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m1(vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1(vs2, rs1, vl); } @@ -32,7 +32,7 @@ vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2(vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2(vs2, rs1, vl); } @@ -40,7 +40,7 @@ vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4(vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4(vs2, rs1, vl); } @@ -48,7 +48,7 @@ vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8(vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8(vs2, rs1, vl); } @@ -56,7 +56,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) return __riscv_vwsll_vv_u32mf2(vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2(vs2, rs1, vl); } @@ -64,7 +64,7 @@ vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m1(vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1(vs2, rs1, vl); } @@ -72,7 +72,7 @@ vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m2(vs2, vs1, vl); } -vuint32m2_t 
test_vwsll_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2(vs2, rs1, vl); } @@ -80,7 +80,7 @@ vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m4(vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4(vs2, rs1, vl); } @@ -88,7 +88,7 @@ vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m8(vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8(vs2, rs1, vl); } @@ -96,7 +96,7 @@ vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m1(vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1(vs2, rs1, vl); } @@ -104,7 +104,7 @@ vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m2(vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2(vs2, rs1, vl); } @@ -112,7 +112,7 @@ vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m4(vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4(vs2, rs1, vl); } @@ -120,7 +120,7 @@ vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m8(vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8(vs2, rs1, vl); } @@ -128,7 +128,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t return __riscv_vwsll_vv_u16mf4_m(mask, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_m(mask, vs2, rs1, vl); } @@ -136,7 +136,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t return __riscv_vwsll_vv_u16mf2_m(mask, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_m(mask, vs2, rs1, vl); } @@ -144,7 +144,7 @@ vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t v return __riscv_vwsll_vv_u16m1_m(mask, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_m(mask, vs2, rs1, vl); } @@ -152,7 +152,7 @@ vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, return 
__riscv_vwsll_vv_u16m2_m(mask, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_m(mask, vs2, rs1, vl); } @@ -160,7 +160,7 @@ vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, return __riscv_vwsll_vv_u16m4_m(mask, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_m(mask, vs2, rs1, vl); } @@ -168,7 +168,7 @@ vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, return __riscv_vwsll_vv_u16m8_m(mask, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_m(mask, vs2, rs1, vl); } @@ -176,7 +176,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4 return __riscv_vwsll_vv_u32mf2_m(mask, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_m(mask, vs2, rs1, vl); } @@ -184,7 +184,7 @@ vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t return __riscv_vwsll_vv_u32m1_m(mask, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_m(mask, vs2, rs1, vl); } @@ -192,7 +192,7 @@ vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t v return __riscv_vwsll_vv_u32m2_m(mask, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_m(mask, vs2, rs1, vl); } @@ -200,7 +200,7 @@ vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs return __riscv_vwsll_vv_u32m4_m(mask, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_m(mask, vs2, rs1, vl); } @@ -208,7 +208,7 @@ vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs return __riscv_vwsll_vv_u32m8_m(mask, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_m(mask, vs2, rs1, vl); } @@ -216,7 +216,7 @@ vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t return __riscv_vwsll_vv_u64m1_m(mask, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_m(mask, vs2, rs1, vl); } @@ -224,7 +224,7 @@ vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, 
vuint32m1_t v return __riscv_vwsll_vv_u64m2_m(mask, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_m(mask, vs2, rs1, vl); } @@ -232,7 +232,7 @@ vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t v return __riscv_vwsll_vv_u64m4_m(mask, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_m(mask, vs2, rs1, vl); } @@ -240,7 +240,7 @@ vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs return __riscv_vwsll_vv_u64m8_m(mask, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_m(mask, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c index 036a12b52..fee669e56 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32mf2(vs2, 0, vl); +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32mf2(vd, vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m1(vs2, 0, vl); +vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m1(vd, vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m2(vs2, 0, vl); +vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m2(vd, vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m4(vs2, 0, vl); +vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m4(vd, vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m8(vs2, 0, vl); +vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m8(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vwsll.c b/auto-generated/vector-crypto/llvm-api-tests/vwsll.c index ca3fdaa23..70212a2c4 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vwsll.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vwsll.c @@ -9,7 +9,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf4(vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4(vs2, rs1, vl); } @@ -17,7 +17,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf2(vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, 
size_t vl) { return __riscv_vwsll_vx_u16mf2(vs2, rs1, vl); } @@ -25,7 +25,7 @@ vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m1(vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1(vs2, rs1, vl); } @@ -33,7 +33,7 @@ vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2(vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2(vs2, rs1, vl); } @@ -41,7 +41,7 @@ vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4(vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4(vs2, rs1, vl); } @@ -49,7 +49,7 @@ vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8(vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8(vs2, rs1, vl); } @@ -57,7 +57,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) return __riscv_vwsll_vv_u32mf2(vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2(vs2, rs1, vl); } @@ -65,7 +65,7 @@ vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m1(vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1(vs2, rs1, vl); } @@ -73,7 +73,7 @@ vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m2(vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2(vs2, rs1, vl); } @@ -81,7 +81,7 @@ vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m4(vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4(vs2, rs1, vl); } @@ -89,7 +89,7 @@ vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m8(vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8(vs2, rs1, vl); } @@ -97,7 +97,7 @@ vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m1(vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1(vs2, rs1, vl); } @@ -105,7 +105,7 @@ vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, 
vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m2(vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2(vs2, rs1, vl); } @@ -113,7 +113,7 @@ vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m4(vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4(vs2, rs1, vl); } @@ -121,7 +121,7 @@ vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m8(vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8(vs2, rs1, vl); } @@ -129,7 +129,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t return __riscv_vwsll_vv_u16mf4_m(mask, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_m(mask, vs2, rs1, vl); } @@ -137,7 +137,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t return __riscv_vwsll_vv_u16mf2_m(mask, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_m(mask, vs2, rs1, vl); } @@ -145,7 +145,7 @@ vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t v return __riscv_vwsll_vv_u16m1_m(mask, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_m(mask, vs2, rs1, vl); } @@ -153,7 +153,7 @@ vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, return __riscv_vwsll_vv_u16m2_m(mask, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_m(mask, vs2, rs1, vl); } @@ -161,7 +161,7 @@ vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, return __riscv_vwsll_vv_u16m4_m(mask, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_m(mask, vs2, rs1, vl); } @@ -169,7 +169,7 @@ vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, return __riscv_vwsll_vv_u16m8_m(mask, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_m(mask, vs2, rs1, vl); } @@ -177,7 +177,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4 return __riscv_vwsll_vv_u32mf2_m(mask, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t 
mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_m(mask, vs2, rs1, vl); } @@ -185,7 +185,7 @@ vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t return __riscv_vwsll_vv_u32m1_m(mask, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_m(mask, vs2, rs1, vl); } @@ -193,7 +193,7 @@ vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t v return __riscv_vwsll_vv_u32m2_m(mask, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_m(mask, vs2, rs1, vl); } @@ -201,7 +201,7 @@ vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs return __riscv_vwsll_vv_u32m4_m(mask, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_m(mask, vs2, rs1, vl); } @@ -209,7 +209,7 @@ vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs return __riscv_vwsll_vv_u32m8_m(mask, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_m(mask, vs2, rs1, vl); } @@ -217,7 +217,7 @@ vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t return __riscv_vwsll_vv_u64m1_m(mask, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_m(mask, vs2, rs1, vl); } @@ -225,7 +225,7 @@ vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t v return __riscv_vwsll_vv_u64m2_m(mask, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_m(mask, vs2, rs1, vl); } @@ -233,7 +233,7 @@ vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t v return __riscv_vwsll_vv_u64m4_m(mask, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_m(mask, vs2, rs1, vl); } @@ -241,7 +241,7 @@ vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs return __riscv_vwsll_vv_u64m8_m(mask, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_m(mask, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c index b7bd2a7b8..7126fd3d3 100644 --- 
a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c @@ -6,82 +6,82 @@ #include vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vv(vd, vs2, vl); } vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vv(vd, vs2, vl); } vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vv(vd, vs2, vl); } vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vv(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vv(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf(vd, vs2, vl); + return __riscv_vaesdf_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c index c23154f3a..6754c6e31 100644 --- 
a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c @@ -6,82 +6,82 @@ #include vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vv(vd, vs2, vl); } vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vv(vd, vs2, vl); } vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vv(vd, vs2, vl); } vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vv(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vv(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm(vd, vs2, vl); + return __riscv_vaesdm_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c index fe2d7fee1..076dadfed 100644 --- 
a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c @@ -6,82 +6,82 @@ #include vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vv(vd, vs2, vl); } vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vv(vd, vs2, vl); } vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vv(vd, vs2, vl); } vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vv(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vv(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef(vd, vs2, vl); + return __riscv_vaesef_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c index abedf1d40..cd8da8835 100644 --- 
a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c @@ -6,82 +6,82 @@ #include vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vv(vd, vs2, vl); } vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vv(vd, vs2, vl); } vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vv(vd, vs2, vl); } vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vv(vd, vs2, vl); } vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vv(vd, vs2, vl); } vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesem(vd, vs2, vl); + return __riscv_vaesem_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c index 544e99fef..40dee84d2 100644 --- 
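The four overloaded-test updates above (`vaesdf`, `vaesdm`, `vaesef`, `vaesem`) all make the same change: the suffix-free overloads are replaced by explicit `_vv`/`_vs` spellings. The reason is visible in the prototype listings earlier in this patch: at equal LMUL the `.vv` and `.vs` forms take identical C argument types (for example, both `test_vaesem_vv_u32m1` and `test_vaesem_vs_u32m1` pass two `vuint32m1_t` values plus `vl`), so a single overloaded name cannot resolve between them. A small sketch of the disambiguated call, assuming `Zvkned` (names are illustrative):

``` C
#include <riscv_vector.h>

// One AES middle-round encryption step: vd holds the current state,
// vs2 the round key; the _vv form applies each round-key element group
// to its corresponding state group.
vuint32m1_t aes_middle_round(vuint32m1_t state, vuint32m1_t round_key,
                             size_t vl) {
  return __riscv_vaesem_vv(state, round_key, vl);
}
```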
a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c @@ -5,23 +5,23 @@ #include -vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { - return __riscv_vaeskf2(vs2, 0, vl); +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaeskf2(vd, vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t vl) { - return __riscv_vaeskf2(vs2, 0, vl); +vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaeskf2(vd, vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t vl) { - return __riscv_vaeskf2(vs2, 0, vl); +vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaeskf2(vd, vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t vl) { - return __riscv_vaeskf2(vs2, 0, vl); +vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaeskf2(vd, vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t vl) { - return __riscv_vaeskf2(vs2, 0, vl); +vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaeskf2(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c index f739b1cd3..ab2309ab1 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c @@ -9,7 +9,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll(vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll(vs2, rs1, vl); } @@ -17,7 +17,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll(vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll(vs2, rs1, vl); } @@ -25,7 +25,7 @@ vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll(vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll(vs2, rs1, vl); } @@ -33,7 +33,7 @@ vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll(vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll(vs2, rs1, vl); } @@ -41,7 +41,7 @@ vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll(vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll(vs2, rs1, vl); } @@ -49,7 +49,7 @@ vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll(vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll(vs2, rs1, vl); } @@ -57,7 
@@ -57,7 +57,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl)
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vwsll_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -65,7 +65,7 @@ vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint32m1_t test_vwsll_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+vuint32m1_t test_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -73,7 +73,7 @@ vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint32m2_t test_vwsll_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -81,7 +81,7 @@ vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint32m4_t test_vwsll_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -89,7 +89,7 @@ vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint32m8_t test_vwsll_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -97,7 +97,7 @@ vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint64m1_t test_vwsll_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -105,7 +105,7 @@ vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint64m2_t test_vwsll_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -113,7 +113,7 @@ vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint64m4_t test_vwsll_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -121,7 +121,7 @@ vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint64m8_t test_vwsll_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -129,7 +129,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -137,7 +137,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -145,7 +145,7 @@ vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t v
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -153,7 +153,7 @@ vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1,
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -161,7 +161,7 @@ vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1,
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -169,7 +169,7 @@ vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1,
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -177,7 +177,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -185,7 +185,7 @@ vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -193,7 +193,7 @@ vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t v
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -201,7 +201,7 @@ vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -209,7 +209,7 @@ vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
@@ -217,7 +217,7 @@ vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -225,7 +225,7 @@ vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t v
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -233,7 +233,7 @@ vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t v
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -241,7 +241,7 @@ vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c
index 5dfd28986..a0bf2bc63 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c
@@ -5,82 +5,82 @@ typedef _Float16 float16_t;
 typedef float float32_t;
 typedef double float64_t;
 
 vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vv(vd, vs2, vl);
 }
 
 vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vv(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vv(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vv(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vv(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesdf(vd, vs2, vl);
+  return __riscv_vaesdf_vs(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c
index 6a427cc9a..bb7253273 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c
@@ -5,82 +5,82 @@ typedef _Float16 float16_t;
 typedef float float32_t;
 typedef double float64_t;
 
 vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vv(vd, vs2, vl);
 }
 
 vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vv(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vv(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vv(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vv(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesdm(vd, vs2, vl);
+  return __riscv_vaesdm_vs(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c
index dca8acbbc..df69a7db4 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c
@@ -5,82 +5,82 @@ typedef _Float16 float16_t;
 typedef float float32_t;
 typedef double float64_t;
 
 vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vv(vd, vs2, vl);
 }
 
 vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vv(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vv(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vv(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vv(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesef(vd, vs2, vl);
+  return __riscv_vaesef_vs(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c
index 17d8de48b..89631199e 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c
@@ -5,82 +5,82 @@ typedef _Float16 float16_t;
 typedef float float32_t;
 typedef double float64_t;
 
 vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vv(vd, vs2, vl);
 }
 
 vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vv(vd, vs2, vl);
 }
 
 vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vv(vd, vs2, vl);
 }
 
 vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vv(vd, vs2, vl);
 }
 
 vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vv(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesem(vd, vs2, vl);
+  return __riscv_vaesem_vs(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c
index 660b0ba39..94ff06c1a 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c
@@ -4,23 +4,23 @@ typedef _Float16 float16_t;
 typedef float float32_t;
 typedef double float64_t;
 
-vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaeskf2(vs2, 0, vl);
+vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
 }
 
-vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaeskf2(vs2, 0, vl);
+vuint32m1_t test_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
 }
 
-vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaeskf2(vs2, 0, vl);
+vuint32m2_t test_vaeskf2_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
 }
 
-vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaeskf2(vs2, 0, vl);
+vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
 }
 
-vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaeskf2(vs2, 0, vl);
+vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf2(vd, vs2, 0, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c b/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c
index c0e0521ff..f90328a94 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c
@@ -8,7 +8,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vwsll_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -16,7 +16,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vwsll_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -24,7 +24,7 @@ vuint16m1_t test_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint16m1_t test_vwsll_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -32,7 +32,7 @@ vuint16m2_t test_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint16m2_t test_vwsll_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -40,7 +40,7 @@ vuint16m4_t test_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint16m4_t test_vwsll_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -48,7 +48,7 @@ vuint16m8_t test_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint16m8_t test_vwsll_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) {
+vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -56,7 +56,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl)
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vwsll_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -64,7 +64,7 @@ vuint32m1_t test_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint32m1_t test_vwsll_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl) {
+vuint32m1_t test_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -72,7 +72,7 @@ vuint32m2_t test_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint32m2_t test_vwsll_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -80,7 +80,7 @@ vuint32m4_t test_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint32m4_t test_vwsll_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -88,7 +88,7 @@ vuint32m8_t test_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint32m8_t test_vwsll_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -96,7 +96,7 @@ vuint64m1_t test_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint64m1_t test_vwsll_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -104,7 +104,7 @@ vuint64m2_t test_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint64m2_t test_vwsll_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -112,7 +112,7 @@ vuint64m4_t test_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint64m4_t test_vwsll_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -120,7 +120,7 @@ vuint64m8_t test_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
 
-vuint64m8_t test_vwsll_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }
 
@@ -128,7 +128,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -136,7 +136,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -144,7 +144,7 @@ vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t v
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -152,7 +152,7 @@ vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1,
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -160,7 +160,7 @@ vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1,
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -168,7 +168,7 @@ vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1,
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
+vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -176,7 +176,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -184,7 +184,7 @@ vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
+vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -192,7 +192,7 @@ vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t v
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -200,7 +200,7 @@ vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -208,7 +208,7 @@ vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -216,7 +216,7 @@ vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -224,7 +224,7 @@ vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t v
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -232,7 +232,7 @@ vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t v
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
 
@@ -240,7 +240,7 @@ vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs
   return __riscv_vwsll(mask, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(mask, vs2, rs1, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c
index 3fcb9e9b4..2451093d1 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c
@@ -4,23 +4,23 @@ typedef _Float16 float16_t;
 typedef float float32_t;
 typedef double float64_t;
 
-vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaeskf2_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32mf2_tu(vd, vs2, 0, vl);
 }
 
-vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaeskf2_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m1_tu(vd, vs2, 0, vl);
 }
 
-vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaeskf2_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m2_tu(vd, vs2, 0, vl);
 }
 
-vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaeskf2_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m4_tu(vd, vs2, 0, vl);
 }
 
-vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaeskf2_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf2_vi_u32m8_tu(vd, vs2, 0, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c
index a99acee03..ca8992376 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c
@@ -8,7 +8,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vu
   return __riscv_vwsll_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -16,7 +16,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vu
   return __riscv_vwsll_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -24,7 +24,7 @@ vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint
   return __riscv_vwsll_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -32,7 +32,7 @@ vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8
   return __riscv_vwsll_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -40,7 +40,7 @@ vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8
   return __riscv_vwsll_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -48,7 +48,7 @@ vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8
   return __riscv_vwsll_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -56,7 +56,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, v
   return __riscv_vwsll_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -64,7 +64,7 @@ vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuin
   return __riscv_vwsll_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -72,7 +72,7 @@ vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint
   return __riscv_vwsll_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -80,7 +80,7 @@ vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint
   return __riscv_vwsll_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -88,7 +88,7 @@ vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint
   return __riscv_vwsll_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -96,7 +96,7 @@ vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuin
   return __riscv_vwsll_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -104,7 +104,7 @@ vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint
   return __riscv_vwsll_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -112,7 +112,7 @@ vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint
   return __riscv_vwsll_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
 }
 
@@ -120,367 +120,367 @@ vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint
   return __riscv_vwsll_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
 }
 
-vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
  return __riscv_vwsll_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
+vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
+vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
   return __riscv_vwsll_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
 }
 
-vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
{ +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) 
{ return __riscv_vwsll_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { 
+vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); } 
-vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c index 448191a2f..16b8eaa79 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c @@ -5,23 +5,23 @@ #include <riscv_vector.h> -vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32mf2_tu(maskedoff, vs2, 0, vl); +vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32mf2_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m1_tu(maskedoff, vs2, 0, vl); +vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m1_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m2_tu(maskedoff, vs2, 0, vl); +vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m2_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m4_tu(maskedoff, vs2, 0, vl); +vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m4_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vaeskf2_vi_u32m8_tu(maskedoff, vs2, 0, vl); +vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaeskf2_vi_u32m8_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c index 21b1bc7e8..f9415d873 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c @@ -9,7 +9,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vu return __riscv_vwsll_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); } @@ -17,7 +17,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t 
maskedoff, vuint8mf4_t vs2, vu return __riscv_vwsll_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); } @@ -25,7 +25,7 @@ vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint return __riscv_vwsll_vv_u16m1_tu(maskedoff, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_tu(maskedoff, vs2, rs1, vl); } @@ -33,7 +33,7 @@ vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8 return __riscv_vwsll_vv_u16m2_tu(maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_tu(maskedoff, vs2, rs1, vl); } @@ -41,7 +41,7 @@ vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8 return __riscv_vwsll_vv_u16m4_tu(maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_tu(maskedoff, vs2, rs1, vl); } @@ -49,7 +49,7 @@ vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8 return __riscv_vwsll_vv_u16m8_tu(maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_tu(maskedoff, vs2, rs1, vl); } @@ -57,7 +57,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, v return __riscv_vwsll_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); } @@ -65,7 +65,7 @@ vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuin return __riscv_vwsll_vv_u32m1_tu(maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_tu(maskedoff, vs2, rs1, vl); } @@ -73,7 +73,7 @@ vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint return __riscv_vwsll_vv_u32m2_tu(maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_tu(maskedoff, vs2, rs1, vl); } @@ -81,7 +81,7 @@ vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint return __riscv_vwsll_vv_u32m4_tu(maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, 
vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_tu(maskedoff, vs2, rs1, vl); } @@ -89,7 +89,7 @@ vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint return __riscv_vwsll_vv_u32m8_tu(maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_tu(maskedoff, vs2, rs1, vl); } @@ -97,7 +97,7 @@ vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuin return __riscv_vwsll_vv_u64m1_tu(maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_tu(maskedoff, vs2, rs1, vl); } @@ -105,7 +105,7 @@ vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint return __riscv_vwsll_vv_u64m2_tu(maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_tu(maskedoff, vs2, rs1, vl); } @@ -113,7 +113,7 @@ vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint return __riscv_vwsll_vv_u64m4_tu(maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_tu(maskedoff, vs2, rs1, vl); } @@ -121,367 +121,367 @@ vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint return __riscv_vwsll_vv_u64m8_tu(maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_tu(maskedoff, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { return 
__riscv_vwsll_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, 
size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return 
__riscv_vwsll_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, 
vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t 
mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf4_mu(mask, 
maskedoff, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t 
test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); } 
-vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c index b15017b19..765042dee 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c @@ -6,82 +6,82 @@ #include <riscv_vector.h> vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vv_tu(vd, vs2, vl); } vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vv_tu(vd, vs2, vl); } vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return 
__riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vv_tu(vd, vs2, vl); } vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vv_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vv_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c index b9933247b..0b7cc7547 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c @@ -6,82 +6,82 @@ #include <riscv_vector.h> vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vv_tu(vd, vs2, vl); } vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - 
return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vv_tu(vd, vs2, vl); } vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vv_tu(vd, vs2, vl); } vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vv_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vv_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c index a0c60bc29..236015c96 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c @@ -6,82 +6,82 @@ #include <riscv_vector.h> vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vv_tu(vd, vs2, vl); } vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m8_t 
test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vv_tu(vd, vs2, vl); } vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vv_tu(vd, vs2, vl); } vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vv_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vv_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c index 1cd5624ca..866bf2914 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c @@ -6,82 +6,82 @@ #include <riscv_vector.h> vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem_tu(vd, vs2, vl); + return __riscv_vaesem_vv_tu(vd, vs2, vl); } vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem_tu(vd, vs2, vl); + return __riscv_vaesem_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem_tu(vd, vs2, vl); + return __riscv_vaesem_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_tu(vd, vs2, vl); + return 
__riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
}

vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
}

vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
}

vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c
index 605ff4a98..9526a18de 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c
@@ -5,23 +5,23 @@
#include <riscv_vector.h>

-vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
- return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
}

-vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
- return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
}

-vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
}

-vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
}

-vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
- return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
}
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c
index 6f9409182..dbe38e613 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c
@@ -9,7 +9,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vu
return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
}

-vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
}
@@ -17,7 +17,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vu
return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
}

-vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
}
@@ -25,7 +25,7 @@ vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint
return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
}

-vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
}
@@ -33,7 +33,7 @@ vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8
return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
}

-vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
}
@@ -41,7 +41,7 @@ vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8
return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
}

-vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
}
@@ -49,7 +49,7 @@ vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8
return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
}

-vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff,
vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -57,7 +57,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, v return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -65,7 +65,7 @@ vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuin return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -73,7 +73,7 @@ vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -81,7 +81,7 @@ vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -89,7 +89,7 @@ vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -97,7 +97,7 @@ vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuin return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -105,7 +105,7 @@ vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -113,7 +113,7 @@ vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -121,367 +121,367 @@ vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint 
return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { 
return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, 
vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, 
vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m2_t 
test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t 
vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t 
vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t 
maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c index 4e39e8e29..51819499f 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c @@ -5,82 +5,82 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vv_tu(vd, vs2, vl); } vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return 
__riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vv_tu(vd, vs2, vl); } vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vv_tu(vd, vs2, vl); } vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vv_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vv_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_tu(vd, vs2, vl); + return __riscv_vaesdf_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c index 2e1a59bb6..ae17f9b58 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c @@ -5,82 +5,82 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vv_tu(vd, vs2, vl); } vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return 
__riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vv_tu(vd, vs2, vl); } vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vv_tu(vd, vs2, vl); } vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vv_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vv_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_tu(vd, vs2, vl); + return __riscv_vaesdm_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c index 849fc43e6..a46ede689 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c @@ -5,82 +5,82 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; vuint32mf2_t 
test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vv_tu(vd, vs2, vl); } vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vv_tu(vd, vs2, vl); } vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vv_tu(vd, vs2, vl); } vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vv_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vv_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_tu(vd, vs2, vl); + return __riscv_vaesef_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c index 
ff158a365..a0930f52b 100644
--- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c
@@ -5,82 +5,82 @@
typedef _Float16 float16_t;
typedef float float32_t;
typedef double float64_t;
vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
}

vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
}

vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
}

vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
}

vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vv_tu(vd, vs2, vl);
}

vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
- return __riscv_vaesem_tu(vd, vs2, vl);
+ return __riscv_vaesem_vs_tu(vd, vs2, vl);
}
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c
index c43fbbb09..ef989068e 100644
--- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c
@@ -4,23 +4,23 @@
typedef _Float16 float16_t;
typedef float float32_t;
typedef double float64_t;

-vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
- return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
}

-vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
- return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
}

-vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
- return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
}

-vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
- return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
}

-vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
- return __riscv_vaeskf2_tu(maskedoff, vs2, 0, vl);
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+ return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
}
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c
index 76d9f8828..e316013f9 100644
--- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c
@@ -8,7 +8,7 @@ vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vu
return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
}

-vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
}
@@ -16,7 +16,7 @@ vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vu
return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
}

-vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
}
@@ -24,7 +24,7 @@ vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint
return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl);
}

-vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl);
}
@@ -32,7 +32,7 @@
vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8 return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -40,7 +40,7 @@ vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8 return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -48,7 +48,7 @@ vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8 return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -56,7 +56,7 @@ vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, v return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -64,7 +64,7 @@ vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuin return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -72,7 +72,7 @@ vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -80,7 +80,7 @@ vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -88,7 +88,7 @@ vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -96,7 +96,7 @@ vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuin return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t 
vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -104,7 +104,7 @@ vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -112,7 +112,7 @@ vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } @@ -120,367 +120,367 @@ vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { 
+vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t 
maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf4_t 
test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t 
vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t 
test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint8m1_t 
maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, 
vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
 }

-vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
 }

-vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
 }

-vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
 }

-vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
 }

-vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
 }

-vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
 }

-vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
 }

-vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
 }

-vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
 }

-vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl);
 }

-vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl);
 }

From 90f4ed0b961f7fb6af6c044f444081be66501e43 Mon Sep 17 00:00:00 2001
From: eopXD
Date: Tue, 1 Aug 2023 18:33:26 -0700
Subject: [PATCH 072/151] [vector-crypto] Append vs2 type in function name of
 vs variants of vaesef/vaesem/vaesdf/vaesdm

Signed-off-by: eop Chen
---
 .../templates/vector_crypto_template.py | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
index 766ab20f2..54f34edf4 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
@@ -156,12 +156,10 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list):
           f"v{args['TYPE']}{args['SEW']}m{lmul_list[i]}_t"
       kwargs["vd"] = f"v{args['TYPE']}{args['SEW']}m{lmul_list[i]}_t"
       kwargs["vs2"] = f"v{args['TYPE']}{args['SEW']}m{args['LMUL']}_t"
-      args["LMUL"] = lmul_list[i]
-      G.func(
-          inst_info,
-          name="{OP}_{MNEMONIC}_{TYPE}{SEW}m{LMUL}".format_map(args) +
-          decorator.func_suffix,
-          **kwargs)
+      func_name = "{OP}_{MNEMONIC}_".format_map(args) +\
+          f"{args['TYPE']}{args['SEW']}m{args['LMUL']}_" +\
+          f"{args['TYPE']}{args['SEW']}m{lmul_list[i]}"
+      G.func(inst_info, name=func_name + decorator.func_suffix, **kwargs)
     else:
       if op == "vwsll":
         G.func(

From f6fb07055432f22424f901065ccdbe1f142c26c1 Mon Sep 17 00:00:00 2001
From: eopXD
Date: Tue, 1 Aug 2023 18:34:18 -0700
Subject: [PATCH 073/151] [Auto-gen] Update documents under
 ../auto-generated/vector-crypto. (make git-commit-autogen-doc)

---
 .../vector-crypto/intrinsic_funcs.md          | 180 +++++++++---------
 ...d_-_nist_suite:_vector_aes_block_cipher.md | 150 +++++++--------
 ...vksed_-_shangmi_suite:_sm4_block_cipher.md |  30 +--
 .../overloaded_intrinsic_funcs.md             |  72 +++----
 ...d_-_nist_suite:_vector_aes_block_cipher.md |  60 +++---
 ...vksed_-_shangmi_suite:_sm4_block_cipher.md |  12 +-
 .../policy_funcs/intrinsic_funcs.md           | 180 +++++++++---------
 ...d_-_nist_suite:_vector_aes_block_cipher.md | 150 +++++++--------
 ...vksed_-_shangmi_suite:_sm4_block_cipher.md |  30 +--
 .../overloaded_intrinsic_funcs.md             |  72 +++----
 ...d_-_nist_suite:_vector_aes_block_cipher.md |  60 +++---
 ...vksed_-_shangmi_suite:_sm4_block_cipher.md |  12 +-
 12 files changed, 504 insertions(+), 504 deletions(-)

diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md
index 92c87a657..b5690f43c 100644
--- a/auto-generated/vector-crypto/intrinsic_funcs.md
+++ b/auto-generated/vector-crypto/intrinsic_funcs.md
@@ -646,45 +646,45 @@ vuint32m8_t __riscv_vgmul_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 **Prototypes:**
 ``` C
 vuint32mf2_t __riscv_vaesef_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesef_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesef_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
+vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m4_t
__riscv_vaesef_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8 
(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES Decryption](): @@ -692,45 +692,45 @@ vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl) **Prototypes:** ``` C vuint32mf2_t __riscv_vaesdf_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8 (vuint32m8_t vd, 
vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); 
+vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES-128 Forward KeySchedule generation](): @@ -753,21 +753,21 @@ vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t ui **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesz_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ## Zvknh - NIST Suite: Vector 
SHA-2 Secure Hash: @@ -829,25 +829,25 @@ vuint32m8_t __riscv_vsm4k_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); **Prototypes:** ``` C vuint32mf2_t __riscv_vsm4r_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vsm4r_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ## Zvksh - ShangMi Suite: SM3 Secure Hash: diff --git a/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index d4b9bff68..5a9f440a2 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ 
b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -6,45 +6,45 @@ **Prototypes:** ``` C vuint32mf2_t __riscv_vaesef_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, 
vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES Decryption](): @@ -52,45 +52,45 @@ vuint32m8_t __riscv_vaesem_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl) **Prototypes:** ``` C vuint32mf2_t __riscv_vaesdf_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t 
vl); +vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t 
__riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES-128 Forward KeySchedule generation](): @@ -113,19 +113,19 @@ vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t ui **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesz_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8 (vuint32m8_t vd, 
vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git a/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md index e2991d231..ad5aeec27 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md +++ b/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md @@ -17,23 +17,23 @@ vuint32m8_t __riscv_vsm4k_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); **Prototypes:** ``` C vuint32mf2_t __riscv_vsm4r_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vsm4r_vs_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2 
(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md index 95d6da89f..6906c44bd 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md @@ -648,18 +648,18 @@ vuint32m8_t __riscv_vgmul (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -668,18 +668,18 @@ vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); 
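// Illustrative sketch, not part of the generated listing: the .vs forms
// replicate the single round-key element group in vs2 across every
// element group of vd, so vs2 keeps the LMUL that holds the key group
// while vd scales independently. The surrounding deleted overloads still
// scaled vs2 together with vd; the added ones keep vs2 fixed. Assuming a
// hypothetical vuint32m1_t round key rk and vuint32m4_t state s:
//
//   s = __riscv_vaesem_vs(s, rk, vl);  // one middle round, key group reused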
-vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -694,18 +694,18 @@ vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -714,18 +714,18 @@ vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, 
vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -755,16 +755,16 @@ vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_ ``` C vuint32mf2_t __riscv_vaesz (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); @@ -831,18 +831,18 @@ vuint32m8_t 
__riscv_vsm4k (vuint32m8_t vs2, size_t uimm, size_t vl); vuint32mf2_t __riscv_vsm4r_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index 53179ca9f..b750c129f 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -8,18 +8,18 @@ vuint32mf2_t __riscv_vaesef_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); 
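// Illustrative sketch, not part of the generated listing: a complete
// AES-128 encryption in terms of the corrected overloads, assuming
// hypothetical expanded round keys rk[0]..rk[10] held in vuint32m1_t
// registers and four blocks of state packed into a vuint32m4_t:
//
//   vuint32m4_t aes128_encrypt(vuint32m4_t s, const vuint32m1_t rk[11],
//                              size_t vl) {
//     s = __riscv_vaesz(s, rk[0], vl);          // round 0: AddRoundKey
//     for (int r = 1; r <= 9; ++r)              // rounds 1..9
//       s = __riscv_vaesem_vs(s, rk[r], vl);
//     return __riscv_vaesef_vs(s, rk[10], vl);  // final round, no MixColumns
//   }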
+vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -28,18 +28,18 @@ vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -54,18 +54,18 @@ vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, 
vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -74,18 +74,18 @@ vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -115,16 +115,16 @@ vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_ ``` C vuint32mf2_t __riscv_vaesz (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t 
vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md index cd2448263..8b9eb1b2a 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md @@ -19,18 +19,18 @@ vuint32m8_t __riscv_vsm4k (vuint32m8_t vs2, size_t uimm, size_t vl); vuint32mf2_t __riscv_vsm4r_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md 
b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md index f5d8ad3df..84544a85a 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -1052,45 +1052,45 @@ vuint32m8_t __riscv_vgmul_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t v **Prototypes:** ``` C vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, 
size_t vl); -vuint32mf2_t __riscv_vaesem_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES Decryption](): @@ -1098,45 +1098,45 @@ vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t **Prototypes:** ``` C vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); 
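// Illustrative note, not part of the generated listing: in the corrected
// names that follow, the double suffix encodes <vs2 type>_<vd type>, and
// _tu requests tail-undisturbed behavior (tail elements of the result are
// taken from vd). Assuming a hypothetical vuint32m1_t round key rk and
// vuint32m4_t state s, a middle decryption round reads:
//
//   s = __riscv_vaesdm_vs_u32m1_u32m4_tu(s, rk, vl);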
-vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2_tu (vuint32m2_t vd, 
vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES-128 Forward KeySchedule generation](): @@ -1159,21 +1159,21 @@ vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesz_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); 
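// Illustrative note, not part of the generated listing: vaesz is provided
// only in a .vs form, since it simply XORs the one round-key element
// group in vs2 into every element group of vd (the AddRoundKey step of
// round zero). Assuming a hypothetical vuint32m1_t key rk0 and
// vuint32m4_t state s:
//
//   s = __riscv_vaesz_vs_u32m1_u32m4_tu(s, rk0, vl);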
-vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: @@ -1235,25 +1235,25 @@ vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, s **Prototypes:** ``` C vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vsm4r_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8_tu (vuint32m8_t 
vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ## Zvksh - ShangMi Suite: SM3 Secure Hash: diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index 32e41b5ce..978cdee59 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -6,45 +6,45 @@ **Prototypes:** ``` C vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t 
__riscv_vaesef_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); 
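// Illustrative note, not part of the generated listing: the deleted names
// here collide. For example, __riscv_vaesem_vs_u32m1_tu is declared above
// once with a vuint32mf2_t vs2 and once with a vuint32m1_t vs2. Spelling
// out both operand types, as in
//
//   vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd,
//                                                vuint32m1_t vs2, size_t vl);
//
// gives every non-overloaded prototype a unique name.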
-vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES Decryption](): @@ -52,45 +52,45 @@ vuint32m8_t __riscv_vaesem_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t **Prototypes:** ``` C vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); 
+vuint32m8_t __riscv_vaesdf_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t 
vl); +vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` ### [Vector AES-128 Forward KeySchedule generation](): @@ -113,19 +113,19 @@ vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t **Prototypes:** ``` C -vuint32mf2_t __riscv_vaesz_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md index 2de962e21..49419391a 100644 --- 
a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md @@ -17,23 +17,23 @@ vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, s **Prototypes:** ``` C vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vsm4r_vs_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md 
b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md index b84966340..a1769992b 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -1054,18 +1054,18 @@ vuint32m8_t __riscv_vgmul_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -1074,18 +1074,18 @@ vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t 
vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -1100,18 +1100,18 @@ vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -1120,18 +1120,18 @@ vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu 
(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -1161,16 +1161,16 @@ vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, si ``` C vuint32mf2_t __riscv_vaesz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); @@ -1237,18 +1237,18 @@ vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uim vuint32mf2_t __riscv_vsm4r_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t 
__riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md index 9a64c8e5d..6b16a5a48 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md @@ -8,18 +8,18 @@ vuint32mf2_t __riscv_vaesef_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, 
size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -28,18 +28,18 @@ vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -54,18 +54,18 @@ vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t 
vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -74,18 +74,18 @@ vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); @@ -115,16 +115,16 @@ vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, si ``` C vuint32mf2_t __riscv_vaesz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); 
vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md index 7487356cb..3129cb528 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md @@ -19,18 +19,18 @@ vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uim vuint32mf2_t __riscv_vsm4r_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); From 71b10b70317a8c3b56954b40a11de08d89fa0327 Mon Sep 
17 00:00:00 2001 From: eopXD Date: Tue, 1 Aug 2023 18:34:20 -0700 Subject: [PATCH 074/151] [Auto-gen] Update tests under ../auto-generated/vector-crypto. (make git-commit-autogen-test) --- .../vector-crypto/api-testing/vaesdf.c | 60 +++++++++---------- .../vector-crypto/api-testing/vaesdm.c | 60 +++++++++---------- .../vector-crypto/api-testing/vaesef.c | 60 +++++++++---------- .../vector-crypto/api-testing/vaesem.c | 60 +++++++++---------- .../vector-crypto/api-testing/vaesz.c | 60 +++++++++---------- .../vector-crypto/api-testing/vsm4r.c | 60 +++++++++---------- .../vector-crypto/llvm-api-tests/vaesdf.c | 60 +++++++++---------- .../vector-crypto/llvm-api-tests/vaesdm.c | 60 +++++++++---------- .../vector-crypto/llvm-api-tests/vaesef.c | 60 +++++++++---------- .../vector-crypto/llvm-api-tests/vaesem.c | 60 +++++++++---------- .../vector-crypto/llvm-api-tests/vaesz.c | 60 +++++++++---------- .../vector-crypto/llvm-api-tests/vsm4r.c | 60 +++++++++---------- .../llvm-overloaded-tests/vaesdf.c | 30 +++++----- .../llvm-overloaded-tests/vaesdm.c | 30 +++++----- .../llvm-overloaded-tests/vaesef.c | 30 +++++----- .../llvm-overloaded-tests/vaesem.c | 30 +++++----- .../llvm-overloaded-tests/vaesz.c | 30 +++++----- .../llvm-overloaded-tests/vsm4r.c | 30 +++++----- .../overloaded-api-testing/vaesdf.c | 30 +++++----- .../overloaded-api-testing/vaesdm.c | 30 +++++----- .../overloaded-api-testing/vaesef.c | 30 +++++----- .../overloaded-api-testing/vaesem.c | 30 +++++----- .../overloaded-api-testing/vaesz.c | 30 +++++----- .../overloaded-api-testing/vsm4r.c | 30 +++++----- .../policy_funcs/api-testing/vaesdf.c | 60 +++++++++---------- .../policy_funcs/api-testing/vaesdm.c | 60 +++++++++---------- .../policy_funcs/api-testing/vaesef.c | 60 +++++++++---------- .../policy_funcs/api-testing/vaesem.c | 60 +++++++++---------- .../policy_funcs/api-testing/vaesz.c | 60 +++++++++---------- .../policy_funcs/api-testing/vsm4r.c | 60 +++++++++---------- .../policy_funcs/llvm-api-tests/vaesdf.c | 60 +++++++++---------- .../policy_funcs/llvm-api-tests/vaesdm.c | 60 +++++++++---------- .../policy_funcs/llvm-api-tests/vaesef.c | 60 +++++++++---------- .../policy_funcs/llvm-api-tests/vaesem.c | 60 +++++++++---------- .../policy_funcs/llvm-api-tests/vaesz.c | 60 +++++++++---------- .../policy_funcs/llvm-api-tests/vsm4r.c | 60 +++++++++---------- .../llvm-overloaded-tests/vaesdf.c | 30 +++++----- .../llvm-overloaded-tests/vaesdm.c | 30 +++++----- .../llvm-overloaded-tests/vaesef.c | 30 +++++----- .../llvm-overloaded-tests/vaesem.c | 30 +++++----- .../llvm-overloaded-tests/vaesz.c | 30 +++++----- .../llvm-overloaded-tests/vsm4r.c | 30 +++++----- .../overloaded-api-testing/vaesdf.c | 30 +++++----- .../overloaded-api-testing/vaesdm.c | 30 +++++----- .../overloaded-api-testing/vaesef.c | 30 +++++----- .../overloaded-api-testing/vaesem.c | 30 +++++----- .../overloaded-api-testing/vaesz.c | 30 +++++----- .../overloaded-api-testing/vsm4r.c | 30 +++++----- 48 files changed, 1080 insertions(+), 1080 deletions(-) diff --git a/auto-generated/vector-crypto/api-testing/vaesdf.c b/auto-generated/vector-crypto/api-testing/vaesdf.c index ec6e8b067..fac9c44ee 100644 --- a/auto-generated/vector-crypto/api-testing/vaesdf.c +++ b/auto-generated/vector-crypto/api-testing/vaesdf.c @@ -8,79 +8,79 @@ vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesdf_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32mf2(vd, 
vs2, vl); +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m8(vd, vs2, vl); } vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m8(vd, vs2, vl); } vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m8(vd, vs2, vl); } vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, 
vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m8(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vaesdm.c b/auto-generated/vector-crypto/api-testing/vaesdm.c index dd7a8ab52..17261e874 100644 --- a/auto-generated/vector-crypto/api-testing/vaesdm.c +++ b/auto-generated/vector-crypto/api-testing/vaesdm.c @@ -8,79 +8,79 @@ vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesdm_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32mf2(vd, vs2, vl); +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m8(vd, vs2, vl); } vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return 
__riscv_vaesdm_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m8(vd, vs2, vl); } vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m8(vd, vs2, vl); } vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m8(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vaesef.c b/auto-generated/vector-crypto/api-testing/vaesef.c index 3e26be98e..683ac6669 100644 --- a/auto-generated/vector-crypto/api-testing/vaesef.c +++ b/auto-generated/vector-crypto/api-testing/vaesef.c @@ -8,79 +8,79 @@ vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesef_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32mf2(vd, vs2, vl); +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t 
test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m8(vd, vs2, vl); } vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m8(vd, vs2, vl); } vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m8(vd, vs2, vl); } vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m8(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return 
__riscv_vaesef_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vaesem.c b/auto-generated/vector-crypto/api-testing/vaesem.c index b47a15900..dc67813e1 100644 --- a/auto-generated/vector-crypto/api-testing/vaesem.c +++ b/auto-generated/vector-crypto/api-testing/vaesem.c @@ -8,79 +8,79 @@ vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesem_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32mf2(vd, vs2, vl); +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m8(vd, vs2, vl); } vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m8(vd, vs2, vl); } vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m2(vd, vs2, vl); } 
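/* Editorial aside, not part of the generated patch: a minimal sketch of how
 * the renamed vaesem `vs` intrinsics are meant to be read. The suffix now
 * spells out both the vs2 input group and the vd output group, so a single
 * mf2 round-key group can be applied across a larger m8 state group. Assumes
 * <riscv_vector.h> and a toolchain with Zvkned support; `state` and
 * `round_key` are hypothetical caller-provided values. */
static inline vuint32m8_t
aes_middle_round_all_groups(vuint32m8_t state, vuint32mf2_t round_key, size_t vl) {
  // vd = state (EMUL=8), vs2 = round_key (EMUL=1/2): the one round key held
  // in vs2 is applied to every element group of vd, matching the
  // __riscv_vaesem_vs_u32mf2_u32m8 prototype listed earlier in this patch.
  return __riscv_vaesem_vs_u32mf2_u32m8(state, round_key, vl);
}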
-vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m8(vd, vs2, vl); } vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m8(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vaesz.c b/auto-generated/vector-crypto/api-testing/vaesz.c index cc4349b45..e5137944a 100644 --- a/auto-generated/vector-crypto/api-testing/vaesz.c +++ b/auto-generated/vector-crypto/api-testing/vaesz.c @@ -4,63 +4,63 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32mf2(vd, vs2, vl); +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m8(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m1(vd, vs2, vl); } 
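/* Editorial aside, not part of the generated patch: vaesz.vs follows the same
 * two-part suffix convention. A sketch, assuming the Zvkned intrinsics from
 * <riscv_vector.h>, where an m1 round-key group is XOR-added (AES round-zero
 * key addition) into an m4 state group, as in the renamed tests in this file. */
static inline vuint32m4_t
aes_round_zero(vuint32m4_t state, vuint32m1_t round_key, size_t vl) {
  // vd = state (EMUL=4), vs2 = round_key (EMUL=1); same operand roles as
  // test_vaesz_vs_u32m1_u32m4 in this file.
  return __riscv_vaesz_vs_u32m1_u32m4(state, round_key, vl);
}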
-vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m8(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m8(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_u32m8(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vsm4r.c b/auto-generated/vector-crypto/api-testing/vsm4r.c index 7c5ff7a51..d690e5618 100644 --- a/auto-generated/vector-crypto/api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/api-testing/vsm4r.c @@ -8,79 +8,79 @@ vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vsm4r_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32mf2(vd, vs2, vl); +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return 
__riscv_vsm4r_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m8(vd, vs2, vl); } vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m8(vd, vs2, vl); } vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m8(vd, vs2, vl); } vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m8(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return 
__riscv_vsm4r_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c index 4c9faed7b..83739ef45 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c @@ -9,79 +9,79 @@ vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesdf_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32mf2(vd, vs2, vl); +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m8(vd, vs2, vl); } vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m8(vd, vs2, vl); } vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m2(vd, vs2, vl); } 
-vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m8(vd, vs2, vl); } vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m8(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c index 9cff36983..5f239f166 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c @@ -9,79 +9,79 @@ vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesdm_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32mf2(vd, vs2, vl); +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m8(vd, vs2, vl); } vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, 
vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m8(vd, vs2, vl); } vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m8(vd, vs2, vl); } vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m8(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c index 8c7ab8abf..326fae048 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesef.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c @@ -9,79 +9,79 @@ vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesef_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32mf2(vd, vs2, 
vl); +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m8(vd, vs2, vl); } vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m8(vd, vs2, vl); } vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m8(vd, vs2, vl); } vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t 
vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m8(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c index d01b30f7c..9cdf5b35a 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesem.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c @@ -9,79 +9,79 @@ vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesem_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32mf2(vd, vs2, vl); +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m8(vd, vs2, vl); } vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return 
__riscv_vaesem_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m8(vd, vs2, vl); } vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m8(vd, vs2, vl); } vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m8(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c index aad378dba..8bc82d23d 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesz.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c @@ -5,63 +5,63 @@ #include -vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32mf2(vd, vs2, vl); +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +vuint32m4_t 
test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m8(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m8(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m8(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_u32m8(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c index a37d743e7..d44ca8d2d 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c @@ -9,79 +9,79 @@ vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vsm4r_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return 
__riscv_vsm4r_vs_u32mf2(vd, vs2, vl); +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m8(vd, vs2, vl); } vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m1(vd, vs2, vl); +vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m8(vd, vs2, vl); } vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m8(vd, vs2, vl); } vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - 
return __riscv_vsm4r_vs_u32m4(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m8(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c index 7126fd3d3..e019f6c4f 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c @@ -9,23 +9,23 @@ vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -33,19 +33,19 @@ vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -53,15 +53,15 @@ vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t 
test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -69,11 +69,11 @@ vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -81,7 +81,7 @@ vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c index 6754c6e31..58f43cc58 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c @@ -9,23 +9,23 @@ vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -33,19 +33,19 @@ vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { 
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -53,15 +53,15 @@ vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -69,11 +69,11 @@ vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -81,7 +81,7 @@ vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c index 076dadfed..e9f9544e9 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c @@ -9,23 +9,23 @@ vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -33,19 +33,19 @@ vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t 
vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -53,15 +53,15 @@ vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -69,11 +69,11 @@ vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -81,7 +81,7 @@ vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c index cd8da8835..1a9281fe7 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c @@ -9,23 +9,23 @@ vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t 
test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } @@ -33,19 +33,19 @@ vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } @@ -53,15 +53,15 @@ vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } @@ -69,11 +69,11 @@ vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } @@ -81,7 +81,7 @@ vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c index f459e124a..aa779b172 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c @@ -5,63 +5,63 @@ #include -vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t 
vs2, size_t vl) { +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c index 7a1d28756..e5e8d9c43 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c @@ -9,23 +9,23 @@ vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, 
vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -33,19 +33,19 @@ vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -53,15 +53,15 @@ vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -69,11 +69,11 @@ vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -81,7 +81,7 @@ vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } diff --git 
a/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c index a0bf2bc63..9807668e4 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c @@ -8,23 +8,23 @@ vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -32,19 +32,19 @@ vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -52,15 +52,15 @@ vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -68,11 +68,11 @@ vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, 
vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -80,7 +80,7 @@ vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c index bb7253273..d9cd8ced8 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c @@ -8,23 +8,23 @@ vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -32,19 +32,19 @@ vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -52,15 +52,15 @@ vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return 
__riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -68,11 +68,11 @@ vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -80,7 +80,7 @@ vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c index df69a7db4..96380b425 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c @@ -8,23 +8,23 @@ vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -32,19 +32,19 @@ vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, 
size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -52,15 +52,15 @@ vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -68,11 +68,11 @@ vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -80,7 +80,7 @@ vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c index 89631199e..4539af8cd 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c @@ -8,23 +8,23 @@ vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } @@ -32,19 +32,19 @@ vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, 
vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } @@ -52,15 +52,15 @@ vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } @@ -68,11 +68,11 @@ vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } @@ -80,7 +80,7 @@ vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c index 92f09192f..cd9069a7e 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c @@ -4,63 +4,63 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaesz_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, 
vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesz(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c index 95bd0716a..66735b96c 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c @@ -8,23 +8,23 @@ vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t 
test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -32,19 +32,19 @@ vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -52,15 +52,15 @@ vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -68,11 +68,11 @@ vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -80,7 +80,7 @@ vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c index 2eefcbc01..1bc0fc4a4 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c @@ -8,79 +8,79 @@ vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdf_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return 
__riscv_vaesdf_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m8_tu(vd, vs2, vl); } vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m8_tu(vd, vs2, vl); } vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m8_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t 
test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_u32m8_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c index 97ab441f7..fa4536189 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c @@ -8,79 +8,79 @@ vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdm_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m8_tu(vd, vs2, vl); } vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t 
test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m8_tu(vd, vs2, vl); } vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m8_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_u32m8_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c index 2bcdbc400..d499b8720 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c @@ -8,79 +8,79 @@ vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesef_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t 
test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m8_tu(vd, vs2, vl); } vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m8_tu(vd, vs2, vl); } vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m8_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { 
- return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_u32m8_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c index 0f179040e..345b93db5 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c @@ -8,79 +8,79 @@ vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesem_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m8_tu(vd, vs2, vl); } vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - 
return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m8_tu(vd, vs2, vl); } vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m8_tu(vd, vs2, vl); } vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_u32m8_tu(vd, vs2, vl); } vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c index 4548ca4e0..57a9822f3 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c @@ -4,63 +4,63 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t 
test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c index e12f9028d..996ca813c 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c +++ 
b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c @@ -8,79 +8,79 @@ vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vsm4r_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m8_tu(vd, vs2, vl); } vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m8_tu(vd, vs2, vl); } vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t 
test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m8_tu(vd, vs2, vl); } vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_u32m8_tu(vd, vs2, vl); } vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c index 7b77ff31d..3c8991f42 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c @@ -9,79 +9,79 @@ vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdf_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32mf2_u32m8_tu(vd, vs2, vl); } vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t 
test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m1_u32m8_tu(vd, vs2, vl); } vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m2_u32m8_tu(vd, vs2, vl); } vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m4_u32m8_tu(vd, vs2, vl); } vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdf_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c index b7f84c4b7..51c81225a 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c @@ -9,79 +9,79 @@ vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdm_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t 
test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32mf2_u32m8_tu(vd, vs2, vl); } vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m1_u32m8_tu(vd, vs2, vl); } vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, 
vuint32m2_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m2_u32m8_tu(vd, vs2, vl); } vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m4_u32m8_tu(vd, vs2, vl); } vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesdm_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c index c1debf192..fe895ad8b 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c @@ -9,79 +9,79 @@ vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesef_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32mf2_u32m8_tu(vd, vs2, vl); } vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t 
vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m1_u32m8_tu(vd, vs2, vl); } vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m2_u32m8_tu(vd, vs2, vl); } vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m4_u32m8_tu(vd, vs2, vl); } vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesef_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c index e5dc8630c..6622e1f43 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c @@ -9,79 +9,79 @@ vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesem_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t 
vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32mf2_u32m8_tu(vd, vs2, vl); } vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m1_u32m8_tu(vd, vs2, vl); } vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m2_u32m8_tu(vd, vs2, vl); } vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m4_tu(vd, 
vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m4_u32m8_tu(vd, vs2, vl); } vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesem_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c index 8d48a65f2..1c3ca8a3b 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c @@ -5,63 +5,63 @@ #include -vuint32mf2_t test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m4_tu(vd, 
vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m4_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaesz_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c index 245a6ae12..52be1b25d 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c @@ -9,79 +9,79 @@ vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vsm4r_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32mf2_tu(vd, vs2, vl); +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t 
test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32mf2_u32m8_tu(vd, vs2, vl); } vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m1_tu(vd, vs2, vl); +vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m1_u32m8_tu(vd, vs2, vl); } vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m2_tu(vd, vs2, vl); +vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m2_u32m8_tu(vd, vs2, vl); } vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m4_tu(vd, vs2, vl); +vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m4_u32m8_tu(vd, vs2, vl); } vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_tu(vd, vs2, vl); +vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vsm4r_vs_u32m8_u32m8_tu(vd, vs2, vl); } diff 
--git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c index 765042dee..6347f9767 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c @@ -9,23 +9,23 @@ vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } @@ -33,19 +33,19 @@ vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } @@ -53,15 +53,15 @@ vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } @@ -69,11 +69,11 @@ vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32m4_t 
test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } @@ -81,7 +81,7 @@ vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c index 0b7cc7547..002879bbd 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c @@ -9,23 +9,23 @@ vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } @@ -33,19 +33,19 @@ vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } @@ -53,15 +53,15 @@ vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m2_t 
test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } @@ -69,11 +69,11 @@ vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } @@ -81,7 +81,7 @@ vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c index 236015c96..bb6ea6568 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c @@ -9,23 +9,23 @@ vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } @@ -33,19 +33,19 @@ vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m2_t 
test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } @@ -53,15 +53,15 @@ vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } @@ -69,11 +69,11 @@ vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } @@ -81,7 +81,7 @@ vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c index 866bf2914..ec2d40643 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c @@ -9,23 +9,23 @@ vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t 
test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } @@ -33,19 +33,19 @@ vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } @@ -53,15 +53,15 @@ vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } @@ -69,11 +69,11 @@ vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } @@ -81,7 +81,7 @@ vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c index e9cd85400..0f3500c55 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c @@ -5,63 +5,63 @@ #include -vuint32mf2_t 
test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c index f8612d784..598ea9f47 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c @@ -9,23 +9,23 @@ vuint32mf2_t 
test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -33,19 +33,19 @@ vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -53,15 +53,15 @@ vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -69,11 +69,11 @@ vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -81,7 
+81,7 @@ vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c index 51819499f..7e2d582b1 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c @@ -8,23 +8,23 @@ vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } @@ -32,19 +32,19 @@ vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } @@ -52,15 +52,15 @@ vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t 
test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } @@ -68,11 +68,11 @@ vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } @@ -80,7 +80,7 @@ vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c index ae17f9b58..191885804 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c @@ -8,23 +8,23 @@ vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } @@ -32,19 +32,19 @@ vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t 
test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } @@ -52,15 +52,15 @@ vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } @@ -68,11 +68,11 @@ vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } @@ -80,7 +80,7 @@ vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c index a46ede689..2230a962c 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c @@ -8,23 +8,23 @@ vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } @@ -32,19 +32,19 @@ 
vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } @@ -52,15 +52,15 @@ vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } @@ -68,11 +68,11 @@ vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } @@ -80,7 +80,7 @@ vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c index a0930f52b..f0fff627e 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c @@ -8,23 +8,23 @@ vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t 
test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } @@ -32,19 +32,19 @@ vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } @@ -52,15 +52,15 @@ vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } @@ -68,11 +68,11 @@ vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } @@ -80,7 +80,7 @@ vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } diff --git 
a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c index 40c8e9cb3..b4364cdd2 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c @@ -4,63 +4,63 @@ typedef _Float16 float16_t; typedef float float32_t; typedef double float64_t; -vuint32mf2_t test_vaesz_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, 
size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c index abf418bb9..bf509ae52 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c @@ -8,23 +8,23 @@ vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t v return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -32,19 +32,19 @@ vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -52,15 +52,15 @@ vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -68,11 +68,11 @@ vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } 
-vuint32m4_t test_vsm4r_vs_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -80,7 +80,7 @@ vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } From 6ce812de55e882e73ef799694dde467b89b4d0c6 Mon Sep 17 00:00:00 2001 From: eopXD Date: Thu, 3 Aug 2023 00:17:55 -0700 Subject: [PATCH 075/151] [vector-crypto] Add llvm test case header for vector crypto extensions Signed-off-by: eop Chen --- .../rvv_intrinsic_gen/generator.py | 33 +++++++++++++++++-- 1 file changed, 31 insertions(+), 2 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 4c1d4e117..0c0bf5669 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -456,7 +456,7 @@ def inst_group_prologue(self): def inst_group_epilogue(self): return "" - def write_file_header(self, has_float_type, has_bfloat16_type): + def write_file_header(self, has_float_type, has_bfloat16_type, name): #pylint: disable=line-too-long int_llvm_header = r"""// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ @@ -485,9 +485,38 @@ def write_file_header(self, has_float_type, has_bfloat16_type): r""" -Wno-psabi -O3 -fno-schedule-insns -fno-schedule-insns2" } */ """) + + vector_crypto_llvm_header = (r"""// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +""") + + def is_vector_crypto_inst(name): + vector_crypto_inst = [ + "vandn", "vbrev", "vbrev8", "vrev8", "vclz", "vctz", "vrol", "vror", + "vwsll", "vclmul", "vclmulh", "vghsh", "vgmul", "vaesef", "vaesem", + "vaesdf", "vaesdm", "vaeskf1", "vaeskf2", "vaesz", "vsha2ms", + "vsha2ch", "vsha2cl", "vsm4k", "vsm4r", "vsm3me", "vsm3c" + ] + for inst in vector_crypto_inst: + if inst in name: + return True + return False + if self.toolchain_type == ToolChainType.LLVM: if has_bfloat16_type: self.fd.write(bfloat16_llvm_header) + elif is_vector_crypto_inst(name): + self.fd.write(vector_crypto_llvm_header) elif has_float_type: self.fd.write(float_llvm_header) else: @@ -568,7 +597,7 @@ def func(self, inst_info, name, return_type, **kwargs): has_float_type = True if header: - self.write_file_header(has_float_type, has_bfloat16_type) + self.write_file_header(has_float_type, has_bfloat16_type, name) def output_call_arg(arg_name, type_name): if ((name.startswith("vget") or 
name.startswith("vset")) \ From 66016b4e3974747b4e77daa068a93a30278e864e Mon Sep 17 00:00:00 2001 From: eopXD Date: Thu, 3 Aug 2023 01:01:07 -0700 Subject: [PATCH 076/151] [Auto-gen] Update tests under ../auto-generated/vector-crypto. (make git-commit-autogen-test) --- auto-generated/vector-crypto/llvm-api-tests/vaesdf.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vaesdm.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vaesef.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vaesem.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vaesz.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vandn.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vbrev.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vbrev8.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vclmul.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vclmulh.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vclz.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vctz.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vghsh.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vgmul.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vrev8.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vrol.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vror.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vsm3c.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vsm3me.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vsm4k.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vsm4r.c | 9 ++++++++- auto-generated/vector-crypto/llvm-api-tests/vwsll.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vaesdf.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vaesdm.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vaesef.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vaesem.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vaeskf1.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vaeskf2.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vaesz.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vandn.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vbrev.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vbrev8.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vclmul.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vclmulh.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vclz.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vctz.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vghsh.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vgmul.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vrev8.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vrol.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vror.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vsha2ch.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vsha2cl.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vsha2ms.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vsm3c.c | 9 ++++++++- 
.../vector-crypto/llvm-overloaded-tests/vsm3me.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vsm4k.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vsm4r.c | 9 ++++++++- .../vector-crypto/llvm-overloaded-tests/vwsll.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vaesef.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vaesem.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vaesz.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vandn.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vbrev.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vclmul.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vghsh.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vgmul.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vrev8.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vrol.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vror.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c | 9 ++++++++- .../vector-crypto/policy_funcs/llvm-api-tests/vwsll.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vaesdf.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vaesdm.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vaesef.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vaesem.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vaeskf1.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vaeskf2.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vaesz.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vandn.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vbrev.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vbrev8.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vclmul.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vclmulh.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vghsh.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vgmul.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vrev8.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vrol.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vror.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vsha2ch.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vsha2cl.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vsha2ms.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vsm3c.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vsm3me.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vsm4k.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vsm4r.c | 9 ++++++++- .../policy_funcs/llvm-overloaded-tests/vwsll.c | 9 ++++++++- 104 files changed, 832 insertions(+), 104 
deletions(-) diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c index 83739ef45..04a638391 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c index 5f239f166..ba0c355d0 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c index 326fae048..0d1e8d720 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesef.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c index 9cdf5b35a..79d397e54 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesem.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// 
RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c b/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c index f35c4a3b2..3b9857d2c 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c index fee669e56..fbb874289 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c index 8bc82d23d..d022831f1 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesz.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vandn.c b/auto-generated/vector-crypto/llvm-api-tests/vandn.c index ac15e471b..f26790e7f 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vandn.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vandn.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature 
+experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vbrev.c b/auto-generated/vector-crypto/llvm-api-tests/vbrev.c index 26c4de404..aa1f7a0e2 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vbrev.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vbrev.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c b/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c index d22110c4f..2ac7b751b 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclmul.c b/auto-generated/vector-crypto/llvm-api-tests/vclmul.c index a56321bd7..3751cde48 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vclmul.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vclmul.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c b/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c index 0772acf6d..5d0417a59 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c @@ -1,5 +1,12 @@ // REQUIRES: 
riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclz.c b/auto-generated/vector-crypto/llvm-api-tests/vclz.c index 9ce26f56f..80e369c76 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vclz.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vclz.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vctz.c b/auto-generated/vector-crypto/llvm-api-tests/vctz.c index 504efd27a..74863e79c 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vctz.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vctz.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vghsh.c b/auto-generated/vector-crypto/llvm-api-tests/vghsh.c index 71dcf52e5..436349fb9 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vghsh.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vghsh.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vgmul.c b/auto-generated/vector-crypto/llvm-api-tests/vgmul.c index 
a39f3c8c0..502aae3f8 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vgmul.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vgmul.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vrev8.c b/auto-generated/vector-crypto/llvm-api-tests/vrev8.c index f5d49ee05..d02393633 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vrev8.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vrev8.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vrol.c b/auto-generated/vector-crypto/llvm-api-tests/vrol.c index 1154de852..d02ca2e49 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vrol.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vrol.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vror.c b/auto-generated/vector-crypto/llvm-api-tests/vror.c index 694b6e0e0..d800a671e 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vror.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vror.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | 
\ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c index 046495c35..b0a9e0220 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c index 442946790..ab5430e22 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c index 76cf625eb..0d65884e1 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c b/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c index 87af91e5c..c3589f4af 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: 
-target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c b/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c index ce4673c23..a286c7c26 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c b/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c index 0911bd722..33d5ab701 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c index d44ca8d2d..59aece912 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vwsll.c b/auto-generated/vector-crypto/llvm-api-tests/vwsll.c index 70212a2c4..d7b1d42bb 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vwsll.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vwsll.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: 
-target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c index e019f6c4f..44803c47c 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c index 58f43cc58..bcb5a0b36 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c index e9f9544e9..2768f4a6b 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c index 1a9281fe7..1cbdce0bc 100644 --- 
a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c index e15daf77d..f1048cd5a 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c index 40dee84d2..3387d50cf 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c index aa779b172..93e2e3eef 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature 
+experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c index e2894d7e4..7a0f664b0 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c index 0c95750c7..579baaf6d 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c index d94465fe5..980e7f4ae 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c index 36cdfb21e..b8143f3f0 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// 
RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c index f5343fa97..a17c3752e 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c index f1da0ff12..fb2c59218 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c index 2dc00bb3f..cfaa556ee 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c index 5940884a9..afbd4dbc8 100644 --- 
a/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c index 4d254ff6c..a091c69a7 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c index 72738a4c7..3e29204b8 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c index 51fab3b0c..1007fbea4 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh 
-disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c index f5439c7ab..5221bd3aa 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c index 63d6c5aea..4c3c170db 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c index c16a3b774..ab349951b 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c index c795ac036..4f3c31a0d 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: 
-target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c index 416f7a64f..115cbf5f7 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c index 1c1ad44b4..3a37cbefc 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c index 319a815f8..5ae9fdf14 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c index e5e8d9c43..079db5b1f 100644 --- 
a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c index ab2309ab1..eb6e5858f 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c index 3c8991f42..7bf1af06b 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c index 51c81225a..856cc6350 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature 
+experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c index fe895ad8b..a7ee09719 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c index 6622e1f43..3b398d5cc 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c index 36bc372bc..3a70a8170 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c index 16b8eaa79..583d8cf43 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// 
RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c index 1c3ca8a3b..d5e3ba40f 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c index cdd2befb5..e9f99c85f 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c index fd694cc5a..fb68a2faf 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck 
--check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c index 1f0433554..ad555f360 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c index be7944419..0f6ab5547 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c index 053782475..0c9384bef 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c index e8271d882..a503594d6 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: 
-target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c index 9f725f34a..aec4008fb 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c index 69737f009..552b08f8e 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c index 088f1363a..cea862e0f 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c index 
b7ea078c6..62ced63e1 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c index 42785c045..7b0921a22 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c index d3aa58e49..8920a97a6 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c index 1641cffed..9e7df01ff 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// 
RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c index e64ed6ab7..0cda20a97 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c index f161c3a7c..a3687efb9 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c index a702fee72..9c03f0061 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c index 52be1b25d..749e2f687 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c @@ -1,5 
+1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c index f9415d873..390decdcb 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c index 6347f9767..33cff9e27 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c index 002879bbd..eb5e53de7 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh 
-disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c index bb6ea6568..00327588f 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c index ec2d40643..fc86d2d5c 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c index 32fb898ca..8ffb3eaf4 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c index 9526a18de..3da580d32 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c @@ -1,5 +1,12 
@@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c index 0f3500c55..352ea15d6 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c index 298fa71e9..de8cd112d 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c index c0a7edfac..f8c14b057 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature 
+experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c index fda1416d8..83967d94f 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c index 15bc5f9df..e20f8bff1 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c index cdcb58c88..87cb1377d 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c index 4c246ad78..526c3a33d 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c @@ 
-1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c index 5bad9f0f6..c5ba6e721 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c index 264f15a6b..a723aa6de 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c index a34a5be23..aa8ba847d 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature 
+experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c index 5a7dad772..e3545be3c 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c index d83ec593f..d7e8bf814 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c index 7f9c2327b..0dc6ff651 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c index 6648d4381..b2193b03d 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c +++ 
b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c index 9cbfd29cf..f32bff343 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c index 639a153fc..657a2aed2 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c index bd902bfba..f241b53f5 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature 
+experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c index 598ea9f47..8d08ad373 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c index dbe38e613..c6772c8c7 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c @@ -1,5 +1,12 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +experimental-zvbb \ +// RUN: -target-feature +experimental-zvbc \ +// RUN: -target-feature +experimental-zvkg \ +// RUN: -target-feature +experimental-zvkned \ +// RUN: -target-feature +experimental-zvknhb \ +// RUN: -target-feature +experimental-zvksed \ +// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s From e48b277f6aa035661f1bbf556bfc54969d7dc78e Mon Sep 17 00:00:00 2001 From: Kito Cheng Date: Wed, 10 Apr 2024 15:33:16 +0800 Subject: [PATCH 077/151] Update auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md Co-authored-by: Nicolas Brunie <82109999+nibrunieAtSi5@users.noreply.github.com> Signed-off-by: Kito Cheng --- .../00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md index 1778ca313..26b6260a4 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md +++ b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md @@ -235,7 +235,7 @@ vuint64m4_t __riscv_vrev8_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl) vuint64m8_t __riscv_vrev8_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); ``` -### [Vector Bit-manipulation used in Cryptography - Count Bits](): +### [Vector Basic Bit-manipulation - Count Bits](): **Prototypes:** 
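For orientation, the "Count Bits" group renamed in the hunk above covers the `vclz`/`vctz` unary ops from Zvbb. A minimal usage sketch, assuming only the standard `<riscv_vector.h>` prototypes these generated documents list and a toolchain with Zvbb enabled (function names here are illustrative, not from the patch):

```c
#include <riscv_vector.h>

// Per-element count of leading / trailing zero bits (unmasked, SEW=32, LMUL=1).
vuint32m1_t leading_zeros(vuint32m1_t vs2, size_t vl) {
  return __riscv_vclz_v_u32m1(vs2, vl);
}

vuint32m1_t trailing_zeros(vuint32m1_t vs2, size_t vl) {
  return __riscv_vctz_v_u32m1(vs2, vl);
}
```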
``` C From d87ddfcf3bebd6a1c2f3a51895d5ebd268b053ec Mon Sep 17 00:00:00 2001 From: Brandon Wu Date: Wed, 24 Apr 2024 07:52:41 -0700 Subject: [PATCH 078/151] Modify descriptions in vector_crypto_inst.py for correctness --- .../rvv_intrinsic_gen/vector_crypto_inst.py | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py index d02482f37..210209181 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py @@ -24,7 +24,7 @@ def gen(g): g.function_group( vector_crypto_template, - "Vector Bit-manipulation used in Cryptography - Reverse Bits", + "Vector Basic Bit-manipulation - Reverse Bits in Elements", "", # FIXME: We probably have a separate document for vector-crypto ["vbrev", "vbrev8", "vrev8"], UITYPE, @@ -34,7 +34,7 @@ def gen(g): g.function_group( vector_crypto_template, - "Vector Bit-manipulation used in Cryptography - Count Bits", + "Vector Basic Bit-manipulation - Count Bits", "", # FIXME: We probably have a separate document for vector-crypto ["vclz", "vctz"], UITYPE, @@ -54,7 +54,7 @@ def gen(g): g.function_group( vector_crypto_template, - "Vector Bit-manipulation used in Cryptography - Shift", + "Vector Basic Bit-manipulation used - Widening Shift", "", # FIXME: We probably have a separate document for vector-crypto ["vwsll"], UITYPE, @@ -198,7 +198,7 @@ def gen(g): g.function_group( vector_crypto_template, - "Vector SM3 Message Expansion", + "Vector SM3 Compression", "", # FIXME: We probably have a separate document for vector-crypto ["vsm3c"], UITYPE, From b2b748b13b631b52755539feed20ea1327a49c3a Mon Sep 17 00:00:00 2001 From: Brandon Wu Date: Wed, 24 Apr 2024 09:31:33 -0700 Subject: [PATCH 079/151] Filter out LMUL=8 cases for .vs instructions --- .../rvv_intrinsic_gen/templates/vector_crypto_template.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py index 54f34edf4..28b5a466a 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py @@ -152,6 +152,9 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): starting_from_lmul_index = lmul_list.index(args["LMUL"]) # print(starting_from_lmul_index) for i in range(starting_from_lmul_index, len(lmul_list)): + if args["LMUL"] == 8: + continue + kwargs["return_type"] =\ f"v{args['TYPE']}{args['SEW']}m{lmul_list[i]}_t" kwargs["vd"] = f"v{args['TYPE']}{args['SEW']}m{lmul_list[i]}_t" From c379866019e8cfcc27f478c8a07db379c3e86881 Mon Sep 17 00:00:00 2001 From: Brandon Wu Date: Wed, 24 Apr 2024 09:32:10 -0700 Subject: [PATCH 080/151] Remove experimental for target-feature --- .../rvv_intrinsic_gen/generator.py | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 0c0bf5669..c2e27f798 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -488,13 +488,13 @@ def write_file_header(self, has_float_type, has_bfloat16_type, name): vector_crypto_llvm_header = (r"""// REQUIRES: riscv-registered-target // 
RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s From 5c38ed3c01dac96965046bba217752f242df5f31 Mon Sep 17 00:00:00 2001 From: Brandon Wu Date: Wed, 24 Apr 2024 09:34:46 -0700 Subject: [PATCH 081/151] Regenerate test cases and docs for vector crypto --- .../vector-crypto/api-testing/vaesdf.c | 8 - .../vector-crypto/api-testing/vaesdm.c | 8 - .../vector-crypto/api-testing/vaesef.c | 8 - .../vector-crypto/api-testing/vaesem.c | 8 - .../vector-crypto/api-testing/vaeskf1.c | 4 - .../vector-crypto/api-testing/vaeskf2.c | 4 - .../vector-crypto/api-testing/vaesz.c | 8 - .../vector-crypto/api-testing/vandn.c | 180 +- .../vector-crypto/api-testing/vbrev.c | 92 +- .../vector-crypto/api-testing/vbrev8.c | 92 +- .../vector-crypto/api-testing/vclmul.c | 36 +- .../vector-crypto/api-testing/vclmulh.c | 36 +- .../vector-crypto/api-testing/vclz.c | 92 +- .../vector-crypto/api-testing/vctz.c | 92 +- .../vector-crypto/api-testing/vghsh.c | 4 - .../vector-crypto/api-testing/vgmul.c | 4 - .../vector-crypto/api-testing/vrev8.c | 92 +- .../vector-crypto/api-testing/vrol.c | 180 +- .../vector-crypto/api-testing/vror.c | 180 +- .../vector-crypto/api-testing/vsha2ch.c | 4 - .../vector-crypto/api-testing/vsha2cl.c | 4 - .../vector-crypto/api-testing/vsha2ms.c | 4 - .../vector-crypto/api-testing/vsm3c.c | 4 - .../vector-crypto/api-testing/vsm3me.c | 4 - .../vector-crypto/api-testing/vsm4k.c | 4 - .../vector-crypto/api-testing/vsm4r.c | 8 - .../vector-crypto/api-testing/vwsll.c | 124 +- .../vector-crypto/intrinsic_funcs.md | 777 +++--- ...bit-manipulation_used_in_cryptography.adoc | 586 +++++ ...r_bit-manipulation_used_in_cryptography.md | 581 ----- ...vbc_-_vector_carryless_multiplication.adoc | 42 + ..._zvbc_-_vector_carryless_multiplication.md | 41 - ...gmac.md => 02_zvkg_-_vector_gcm_gmac.adoc} | 11 +- ..._nist_suite:_vector_aes_block_cipher.adoc} | 43 +- ...nist_suite:_vector_sha-2_secure_hash.adoc} | 20 +- ...ed_-_shangmi_suite:_sm4_block_cipher.adoc} | 21 +- ...ksh_-_shangmi_suite:_sm3_secure_hash.adoc} | 20 +- .../vector-crypto/llvm-api-tests/vaesdf.c | 64 +- .../vector-crypto/llvm-api-tests/vaesdm.c | 64 +- .../vector-crypto/llvm-api-tests/vaesef.c | 64 +- .../vector-crypto/llvm-api-tests/vaesem.c | 64 +- .../vector-crypto/llvm-api-tests/vaeskf1.c | 15 +- .../vector-crypto/llvm-api-tests/vaeskf2.c | 18 +- .../vector-crypto/llvm-api-tests/vaesz.c | 61 +- .../vector-crypto/llvm-api-tests/vandn.c | 244 +- .../vector-crypto/llvm-api-tests/vbrev.c | 103 +- .../vector-crypto/llvm-api-tests/vbrev8.c | 103 +- .../vector-crypto/llvm-api-tests/vclmul.c | 55 +- .../vector-crypto/llvm-api-tests/vclmulh.c | 55 +- .../vector-crypto/llvm-api-tests/vclz.c | 103 +- .../vector-crypto/llvm-api-tests/vctz.c | 103 +- .../vector-crypto/llvm-api-tests/vghsh.c | 30 +- .../vector-crypto/llvm-api-tests/vgmul.c 
| 18 +- .../vector-crypto/llvm-api-tests/vrev8.c | 103 +- .../vector-crypto/llvm-api-tests/vrol.c | 244 +- .../vector-crypto/llvm-api-tests/vror.c | 244 +- .../vector-crypto/llvm-api-tests/vsha2ch.c | 42 +- .../vector-crypto/llvm-api-tests/vsha2cl.c | 42 +- .../vector-crypto/llvm-api-tests/vsha2ms.c | 42 +- .../vector-crypto/llvm-api-tests/vsm3c.c | 18 +- .../vector-crypto/llvm-api-tests/vsm3me.c | 18 +- .../vector-crypto/llvm-api-tests/vsm4k.c | 15 +- .../vector-crypto/llvm-api-tests/vsm4r.c | 64 +- .../vector-crypto/llvm-api-tests/vwsll.c | 168 +- .../llvm-overloaded-tests/vaesdf.c | 64 +- .../llvm-overloaded-tests/vaesdm.c | 64 +- .../llvm-overloaded-tests/vaesef.c | 64 +- .../llvm-overloaded-tests/vaesem.c | 64 +- .../llvm-overloaded-tests/vaeskf1.c | 15 +- .../llvm-overloaded-tests/vaeskf2.c | 18 +- .../llvm-overloaded-tests/vaesz.c | 61 +- .../llvm-overloaded-tests/vandn.c | 244 +- .../llvm-overloaded-tests/vbrev.c | 103 +- .../llvm-overloaded-tests/vbrev8.c | 103 +- .../llvm-overloaded-tests/vclmul.c | 55 +- .../llvm-overloaded-tests/vclmulh.c | 55 +- .../llvm-overloaded-tests/vclz.c | 103 +- .../llvm-overloaded-tests/vctz.c | 103 +- .../llvm-overloaded-tests/vghsh.c | 30 +- .../llvm-overloaded-tests/vgmul.c | 18 +- .../llvm-overloaded-tests/vrev8.c | 103 +- .../llvm-overloaded-tests/vrol.c | 244 +- .../llvm-overloaded-tests/vror.c | 244 +- .../llvm-overloaded-tests/vsha2ch.c | 42 +- .../llvm-overloaded-tests/vsha2cl.c | 42 +- .../llvm-overloaded-tests/vsha2ms.c | 42 +- .../llvm-overloaded-tests/vsm3c.c | 18 +- .../llvm-overloaded-tests/vsm3me.c | 18 +- .../llvm-overloaded-tests/vsm4k.c | 15 +- .../llvm-overloaded-tests/vsm4r.c | 64 +- .../llvm-overloaded-tests/vwsll.c | 168 +- .../overloaded-api-testing/vaesdf.c | 8 - .../overloaded-api-testing/vaesdm.c | 8 - .../overloaded-api-testing/vaesef.c | 8 - .../overloaded-api-testing/vaesem.c | 8 - .../overloaded-api-testing/vaeskf1.c | 4 - .../overloaded-api-testing/vaeskf2.c | 4 - .../overloaded-api-testing/vaesz.c | 8 - .../overloaded-api-testing/vandn.c | 180 +- .../overloaded-api-testing/vbrev.c | 92 +- .../overloaded-api-testing/vbrev8.c | 92 +- .../overloaded-api-testing/vclmul.c | 36 +- .../overloaded-api-testing/vclmulh.c | 36 +- .../overloaded-api-testing/vclz.c | 92 +- .../overloaded-api-testing/vctz.c | 92 +- .../overloaded-api-testing/vghsh.c | 4 - .../overloaded-api-testing/vgmul.c | 4 - .../overloaded-api-testing/vrev8.c | 92 +- .../overloaded-api-testing/vrol.c | 180 +- .../overloaded-api-testing/vror.c | 180 +- .../overloaded-api-testing/vsha2ch.c | 4 - .../overloaded-api-testing/vsha2cl.c | 4 - .../overloaded-api-testing/vsha2ms.c | 4 - .../overloaded-api-testing/vsm3c.c | 4 - .../overloaded-api-testing/vsm3me.c | 4 - .../overloaded-api-testing/vsm4k.c | 4 - .../overloaded-api-testing/vsm4r.c | 8 - .../overloaded-api-testing/vwsll.c | 124 +- .../overloaded_intrinsic_funcs.md | 777 +++--- ...bit-manipulation_used_in_cryptography.adoc | 586 +++++ ...r_bit-manipulation_used_in_cryptography.md | 581 ----- ...vbc_-_vector_carryless_multiplication.adoc | 42 + ..._zvbc_-_vector_carryless_multiplication.md | 41 - ...gmac.md => 02_zvkg_-_vector_gcm_gmac.adoc} | 11 +- ..._nist_suite:_vector_aes_block_cipher.adoc} | 43 +- ...nist_suite:_vector_sha-2_secure_hash.adoc} | 20 +- ...ed_-_shangmi_suite:_sm4_block_cipher.adoc} | 21 +- ...ksh_-_shangmi_suite:_sm3_secure_hash.adoc} | 20 +- .../policy_funcs/api-testing/vaesdf.c | 8 - .../policy_funcs/api-testing/vaesdm.c | 8 - .../policy_funcs/api-testing/vaesef.c | 8 - 
.../policy_funcs/api-testing/vaesem.c | 8 - .../policy_funcs/api-testing/vaeskf1.c | 24 +- .../policy_funcs/api-testing/vaeskf2.c | 4 - .../policy_funcs/api-testing/vaesz.c | 8 - .../policy_funcs/api-testing/vandn.c | 708 +++--- .../policy_funcs/api-testing/vbrev.c | 356 ++- .../policy_funcs/api-testing/vbrev8.c | 356 ++- .../policy_funcs/api-testing/vclmul.c | 132 +- .../policy_funcs/api-testing/vclmulh.c | 132 +- .../policy_funcs/api-testing/vghsh.c | 4 - .../policy_funcs/api-testing/vgmul.c | 4 - .../policy_funcs/api-testing/vrev8.c | 356 ++- .../policy_funcs/api-testing/vrol.c | 708 +++--- .../policy_funcs/api-testing/vror.c | 708 +++--- .../policy_funcs/api-testing/vsha2ch.c | 4 - .../policy_funcs/api-testing/vsha2cl.c | 4 - .../policy_funcs/api-testing/vsha2ms.c | 4 - .../policy_funcs/api-testing/vsm3c.c | 4 - .../policy_funcs/api-testing/vsm3me.c | 24 +- .../policy_funcs/api-testing/vsm4k.c | 24 +- .../policy_funcs/api-testing/vsm4r.c | 8 - .../policy_funcs/api-testing/vwsll.c | 484 ++-- .../policy_funcs/intrinsic_funcs.md | 2151 +++++++++-------- ...bit-manipulation_used_in_cryptography.adoc | 958 ++++++++ ...r_bit-manipulation_used_in_cryptography.md | 953 -------- ...vbc_-_vector_carryless_multiplication.adoc | 76 + ..._zvbc_-_vector_carryless_multiplication.md | 75 - ...gmac.md => 02_zvkg_-_vector_gcm_gmac.adoc} | 11 +- ..._nist_suite:_vector_aes_block_cipher.adoc} | 53 +- ...nist_suite:_vector_sha-2_secure_hash.adoc} | 20 +- ...ed_-_shangmi_suite:_sm4_block_cipher.adoc} | 31 +- ...vksh_-_shangmi_suite:_sm3_secure_hash.adoc | 26 + ..._zvksh_-_shangmi_suite:_sm3_secure_hash.md | 24 - .../policy_funcs/llvm-api-tests/vaesdf.c | 19 +- .../policy_funcs/llvm-api-tests/vaesdm.c | 19 +- .../policy_funcs/llvm-api-tests/vaesef.c | 19 +- .../policy_funcs/llvm-api-tests/vaesem.c | 19 +- .../policy_funcs/llvm-api-tests/vaeskf1.c | 35 +- .../policy_funcs/llvm-api-tests/vaeskf2.c | 15 +- .../policy_funcs/llvm-api-tests/vaesz.c | 19 +- .../policy_funcs/llvm-api-tests/vandn.c | 719 +++--- .../policy_funcs/llvm-api-tests/vbrev.c | 367 ++- .../policy_funcs/llvm-api-tests/vbrev8.c | 367 ++- .../policy_funcs/llvm-api-tests/vclmul.c | 143 +- .../policy_funcs/llvm-api-tests/vclmulh.c | 143 +- .../policy_funcs/llvm-api-tests/vghsh.c | 15 +- .../policy_funcs/llvm-api-tests/vgmul.c | 15 +- .../policy_funcs/llvm-api-tests/vrev8.c | 367 ++- .../policy_funcs/llvm-api-tests/vrol.c | 719 +++--- .../policy_funcs/llvm-api-tests/vror.c | 719 +++--- .../policy_funcs/llvm-api-tests/vsha2ch.c | 15 +- .../policy_funcs/llvm-api-tests/vsha2cl.c | 15 +- .../policy_funcs/llvm-api-tests/vsha2ms.c | 15 +- .../policy_funcs/llvm-api-tests/vsm3c.c | 15 +- .../policy_funcs/llvm-api-tests/vsm3me.c | 35 +- .../policy_funcs/llvm-api-tests/vsm4k.c | 35 +- .../policy_funcs/llvm-api-tests/vsm4r.c | 19 +- .../policy_funcs/llvm-api-tests/vwsll.c | 495 ++-- .../llvm-overloaded-tests/vaesdf.c | 76 +- .../llvm-overloaded-tests/vaesdm.c | 76 +- .../llvm-overloaded-tests/vaesef.c | 76 +- .../llvm-overloaded-tests/vaesem.c | 76 +- .../llvm-overloaded-tests/vaeskf1.c | 40 +- .../llvm-overloaded-tests/vaeskf2.c | 30 +- .../llvm-overloaded-tests/vaesz.c | 61 +- .../llvm-overloaded-tests/vandn.c | 952 +++++--- .../llvm-overloaded-tests/vbrev.c | 436 ++-- .../llvm-overloaded-tests/vbrev8.c | 436 ++-- .../llvm-overloaded-tests/vclmul.c | 191 +- .../llvm-overloaded-tests/vclmulh.c | 195 +- .../llvm-overloaded-tests/vghsh.c | 30 +- .../llvm-overloaded-tests/vgmul.c | 18 +- .../llvm-overloaded-tests/vrev8.c | 436 ++-- 
.../policy_funcs/llvm-overloaded-tests/vrol.c | 928 ++++--- .../policy_funcs/llvm-overloaded-tests/vror.c | 928 ++++--- .../llvm-overloaded-tests/vsha2ch.c | 42 +- .../llvm-overloaded-tests/vsha2cl.c | 42 +- .../llvm-overloaded-tests/vsha2ms.c | 42 +- .../llvm-overloaded-tests/vsm3c.c | 18 +- .../llvm-overloaded-tests/vsm3me.c | 40 +- .../llvm-overloaded-tests/vsm4k.c | 36 +- .../llvm-overloaded-tests/vsm4r.c | 64 +- .../llvm-overloaded-tests/vwsll.c | 652 +++-- .../overloaded-api-testing/vaesdf.c | 67 +- .../overloaded-api-testing/vaesdm.c | 67 +- .../overloaded-api-testing/vaesef.c | 67 +- .../overloaded-api-testing/vaesem.c | 67 +- .../overloaded-api-testing/vaeskf1.c | 31 +- .../overloaded-api-testing/vaeskf2.c | 21 +- .../overloaded-api-testing/vaesz.c | 52 +- .../overloaded-api-testing/vandn.c | 943 +++++--- .../overloaded-api-testing/vbrev.c | 427 ++-- .../overloaded-api-testing/vbrev8.c | 427 ++-- .../overloaded-api-testing/vclmul.c | 182 +- .../overloaded-api-testing/vclmulh.c | 186 +- .../overloaded-api-testing/vghsh.c | 21 +- .../overloaded-api-testing/vgmul.c | 9 +- .../overloaded-api-testing/vrev8.c | 427 ++-- .../overloaded-api-testing/vrol.c | 919 ++++--- .../overloaded-api-testing/vror.c | 919 ++++--- .../overloaded-api-testing/vsha2ch.c | 33 +- .../overloaded-api-testing/vsha2cl.c | 33 +- .../overloaded-api-testing/vsha2ms.c | 33 +- .../overloaded-api-testing/vsm3c.c | 9 +- .../overloaded-api-testing/vsm3me.c | 31 +- .../overloaded-api-testing/vsm4k.c | 27 +- .../overloaded-api-testing/vsm4r.c | 55 +- .../overloaded-api-testing/vwsll.c | 643 +++-- .../overloaded_intrinsic_funcs.md | 2151 +++++++++-------- ...bit-manipulation_used_in_cryptography.adoc | 958 ++++++++ ...r_bit-manipulation_used_in_cryptography.md | 953 -------- ...vbc_-_vector_carryless_multiplication.adoc | 76 + ..._zvbc_-_vector_carryless_multiplication.md | 75 - ...gmac.md => 02_zvkg_-_vector_gcm_gmac.adoc} | 11 +- ..._nist_suite:_vector_aes_block_cipher.adoc} | 53 +- ...nist_suite:_vector_sha-2_secure_hash.adoc} | 20 +- ...ed_-_shangmi_suite:_sm4_block_cipher.adoc} | 31 +- ...vksh_-_shangmi_suite:_sm3_secure_hash.adoc | 26 + ..._zvksh_-_shangmi_suite:_sm3_secure_hash.md | 24 - 250 files changed, 21273 insertions(+), 18731 deletions(-) create mode 100644 auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc delete mode 100644 auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md create mode 100644 auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc delete mode 100644 auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md rename auto-generated/vector-crypto/intrinsic_funcs/{02_zvkg_-_vector_gcm_gmac.md => 02_zvkg_-_vector_gcm_gmac.adoc} (92%) rename auto-generated/vector-crypto/intrinsic_funcs/{03_zvkned_-_nist_suite:_vector_aes_block_cipher.md => 03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc} (92%) rename auto-generated/vector-crypto/intrinsic_funcs/{04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md => 04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc} (93%) rename auto-generated/vector-crypto/intrinsic_funcs/{05_zvksed_-_shangmi_suite:_sm4_block_cipher.md => 05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc} (89%) rename auto-generated/vector-crypto/intrinsic_funcs/{06_zvksh_-_shangmi_suite:_sm3_secure_hash.md => 06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc} (84%) create mode 100644 
auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc delete mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md create mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc delete mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md rename auto-generated/vector-crypto/overloaded_intrinsic_funcs/{02_zvkg_-_vector_gcm_gmac.md => 02_zvkg_-_vector_gcm_gmac.adoc} (91%) rename auto-generated/vector-crypto/overloaded_intrinsic_funcs/{03_zvkned_-_nist_suite:_vector_aes_block_cipher.md => 03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc} (91%) rename auto-generated/vector-crypto/overloaded_intrinsic_funcs/{04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md => 04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc} (92%) rename auto-generated/vector-crypto/overloaded_intrinsic_funcs/{05_zvksed_-_shangmi_suite:_sm4_block_cipher.md => 05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc} (88%) rename auto-generated/vector-crypto/overloaded_intrinsic_funcs/{06_zvksh_-_shangmi_suite:_sm3_secure_hash.md => 06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc} (82%) create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc delete mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc delete mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md rename auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/{02_zvkg_-_vector_gcm_gmac.md => 02_zvkg_-_vector_gcm_gmac.adoc} (91%) rename auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/{03_zvkned_-_nist_suite:_vector_aes_block_cipher.md => 03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc} (87%) rename auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/{04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md => 04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc} (93%) rename auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/{05_zvksed_-_shangmi_suite:_sm4_block_cipher.md => 05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc} (68%) create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc delete mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc delete mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc delete mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md rename auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/{02_zvkg_-_vector_gcm_gmac.md => 02_zvkg_-_vector_gcm_gmac.adoc} (90%) rename 
auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/{03_zvkned_-_nist_suite:_vector_aes_block_cipher.md => 03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc} (86%) rename auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/{04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md => 04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc} (92%) rename auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/{05_zvksed_-_shangmi_suite:_sm4_block_cipher.md => 05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc} (67%) create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc delete mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md diff --git a/auto-generated/vector-crypto/api-testing/vaesdf.c b/auto-generated/vector-crypto/api-testing/vaesdf.c index fac9c44ee..e5b912a42 100644 --- a/auto-generated/vector-crypto/api-testing/vaesdf.c +++ b/auto-generated/vector-crypto/api-testing/vaesdf.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32mf2(vd, vs2, vl); } @@ -79,8 +76,3 @@ vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t v vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/api-testing/vaesdm.c b/auto-generated/vector-crypto/api-testing/vaesdm.c index 17261e874..903beeddf 100644 --- a/auto-generated/vector-crypto/api-testing/vaesdm.c +++ b/auto-generated/vector-crypto/api-testing/vaesdm.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32mf2(vd, vs2, vl); } @@ -79,8 +76,3 @@ vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t v vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/api-testing/vaesef.c b/auto-generated/vector-crypto/api-testing/vaesef.c index 683ac6669..375059d4d 100644 --- a/auto-generated/vector-crypto/api-testing/vaesef.c +++ b/auto-generated/vector-crypto/api-testing/vaesef.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vv_u32mf2(vd, vs2, vl); } @@ -79,8 +76,3 @@ vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t v vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/api-testing/vaesem.c 
b/auto-generated/vector-crypto/api-testing/vaesem.c index dc67813e1..76aa9d61b 100644 --- a/auto-generated/vector-crypto/api-testing/vaesem.c +++ b/auto-generated/vector-crypto/api-testing/vaesem.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vv_u32mf2(vd, vs2, vl); } @@ -79,8 +76,3 @@ vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t v vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/api-testing/vaeskf1.c b/auto-generated/vector-crypto/api-testing/vaeskf1.c index 0d55e93ac..a6f2fbd00 100644 --- a/auto-generated/vector-crypto/api-testing/vaeskf1.c +++ b/auto-generated/vector-crypto/api-testing/vaeskf1.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32mf2(vs2, 0, vl); } @@ -23,4 +20,3 @@ vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m8(vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vaeskf2.c b/auto-generated/vector-crypto/api-testing/vaeskf2.c index 7509d6775..060b9874f 100644 --- a/auto-generated/vector-crypto/api-testing/vaeskf2.c +++ b/auto-generated/vector-crypto/api-testing/vaeskf2.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32mf2(vd, vs2, 0, vl); } @@ -23,4 +20,3 @@ vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m8(vd, vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vaesz.c b/auto-generated/vector-crypto/api-testing/vaesz.c index e5137944a..f3c6760ce 100644 --- a/auto-generated/vector-crypto/api-testing/vaesz.c +++ b/auto-generated/vector-crypto/api-testing/vaesz.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_vs_u32mf2_u32mf2(vd, vs2, vl); } @@ -59,8 +56,3 @@ vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m4_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/api-testing/vandn.c b/auto-generated/vector-crypto/api-testing/vandn.c index 50ca46138..7400c8a58 100644 --- a/auto-generated/vector-crypto/api-testing/vandn.c +++ b/auto-generated/vector-crypto/api-testing/vandn.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint8mf8_t 
test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vandn_vv_u8mf8(vs2, vs1, vl); } @@ -180,179 +177,178 @@ vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m8(vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf8_m(mask, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_m(vm, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf8_m(mask, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_m(vm, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf4_m(mask, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_m(vm, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf4_m(mask, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_m(vm, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf2_m(mask, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_m(vm, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf2_m(mask, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_m(vm, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m1_m(mask, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_m(vm, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m1_m(mask, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_m(vm, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m2_m(mask, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_m(vm, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m2_m(mask, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_m(vm, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m4_m(mask, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_m(vm, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t 
mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m4_m(mask, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_m(vm, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m8_m(mask, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_m(vm, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m8_m(mask, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_m(vm, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf4_m(mask, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_m(vm, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf4_m(mask, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf2_m(mask, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf2_m(mask, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m1_m(mask, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m1_m(mask, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m2_m(mask, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m2_m(mask, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m4_m(mask, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_m(vm, vs2, 
vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m4_m(mask, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m8_m(mask, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m8_m(mask, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_m(vm, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32mf2_m(mask, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32mf2_m(mask, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_m(vm, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m1_m(mask, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_m(vm, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m1_m(mask, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m2_m(mask, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m2_m(mask, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m4_m(mask, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m4_m(mask, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_m(vm, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m8_m(mask, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, 
vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m8_m(mask, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vbrev.c b/auto-generated/vector-crypto/api-testing/vbrev.c index 97d4855ac..fd22f6114 100644 --- a/auto-generated/vector-crypto/api-testing/vbrev.c +++ b/auto-generated/vector-crypto/api-testing/vbrev.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf8(vs2, vl); } @@ -92,91 +89,90 @@ vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vbrev_v_u64m8(vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf8_m(mask, vs2, vl); +vuint8mf8_t 
test_vbrev_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_m(vm, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf4_m(mask, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_m(vm, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf2_m(mask, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_m(vm, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m1_m(mask, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_m(vm, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m2_m(mask, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_m(vm, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m4_m(mask, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_m(vm, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m8_m(mask, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_m(vm, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf4_m(mask, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_m(vm, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf2_m(mask, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_m(vm, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m1_m(mask, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_m(vm, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m2_m(mask, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_m(vm, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m4_m(mask, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_m(vm, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m8_m(mask, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_m(vm, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32mf2_m(mask, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_m(vm, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m1_m(mask, vs2, vl); 
+vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_m(vm, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m2_m(mask, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_m(vm, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m4_m(mask, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_m(vm, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m8_m(mask, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_m(vm, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m1_m(mask, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_m(vm, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m2_m(mask, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_m(vm, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m4_m(mask, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_m(vm, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m8_m(mask, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8_m(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vbrev8.c b/auto-generated/vector-crypto/api-testing/vbrev8.c index 323154304..6d29c2665 100644 --- a/auto-generated/vector-crypto/api-testing/vbrev8.c +++ b/auto-generated/vector-crypto/api-testing/vbrev8.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf8(vs2, vl); } @@ -92,91 +89,90 @@ vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m8(vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf8_m(mask, vs2, vl); +vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_m(vm, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf4_m(mask, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_m(vm, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf2_m(mask, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_m(vm, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m1_m(mask, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_m(vm, vs2, vl); } 
-vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m2_m(mask, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_m(vm, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m4_m(mask, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_m(vm, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m8_m(mask, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_m(vm, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf4_m(mask, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_m(vm, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf2_m(mask, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_m(vm, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m1_m(mask, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_m(vm, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m2_m(mask, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_m(vm, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m4_m(mask, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_m(vm, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m8_m(mask, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_m(vm, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32mf2_m(mask, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_m(vm, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m1_m(mask, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_m(vm, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m2_m(mask, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_m(vm, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m4_m(mask, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_m(vm, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m8_m(mask, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t vm, 
vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_m(vm, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m1_m(mask, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_m(vm, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m2_m(mask, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_m(vm, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m4_m(mask, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_m(vm, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m8_m(mask, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_m(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vclmul.c b/auto-generated/vector-crypto/api-testing/vclmul.c index 615da37c2..3fd21fa7f 100644 --- a/auto-generated/vector-crypto/api-testing/vclmul.c +++ b/auto-generated/vector-crypto/api-testing/vclmul.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { return __riscv_vclmul_vv_u64m1(vs2, vs1, vl); } @@ -36,35 +33,34 @@ vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m8(vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return 
__riscv_vclmul_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vclmulh.c b/auto-generated/vector-crypto/api-testing/vclmulh.c index 37795dc1a..a4c69311e 100644 --- a/auto-generated/vector-crypto/api-testing/vclmulh.c +++ b/auto-generated/vector-crypto/api-testing/vclmulh.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { return __riscv_vclmulh_vv_u64m1(vs2, vs1, vl); } @@ -36,35 +33,34 @@ vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmulh_vx_u64m8(vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t 
rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vclz.c b/auto-generated/vector-crypto/api-testing/vclz.c index 655af1c63..1fa92a927 100644 --- a/auto-generated/vector-crypto/api-testing/vclz.c +++ b/auto-generated/vector-crypto/api-testing/vclz.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vclz_v_u8mf8(vs2, vl); } @@ -92,91 +89,90 @@ vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vclz_v_u64m8(vs2, vl); } -vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vclz_v_u8mf8_m(mask, vs2, vl); +vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf8_m(vm, vs2, vl); } -vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vclz_v_u8mf4_m(mask, vs2, vl); +vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4_m(vm, vs2, vl); } -vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vclz_v_u8mf2_m(mask, vs2, vl); +vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2_m(vm, vs2, vl); } -vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vclz_v_u8m1_m(mask, vs2, vl); +vuint8m1_t test_vclz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1_m(vm, vs2, vl); } -vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vclz_v_u8m2_m(mask, vs2, vl); +vuint8m2_t test_vclz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2_m(vm, vs2, vl); } -vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vclz_v_u8m4_m(mask, vs2, vl); +vuint8m4_t test_vclz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4_m(vm, vs2, vl); } -vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vclz_v_u8m8_m(mask, vs2, vl); +vuint8m8_t test_vclz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8_m(vm, vs2, vl); } -vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vclz_v_u16mf4_m(mask, vs2, vl); +vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4_m(vm, vs2, vl); } -vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vclz_v_u16mf2_m(mask, vs2, vl); +vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2_m(vm, vs2, vl); } -vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vclz_v_u16m1_m(mask, vs2, vl); +vuint16m1_t test_vclz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1_m(vm, vs2, vl); } -vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vclz_v_u16m2_m(mask, vs2, vl); +vuint16m2_t test_vclz_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2_m(vm, 
vs2, vl); } -vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vclz_v_u16m4_m(mask, vs2, vl); +vuint16m4_t test_vclz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4_m(vm, vs2, vl); } -vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vclz_v_u16m8_m(mask, vs2, vl); +vuint16m8_t test_vclz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8_m(vm, vs2, vl); } -vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vclz_v_u32mf2_m(mask, vs2, vl); +vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2_m(vm, vs2, vl); } -vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vclz_v_u32m1_m(mask, vs2, vl); +vuint32m1_t test_vclz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1_m(vm, vs2, vl); } -vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vclz_v_u32m2_m(mask, vs2, vl); +vuint32m2_t test_vclz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_v_u32m2_m(vm, vs2, vl); } -vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vclz_v_u32m4_m(mask, vs2, vl); +vuint32m4_t test_vclz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4_m(vm, vs2, vl); } -vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vclz_v_u32m8_m(mask, vs2, vl); +vuint32m8_t test_vclz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8_m(vm, vs2, vl); } -vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vclz_v_u64m1_m(mask, vs2, vl); +vuint64m1_t test_vclz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1_m(vm, vs2, vl); } -vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vclz_v_u64m2_m(mask, vs2, vl); +vuint64m2_t test_vclz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2_m(vm, vs2, vl); } -vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vclz_v_u64m4_m(mask, vs2, vl); +vuint64m4_t test_vclz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4_m(vm, vs2, vl); } -vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vclz_v_u64m8_m(mask, vs2, vl); +vuint64m8_t test_vclz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8_m(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vctz.c b/auto-generated/vector-crypto/api-testing/vctz.c index 262e6be9b..eadb46e90 100644 --- a/auto-generated/vector-crypto/api-testing/vctz.c +++ b/auto-generated/vector-crypto/api-testing/vctz.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vctz_v_u8mf8(vs2, vl); } @@ -92,91 +89,90 @@ vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vctz_v_u64m8(vs2, vl); } -vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vctz_v_u8mf8_m(mask, vs2, vl); +vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t vm, 
vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_m(vm, vs2, vl); } -vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vctz_v_u8mf4_m(mask, vs2, vl); +vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_m(vm, vs2, vl); } -vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vctz_v_u8mf2_m(mask, vs2, vl); +vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_m(vm, vs2, vl); } -vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vctz_v_u8m1_m(mask, vs2, vl); +vuint8m1_t test_vctz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_m(vm, vs2, vl); } -vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vctz_v_u8m2_m(mask, vs2, vl); +vuint8m2_t test_vctz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_m(vm, vs2, vl); } -vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vctz_v_u8m4_m(mask, vs2, vl); +vuint8m4_t test_vctz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4_m(vm, vs2, vl); } -vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vctz_v_u8m8_m(mask, vs2, vl); +vuint8m8_t test_vctz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8_m(vm, vs2, vl); } -vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vctz_v_u16mf4_m(mask, vs2, vl); +vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_m(vm, vs2, vl); } -vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vctz_v_u16mf2_m(mask, vs2, vl); +vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_m(vm, vs2, vl); } -vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vctz_v_u16m1_m(mask, vs2, vl); +vuint16m1_t test_vctz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_m(vm, vs2, vl); } -vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vctz_v_u16m2_m(mask, vs2, vl); +vuint16m2_t test_vctz_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_m(vm, vs2, vl); } -vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vctz_v_u16m4_m(mask, vs2, vl); +vuint16m4_t test_vctz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_m(vm, vs2, vl); } -vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vctz_v_u16m8_m(mask, vs2, vl); +vuint16m8_t test_vctz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_m(vm, vs2, vl); } -vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vctz_v_u32mf2_m(mask, vs2, vl); +vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_m(vm, vs2, vl); } -vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vctz_v_u32m1_m(mask, vs2, vl); +vuint32m1_t test_vctz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return 
__riscv_vctz_v_u32m1_m(vm, vs2, vl); } -vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vctz_v_u32m2_m(mask, vs2, vl); +vuint32m2_t test_vctz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_m(vm, vs2, vl); } -vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vctz_v_u32m4_m(mask, vs2, vl); +vuint32m4_t test_vctz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4_m(vm, vs2, vl); } -vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vctz_v_u32m8_m(mask, vs2, vl); +vuint32m8_t test_vctz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_m(vm, vs2, vl); } -vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vctz_v_u64m1_m(mask, vs2, vl); +vuint64m1_t test_vctz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_m(vm, vs2, vl); } -vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vctz_v_u64m2_m(mask, vs2, vl); +vuint64m2_t test_vctz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2_m(vm, vs2, vl); } -vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vctz_v_u64m4_m(mask, vs2, vl); +vuint64m4_t test_vctz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_m(vm, vs2, vl); } -vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vctz_v_u64m8_m(mask, vs2, vl); +vuint64m8_t test_vctz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8_m(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vghsh.c b/auto-generated/vector-crypto/api-testing/vghsh.c index b93ebfa2f..accbf01e5 100644 --- a/auto-generated/vector-crypto/api-testing/vghsh.c +++ b/auto-generated/vector-crypto/api-testing/vghsh.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vghsh_vv_u32mf2(vd, vs2, vs1, vl); } @@ -23,4 +20,3 @@ vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1 vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m8(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vgmul.c b/auto-generated/vector-crypto/api-testing/vgmul.c index 09521d4d0..4d9028a54 100644 --- a/auto-generated/vector-crypto/api-testing/vgmul.c +++ b/auto-generated/vector-crypto/api-testing/vgmul.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vgmul_vv_u32mf2(vd, vs2, vl); } @@ -23,4 +20,3 @@ vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vgmul_vv_u32m8(vd, vs2, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vrev8.c b/auto-generated/vector-crypto/api-testing/vrev8.c index 9d2ea220c..c0b367a61 100644 --- a/auto-generated/vector-crypto/api-testing/vrev8.c +++ b/auto-generated/vector-crypto/api-testing/vrev8.c @@ 
-1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf8(vs2, vl); } @@ -92,91 +89,90 @@ vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vrev8_v_u64m8(vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf8_m(mask, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_m(vm, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf4_m(mask, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_m(vm, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf2_m(mask, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_m(vm, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m1_m(mask, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_m(vm, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m2_m(mask, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_m(vm, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m4_m(mask, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_m(vm, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m8_m(mask, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_m(vm, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf4_m(mask, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_m(vm, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf2_m(mask, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_m(vm, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m1_m(mask, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_m(vm, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m2_m(mask, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_m(vm, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m4_m(mask, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_m(vm, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m8_m(mask, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t 
vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_m(vm, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32mf2_m(mask, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_m(vm, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m1_m(mask, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_m(vm, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m2_m(mask, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_m(vm, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m4_m(mask, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_m(vm, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m8_m(mask, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_m(vm, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m1_m(mask, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_m(vm, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m2_m(mask, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_m(vm, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m4_m(mask, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_m(vm, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m8_m(mask, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_m(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vrol.c b/auto-generated/vector-crypto/api-testing/vrol.c index 41fdc7637..f4ee9ffbb 100644 --- a/auto-generated/vector-crypto/api-testing/vrol.c +++ b/auto-generated/vector-crypto/api-testing/vrol.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf8(vs2, vs1, vl); } @@ -180,179 +177,178 @@ vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m8(vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf8_m(mask, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_m(vm, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf8_m(mask, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, 
size_t vl) { + return __riscv_vrol_vx_u8mf8_m(vm, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf4_m(mask, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_m(vm, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf4_m(mask, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_m(vm, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf2_m(mask, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_m(vm, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf2_m(mask, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_m(vm, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m1_m(mask, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_m(vm, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m1_m(mask, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_m(vm, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m2_m(mask, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_m(vm, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m2_m(mask, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_m(vm, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m4_m(mask, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_m(vm, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m4_m(mask, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_m(vm, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m8_m(mask, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_m(vm, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m8_m(mask, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_m(vm, vs2, rs1, vl); } -vuint16mf4_t 
test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf4_m(mask, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_m(vm, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf4_m(mask, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf2_m(mask, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf2_m(mask, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m1_m(mask, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m1_m(mask, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m2_m(mask, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m2_m(mask, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m4_m(mask, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_m(vm, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m4_m(mask, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m8_m(mask, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m8_m(mask, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_m(vm, vs2, rs1, vl); } 
-vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32mf2_m(mask, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32mf2_m(mask, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_m(vm, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m1_m(mask, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_m(vm, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m1_m(mask, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m2_m(mask, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m2_m(mask, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m4_m(mask, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m4_m(mask, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_m(vm, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m8_m(mask, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m8_m(mask, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_m(vm, vs2, rs1, vl); } 
-vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vror.c b/auto-generated/vector-crypto/api-testing/vror.c index c00b0b98e..9c8f32431 100644 --- a/auto-generated/vector-crypto/api-testing/vror.c +++ b/auto-generated/vector-crypto/api-testing/vror.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vror_vv_u8mf8(vs2, vs1, vl); } @@ -180,179 +177,178 @@ vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m8(vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf8_m(mask, vs2, vs1, vl); +vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_m(vm, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf8_m(mask, vs2, rs1, vl); +vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_m(vm, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf4_m(mask, vs2, vs1, vl); +vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_m(vm, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf4_m(mask, vs2, rs1, vl); +vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vror_vx_u8mf4_m(vm, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf2_m(mask, vs2, vs1, vl); +vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_m(vm, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf2_m(mask, vs2, rs1, vl); +vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_m(vm, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vror_vv_u8m1_m(mask, vs2, vs1, vl); +vuint8m1_t test_vror_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_m(vm, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m1_m(mask, vs2, rs1, vl); +vuint8m1_t test_vror_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_m(vm, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vror_vv_u8m2_m(mask, vs2, vs1, vl); +vuint8m2_t test_vror_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_m(vm, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m2_m(mask, vs2, rs1, vl); +vuint8m2_t test_vror_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_m(vm, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vror_vv_u8m4_m(mask, vs2, vs1, vl); +vuint8m4_t test_vror_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_m(vm, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m4_m(mask, vs2, rs1, vl); +vuint8m4_t test_vror_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_m(vm, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vror_vv_u8m8_m(mask, vs2, vs1, vl); +vuint8m8_t test_vror_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_m(vm, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m8_m(mask, vs2, rs1, vl); +vuint8m8_t test_vror_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_m(vm, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vror_vv_u16mf4_m(mask, vs2, vs1, vl); +vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_m(vm, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf4_m(mask, vs2, rs1, vl); +vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t 
test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u16mf2_m(mask, vs2, vs1, vl); +vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf2_m(mask, vs2, rs1, vl); +vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vror_vv_u16m1_m(mask, vs2, vs1, vl); +vuint16m1_t test_vror_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m1_m(mask, vs2, rs1, vl); +vuint16m1_t test_vror_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vror_vv_u16m2_m(mask, vs2, vs1, vl); +vuint16m2_t test_vror_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m2_m(mask, vs2, rs1, vl); +vuint16m2_t test_vror_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vror_vv_u16m4_m(mask, vs2, vs1, vl); +vuint16m4_t test_vror_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_m(vm, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m4_m(mask, vs2, rs1, vl); +vuint16m4_t test_vror_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vror_vv_u16m8_m(mask, vs2, vs1, vl); +vuint16m8_t test_vror_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m8_m(mask, vs2, rs1, vl); +vuint16m8_t test_vror_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_m(vm, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u32mf2_m(mask, vs2, vs1, vl); +vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32mf2_m(mask, vs2, rs1, vl); +vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_m(vm, vs2, rs1, vl); } 
-vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vror_vv_u32m1_m(mask, vs2, vs1, vl); +vuint32m1_t test_vror_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_m(vm, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m1_m(mask, vs2, rs1, vl); +vuint32m1_t test_vror_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vror_vv_u32m2_m(mask, vs2, vs1, vl); +vuint32m2_t test_vror_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m2_m(mask, vs2, rs1, vl); +vuint32m2_t test_vror_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vror_vv_u32m4_m(mask, vs2, vs1, vl); +vuint32m4_t test_vror_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m4_m(mask, vs2, rs1, vl); +vuint32m4_t test_vror_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_m(vm, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vror_vv_u32m8_m(mask, vs2, vs1, vl); +vuint32m8_t test_vror_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m8_m(mask, vs2, rs1, vl); +vuint32m8_t test_vror_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vror_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vror_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vror_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vror_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vror_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vror_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t 
test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vror_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vror_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vror_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vror_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vror_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vror_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vsha2ch.c b/auto-generated/vector-crypto/api-testing/vsha2ch.c index 8407a75e1..89c32480f 100644 --- a/auto-generated/vector-crypto/api-testing/vsha2ch.c +++ b/auto-generated/vector-crypto/api-testing/vsha2ch.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32mf2(vd, vs2, vs1, vl); } @@ -39,4 +36,3 @@ vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t v vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m8(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vsha2cl.c b/auto-generated/vector-crypto/api-testing/vsha2cl.c index e7a37c2e7..f213d6477 100644 --- a/auto-generated/vector-crypto/api-testing/vsha2cl.c +++ b/auto-generated/vector-crypto/api-testing/vsha2cl.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32mf2(vd, vs2, vs1, vl); } @@ -39,4 +36,3 @@ vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t v vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m8(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vsha2ms.c b/auto-generated/vector-crypto/api-testing/vsha2ms.c index 65b6fc728..77ef0289a 100644 --- a/auto-generated/vector-crypto/api-testing/vsha2ms.c +++ b/auto-generated/vector-crypto/api-testing/vsha2ms.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32mf2(vd, vs2, vs1, vl); } @@ -39,4 +36,3 @@ vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t v vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m8(vd, vs2, vs1, vl); } - diff --git 
a/auto-generated/vector-crypto/api-testing/vsm3c.c b/auto-generated/vector-crypto/api-testing/vsm3c.c index 355f4a519..67d0f776f 100644 --- a/auto-generated/vector-crypto/api-testing/vsm3c.c +++ b/auto-generated/vector-crypto/api-testing/vsm3c.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32mf2(vd, vs2, 0, vl); } @@ -23,4 +20,3 @@ vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m8(vd, vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vsm3me.c b/auto-generated/vector-crypto/api-testing/vsm3me.c index 5dd3d4007..5307ba8bb 100644 --- a/auto-generated/vector-crypto/api-testing/vsm3me.c +++ b/auto-generated/vector-crypto/api-testing/vsm3me.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32mf2(vs2, vs1, vl); } @@ -23,4 +20,3 @@ vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32m8(vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vsm4k.c b/auto-generated/vector-crypto/api-testing/vsm4k.c index d038e7157..a33e29d8a 100644 --- a/auto-generated/vector-crypto/api-testing/vsm4k.c +++ b/auto-generated/vector-crypto/api-testing/vsm4k.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32mf2(vs2, 0, vl); } @@ -23,4 +20,3 @@ vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m8(vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/api-testing/vsm4r.c b/auto-generated/vector-crypto/api-testing/vsm4r.c index d690e5618..b0c2fdfe1 100644 --- a/auto-generated/vector-crypto/api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/api-testing/vsm4r.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32mf2(vd, vs2, vl); } @@ -79,8 +76,3 @@ vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/api-testing/vwsll.c b/auto-generated/vector-crypto/api-testing/vwsll.c index 270591974..5e6a1a884 100644 --- a/auto-generated/vector-crypto/api-testing/vwsll.c +++ b/auto-generated/vector-crypto/api-testing/vwsll.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf4(vs2, vs1, vl); } @@ -124,123 +121,122 @@ vuint64m8_t 
test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8(vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf4_m(mask, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_m(vm, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16mf4_m(mask, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf2_m(mask, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16mf2_m(mask, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m1_m(mask, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m1_m(mask, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m2_m(mask, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m2_m(mask, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m4_m(mask, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_m(vm, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m4_m(mask, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m8_m(mask, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m8_m(mask, vs2, rs1, vl); 
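/*
 * Editor's note -- an illustrative sketch, not part of the auto-generated
 * patch: the hunks in this file only rename the mask operand from `mask`
 * to `vm`; the call sequence is unchanged. Assuming a toolchain with the
 * Zvbb extension enabled, a masked widening shift-left can be wrapped as
 * below (the helper name `widen_shift_active` is hypothetical):
 */
#include <riscv_vector.h>

static vuint16m1_t widen_shift_active(vbool16_t vm, vuint8mf2_t vs2,
                                      size_t shamt, size_t vl) {
  /* Each active element of vs2 is zero-extended to 16 bits and shifted
     left by shamt; inactive elements follow the _m variant's mask policy. */
  return __riscv_vwsll_vx_u16m1_m(vm, vs2, shamt, vl);
}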
+vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_m(vm, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32mf2_m(mask, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32mf2_m(mask, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_m(vm, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m1_m(mask, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_m(vm, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m1_m(mask, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m2_m(mask, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m2_m(mask, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m4_m(mask, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m4_m(mask, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_m(vm, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m8_m(mask, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m8_m(mask, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { - 
return __riscv_vwsll_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md index b5690f43c..4b6c01fc4 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/intrinsic_funcs.md @@ -1,10 +1,11 @@ -## Zvbb - Vector Bit-manipulation used in Cryptography: +=== Zvbb - Vector Bit-manipulation used in Cryptography -### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): +[[]] +==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not -**Prototypes:** -``` C +[,c] +---- vuint8mf8_t __riscv_vandn_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); vuint8mf8_t __riscv_vandn_vx_u8mf8 (vuint8mf8_t vs2, uint8_t rs1, size_t vl); vuint8mf4_t __riscv_vandn_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); @@ -50,56 +51,57 @@ vuint64m4_t __riscv_vandn_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); vuint64m8_t __riscv_vandn_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); vuint64m8_t __riscv_vandn_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, 
vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, 
vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); -``` - -### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): - -**Prototypes:** -``` C +vuint8mf8_t __riscv_vandn_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_m (vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t 
__riscv_vandn_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); +---- + +[[]] +==== Vector Basic Bit-manipulation - Reverse Bits in Elements + +[,c] +---- vuint8mf8_t __riscv_vbrev_v_u8mf8 (vuint8mf8_t vs2, size_t vl); vuint8mf4_t __riscv_vbrev_v_u8mf4 (vuint8mf4_t vs2, size_t vl); vuint8mf2_t __riscv_vbrev_v_u8mf2 (vuint8mf2_t vs2, size_t vl); @@ -167,78 +169,79 @@ vuint64m2_t __riscv_vrev8_v_u64m2 (vuint64m2_t vs2, size_t vl); vuint64m4_t __riscv_vrev8_v_u64m4 (vuint64m4_t vs2, size_t vl); vuint64m8_t __riscv_vrev8_v_u64m8 (vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, 
size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_m (vbool8_t mask, 
vuint64m8_t vs2, size_t vl); -``` - -### [Vector Bit-manipulation used in Cryptography - Count Bits](): - -**Prototypes:** -``` C +vuint8mf8_t __riscv_vbrev_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t 
__riscv_vbrev8_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[]] +==== Vector Basic Bit-manipulation - Count Bits + +[,c] +---- vuint8mf8_t __riscv_vclz_v_u8mf8 (vuint8mf8_t vs2, size_t vl); vuint8mf4_t __riscv_vclz_v_u8mf4 (vuint8mf4_t vs2, size_t vl); vuint8mf2_t __riscv_vclz_v_u8mf2 (vuint8mf2_t vs2, size_t vl); @@ -284,56 +287,57 @@ vuint64m2_t __riscv_vctz_v_u64m2 (vuint64m2_t vs2, size_t vl); vuint64m4_t __riscv_vctz_v_u64m4 (vuint64m4_t vs2, size_t vl); vuint64m8_t __riscv_vctz_v_u64m8 (vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vclz_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); 
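/*
 * Editor's note -- an illustrative sketch, not part of the auto-generated
 * patch: the count-bits intrinsics listed in this section (vclz/vctz)
 * return one count per element. Assuming Zvbb support, a masked
 * leading-zero count can be used as below (the helper name `clz_active`
 * is hypothetical):
 */
#include <riscv_vector.h>

static vuint32m1_t clz_active(vbool32_t vm, vuint32m1_t vs2, size_t vl) {
  /* Counts leading zeros of each element where vm is set; inactive
     elements follow the _m variant's mask policy. */
  return __riscv_vclz_v_u32m1_m(vm, vs2, vl);
}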
-vuint32m1_t __riscv_vclz_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl); -``` - -### [Vector Bit-manipulation used in Cryptography - Rotate](): - -**Prototypes:** -``` C +vuint8mf8_t __riscv_vclz_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_m (vbool2_t vm, vuint16m8_t 
vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[]] +==== Vector Bit-manipulation used in Cryptography - Rotate + +[,c] +---- vuint8mf8_t __riscv_vrol_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); vuint8mf8_t __riscv_vrol_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl); vuint8mf4_t __riscv_vrol_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); @@ -423,100 +427,101 @@ vuint64m4_t __riscv_vror_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vror_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); vuint64m8_t __riscv_vror_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, 
vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_m 
(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); 
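/*
 * Editor's note -- an illustrative sketch, not part of the auto-generated
 * patch: vrol/vror rotate each element by a per-element (vv) or scalar
 * (vx) amount, with the rotate amount effectively taken modulo SEW.
 * Assuming Zvbb support, the unmasked and masked variants pair up as
 * below (the helper names `ror_all` and `ror_active` are hypothetical):
 */
#include <riscv_vector.h>

static vuint64m4_t ror_all(vuint64m4_t vs2, size_t amount, size_t vl) {
  return __riscv_vror_vx_u64m4(vs2, amount, vl); /* rotates every element */
}

static vuint64m4_t ror_active(vbool16_t vm, vuint64m4_t vs2, size_t amount,
                              size_t vl) {
  return __riscv_vror_vx_u64m4_m(vm, vs2, amount, vl); /* only where vm=1 */
}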
-vuint32m8_t __riscv_vror_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); -``` - -### [Vector Bit-manipulation used in Cryptography - Shift](): - -**Prototypes:** -``` C +vuint8mf8_t __riscv_vrol_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t 
__riscv_vrol_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_m (vbool16_t vm, 
vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); +---- + +[[]] +==== Vector Basic Bit-manipulation used - Widening Shift + +[,c] +---- vuint16mf4_t __riscv_vwsll_vv_u16mf4 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); vuint16mf4_t __riscv_vwsll_vx_u16mf4 (vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll_vv_u16mf2 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); @@ -548,44 +553,45 @@ vuint64m4_t __riscv_vwsll_vx_u64m4 (vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll_vv_u64m8 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); vuint64m8_t __riscv_vwsll_vx_u64m8 (vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t 
__riscv_vwsll_vv_u16m2_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); -``` - -## Zvbc - Vector Carryless Multiplication: - -### [Vector Carryless Multiplication](): - -**Prototypes:** -``` C +vuint16mf4_t __riscv_vwsll_vv_u16mf4_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t 
__riscv_vwsll_vx_u16m8_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +---- + +=== Zvbc - Vector Carryless Multiplication + +[[]] +==== Vector Carryless Multiplication + +[,c] +---- vuint64m1_t __riscv_vclmul_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); vuint64m1_t __riscv_vclmul_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); vuint64m2_t __riscv_vclmul_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); @@ -603,30 +609,31 @@ vuint64m4_t __riscv_vclmulh_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); vuint64m8_t __riscv_vclmulh_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); vuint64m8_t __riscv_vclmulh_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, 
vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); -``` - -## Zvkg - Vector GCM/GMAC: - -### [Vector GCM/GMAC](): - -**Prototypes:** -``` C +vuint64m1_t __riscv_vclmul_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); +---- + +=== Zvkg - Vector GCM/GMAC + +[[]] +==== Vector GCM/GMAC + +[,c] +---- vuint32mf2_t __riscv_vghsh_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vghsh_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vghsh_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -637,14 +644,15 @@ vuint32m1_t __riscv_vgmul_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vgmul_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vgmul_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vgmul_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -## Zvkned - NIST Suite: Vector AES Block Cipher: +=== Zvkned - NIST Suite: Vector AES Block Cipher -### [Vector AES Encryption](): +[[]] +==== Vector AES Encryption -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesef_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -664,7 +672,6 @@ vuint32m4_t __riscv_vaesef_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl) vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4 
(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -684,13 +691,13 @@ vuint32m4_t __riscv_vaesem_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl) vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -### [Vector AES Decryption](): +[[]] +==== Vector AES Decryption -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesdf_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -710,7 +717,6 @@ vuint32m4_t __riscv_vaesdf_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl) vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -730,13 +736,13 @@ vuint32m4_t __riscv_vaesdm_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl) vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -### [Vector AES-128 Forward KeySchedule generation](): +[[]] +==== Vector AES-128 Forward KeySchedule generation -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaeskf1_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vaeskf1_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vaeskf1_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); @@ -747,12 +753,13 @@ vuint32m1_t __riscv_vaeskf2_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t ui vuint32m2_t __riscv_vaeskf2_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf2_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- -### [Vector AES round zero](): +[[]] +==== Vector AES round zero -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2 (vuint32mf2_t vd, 
vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); @@ -767,15 +774,15 @@ vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_ vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: +=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash -### [Vector SHA-2 message schedule](): +[[]] +==== Vector SHA-2 message schedule -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsha2ms_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsha2ms_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsha2ms_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -785,12 +792,13 @@ vuint64m1_t __riscv_vsha2ms_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1 vuint64m2_t __riscv_vsha2ms_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); vuint64m4_t __riscv_vsha2ms_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2ms_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -``` +---- -### [Vector SHA-2 two rounds of compression](): +[[]] +==== Vector SHA-2 two rounds of compression -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsha2ch_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsha2ch_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsha2ch_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -809,25 +817,27 @@ vuint64m1_t __riscv_vsha2cl_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1 vuint64m2_t __riscv_vsha2cl_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); vuint64m4_t __riscv_vsha2cl_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2cl_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -``` +---- -## Zvksed - ShangMi Suite: SM4 Block Cipher: +=== Zvksed - ShangMi Suite: SM4 Block Cipher -### [Vector SM4 KeyExpansion](): +[[]] +==== Vector SM4 KeyExpansion -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm4k_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vsm4k_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vsm4k_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vsm4k_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vsm4k_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- -### [Vector SM4 Rounds](): +[[]] +==== Vector SM4 Rounds -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm4r_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -847,29 +857,30 @@ vuint32m4_t __riscv_vsm4r_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t 
__riscv_vsm4r_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -## Zvksh - ShangMi Suite: SM3 Secure Hash: +=== Zvksh - ShangMi Suite: SM3 Secure Hash -### [Vector SM3 Message Expansion](): +[[]] +==== Vector SM3 Message Expansion -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm3me_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsm3me_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsm3me_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); vuint32m4_t __riscv_vsm3me_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); vuint32m8_t __riscv_vsm3me_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -``` +---- -### [Vector SM3 Message Expansion](): +[[]] +==== Vector SM3 Compression -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm3c_vi_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vsm3c_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vsm3c_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vsm3c_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vsm3c_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc new file mode 100644 index 000000000..25b9c4d67 --- /dev/null +++ b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -0,0 +1,586 @@ + +=== Zvbb - Vector Bit-manipulation used in Cryptography + +[[]] +==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not + +[,c] +---- +vuint8mf8_t __riscv_vandn_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8 (vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4 (vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2 (vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1 (vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2 (vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4 (vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8 (vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4 (vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2 (vuint16mf2_t 
vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1 (vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2 (vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4 (vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8 (vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2 (vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1 (vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2 (vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4 (vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8 (vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_m (vbool64_t vm, 
vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_m (vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); +---- + +[[]] +==== Vector Basic Bit-manipulation - Reverse Bits in Elements + +[,c] +---- +vuint8mf8_t __riscv_vbrev_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2 (vuint16mf2_t vs2, size_t vl); 
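+// Illustrative usage sketch (not generator output; assumes <riscv_vector.h>,
+// the Zvbb extension, and hypothetical uint16_t buffers src/dst of length n;
+// a single strip-mine iteration is shown):
+//   size_t vl = __riscv_vsetvl_e16mf2 (n);
+//   vuint16mf2_t v = __riscv_vle16_v_u16mf2 (src, vl);
+//   v = __riscv_vbrev_v_u16mf2 (v, vl);  // reverse the bit order inside each element
+//   __riscv_vse16_v_u16mf2 (dst, v, vl);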
+vuint16m1_t __riscv_vbrev_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t 
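+// Illustrative usage sketch (not generator output; assumes <riscv_vector.h>
+// and the Zvbb extension): vrev8 swaps the bytes within each element, so a
+// hypothetical big-endian uint16_t buffer buf can be converted in place for
+// its first vl elements with:
+//   size_t vl = __riscv_vsetvl_e16mf2 (n);
+//   vuint16mf2_t v = __riscv_vle16_v_u16mf2 (buf, vl);
+//   __riscv_vse16_v_u16mf2 (buf, __riscv_vrev8_v_u16mf2 (v, vl), vl);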
__riscv_vrev8_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8 (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t 
__riscv_vbrev8_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[]] +==== Vector Basic Bit-manipulation - Count Bits + +[,c] +---- +vuint8mf8_t __riscv_vclz_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8 (vuint8mf8_t vs2, size_t vl); 
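+// Illustrative usage sketch (not generator output; assumes <riscv_vector.h>
+// and the Zvbb extension). Per the Zvbb specification, vclz/vctz produce SEW
+// for an all-zero element:
+//   size_t vl = __riscv_vsetvl_e8m1 (n);            // n and src are hypothetical
+//   vuint8m1_t v = __riscv_vle8_v_u8m1 (src, vl);
+//   vuint8m1_t lz = __riscv_vclz_v_u8m1 (v, vl);    // lz[i] == 8 when v[i] == 0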
+vuint8mf4_t __riscv_vctz_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8 (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_m (vbool2_t vm, 
vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[]] +==== Vector Bit-manipulation used in Cryptography - Rotate + +[,c] +---- +vuint8mf8_t __riscv_vrol_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t 
__riscv_vrol_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t 
vl); +vuint32m1_t __riscv_vror_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, 
size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t 
__riscv_vror_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vror_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vror_vx_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vror_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vror_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vror_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vror_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vror_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vror_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vror_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32mf2_t __riscv_vror_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vror_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vror_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vror_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vror_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vror_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vror_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vror_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vror_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vror_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vror_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vror_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vror_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vror_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vror_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vror_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vror_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl);
+----
+
+[[]]
+==== Vector Basic Bit-manipulation used - Widening Shift
+
+[,c]
+----
+vuint16mf4_t __riscv_vwsll_vv_u16mf4 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_vx_u16mf4 (vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vv_u16mf2 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vx_u16mf2 (vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vv_u16m1 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vx_u16m1 (vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vv_u16m2 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vx_u16m2 (vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vv_u16m4 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vx_u16m4 (vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vv_u16m8 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vx_u16m8 (vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vv_u32mf2 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vx_u32mf2 (vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vv_u32m1 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vx_u32m1 (vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vv_u32m2 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vx_u32m2 (vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vv_u32m4 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vx_u32m4 (vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vv_u32m8 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vx_u32m8 (vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vv_u64m1 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vx_u64m1 (vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vv_u64m2 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vx_u64m2 (vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vv_u64m4 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vx_u64m4 (vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vv_u64m8 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vx_u64m8 (vuint32m4_t vs2, size_t rs1, size_t vl);
+// masked functions
+vuint16mf4_t __riscv_vwsll_vv_u16mf4_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vv_u16mf2_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vv_u16m1_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vv_u16m2_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vv_u16m4_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vv_u16m8_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vx_u16m8_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vv_u32mf2_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vv_u32m1_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vv_u32m2_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vv_u32m4_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll_vx_u32m4_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vv_u32m8_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vv_u64m1_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vv_u64m2_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vv_u64m4_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vv_u64m8_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl);
+----
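A minimal usage sketch for the Zvbb listings above (illustrative only, not part of the generated file): the unmasked `vror.vx` intrinsic rotates every element by a scalar amount, and a strip-mined loop with `vsetvl` handles any buffer length. The helper name `rotate_words_right` is hypothetical, and the sketch assumes a toolchain with Zvbb intrinsic support.

[,c]
----
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// Hypothetical helper: rotate each 32-bit word of src right by amount bits.
// Strip-mines with vsetvl so any element count n is handled.
void rotate_words_right(uint32_t *dst, const uint32_t *src, size_t n,
                        size_t amount) {
  while (n > 0) {
    size_t vl = __riscv_vsetvl_e32m4(n);            // elements this pass
    vuint32m4_t v = __riscv_vle32_v_u32m4(src, vl); // load a strip
    v = __riscv_vror_vx_u32m4(v, amount, vl);       // vror.vx rotate-right
    __riscv_vse32_v_u32m4(dst, v, vl);              // store the strip
    src += vl; dst += vl; n -= vl;
  }
}
----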
diff --git a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md
deleted file mode 100644
index 26b6260a4..000000000
--- a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md
+++ /dev/null
@@ -1,581 +0,0 @@
-
-## Zvbb - Vector Bit-manipulation used in Cryptography:
-
-### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not]():
-
-**Prototypes:**
-``` C
-vuint8mf8_t __riscv_vandn_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vandn_vx_u8mf8 (vuint8mf8_t vs2, uint8_t rs1, size_t vl);
-vuint8mf4_t __riscv_vandn_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vandn_vx_u8mf4 (vuint8mf4_t vs2, uint8_t rs1, size_t vl);
-vuint8mf2_t __riscv_vandn_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vandn_vx_u8mf2 (vuint8mf2_t vs2, uint8_t rs1, size_t vl);
-vuint8m1_t __riscv_vandn_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vandn_vx_u8m1 (vuint8m1_t vs2, uint8_t rs1, size_t vl);
-vuint8m2_t __riscv_vandn_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vandn_vx_u8m2 (vuint8m2_t vs2, uint8_t rs1, size_t vl);
-vuint8m4_t __riscv_vandn_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vandn_vx_u8m4 (vuint8m4_t vs2, uint8_t rs1, size_t vl);
-vuint8m8_t __riscv_vandn_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vandn_vx_u8m8 (vuint8m8_t vs2, uint8_t rs1, size_t vl);
-vuint16mf4_t __riscv_vandn_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vandn_vx_u16mf4 (vuint16mf4_t vs2, uint16_t rs1, size_t vl);
-vuint16mf2_t __riscv_vandn_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vandn_vx_u16mf2 (vuint16mf2_t vs2, uint16_t rs1, size_t vl);
-vuint16m1_t __riscv_vandn_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vandn_vx_u16m1 (vuint16m1_t vs2, uint16_t rs1, size_t vl);
-vuint16m2_t __riscv_vandn_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vandn_vx_u16m2 (vuint16m2_t vs2, uint16_t rs1, size_t vl);
-vuint16m4_t __riscv_vandn_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vandn_vx_u16m4 (vuint16m4_t vs2, uint16_t rs1, size_t vl);
-vuint16m8_t __riscv_vandn_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vandn_vx_u16m8 (vuint16m8_t vs2, uint16_t rs1, size_t vl);
-vuint32mf2_t __riscv_vandn_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vandn_vx_u32mf2 (vuint32mf2_t vs2, uint32_t rs1, size_t vl);
-vuint32m1_t __riscv_vandn_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vandn_vx_u32m1 (vuint32m1_t vs2, uint32_t rs1, size_t vl);
-vuint32m2_t __riscv_vandn_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vandn_vx_u32m2 (vuint32m2_t vs2, uint32_t rs1, size_t vl);
-vuint32m4_t __riscv_vandn_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vandn_vx_u32m4 (vuint32m4_t vs2, uint32_t rs1, size_t vl);
-vuint32m8_t __riscv_vandn_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vandn_vx_u32m8 (vuint32m8_t vs2, uint32_t rs1, size_t vl);
-vuint64m1_t __riscv_vandn_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vandn_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vandn_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vandn_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vandn_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vandn_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vandn_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vandn_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl);
-// masked functions
-vuint8mf8_t __riscv_vandn_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vandn_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl);
-vuint8mf4_t __riscv_vandn_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vandn_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl);
-vuint8mf2_t __riscv_vandn_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vandn_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl);
-vuint8m1_t __riscv_vandn_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vandn_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl);
-vuint8m2_t __riscv_vandn_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vandn_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl);
-vuint8m4_t __riscv_vandn_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vandn_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl);
-vuint8m8_t __riscv_vandn_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vandn_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl);
-vuint16mf4_t __riscv_vandn_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vandn_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl);
-vuint16mf2_t __riscv_vandn_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vandn_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl);
-vuint16m1_t __riscv_vandn_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vandn_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl);
-vuint16m2_t __riscv_vandn_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vandn_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl);
-vuint16m4_t __riscv_vandn_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vandn_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl);
-vuint16m8_t __riscv_vandn_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vandn_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl);
-vuint32mf2_t __riscv_vandn_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vandn_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl);
-vuint32m1_t __riscv_vandn_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vandn_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl);
-vuint32m2_t __riscv_vandn_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vandn_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl);
-vuint32m4_t __riscv_vandn_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vandn_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl);
-vuint32m8_t __riscv_vandn_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vandn_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl);
-vuint64m1_t __riscv_vandn_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vandn_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vandn_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vandn_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vandn_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vandn_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vandn_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vandn_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-```
-
-### [Vector Bit-manipulation used in Cryptography - Reverse Bits]():
-
-**Prototypes:**
-``` C
-vuint8mf8_t __riscv_vbrev_v_u8mf8 (vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev_v_u8mf4 (vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev_v_u8mf2 (vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev_v_u8m1 (vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev_v_u8m2 (vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev_v_u8m4 (vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev_v_u8m8 (vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev_v_u16mf4 (vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev_v_u16mf2 (vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev_v_u16m1 (vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev_v_u16m2 (vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev_v_u16m4 (vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev_v_u16m8 (vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev_v_u32mf2 (vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev_v_u32m1 (vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev_v_u32m2 (vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev_v_u32m4 (vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev_v_u32m8 (vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev_v_u64m1 (vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev_v_u64m2 (vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev_v_u64m4 (vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev_v_u64m8 (vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vbrev8_v_u8mf8 (vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev8_v_u8m1 (vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev8_v_u8m2 (vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev8_v_u8m4 (vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev8_v_u8m8 (vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev8_v_u16m1 (vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev8_v_u16m2 (vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev8_v_u16m4 (vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev8_v_u16m8 (vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev8_v_u32m1 (vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev8_v_u32m2 (vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev8_v_u32m4 (vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev8_v_u32m8 (vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev8_v_u64m1 (vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev8_v_u64m2 (vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev8_v_u64m4 (vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev8_v_u64m8 (vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vrev8_v_u8mf8 (vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vrev8_v_u8m1 (vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vrev8_v_u8m2 (vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vrev8_v_u8m4 (vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vrev8_v_u8m8 (vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vrev8_v_u16m1 (vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vrev8_v_u16m2 (vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vrev8_v_u16m4 (vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vrev8_v_u16m8 (vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vrev8_v_u32m1 (vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vrev8_v_u32m2 (vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vrev8_v_u32m4 (vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vrev8_v_u32m8 (vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vrev8_v_u64m1 (vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vrev8_v_u64m2 (vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vrev8_v_u64m4 (vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vrev8_v_u64m8 (vuint64m8_t vs2, size_t vl);
-// masked functions
-vuint8mf8_t __riscv_vbrev_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vbrev8_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev8_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev8_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev8_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev8_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev8_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev8_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev8_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev8_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev8_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev8_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev8_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev8_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev8_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev8_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev8_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev8_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev8_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev8_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev8_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev8_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev8_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vrev8_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vrev8_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vrev8_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vrev8_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vrev8_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vrev8_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vrev8_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vrev8_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vrev8_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vrev8_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vrev8_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vrev8_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vrev8_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vrev8_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vrev8_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vrev8_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vrev8_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vrev8_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vrev8_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vrev8_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vrev8_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vrev8_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl);
-```
-
-### [Vector Basic Bit-manipulation - Count Bits]():
-
-**Prototypes:**
-``` C
-vuint8mf8_t __riscv_vclz_v_u8mf8 (vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vclz_v_u8mf4 (vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vclz_v_u8mf2 (vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vclz_v_u8m1 (vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vclz_v_u8m2 (vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vclz_v_u8m4 (vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vclz_v_u8m8 (vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vclz_v_u16mf4 (vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vclz_v_u16mf2 (vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vclz_v_u16m1 (vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vclz_v_u16m2 (vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vclz_v_u16m4 (vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vclz_v_u16m8 (vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vclz_v_u32mf2 (vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vclz_v_u32m1 (vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vclz_v_u32m2 (vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vclz_v_u32m4 (vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vclz_v_u32m8 (vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vclz_v_u64m1 (vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vclz_v_u64m2 (vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vclz_v_u64m4 (vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vclz_v_u64m8 (vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vctz_v_u8mf8 (vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vctz_v_u8mf4 (vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vctz_v_u8mf2 (vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vctz_v_u8m1 (vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vctz_v_u8m2 (vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vctz_v_u8m4 (vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vctz_v_u8m8 (vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vctz_v_u16mf4 (vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vctz_v_u16mf2 (vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vctz_v_u16m1 (vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vctz_v_u16m2 (vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vctz_v_u16m4 (vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vctz_v_u16m8 (vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vctz_v_u32mf2 (vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vctz_v_u32m1 (vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vctz_v_u32m2 (vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vctz_v_u32m4 (vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vctz_v_u32m8 (vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vctz_v_u64m1 (vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vctz_v_u64m2 (vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vctz_v_u64m4 (vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vctz_v_u64m8 (vuint64m8_t vs2, size_t vl);
-// masked functions
-vuint8mf8_t __riscv_vclz_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vclz_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vclz_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vclz_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vclz_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vclz_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vclz_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vclz_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vclz_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vclz_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vclz_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vclz_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vclz_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vclz_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vclz_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vclz_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vclz_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vclz_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vclz_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vclz_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vclz_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vclz_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vctz_v_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vctz_v_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vctz_v_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vctz_v_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vctz_v_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vctz_v_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vctz_v_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vctz_v_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vctz_v_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vctz_v_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vctz_v_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vctz_v_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vctz_v_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vctz_v_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vctz_v_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vctz_v_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vctz_v_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vctz_v_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vctz_v_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vctz_v_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vctz_v_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vctz_v_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t vl);
-```
-
-### [Vector Bit-manipulation used in Cryptography - Rotate]():
-
-**Prototypes:**
-``` C
-vuint8mf8_t __riscv_vrol_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vrol_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vrol_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vrol_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vrol_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vrol_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vrol_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vrol_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vrol_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vrol_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vrol_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vrol_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vrol_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vrol_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vrol_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vrol_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vrol_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vrol_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vrol_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vrol_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vrol_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vrol_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vrol_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vrol_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vrol_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vrol_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vrol_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vrol_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vrol_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vrol_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vrol_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vrol_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vrol_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vrol_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vrol_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vrol_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vrol_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vrol_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vrol_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vrol_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vrol_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vrol_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl);
-vuint8mf8_t __riscv_vror_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vror_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vror_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vror_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vror_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vror_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vror_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vror_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vror_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vror_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vror_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vror_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vror_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vror_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vror_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vror_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vror_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vror_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vror_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vror_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vror_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vror_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vror_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vror_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vror_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vror_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vror_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vror_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vror_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vror_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vror_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vror_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vror_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vror_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vror_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vror_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vror_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vror_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vror_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vror_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vror_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vror_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vror_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vror_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl);
-// masked functions
-vuint8mf8_t __riscv_vrol_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vrol_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vrol_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vrol_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vrol_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vrol_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vrol_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vrol_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vrol_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vrol_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vrol_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vrol_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vrol_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vrol_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vrol_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vrol_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vrol_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vrol_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vrol_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vrol_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vrol_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vrol_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vrol_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vrol_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vrol_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vrol_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vrol_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vrol_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vrol_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vrol_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vrol_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vrol_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vrol_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vrol_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vrol_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vrol_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vrol_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vrol_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vrol_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vrol_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vrol_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vrol_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl);
-vuint8mf8_t __riscv_vror_vv_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vror_vx_u8mf8_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vror_vv_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vror_vx_u8mf4_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vror_vv_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vror_vx_u8mf2_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vror_vv_u8m1_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vror_vx_u8m1_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vror_vv_u8m2_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vror_vx_u8m2_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vror_vv_u8m4_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vror_vx_u8m4_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vror_vv_u8m8_m (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vror_vx_u8m8_m (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vror_vv_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vror_vx_u16mf4_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vror_vv_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vror_vx_u16mf2_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vror_vv_u16m1_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vror_vx_u16m1_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vror_vv_u16m2_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vror_vx_u16m2_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vror_vv_u16m4_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vror_vx_u16m4_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vror_vv_u16m8_m (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vror_vx_u16m8_m (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vror_vv_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vror_vx_u32mf2_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vror_vv_u32m1_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vror_vx_u32m1_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vror_vv_u32m2_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vror_vx_u32m2_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vror_vv_u32m4_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vror_vx_u32m4_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vror_vv_u32m8_m (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vror_vx_u32m8_m (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vror_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vror_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vror_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vror_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vror_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vror_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vror_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vror_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl);
-```
-
-### [Vector Bit-manipulation used in Cryptography - Shift]():
-
-**Prototypes:**
-``` C
-vuint16mf4_t __riscv_vwsll_vv_u16mf4 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_vx_u16mf4 (vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vv_u16mf2 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vx_u16mf2 (vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vv_u16m1 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vx_u16m1 (vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vv_u16m2 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vx_u16m2 (vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vv_u16m4 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vx_u16m4 (vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vv_u16m8 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vx_u16m8 (vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vv_u32mf2 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vx_u32mf2 (vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vv_u32m1 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vx_u32m1 (vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vv_u32m2 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vx_u32m2 (vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vv_u32m4 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vx_u32m4 (vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vv_u32m8 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vx_u32m8 (vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vv_u64m1 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vx_u64m1 (vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vv_u64m2 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vx_u64m2 (vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vv_u64m4 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vx_u64m4 (vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vv_u64m8 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vx_u64m8 (vuint32m4_t vs2, size_t rs1, size_t vl);
-// masked functions
-vuint16mf4_t __riscv_vwsll_vv_u16mf4_m (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vv_u16mf2_m (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vv_u16m1_m (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vv_u16m2_m (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vv_u16m4_m (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vv_u16m8_m (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vx_u16m8_m (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vv_u32mf2_m (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vv_u32m1_m (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vv_u32m2_m (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vv_u32m4_m (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vx_u32m4_m (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vv_u32m8_m (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vv_u64m1_m (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vv_u64m2_m (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vv_u64m4_m (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vv_u64m8_m (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl);
-```
diff --git a/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc b/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
new file mode 100644
index 000000000..6e9c0a1b9
--- /dev/null
+++ b/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
@@ -0,0 +1,42 @@
+
+=== Zvbc - Vector Carryless Multiplication
+
+[[]]
+==== Vector Carryless Multiplication
+
+[,c]
+----
+vuint64m1_t __riscv_vclmul_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl);
+// masked functions
+vuint64m1_t __riscv_vclmul_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+----
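A minimal usage sketch for the Zvbc listings above (illustrative only, not part of the generated file): `vclmul` and `vclmulh` return the low and high 64 bits of each 128-bit carryless product, so a full product takes both intrinsics. The helper name `clmul_by_poly` is hypothetical and assumes a Zvbc-capable toolchain.

[,c]
----
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// Hypothetical helper: carryless-multiply (GF(2)[x]) each 64-bit element of
// a by the constant polynomial poly. vclmul gives bits 63:0 of each product,
// vclmulh gives bits 127:64.
void clmul_by_poly(uint64_t *lo, uint64_t *hi, const uint64_t *a,
                   uint64_t poly, size_t n) {
  while (n > 0) {
    size_t vl = __riscv_vsetvl_e64m2(n);
    vuint64m2_t va = __riscv_vle64_v_u64m2(a, vl);
    __riscv_vse64_v_u64m2(lo, __riscv_vclmul_vx_u64m2(va, poly, vl), vl);
    __riscv_vse64_v_u64m2(hi, __riscv_vclmulh_vx_u64m2(va, poly, vl), vl);
    a += vl; lo += vl; hi += vl; n -= vl;
  }
}
----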
diff --git a/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md b/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md
deleted file mode 100644
index 4d41e53cc..000000000
--- a/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md
+++ /dev/null
@@ -1,41 +0,0 @@
-
-## Zvbc - Vector Carryless Multiplication:
-
-### [Vector Carryless Multiplication]():
-
-**Prototypes:**
-``` C
-vuint64m1_t __riscv_vclmul_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl);
-// masked functions
-vuint64m1_t __riscv_vclmul_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vv_u64m1_m (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vx_u64m1_m (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vv_u64m2_m (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vx_u64m2_m (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vv_u64m4_m (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vx_u64m4_m (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vv_u64m8_m (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vx_u64m8_m (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-```
diff --git a/auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md b/auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
similarity index 92%
rename from auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md
rename to auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
index 5e3e8fcf8..83f9816cb 100644
--- a/auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md
+++ b/auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
@@ -1,10 +1,11 @@
 
-## Zvkg - Vector GCM/GMAC:
+=== Zvkg - Vector GCM/GMAC
 
-### [Vector GCM/GMAC]():
+[[]]
+==== Vector GCM/GMAC
 
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vghsh_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
 vuint32m1_t __riscv_vghsh_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
 vuint32m2_t __riscv_vghsh_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
@@ -15,4 +16,4 @@ vuint32m1_t __riscv_vgmul_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
 vuint32m2_t __riscv_vgmul_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
 vuint32m4_t __riscv_vgmul_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vgmul_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
diff --git a/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc
similarity index 92%
rename from auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md
rename to auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc
index 5a9f440a2..929328cba 100644
--- a/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md
+++ b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc
@@ -1,10 +1,11 @@
 
-## Zvkned - NIST Suite: Vector AES Block Cipher:
+=== Zvkned - NIST Suite: Vector AES Block Cipher
 
-### [Vector AES Encryption]():
+[[]]
+==== Vector AES Encryption
 
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vaesef_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -24,7 +25,6 @@ vuint32m4_t __riscv_vaesef_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl)
 vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesef_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesem_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -44,13 +44,13 @@ vuint32m4_t __riscv_vaesem_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl)
 vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesem_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
 
-### [Vector AES Decryption]():
+[[]]
+==== Vector AES Decryption
 
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vaesdf_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -70,7 +70,6 @@ vuint32m4_t __riscv_vaesdf_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl)
 vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesdf_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesdm_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -90,13 +89,13 @@ vuint32m4_t __riscv_vaesdm_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl)
 vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesdm_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
 
-### [Vector AES-128 Forward KeySchedule generation]():
+[[]]
+==== Vector AES-128 Forward KeySchedule generation
 
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vaeskf1_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl);
 vuint32m1_t __riscv_vaeskf1_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl);
 vuint32m2_t __riscv_vaeskf1_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl);
@@ -107,12 +106,13 @@ vuint32m1_t __riscv_vaeskf2_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t ui
 vuint32m2_t __riscv_vaeskf2_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
 vuint32m4_t __riscv_vaeskf2_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
 vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
-```
+----
 
-### [Vector AES round zero]():
+[[]]
+==== Vector AES round zero
 
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
@@ -127,5 +127,4 @@ vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_
 vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
 vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
diff --git a/auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md b/auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc
similarity index 93%
rename from auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md
rename to auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc
index 90db92cd4..6ce0c9cf6 100644
--- a/auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md
+++ b/auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc
@@ -1,10 +1,11 @@
 
-## Zvknh - NIST Suite: Vector SHA-2 Secure Hash:
+=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash
 
-### [Vector SHA-2 message schedule]():
+[[]]
+==== Vector SHA-2 message schedule
 
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vsha2ms_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
 vuint32m1_t __riscv_vsha2ms_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
 vuint32m2_t __riscv_vsha2ms_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
@@ -14,12 +15,13 @@ vuint64m1_t __riscv_vsha2ms_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1
 vuint64m2_t __riscv_vsha2ms_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
 vuint64m4_t __riscv_vsha2ms_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
 vuint64m8_t __riscv_vsha2ms_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-```
+----
 
-### [Vector SHA-2 two rounds of compression]():
+[[]]
+==== Vector SHA-2 two rounds of compression
 
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vsha2ch_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
 vuint32m1_t __riscv_vsha2ch_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
 vuint32m2_t __riscv_vsha2ch_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
@@ -38,4 +40,4 @@ vuint64m1_t __riscv_vsha2cl_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1
 vuint64m2_t __riscv_vsha2cl_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
 vuint64m4_t __riscv_vsha2cl_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
 vuint64m8_t __riscv_vsha2cl_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-```
+----
diff --git a/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc
similarity index 89%
rename from auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md
rename to auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc
index ad5aeec27..55a267250 100644
--- a/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md
+++ b/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc
@@ 
-1,21 +1,23 @@ -## Zvksed - ShangMi Suite: SM4 Block Cipher: +=== Zvksed - ShangMi Suite: SM4 Block Cipher -### [Vector SM4 KeyExpansion](): +[[]] +==== Vector SM4 KeyExpansion -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm4k_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vsm4k_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vsm4k_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vsm4k_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vsm4k_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- -### [Vector SM4 Rounds](): +[[]] +==== Vector SM4 Rounds -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm4r_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -35,5 +37,4 @@ vuint32m4_t __riscv_vsm4r_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md b/auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc similarity index 84% rename from auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md rename to auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc index 621c42e24..a83f0b809 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md +++ b/auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc @@ -1,24 +1,26 @@ -## Zvksh - ShangMi Suite: SM3 Secure Hash: +=== Zvksh - ShangMi Suite: SM3 Secure Hash -### [Vector SM3 Message Expansion](): +[[]] +==== Vector SM3 Message Expansion -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm3me_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsm3me_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsm3me_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); vuint32m4_t __riscv_vsm3me_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); vuint32m8_t __riscv_vsm3me_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -``` +---- -### [Vector SM3 Message Expansion](): +[[]] +==== Vector SM3 Compression -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm3c_vi_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vsm3c_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vsm3c_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vsm3c_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vsm3c_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c index 04a638391..715c7881c 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c +++ 
b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c @@ -1,38 +1,44 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m8(vd, vs2, vl); } @@ -40,19 +46,23 @@ vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m8(vd, vs2, vl); } @@ -60,15 +70,18 @@ vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, 
vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m2_u32m8(vd, vs2, vl); } @@ -76,19 +89,16 @@ vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c index ba0c355d0..c35b87b37 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c @@ -1,38 +1,44 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32m2(vd, vs2, vl); } 
-vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32m8(vd, vs2, vl); } @@ -40,19 +46,23 @@ vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m8(vd, vs2, vl); } @@ -60,15 +70,18 @@ vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m2_u32m8(vd, vs2, vl); } @@ -76,19 +89,16 @@ vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c index 0d1e8d720..081cfe140 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesef.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c @@ -1,38 +1,44 @@ // REQUIRES: riscv-registered-target // RUN: 
%clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m8(vd, vs2, vl); } @@ -40,19 +46,23 @@ vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m8(vd, vs2, vl); } @@ -60,15 +70,18 @@ vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { 
return __riscv_vaesef_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m2_u32m8(vd, vs2, vl); } @@ -76,19 +89,16 @@ vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c index 79d397e54..cf43774f1 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesem.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c @@ -1,38 +1,44 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t 
test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m8(vd, vs2, vl); } @@ -40,19 +46,23 @@ vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m8(vd, vs2, vl); } @@ -60,15 +70,18 @@ vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m2_u32m8(vd, vs2, vl); } @@ -76,19 +89,16 @@ vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c b/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c index 3b9857d2c..b92fbdead 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature 
+experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -31,4 +31,3 @@ vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf1_vi_u32m8(vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c index fbb874289..aa796c5b2 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c @@ -1,18 +1,19 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaeskf2_vi_u32mf2(vd, vs2, 0, vl); } @@ -31,4 +32,3 @@ vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m8(vd, vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c index d022831f1..bdb19ece1 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesz.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c @@ -1,74 +1,83 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck 
--check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m8(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m8(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m2_u32m8(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m4_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vandn.c b/auto-generated/vector-crypto/llvm-api-tests/vandn.c index f26790e7f..3f8f4c0a5 100644 --- 
a/auto-generated/vector-crypto/llvm-api-tests/vandn.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vandn.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -68,7 +68,8 @@ vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m8(vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf4(vs2, vs1, vl); } @@ -76,7 +77,8 @@ vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16mf4(vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf2(vs2, vs1, vl); } @@ -116,7 +118,8 @@ vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m8(vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u32mf2(vs2, vs1, vl); } @@ -188,179 +191,222 @@ vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m8(vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf8_m(mask, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_m(vm, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf8_m(mask, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_vx_u8mf8_m(vm, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf4_m(mask, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_m(vm, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf4_m(mask, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_vx_u8mf4_m(vm, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf2_m(mask, vs2, vs1, vl); 
+vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_m(vm, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf2_m(mask, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_vx_u8mf2_m(vm, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m1_m(mask, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vandn_vv_u8m1_m(vm, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m1_m(mask, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_vx_u8m1_m(vm, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m2_m(mask, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vandn_vv_u8m2_m(vm, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m2_m(mask, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_vx_u8m2_m(vm, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m4_m(mask, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vandn_vv_u8m4_m(vm, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m4_m(mask, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_vx_u8m4_m(vm, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m8_m(mask, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { + return __riscv_vandn_vv_u8m8_m(vm, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m8_m(mask, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_vx_u8m8_m(vm, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf4_m(mask, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_m(vm, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf4_m(mask, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf2_m(mask, vs2, 
vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf2_m(mask, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m1_m(mask, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m1_m(mask, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m2_m(mask, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vandn_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m2_m(mask, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m4_m(mask, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vandn_vv_u16m4_m(vm, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m4_m(mask, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m8_m(mask, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { + return __riscv_vandn_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m8_m(mask, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_vx_u16m8_m(vm, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32mf2_m(mask, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32mf2_m(mask, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_m(vm, vs2, rs1, vl); } -vuint32m1_t 
test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m1_m(mask, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_m(vm, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m1_m(mask, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m2_m(mask, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m2_m(mask, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m4_m(mask, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vandn_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m4_m(mask, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn_vx_u32m4_m(vm, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m8_m(mask, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { + return __riscv_vandn_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m8_m(mask, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vandn_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t 
vl) { + return __riscv_vandn_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vandn_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vandn_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vandn_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vbrev.c b/auto-generated/vector-crypto/llvm-api-tests/vbrev.c index aa1f7a0e2..602551b22 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vbrev.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vbrev.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -100,91 +100,90 @@ vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vbrev_v_u64m8(vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf8_m(mask, vs2, vl); +vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_m(vm, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf4_m(mask, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_m(vm, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf2_m(mask, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_m(vm, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m1_m(mask, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_m(vm, vs2, vl); } -vuint8m2_t 
test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m2_m(mask, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_m(vm, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m4_m(mask, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_m(vm, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m8_m(mask, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_m(vm, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf4_m(mask, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_m(vm, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf2_m(mask, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_m(vm, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m1_m(mask, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_m(vm, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m2_m(mask, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_m(vm, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m4_m(mask, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_m(vm, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m8_m(mask, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_m(vm, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32mf2_m(mask, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_m(vm, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m1_m(mask, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_m(vm, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m2_m(mask, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_m(vm, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m4_m(mask, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_m(vm, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m8_m(mask, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_m(vm, vs2, vl); 
} -vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m1_m(mask, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_m(vm, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m2_m(mask, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_m(vm, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m4_m(mask, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_m(vm, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m8_m(mask, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8_m(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c b/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c index 2ac7b751b..dbb64b45e 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -100,91 +100,90 @@ vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m8(vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf8_m(mask, vs2, vl); +vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_m(vm, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf4_m(mask, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_m(vm, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf2_m(mask, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_m(vm, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m1_m(mask, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_m(vm, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m2_m(mask, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_m(vm, vs2, vl); } 
-vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m4_m(mask, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_m(vm, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m8_m(mask, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_m(vm, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf4_m(mask, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_m(vm, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf2_m(mask, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_m(vm, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m1_m(mask, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_m(vm, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m2_m(mask, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_m(vm, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m4_m(mask, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_m(vm, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m8_m(mask, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_m(vm, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32mf2_m(mask, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_m(vm, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m1_m(mask, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_m(vm, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m2_m(mask, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_m(vm, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m4_m(mask, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_m(vm, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m8_m(mask, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_m(vm, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m1_m(mask, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t vm, 
vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_m(vm, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m2_m(mask, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_m(vm, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m4_m(mask, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_m(vm, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m8_m(mask, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_m(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclmul.c b/auto-generated/vector-crypto/llvm-api-tests/vclmul.c index 3751cde48..d6697a372 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vclmul.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vclmul.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -44,35 +44,42 @@ vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m8(vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t 
vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c b/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c index 5d0417a59..94fbc51e7 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -44,35 +44,42 @@ vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmulh_vx_u64m8(vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t 
test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclz.c b/auto-generated/vector-crypto/llvm-api-tests/vclz.c index 80e369c76..6320cf1a7 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vclz.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vclz.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -100,91 +100,90 @@ vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vclz_v_u64m8(vs2, vl); } -vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vclz_v_u8mf8_m(mask, vs2, vl); +vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf8_m(vm, vs2, vl); } -vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vclz_v_u8mf4_m(mask, vs2, vl); +vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4_m(vm, vs2, vl); } -vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vclz_v_u8mf2_m(mask, vs2, vl); +vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2_m(vm, vs2, vl); } -vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vclz_v_u8m1_m(mask, vs2, vl); +vuint8m1_t test_vclz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1_m(vm, vs2, vl); } -vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return 
__riscv_vclz_v_u8m2_m(mask, vs2, vl); +vuint8m2_t test_vclz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2_m(vm, vs2, vl); } -vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vclz_v_u8m4_m(mask, vs2, vl); +vuint8m4_t test_vclz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4_m(vm, vs2, vl); } -vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vclz_v_u8m8_m(mask, vs2, vl); +vuint8m8_t test_vclz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8_m(vm, vs2, vl); } -vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vclz_v_u16mf4_m(mask, vs2, vl); +vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4_m(vm, vs2, vl); } -vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vclz_v_u16mf2_m(mask, vs2, vl); +vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2_m(vm, vs2, vl); } -vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vclz_v_u16m1_m(mask, vs2, vl); +vuint16m1_t test_vclz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1_m(vm, vs2, vl); } -vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vclz_v_u16m2_m(mask, vs2, vl); +vuint16m2_t test_vclz_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2_m(vm, vs2, vl); } -vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vclz_v_u16m4_m(mask, vs2, vl); +vuint16m4_t test_vclz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4_m(vm, vs2, vl); } -vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vclz_v_u16m8_m(mask, vs2, vl); +vuint16m8_t test_vclz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8_m(vm, vs2, vl); } -vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vclz_v_u32mf2_m(mask, vs2, vl); +vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2_m(vm, vs2, vl); } -vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vclz_v_u32m1_m(mask, vs2, vl); +vuint32m1_t test_vclz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1_m(vm, vs2, vl); } -vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vclz_v_u32m2_m(mask, vs2, vl); +vuint32m2_t test_vclz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_v_u32m2_m(vm, vs2, vl); } -vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vclz_v_u32m4_m(mask, vs2, vl); +vuint32m4_t test_vclz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4_m(vm, vs2, vl); } -vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vclz_v_u32m8_m(mask, vs2, vl); +vuint32m8_t test_vclz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8_m(vm, vs2, vl); } -vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vclz_v_u64m1_m(mask, vs2, vl); 
+vuint64m1_t test_vclz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1_m(vm, vs2, vl); } -vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vclz_v_u64m2_m(mask, vs2, vl); +vuint64m2_t test_vclz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2_m(vm, vs2, vl); } -vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vclz_v_u64m4_m(mask, vs2, vl); +vuint64m4_t test_vclz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4_m(vm, vs2, vl); } -vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vclz_v_u64m8_m(mask, vs2, vl); +vuint64m8_t test_vclz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8_m(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vctz.c b/auto-generated/vector-crypto/llvm-api-tests/vctz.c index 74863e79c..926741260 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vctz.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vctz.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -100,91 +100,90 @@ vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vctz_v_u64m8(vs2, vl); } -vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vctz_v_u8mf8_m(mask, vs2, vl); +vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_m(vm, vs2, vl); } -vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vctz_v_u8mf4_m(mask, vs2, vl); +vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_m(vm, vs2, vl); } -vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vctz_v_u8mf2_m(mask, vs2, vl); +vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_m(vm, vs2, vl); } -vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vctz_v_u8m1_m(mask, vs2, vl); +vuint8m1_t test_vctz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_m(vm, vs2, vl); } -vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vctz_v_u8m2_m(mask, vs2, vl); +vuint8m2_t test_vctz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_m(vm, vs2, vl); } -vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vctz_v_u8m4_m(mask, vs2, vl); +vuint8m4_t test_vctz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + 
return __riscv_vctz_v_u8m4_m(vm, vs2, vl); } -vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vctz_v_u8m8_m(mask, vs2, vl); +vuint8m8_t test_vctz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8_m(vm, vs2, vl); } -vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vctz_v_u16mf4_m(mask, vs2, vl); +vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_m(vm, vs2, vl); } -vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vctz_v_u16mf2_m(mask, vs2, vl); +vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_m(vm, vs2, vl); } -vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vctz_v_u16m1_m(mask, vs2, vl); +vuint16m1_t test_vctz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_m(vm, vs2, vl); } -vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vctz_v_u16m2_m(mask, vs2, vl); +vuint16m2_t test_vctz_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_m(vm, vs2, vl); } -vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vctz_v_u16m4_m(mask, vs2, vl); +vuint16m4_t test_vctz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_m(vm, vs2, vl); } -vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vctz_v_u16m8_m(mask, vs2, vl); +vuint16m8_t test_vctz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_m(vm, vs2, vl); } -vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vctz_v_u32mf2_m(mask, vs2, vl); +vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_m(vm, vs2, vl); } -vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vctz_v_u32m1_m(mask, vs2, vl); +vuint32m1_t test_vctz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1_m(vm, vs2, vl); } -vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vctz_v_u32m2_m(mask, vs2, vl); +vuint32m2_t test_vctz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_m(vm, vs2, vl); } -vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vctz_v_u32m4_m(mask, vs2, vl); +vuint32m4_t test_vctz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4_m(vm, vs2, vl); } -vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vctz_v_u32m8_m(mask, vs2, vl); +vuint32m8_t test_vctz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_m(vm, vs2, vl); } -vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vctz_v_u64m1_m(mask, vs2, vl); +vuint64m1_t test_vctz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_m(vm, vs2, vl); } -vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vctz_v_u64m2_m(mask, vs2, vl); +vuint64m2_t test_vctz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return 
__riscv_vctz_v_u64m2_m(vm, vs2, vl); } -vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vctz_v_u64m4_m(mask, vs2, vl); +vuint64m4_t test_vctz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_m(vm, vs2, vl); } -vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vctz_v_u64m8_m(mask, vs2, vl); +vuint64m8_t test_vctz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8_m(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vghsh.c b/auto-generated/vector-crypto/llvm-api-tests/vghsh.c index 436349fb9..6b2db98f6 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vghsh.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vghsh.c @@ -1,34 +1,38 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vghsh_vv_u32mf2(vd, vs2, vs1, vl); } -vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m1(vd, vs2, vs1, vl); } -vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m2(vd, vs2, vs1, vl); } -vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m4(vd, vs2, vs1, vl); } -vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m8(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vgmul.c b/auto-generated/vector-crypto/llvm-api-tests/vgmul.c index 502aae3f8..1abf16248 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vgmul.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vgmul.c @@ -1,18 +1,19 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: 
-target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vgmul_vv_u32mf2(vd, vs2, vl); } @@ -31,4 +32,3 @@ vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vgmul_vv_u32m8(vd, vs2, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vrev8.c b/auto-generated/vector-crypto/llvm-api-tests/vrev8.c index d02393633..717dfd27d 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vrev8.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vrev8.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -100,91 +100,90 @@ vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vrev8_v_u64m8(vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf8_m(mask, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_m(vm, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf4_m(mask, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_m(vm, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf2_m(mask, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_m(vm, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m1_m(mask, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_m(vm, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m2_m(mask, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_m(vm, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m4_m(mask, vs2, vl); +vuint8m4_t 
test_vrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_m(vm, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m8_m(mask, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_m(vm, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf4_m(mask, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_m(vm, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf2_m(mask, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_m(vm, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m1_m(mask, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_m(vm, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m2_m(mask, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_m(vm, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m4_m(mask, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_m(vm, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m8_m(mask, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_m(vm, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32mf2_m(mask, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_m(vm, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m1_m(mask, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_m(vm, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m2_m(mask, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_m(vm, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m4_m(mask, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_m(vm, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m8_m(mask, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_m(vm, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m1_m(mask, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_m(vm, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return 
__riscv_vrev8_v_u64m2_m(mask, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_m(vm, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m4_m(mask, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_m(vm, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m8_m(mask, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_m(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vrol.c b/auto-generated/vector-crypto/llvm-api-tests/vrol.c index d02ca2e49..1bddb1516 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vrol.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vrol.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -68,7 +68,8 @@ vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m8(vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf4(vs2, vs1, vl); } @@ -76,7 +77,8 @@ vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf4(vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf2(vs2, vs1, vl); } @@ -116,7 +118,8 @@ vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m8(vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32mf2(vs2, vs1, vl); } @@ -188,179 +191,222 @@ vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m8(vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf8_m(mask, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vrol_vv_u8mf8_m(vm, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf8_m(mask, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { + return 
__riscv_vrol_vx_u8mf8_m(vm, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf4_m(mask, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vrol_vv_u8mf4_m(vm, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf4_m(mask, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u8mf4_m(vm, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf2_m(mask, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vrol_vv_u8mf2_m(vm, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf2_m(mask, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u8mf2_m(vm, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m1_m(mask, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vrol_vv_u8m1_m(vm, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m1_m(mask, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u8m1_m(vm, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m2_m(mask, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vrol_vv_u8m2_m(vm, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m2_m(mask, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u8m2_m(vm, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m4_m(mask, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vrol_vv_u8m4_m(vm, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m4_m(mask, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u8m4_m(vm, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m8_m(mask, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { + return __riscv_vrol_vv_u8m8_m(vm, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m8_m(mask, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u8m8_m(vm, vs2, rs1, vl); } -vuint16mf4_t 
test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf4_m(mask, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_m(vm, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf4_m(mask, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf2_m(mask, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf2_m(mask, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m1_m(mask, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vrol_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m1_m(mask, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m2_m(mask, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vrol_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m2_m(mask, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m4_m(mask, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vrol_vv_u16m4_m(vm, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m4_m(mask, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m8_m(mask, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { + return __riscv_vrol_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m8_m(mask, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl) { + return 
__riscv_vrol_vx_u16m8_m(vm, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32mf2_m(mask, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32mf2_m(mask, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u32mf2_m(vm, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m1_m(mask, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vrol_vv_u32m1_m(vm, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m1_m(mask, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m2_m(mask, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vrol_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m2_m(mask, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m4_m(mask, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vrol_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m4_m(mask, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u32m4_m(vm, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m8_m(mask, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { + return __riscv_vrol_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m8_m(mask, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vrol_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, + 
size_t vl) { + return __riscv_vrol_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vrol_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vrol_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vrol_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vror.c b/auto-generated/vector-crypto/llvm-api-tests/vror.c index d800a671e..073c1fe05 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vror.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vror.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -68,7 +68,8 @@ vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8m8(vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf4(vs2, vs1, vl); } @@ -76,7 +77,8 @@ vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf4(vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t 
test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf2(vs2, vs1, vl); } @@ -116,7 +118,8 @@ vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16m8(vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u32mf2(vs2, vs1, vl); } @@ -188,179 +191,222 @@ vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m8(vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf8_m(mask, vs2, vs1, vl); +vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vror_vv_u8mf8_m(vm, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf8_m(mask, vs2, rs1, vl); +vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u8mf8_m(vm, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf4_m(mask, vs2, vs1, vl); +vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vror_vv_u8mf4_m(vm, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf4_m(mask, vs2, rs1, vl); +vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u8mf4_m(vm, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf2_m(mask, vs2, vs1, vl); +vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vror_vv_u8mf2_m(vm, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf2_m(mask, vs2, rs1, vl); +vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u8mf2_m(vm, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vror_vv_u8m1_m(mask, vs2, vs1, vl); +vuint8m1_t test_vror_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vror_vv_u8m1_m(vm, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m1_m(mask, vs2, rs1, vl); +vuint8m1_t test_vror_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u8m1_m(vm, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vror_vv_u8m2_m(mask, vs2, vs1, vl); +vuint8m2_t test_vror_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vror_vv_u8m2_m(vm, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m2_m(mask, vs2, rs1, vl); +vuint8m2_t test_vror_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u8m2_m(vm, vs2, rs1, vl); } -vuint8m4_t 
test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vror_vv_u8m4_m(mask, vs2, vs1, vl); +vuint8m4_t test_vror_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vror_vv_u8m4_m(vm, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m4_m(mask, vs2, rs1, vl); +vuint8m4_t test_vror_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u8m4_m(vm, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vror_vv_u8m8_m(mask, vs2, vs1, vl); +vuint8m8_t test_vror_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { + return __riscv_vror_vv_u8m8_m(vm, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m8_m(mask, vs2, rs1, vl); +vuint8m8_t test_vror_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u8m8_m(vm, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vror_vv_u16mf4_m(mask, vs2, vs1, vl); +vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_m(vm, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf4_m(mask, vs2, rs1, vl); +vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u16mf2_m(mask, vs2, vs1, vl); +vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf2_m(mask, vs2, rs1, vl); +vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vror_vv_u16m1_m(mask, vs2, vs1, vl); +vuint16m1_t test_vror_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vror_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m1_m(mask, vs2, rs1, vl); +vuint16m1_t test_vror_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vror_vv_u16m2_m(mask, vs2, vs1, vl); +vuint16m2_t test_vror_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vror_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m2_m(mask, vs2, rs1, vl); +vuint16m2_t test_vror_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t 
test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vror_vv_u16m4_m(mask, vs2, vs1, vl); +vuint16m4_t test_vror_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vror_vv_u16m4_m(vm, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m4_m(mask, vs2, rs1, vl); +vuint16m4_t test_vror_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vror_vv_u16m8_m(mask, vs2, vs1, vl); +vuint16m8_t test_vror_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { + return __riscv_vror_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m8_m(mask, vs2, rs1, vl); +vuint16m8_t test_vror_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u16m8_m(vm, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u32mf2_m(mask, vs2, vs1, vl); +vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32mf2_m(mask, vs2, rs1, vl); +vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u32mf2_m(vm, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vror_vv_u32m1_m(mask, vs2, vs1, vl); +vuint32m1_t test_vror_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vror_vv_u32m1_m(vm, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m1_m(mask, vs2, rs1, vl); +vuint32m1_t test_vror_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vror_vv_u32m2_m(mask, vs2, vs1, vl); +vuint32m2_t test_vror_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vror_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m2_m(mask, vs2, rs1, vl); +vuint32m2_t test_vror_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vror_vv_u32m4_m(mask, vs2, vs1, vl); +vuint32m4_t test_vror_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vror_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m4_m(mask, vs2, rs1, vl); +vuint32m4_t test_vror_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u32m4_m(vm, vs2, rs1, vl); 
} -vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vror_vv_u32m8_m(mask, vs2, vs1, vl); +vuint32m8_t test_vror_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { + return __riscv_vror_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m8_m(mask, vs2, rs1, vl); +vuint32m8_t test_vror_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vror_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vror_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vror_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vror_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vror_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vror_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vror_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vror_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vror_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vror_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vror_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vror_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vror_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vror_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vror_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vror_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c index b0a9e0220..78924df94 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c @@ -1,50 +1,58 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature 
+experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32mf2(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m1(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m2(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m4(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m8(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m1(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m2(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m4(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m8(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c index ab5430e22..739a9da5e 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c @@ -1,50 +1,58 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature 
+experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32mf2(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m1(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m2(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m4(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m8(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m1(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m2(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m4(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m8(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c index 0d65884e1..72201942a 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c @@ -1,50 +1,58 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: 
-target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32mf2(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m1(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m2(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m4(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m8(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m1(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m2(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m4(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m8(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c b/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c index c3589f4af..06ae64701 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c @@ -1,18 +1,19 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: 
-emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm3c_vi_u32mf2(vd, vs2, 0, vl); } @@ -31,4 +32,3 @@ vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m8(vd, vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c b/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c index a286c7c26..9aefcd323 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c @@ -1,18 +1,19 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsm3me_vv_u32mf2(vs2, vs1, vl); } @@ -31,4 +32,3 @@ vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32m8(vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c b/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c index 33d5ab701..e5f6bd386 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -31,4 +31,3 @@ vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32m8(vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c index 
59aece912..8119a4331 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c @@ -1,38 +1,44 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m8(vd, vs2, vl); } @@ -40,19 +46,23 @@ vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m8(vd, vs2, vl); } @@ -60,15 +70,18 @@ vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m2(vd, vs2, vl); } 
-vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m8(vd, vs2, vl); } @@ -76,19 +89,16 @@ vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m4_u32m8(vd, vs2, vl); } vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m8(vd, vs2, vl); } - -vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_u32m8(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/llvm-api-tests/vwsll.c b/auto-generated/vector-crypto/llvm-api-tests/vwsll.c index d7b1d42bb..eda2e00d3 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vwsll.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vwsll.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -60,7 +60,8 @@ vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8(vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32mf2(vs2, vs1, vl); } @@ -132,123 +133,152 @@ vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8(vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf4_m(mask, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_m(vm, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { 
- return __riscv_vwsll_vx_u16mf4_m(mask, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf2_m(mask, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16mf2_m(mask, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m1_m(mask, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m1_m(mask, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m2_m(mask, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vwsll_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m2_m(mask, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m4_m(mask, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vwsll_vv_u16m4_m(vm, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m4_m(mask, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m8_m(mask, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vwsll_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m8_m(mask, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u16m8_m(vm, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32mf2_m(mask, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t 
test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32mf2_m(mask, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u32mf2_m(vm, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m1_m(mask, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_m(vm, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m1_m(mask, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m2_m(mask, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m2_m(mask, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m4_m(mask, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vwsll_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m4_m(mask, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u32m4_m(vm, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m8_m(mask, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vwsll_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m8_m(mask, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m1_m(mask, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m1_m(mask, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m2_m(mask, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { 
+ return __riscv_vwsll_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m2_m(mask, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m4_m(mask, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m4_m(mask, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m8_m(mask, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vwsll_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m8_m(mask, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_vx_u64m8_m(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c index 44803c47c..83837f66d 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c @@ -1,38 +1,44 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return 
__riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -40,19 +46,23 @@ vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -60,15 +70,18 @@ vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -76,19 +89,16 @@ vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } - -vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_vs(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c index bcb5a0b36..6bc6faa5b 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c @@ -1,38 +1,44 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature 
+experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -40,19 +46,23 @@ vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -60,15 +70,18 @@ vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return 
__riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -76,19 +89,16 @@ vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } - -vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_vs(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c index 2768f4a6b..a42aac84e 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c @@ -1,38 +1,44 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -40,19 
+46,23 @@ vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -60,15 +70,18 @@ vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -76,19 +89,16 @@ vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } - -vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_vs(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c index 1cbdce0bc..2cb5113a7 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c @@ -1,38 +1,44 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ 
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                   size_t vl) {
   return __riscv_vaesem_vv(vd, vs2, vl);
 }
 
-vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                          size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
-vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
-vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
-vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
-vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
@@ -40,19 +46,23 @@ vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
   return __riscv_vaesem_vv(vd, vs2, vl);
 }
 
-vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+                                       size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
-vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2,
+                                       size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
-vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2,
+                                       size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
-vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2,
+                                       size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
@@ -60,15 +70,18 @@ vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
   return __riscv_vaesem_vv(vd, vs2, vl);
 }
 
-vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+                                       size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
-vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2,
+                                       size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
-vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2,
+                                       size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
@@ -76,19 +89,16 @@ vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesem_vv(vd, vs2, vl);
 }
 
-vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2,
+                                       size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
-vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2,
+                                       size_t vl) {
   return __riscv_vaesem_vs(vd, vs2, vl);
 }
 
 vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesem_vv(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesem_vs(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c
index f1048cd5a..393a2329f 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
@@ -31,4 +31,3 @@ vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) {
   return __riscv_vaeskf1(vs2, 0, vl);
 }
 
 vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) {
   return __riscv_vaeskf1(vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c
index 3387d50cf..e1d85453a 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c
@@ -1,18 +1,19 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    size_t vl) {
   return __riscv_vaeskf2(vd, vs2, 0, vl);
 }
 
@@ -31,4 +32,3 @@ vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaeskf2(vd, vs2, 0, vl);
 }
 
 vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaeskf2(vd, vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c
index 93e2e3eef..b98fe52ba 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c
@@ -1,74 +1,83 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                         size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
-vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
-vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
-vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
-vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
-vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+                                      size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
-vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2,
+                                      size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
-vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2,
+                                      size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
-vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2,
+                                      size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
-vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+                                      size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
-vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2,
+                                      size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
 
-vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } - -vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesz(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c index 7a0f664b0..302997b03 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -68,7 +68,8 @@ vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn(vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vandn(vs2, vs1, vl); } @@ -76,7 +77,8 @@ vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn(vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vandn(vs2, vs1, vl); } @@ -116,7 +118,8 @@ vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn(vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vandn(vs2, vs1, vl); } @@ -188,179 +191,222 @@ vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn(vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8mf4_t 
test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, 
vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); 
+vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t 
mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c index 579baaf6d..9654a13b5 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -100,91 +100,90 @@ vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vbrev(vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, 
vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, 
size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c index 980e7f4ae..68503540f 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -100,91 +100,90 @@ vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vbrev8(vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, 
vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c index b8143f3f0..994a54025 100644 --- 
a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -44,35 +44,42 @@ vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul(vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul(mask, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul(mask, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul(vm, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul(mask, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul(mask, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul(vm, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul(mask, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul(mask, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul(mask, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul(mask, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c index a17c3752e..fbfa406f6 
100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -44,35 +44,42 @@ vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmulh(vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh(vm, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh(vm, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c 
b/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c index fb2c59218..c6a727dfc 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -100,91 +100,90 @@ vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vclz(vs2, vl); } -vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8m1_t test_vclz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8m2_t test_vclz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8m4_t test_vclz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8m8_t test_vclz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16m1_t test_vclz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t 
vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16m2_t test_vclz_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16m4_t test_vclz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16m8_t test_vclz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint32m1_t test_vclz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint32m2_t test_vclz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint32m4_t test_vclz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint32m8_t test_vclz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint64m1_t test_vclz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint64m2_t test_vclz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint64m4_t test_vclz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint64m8_t test_vclz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c index cfaa556ee..10223ef94 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ 
+// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -100,91 +100,90 @@ vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vctz(vs2, vl); } -vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8m1_t test_vctz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8m2_t test_vctz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8m4_t test_vctz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8m8_t test_vctz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16m1_t test_vctz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16m2_t test_vctz_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16m4_t test_vctz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16m8_t test_vctz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint32mf2_t 
test_vctz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint32m1_t test_vctz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint32m2_t test_vctz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint32m4_t test_vctz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint32m8_t test_vctz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint64m1_t test_vctz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint64m2_t test_vctz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint64m4_t test_vctz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint64m8_t test_vctz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c
index afbd4dbc8..bd18a2c0e 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c
@@ -1,34 +1,38 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                  vuint32mf2_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+                                vuint32m1_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+                                vuint32m2_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2,
+                                vuint32m4_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2,
+                                vuint32m8_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c
index a091c69a7..ed81badec 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c
@@ -1,18 +1,19 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                  size_t vl) {
   return __riscv_vgmul(vd, vs2, vl);
 }
 
@@ -31,4 +32,3 @@ vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vgmul(vd, vs2, vl);
 }
 
 vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vgmul(vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c
index 3e29204b8..6f491581c 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
@@ -100,91 +100,90 @@ vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
   return __riscv_vrev8(vs2, vl);
 }
 
-vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
}
 
-vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
 
-vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c index 1007fbea4..2e24afa14 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -68,7 +68,8 @@ vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol(vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrol(vs2, vs1, vl); } @@ -76,7 +77,8 @@ vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol(vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrol(vs2, vs1, vl); } @@ -116,7 +118,8 @@ vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { return 
__riscv_vrol(vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrol(vs2, vs1, vl); } @@ -188,179 +191,222 @@ vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol(vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t 
rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t vm, 
vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint64m1_t 
test_vrol_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vrol(mask, vs2, vs1, vl); +vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vrol(vm, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol(mask, vs2, rs1, vl); +vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c index 5221bd3aa..6fdd3e527 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -68,7 +68,8 @@ vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror(vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vror(vs2, vs1, vl); } @@ -76,7 +77,8 @@ vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror(vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return 
__riscv_vror(vs2, vs1, vl); } @@ -116,7 +118,8 @@ vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror(vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vror(vs2, vs1, vl); } @@ -188,179 +191,222 @@ vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror(vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint8m1_t test_vror_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint8m1_t test_vror_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint8m2_t test_vror_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint8m2_t test_vror_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint8m4_t test_vror_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t 
rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint8m4_t test_vror_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint8m8_t test_vror_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint8m8_t test_vror_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint16m1_t test_vror_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint16m1_t test_vror_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint16m2_t test_vror_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint16m2_t test_vror_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint16m4_t test_vror_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint16m4_t test_vror_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t 
vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint16m8_t test_vror_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint16m8_t test_vror_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint32m1_t test_vror_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint32m1_t test_vror_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint32m2_t test_vror_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint32m2_t test_vror_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint32m4_t test_vror_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint32m4_t test_vror_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint32m8_t test_vror_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror(mask, vs2, rs1, vl); +vuint32m8_t test_vror_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vror(vm, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vror(mask, vs2, vs1, vl); +vuint64m1_t test_vror_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vror(vm, vs2, vs1, vl); } -vuint64m1_t 
test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
- return __riscv_vror(mask, vs2, rs1, vl);
+vuint64m1_t test_vror_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1,
+ size_t vl) {
+ return __riscv_vror(vm, vs2, rs1, vl);
}
-vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
- return __riscv_vror(mask, vs2, vs1, vl);
+vuint64m2_t test_vror_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1,
+ size_t vl) {
+ return __riscv_vror(vm, vs2, vs1, vl);
}
-vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
- return __riscv_vror(mask, vs2, rs1, vl);
+vuint64m2_t test_vror_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1,
+ size_t vl) {
+ return __riscv_vror(vm, vs2, rs1, vl);
}
-vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
- return __riscv_vror(mask, vs2, vs1, vl);
+vuint64m4_t test_vror_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1,
+ size_t vl) {
+ return __riscv_vror(vm, vs2, vs1, vl);
}
-vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
- return __riscv_vror(mask, vs2, rs1, vl);
+vuint64m4_t test_vror_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1,
+ size_t vl) {
+ return __riscv_vror(vm, vs2, rs1, vl);
}
-vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
- return __riscv_vror(mask, vs2, vs1, vl);
+vuint64m8_t test_vror_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1,
+ size_t vl) {
+ return __riscv_vror(vm, vs2, vs1, vl);
}
-vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
- return __riscv_vror(mask, vs2, rs1, vl);
+vuint64m8_t test_vror_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1,
+ size_t vl) {
+ return __riscv_vror(vm, vs2, rs1, vl);
}
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c
index 4c3c170db..2924cdc47 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c
@@ -1,50 +1,58 @@
// REQUIRES: riscv-registered-target
// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
// RUN: FileCheck --check-prefix=CHECK-RV64 %s
#include <riscv_vector.h>
-vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ vuint32mf2_t vs1, size_t vl) {
return __riscv_vsha2ch(vd, vs2, vs1, vl); }
-vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+ vuint32m1_t vs1, size_t vl) {
return __riscv_vsha2ch(vd, vs2, vs1,
vl); }
-vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+ vuint32m2_t vs1, size_t vl) {
return __riscv_vsha2ch(vd, vs2, vs1, vl); }
-vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2,
+ vuint32m4_t vs1, size_t vl) {
return __riscv_vsha2ch(vd, vs2, vs1, vl); }
-vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2,
+ vuint32m8_t vs1, size_t vl) {
return __riscv_vsha2ch(vd, vs2, vs1, vl); }
-vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2,
+ vuint64m1_t vs1, size_t vl) {
return __riscv_vsha2ch(vd, vs2, vs1, vl); }
-vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2,
+ vuint64m2_t vs1, size_t vl) {
return __riscv_vsha2ch(vd, vs2, vs1, vl); }
-vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2,
+ vuint64m4_t vs1, size_t vl) {
return __riscv_vsha2ch(vd, vs2, vs1, vl); }
-vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2,
+ vuint64m8_t vs1, size_t vl) {
return __riscv_vsha2ch(vd, vs2, vs1, vl); }
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c
index ab349951b..b2078e33d 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c
@@ -1,50 +1,58 @@
// REQUIRES: riscv-registered-target
// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
// RUN: FileCheck --check-prefix=CHECK-RV64 %s
#include <riscv_vector.h>
-vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ vuint32mf2_t vs1, size_t vl) {
return __riscv_vsha2cl(vd, vs2, vs1, vl); }
-vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+ vuint32m1_t vs1, size_t vl) {
return __riscv_vsha2cl(vd, vs2, vs1, vl); }
-vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+ vuint32m2_t vs1, size_t vl) {
return __riscv_vsha2cl(vd, vs2, vs1, vl); }
-vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2,
+ vuint32m4_t vs1, size_t vl) {
return __riscv_vsha2cl(vd, vs2, vs1, vl); }
-vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2,
+ vuint32m8_t vs1, size_t vl) {
return __riscv_vsha2cl(vd, vs2, vs1, vl); }
-vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2,
+ vuint64m1_t vs1, size_t vl) {
return __riscv_vsha2cl(vd, vs2, vs1, vl); }
-vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2,
+ vuint64m2_t vs1, size_t vl) {
return __riscv_vsha2cl(vd, vs2, vs1, vl); }
-vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2,
+ vuint64m4_t vs1, size_t vl) {
return __riscv_vsha2cl(vd, vs2, vs1, vl); }
-vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2,
+ vuint64m8_t vs1, size_t vl) {
return __riscv_vsha2cl(vd, vs2, vs1, vl); }
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c
index 4f3c31a0d..e1afaede7 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c
@@ -1,50 +1,58 @@
// REQUIRES: riscv-registered-target
// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
// RUN: FileCheck --check-prefix=CHECK-RV64 %s
#include <riscv_vector.h>
-vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ vuint32mf2_t vs1, size_t vl) {
return __riscv_vsha2ms(vd, vs2, vs1, vl); }
-vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+ vuint32m1_t vs1, size_t vl) {
return __riscv_vsha2ms(vd, vs2, vs1, vl); }
-vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+ vuint32m2_t vs1, size_t vl) {
return __riscv_vsha2ms(vd, vs2, vs1, vl); }
-vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t
vs2,
+ vuint32m4_t vs1, size_t vl) {
return __riscv_vsha2ms(vd, vs2, vs1, vl); }
-vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2,
+ vuint32m8_t vs1, size_t vl) {
return __riscv_vsha2ms(vd, vs2, vs1, vl); }
-vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2,
+ vuint64m1_t vs1, size_t vl) {
return __riscv_vsha2ms(vd, vs2, vs1, vl); }
-vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2,
+ vuint64m2_t vs1, size_t vl) {
return __riscv_vsha2ms(vd, vs2, vs1, vl); }
-vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2,
+ vuint64m4_t vs1, size_t vl) {
return __riscv_vsha2ms(vd, vs2, vs1, vl); }
-vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2,
+ vuint64m8_t vs1, size_t vl) {
return __riscv_vsha2ms(vd, vs2, vs1, vl); }
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c
index 115cbf5f7..3d23e0142 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c
@@ -1,18 +1,19 @@
// REQUIRES: riscv-registered-target
// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
// RUN: FileCheck --check-prefix=CHECK-RV64 %s
#include <riscv_vector.h>
-vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
return __riscv_vsm3c(vd, vs2, 0, vl); }
@@ -31,4 +32,3 @@ vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); }
vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); }
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c
index 3a37cbefc..86f271de7 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c
@@ -1,18 +1,19 @@
// REQUIRES: riscv-registered-target
// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN:
-target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
// RUN: FileCheck --check-prefix=CHECK-RV64 %s
#include <riscv_vector.h>
-vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1,
+ size_t vl) {
return __riscv_vsm3me(vs2, vs1, vl); }
@@ -31,4 +32,3 @@ vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { return __riscv_vsm3me(vs2, vs1, vl); }
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c
index 5ae9fdf14..248207cfc 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c
@@ -1,12 +1,12 @@
// REQUIRES: riscv-registered-target
// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
// RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -31,4 +31,3 @@ vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) {
vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); }
-
diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
index 079db5b1f..6cb46317c 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
@@ -1,38 +1,44 @@
// REQUIRES: riscv-registered-target
// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
// RUN: FileCheck --check-prefix=CHECK-RV64 %s
#include <riscv_vector.h>
-vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -40,19 +46,23 @@ vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -60,15 +70,18 @@ vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -76,19 +89,16 @@ vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, 
vs2, vl); } - -vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vsm4r_vs(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c index eb6e5858f..029180986 100644 --- a/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -60,7 +60,8 @@ vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll(vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsll(vs2, vs1, vl); } @@ -132,123 +133,152 @@ vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll(vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll(mask, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll(mask, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll(mask, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll(mask, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll(mask, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll(mask, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return 
__riscv_vwsll(mask, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll(mask, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll(mask, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll(mask, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll(mask, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll(mask, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll(mask, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll(mask, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll(mask, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll(mask, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll(mask, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll(mask, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll(mask, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint32m4_t 
test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
- return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1,
+ size_t vl) {
+ return __riscv_vwsll(vm, vs2, rs1, vl);
}
-vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
- return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1,
+ size_t vl) {
+ return __riscv_vwsll(vm, vs2, vs1, vl);
}
-vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
- return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1,
+ size_t vl) {
+ return __riscv_vwsll(vm, vs2, rs1, vl);
}
-vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
- return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t vm, vuint32mf2_t vs2,
+ vuint32mf2_t vs1, size_t vl) {
+ return __riscv_vwsll(vm, vs2, vs1, vl);
}
-vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
- return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1,
+ size_t vl) {
+ return __riscv_vwsll(vm, vs2, rs1, vl);
}
-vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
- return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t vm, vuint32m1_t vs2,
+ vuint32m1_t vs1, size_t vl) {
+ return __riscv_vwsll(vm, vs2, vs1, vl);
}
-vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
- return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1,
+ size_t vl) {
+ return __riscv_vwsll(vm, vs2, rs1, vl);
}
-vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
- return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t vm, vuint32m2_t vs2,
+ vuint32m2_t vs1, size_t vl) {
+ return __riscv_vwsll(vm, vs2, vs1, vl);
}
-vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
- return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1,
+ size_t vl) {
+ return __riscv_vwsll(vm, vs2, rs1, vl);
}
-vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
- return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1,
+ size_t vl) {
+ return __riscv_vwsll(vm, vs2, vs1, vl);
}
-vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
- return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1,
+ size_t vl) {
+ return __riscv_vwsll(vm, vs2, rs1, vl);
}
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c
index 9807668e4..a240f30cd 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c
@@ -1,9 +1,6 @@
#include <stdint.h>
#include <riscv_vector.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
return __riscv_vaesdf_vv(vd, vs2, vl);
 }
@@ -79,8 +76,3 @@ vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t v
 vuint32m8_t test_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdf_vv(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesdf_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesdf_vs(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c
index d9cd8ced8..44e4a38fb 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
 vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
   return __riscv_vaesdm_vv(vd, vs2, vl);
 }
@@ -79,8 +76,3 @@ vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t v
 vuint32m8_t test_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdm_vv(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesdm_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesdm_vs(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c
index 96380b425..8a032c2f8 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
 vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
   return __riscv_vaesef_vv(vd, vs2, vl);
 }
@@ -79,8 +76,3 @@ vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t v
 vuint32m8_t test_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesef_vv(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesef_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesef_vs(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c
index 4539af8cd..e6f666ea6 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
 vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
   return __riscv_vaesem_vv(vd, vs2, vl);
 }
@@ -79,8 +76,3 @@ vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t v
 vuint32m8_t test_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesem_vv(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesem_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesem_vs(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c
index 8ec38cde4..73358e70e 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
 vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
   return __riscv_vaeskf1(vs2, 0, vl);
 }
@@ -23,4 +20,3 @@ vuint32m4_t test_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t vl) {
 vuint32m8_t test_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t vl) {
   return __riscv_vaeskf1(vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c
index 94ff06c1a..a15310d57 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
 vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
   return __riscv_vaeskf2(vd, vs2, 0, vl);
 }
@@ -23,4 +20,3 @@ vuint32m4_t test_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
 vuint32m8_t test_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaeskf2(vd, vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c
index cd9069a7e..76a5d32fc 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
 vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
@@ -59,8 +56,3 @@ vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl
 vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesz(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesz_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesz(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vandn.c b/auto-generated/vector-crypto/overloaded-api-testing/vandn.c
index e744cd9fe..61d7a594f 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vandn.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vandn.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
 vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
   return __riscv_vandn(vs2, vs1, vl);
 }
@@ -180,179 +177,178 @@ vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vandn(vs2, rs1, vl);
 }
 
-vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vandn(mask, vs2, vs1, vl);
+vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn(vm, vs2, vs1, vl);
 }
 
-vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
-  return __riscv_vandn(mask, vs2, rs1, vl);
+vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn(vm, vs2, rs1, vl);
 }
 
-vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vandn(mask, vs2, vs1, vl);
+vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn(vm, vs2, vs1, vl);
 }
 
-vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
-
return __riscv_vandn(mask, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t 
vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } 
-vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn(vm, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn(mask, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn(vm, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn(mask, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t 
rs1, size_t vl) {
+  return __riscv_vandn(vm, vs2, rs1, vl);
 }
 
-vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vandn(mask, vs2, vs1, vl);
+vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn(vm, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn(mask, vs2, rs1, vl);
+vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn(vm, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vbrev.c b/auto-generated/vector-crypto/overloaded-api-testing/vbrev.c
index 8c82c5496..5a27daa73 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vbrev.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vbrev.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
 vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
   return __riscv_vbrev(vs2, vl);
 }
@@ -92,91 +89,90 @@ vuint64m8_t test_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl) {
   return __riscv_vbrev(vs2, vl);
 }
 
-vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev(mask, vs2, vl);
+vuint8mf8_t test_vbrev_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev(vm, vs2, vl);
 }
 
-vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev(mask, vs2, vl);
+vuint8mf4_t test_vbrev_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(vm, vs2, vl);
 }
 
-vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev(mask, vs2, vl);
+vuint8mf2_t test_vbrev_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(vm, vs2, vl);
 }
 
-vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev(mask, vs2, vl);
+vuint8m1_t test_vbrev_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev(vm, vs2, vl);
 }
 
-vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev(mask, vs2, vl);
+vuint8m2_t test_vbrev_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev(vm, vs2, vl);
 }
 
-vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev(mask, vs2, vl);
+vuint8m4_t test_vbrev_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev(vm, vs2, vl);
 }
 
-vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev(mask, vs2, vl);
+vuint8m8_t test_vbrev_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev(vm, vs2, vl);
 }
 
-vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev(mask, vs2, vl);
+vuint16mf4_t test_vbrev_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev(vm, vs2, vl);
 }
 
-vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev(mask, vs2, vl);
+vuint16mf2_t test_vbrev_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev(vm, vs2, vl);
 }
 
-vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev(mask, vs2, vl);
+vuint16m1_t test_vbrev_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) {
+  return
__riscv_vbrev(vm, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev(mask, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c b/auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c index 5785a810f..9d0d77b91 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev8(vs2, vl); } @@ -92,91 +89,90 @@ vuint64m8_t test_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vbrev8(vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t 
vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8mf8_t test_vbrev8_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint32m2_t 
test_vbrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8(mask, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vclmul.c b/auto-generated/vector-crypto/overloaded-api-testing/vclmul.c index f751b2175..cf48adf9c 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vclmul.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vclmul.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { return __riscv_vclmul(vs2, vs1, vl); } @@ -36,35 +33,34 @@ vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul(vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul(mask, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul(mask, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vm, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul(mask, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul(mask, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vm, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t 
vs1, size_t vl) { - return __riscv_vclmul(mask, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul(mask, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul(mask, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul(mask, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c b/auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c index c7a9d9d6d..7000a93e5 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { return __riscv_vclmulh(vs2, vs1, vl); } @@ -36,35 +33,34 @@ vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmulh(vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { 
- return __riscv_vclmulh(mask, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh(mask, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh(vm, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vclz.c b/auto-generated/vector-crypto/overloaded-api-testing/vclz.c index 8bea51126..d93faf0f3 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vclz.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vclz.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vclz(vs2, vl); } @@ -92,91 +89,90 @@ vuint64m8_t test_vclz_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vclz(vs2, vl); } -vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8mf8_t test_vclz_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8mf4_t test_vclz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8mf2_t test_vclz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8m1_t test_vclz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8m1_t test_vclz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8m2_t test_vclz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8m2_t test_vclz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8m4_t test_vclz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8m4_t test_vclz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint8m8_t test_vclz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint8m8_t test_vclz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16mf4_t test_vclz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16mf2_t test_vclz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16m1_t test_vclz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16m1_t test_vclz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16m2_t test_vclz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16m2_t test_vclz_v_u16m2_m(vbool8_t vm, 
vuint16m2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16m4_t test_vclz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16m4_t test_vclz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint16m8_t test_vclz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint16m8_t test_vclz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint32mf2_t test_vclz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint32m1_t test_vclz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint32m1_t test_vclz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint32m2_t test_vclz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint32m2_t test_vclz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint32m4_t test_vclz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint32m4_t test_vclz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint32m8_t test_vclz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint32m8_t test_vclz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint64m1_t test_vclz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint64m1_t test_vclz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint64m2_t test_vclz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint64m2_t test_vclz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint64m4_t test_vclz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint64m4_t test_vclz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } -vuint64m8_t test_vclz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) { - return __riscv_vclz(mask, vs2, vl); +vuint64m8_t test_vclz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz(vm, vs2, vl); } - diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vctz.c b/auto-generated/vector-crypto/overloaded-api-testing/vctz.c index 86090d8aa..51d6c57e9 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vctz.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vctz.c @@ -1,9 +1,6 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vctz(vs2, vl); } @@ -92,91 +89,90 @@ vuint64m8_t test_vctz_v_u64m8(vuint64m8_t vs2, size_t vl) { return __riscv_vctz(vs2, vl); } -vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8mf8_t test_vctz_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return 
__riscv_vctz(mask, vs2, vl); +vuint8mf4_t test_vctz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8mf2_t test_vctz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8m1_t test_vctz_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8m1_t test_vctz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8m2_t test_vctz_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8m2_t test_vctz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8m4_t test_vctz_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8m4_t test_vctz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint8m8_t test_vctz_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint8m8_t test_vctz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16mf4_t test_vctz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16mf2_t test_vctz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16m1_t test_vctz_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16m1_t test_vctz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16m2_t test_vctz_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16m2_t test_vctz_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16m4_t test_vctz_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16m4_t test_vctz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint16m8_t test_vctz_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint16m8_t test_vctz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint32mf2_t test_vctz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint32m1_t test_vctz_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint32m1_t test_vctz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint32m2_t test_vctz_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint32m2_t test_vctz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz(vm, vs2, vl); } -vuint32m4_t test_vctz_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) { - return __riscv_vctz(mask, vs2, vl); +vuint32m4_t test_vctz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, 
size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint32m8_t test_vctz_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint32m8_t test_vctz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint64m1_t test_vctz_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint64m1_t test_vctz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint64m2_t test_vctz_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint64m2_t test_vctz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint64m4_t test_vctz_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint64m4_t test_vctz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
 
-vuint64m8_t test_vctz_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vctz(mask, vs2, vl);
+vuint64m8_t test_vctz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vctz(vm, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vghsh.c b/auto-generated/vector-crypto/overloaded-api-testing/vghsh.c
index 8a4eb46a5..055ce6727 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vghsh.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vghsh.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
 vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
@@ -23,4 +20,3 @@ vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1
 vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vgmul.c b/auto-generated/vector-crypto/overloaded-api-testing/vgmul.c
index 48c480933..4067ca01b 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vgmul.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vgmul.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
 vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
   return __riscv_vgmul(vd, vs2, vl);
 }
@@ -23,4 +20,3 @@ vuint32m4_t test_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
 vuint32m8_t test_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vgmul(vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vrev8.c b/auto-generated/vector-crypto/overloaded-api-testing/vrev8.c
index d013b9218..3391569f2 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vrev8.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vrev8.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
 vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
   return __riscv_vrev8(vs2, vl);
 }
@@ -92,91 +89,90 @@ vuint64m8_t test_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl) {
   return __riscv_vrev8(vs2, vl);
 }
 
-vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint8mf8_t test_vrev8_v_u8mf8_m(vbool64_t vm,
vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8(vm, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8(mask, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + 
return __riscv_vrev8(vm, vs2, vl);
 }

-vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint32m4_t test_vrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }

-vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint32m8_t test_vrev8_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }

-vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint64m1_t test_vrev8_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }

-vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint64m2_t test_vrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }

-vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint64m4_t test_vrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }

-vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vrev8(mask, vs2, vl);
+vuint64m8_t test_vrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8(vm, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vrol.c b/auto-generated/vector-crypto/overloaded-api-testing/vrol.c
index dda6195ca..a1900207c 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vrol.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vrol.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
   return __riscv_vrol(vs2, vs1, vl);
 }
@@ -180,179 +177,178 @@ vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vrol(vs2, rs1, vl);
 }

-vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }

-vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, vs1, vl);
+vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, vs1, vl);
 }

-vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol(mask, vs2, rs1, vl);
+vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol(vm, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vror.c b/auto-generated/vector-crypto/overloaded-api-testing/vror.c
index 600fc1d66..e87ad43c8 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vror.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vror.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
   return __riscv_vror(vs2, vs1, vl);
 }
@@ -180,179 +177,178 @@ vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vror(vs2, rs1, vl);
 }

-vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint8m1_t test_vror_vv_u8m1_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint8m1_t test_vror_vx_u8m1_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint8m2_t test_vror_vv_u8m2_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint8m2_t test_vror_vx_u8m2_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint8m4_t test_vror_vv_u8m4_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint8m4_t test_vror_vx_u8m4_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint8m8_t test_vror_vv_u8m8_m(vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint8m8_t test_vror_vx_u8m8_m(vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint16m1_t test_vror_vv_u16m1_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint16m1_t test_vror_vx_u16m1_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint16m2_t test_vror_vv_u16m2_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint16m2_t test_vror_vx_u16m2_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint16m2_t test_vror_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint16m4_t test_vror_vv_u16m4_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint16m4_t test_vror_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint16m4_t test_vror_vx_u16m4_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint16m4_t test_vror_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint16m8_t test_vror_vv_u16m8_m(vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint16m8_t test_vror_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint16m8_t test_vror_vx_u16m8_m(vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint16m8_t test_vror_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint32m1_t test_vror_vv_u32m1_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint32m1_t test_vror_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint32m1_t test_vror_vx_u32m1_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint32m1_t test_vror_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint32m2_t test_vror_vv_u32m2_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint32m2_t test_vror_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint32m2_t test_vror_vx_u32m2_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint32m2_t test_vror_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint32m4_t test_vror_vv_u32m4_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint32m4_t test_vror_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint32m4_t test_vror_vx_u32m4_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint32m4_t test_vror_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint32m8_t test_vror_vv_u32m8_m(vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint32m8_t test_vror_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint32m8_t test_vror_vx_u32m8_m(vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint32m8_t test_vror_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint64m1_t test_vror_vv_u64m1_m(vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint64m1_t test_vror_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint64m1_t test_vror_vx_u64m1_m(vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint64m1_t test_vror_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint64m2_t test_vror_vv_u64m2_m(vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint64m2_t test_vror_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint64m2_t test_vror_vx_u64m2_m(vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint64m2_t test_vror_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint64m4_t test_vror_vv_u64m4_m(vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint64m4_t test_vror_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint64m4_t test_vror_vx_u64m4_m(vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint64m4_t test_vror_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }

-vuint64m8_t test_vror_vv_u64m8_m(vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vror(mask, vs2, vs1, vl);
+vuint64m8_t test_vror_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror(vm, vs2, vs1, vl);
 }

-vuint64m8_t test_vror_vx_u64m8_m(vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror(mask, vs2, rs1, vl);
+vuint64m8_t test_vror_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror(vm, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c
index e581f6f43..d04129849 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
   return __riscv_vsha2ch(vd, vs2, vs1, vl);
 }
@@ -39,4 +36,3 @@ vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t v
 vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
   return __riscv_vsha2ch(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c b/auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c
index 9a839357b..4de7b49aa 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
   return __riscv_vsha2cl(vd, vs2, vs1, vl);
 }
@@ -39,4 +36,3 @@ vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t v
 vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
   return __riscv_vsha2cl(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c
index c6d912d62..70a696804 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
   return __riscv_vsha2ms(vd, vs2, vs1, vl);
 }
@@ -39,4 +36,3 @@ vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t v
 vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
   return __riscv_vsha2ms(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c
index a5bdb447f..728566e46 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
   return __riscv_vsm3c(vd, vs2, 0, vl);
 }
@@ -23,4 +20,3 @@ vuint32m4_t test_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
 vuint32m8_t test_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm3c(vd, vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c
index 60d967f88..299159174 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
   return __riscv_vsm3me(vs2, vs1, vl);
 }
@@ -23,4 +20,3 @@ vuint32m4_t test_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
 vuint32m8_t test_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
   return __riscv_vsm3me(vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c
index 06728e8dd..882694054 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
   return __riscv_vsm4k(vs2, 0, vl);
 }
@@ -23,4 +20,3 @@ vuint32m4_t test_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t vl) {
 vuint32m8_t test_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm4k(vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c
index 66735b96c..cb106c8a5 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
   return __riscv_vsm4r_vv(vd, vs2, vl);
 }
@@ -79,8 +76,3 @@ vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl
 vuint32m8_t test_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm4r_vv(vd, vs2, vl);
 }
-
-vuint32m8_t test_vsm4r_vs_u32m8_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vsm4r_vs(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c b/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c
index f90328a94..8696b7d1d 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c
@@ -1,9 +1,6 @@
 #include <riscv_vector.h>
 #include <stdint.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
   return __riscv_vwsll(vs2, vs1, vl);
 }
@@ -124,123 +121,122 @@ vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vwsll(vs2, rs1, vl);
 }

-vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }

-vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, vs1, vl);
+vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, vs1, vl);
 }

-vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll(mask, vs2, rs1, vl);
+vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll(vm, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md
index 6906c44bd..63ef95b6a 100644
--- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md
+++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md
@@ -1,10 +1,11 @@
-## Zvbb - Vector Bit-manipulation used in Cryptography:
+=== Zvbb - Vector Bit-manipulation used in Cryptography

-### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not]():
+[[overloaded-]]
+==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not

-**Prototypes:**
-``` C
+[,c]
+----
 vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
 vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, uint8_t rs1, size_t vl);
 vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
@@ -50,56 +51,57 @@ vuint64m4_t __riscv_vandn (vuint64m4_t vs2, uint64_t rs1, size_t vl);
 vuint64m8_t __riscv_vandn (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
 vuint64m8_t __riscv_vandn (vuint64m8_t vs2, uint64_t rs1, size_t vl);
 // masked functions
-vuint8mf8_t __riscv_vandn (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vandn (vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl);
-vuint8mf4_t __riscv_vandn (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vandn (vbool32_t mask, vuint8mf4_t vs2, uint8_t rs1, size_t vl);
-vuint8mf2_t __riscv_vandn (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vandn (vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl);
-vuint8m1_t __riscv_vandn (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vandn (vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl);
-vuint8m2_t __riscv_vandn (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vandn (vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl);
-vuint8m4_t __riscv_vandn (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vandn (vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl);
-vuint8m8_t __riscv_vandn (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vandn (vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl);
-vuint16mf4_t __riscv_vandn (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vandn (vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl);
-vuint16mf2_t __riscv_vandn (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vandn (vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl);
-vuint16m1_t __riscv_vandn (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vandn (vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl);
-vuint16m2_t __riscv_vandn (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vandn (vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl);
-vuint16m4_t __riscv_vandn (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vandn (vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl);
-vuint16m8_t __riscv_vandn (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vandn (vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl);
-vuint32mf2_t __riscv_vandn (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vandn (vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl);
-vuint32m1_t __riscv_vandn (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vandn (vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl);
-vuint32m2_t __riscv_vandn (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vandn (vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl);
-vuint32m4_t __riscv_vandn (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vandn (vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl);
-vuint32m8_t __riscv_vandn (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vandn (vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl);
-vuint64m1_t __riscv_vandn (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vandn (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vandn (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vandn (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vandn (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vandn (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vandn (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vandn (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-```
-
-### [Vector Bit-manipulation used in Cryptography - Reverse Bits]():
-
-**Prototypes:**
-``` C
+vuint8mf8_t __riscv_vandn (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vandn (vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, size_t vl);
+vuint8mf4_t __riscv_vandn (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vandn (vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl);
+vuint8mf2_t __riscv_vandn (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vandn (vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl);
+vuint8m1_t __riscv_vandn (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vandn (vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl);
+vuint8m2_t __riscv_vandn (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vandn (vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl);
+vuint8m4_t __riscv_vandn (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vandn (vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl);
+vuint8m8_t __riscv_vandn (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vandn (vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl);
+vuint16mf4_t __riscv_vandn (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint16mf4_t __riscv_vandn (vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl);
+vuint16mf2_t __riscv_vandn (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint16mf2_t __riscv_vandn (vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl);
+vuint16m1_t __riscv_vandn (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vandn (vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl);
+vuint16m2_t __riscv_vandn (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vandn (vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl);
+vuint16m4_t __riscv_vandn (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vandn (vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl);
+vuint16m8_t __riscv_vandn (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vandn (vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl);
+vuint32mf2_t __riscv_vandn (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32mf2_t __riscv_vandn (vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, size_t vl);
+vuint32m1_t __riscv_vandn (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vandn (vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl);
+vuint32m2_t __riscv_vandn (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vandn (vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl);
+vuint32m4_t __riscv_vandn (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vandn (vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl);
+vuint32m8_t __riscv_vandn (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vandn (vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl);
+vuint64m1_t __riscv_vandn (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vandn (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vandn (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vandn (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vandn (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vandn (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vandn (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vandn (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+----
+
+[[overloaded-]]
+==== Vector Basic Bit-manipulation - Reverse Bits in Elements
+
+[,c]
+----
 vuint8mf8_t __riscv_vbrev (vuint8mf8_t vs2, size_t vl);
 vuint8mf4_t __riscv_vbrev (vuint8mf4_t vs2, size_t vl);
 vuint8mf2_t __riscv_vbrev (vuint8mf2_t vs2, size_t vl);
@@ -167,78 +169,79 @@ vuint64m2_t __riscv_vrev8 (vuint64m2_t vs2, size_t vl);
 vuint64m4_t __riscv_vrev8 (vuint64m4_t vs2, size_t vl);
 vuint64m8_t __riscv_vrev8 (vuint64m8_t vs2, size_t vl);
 // masked functions
-vuint8mf8_t __riscv_vbrev (vbool64_t mask, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev (vbool32_t mask, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev (vbool16_t mask, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev (vbool8_t mask, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev (vbool4_t mask, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev (vbool2_t mask, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev (vbool1_t mask, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev (vbool64_t mask, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev (vbool32_t mask, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev (vbool16_t mask, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev (vbool8_t mask, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev (vbool4_t mask, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev (vbool2_t mask, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev (vbool64_t mask, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev (vbool32_t mask, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev (vbool16_t mask, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev (vbool8_t mask, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev (vbool4_t mask, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev (vbool64_t mask, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev (vbool32_t mask, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev (vbool16_t mask, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev (vbool8_t mask, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vbrev8 (vbool64_t mask, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev8 (vbool32_t mask, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev8 (vbool16_t mask, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev8 (vbool8_t mask, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev8 (vbool4_t mask, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev8 (vbool2_t mask, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev8 (vbool1_t mask, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev8 (vbool64_t mask, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev8 (vbool32_t mask, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev8 (vbool16_t mask, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev8 (vbool8_t mask, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev8 (vbool4_t mask, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev8 (vbool2_t mask, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev8 (vbool64_t mask, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev8 (vbool32_t mask, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev8 (vbool16_t mask, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev8 (vbool8_t mask, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev8 (vbool4_t mask, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev8 (vbool64_t mask, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev8 (vbool32_t mask, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev8 (vbool16_t mask, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev8 (vbool8_t mask, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vrev8 (vbool64_t mask, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vrev8 (vbool32_t mask, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vrev8 (vbool16_t mask, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vrev8 (vbool8_t mask, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vrev8 (vbool4_t mask, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vrev8 (vbool2_t mask, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vrev8 (vbool1_t mask, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vrev8 (vbool64_t mask, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vrev8 (vbool32_t mask, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vrev8 (vbool16_t mask, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vrev8 (vbool8_t mask, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vrev8 (vbool4_t mask, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vrev8 (vbool2_t mask, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vrev8 (vbool64_t mask, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vrev8 (vbool32_t mask, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vrev8 (vbool16_t mask, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vrev8 (vbool8_t mask, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vrev8 (vbool4_t mask, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vrev8 (vbool64_t mask, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vrev8 (vbool32_t mask, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vrev8 (vbool16_t mask, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vrev8 (vbool8_t mask, vuint64m8_t vs2, size_t vl);
-```
-
-### [Vector Bit-manipulation used in Cryptography - Count Bits]():
-
-**Prototypes:**
-``` C
+vuint8mf8_t __riscv_vbrev (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vbrev (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vbrev (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vbrev (vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vbrev (vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vbrev (vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vbrev (vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vbrev (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vbrev (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vbrev (vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vbrev (vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vbrev (vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vbrev (vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vbrev (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vbrev (vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vbrev (vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vbrev (vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vbrev (vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vbrev (vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vbrev (vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vbrev (vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vbrev (vbool8_t vm, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vbrev8 (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vbrev8 (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vbrev8 (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vbrev8 (vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vbrev8 (vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vbrev8 (vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vbrev8 (vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vbrev8 (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vbrev8 (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vbrev8 (vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vbrev8 (vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vbrev8 (vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vbrev8 (vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vbrev8 (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vbrev8 (vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vbrev8 (vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vbrev8 (vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vbrev8 (vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vbrev8 (vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vbrev8 (vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vbrev8 (vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vbrev8 (vbool8_t vm, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vrev8 (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vrev8 (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vrev8 (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vrev8 (vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vrev8 (vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vrev8 (vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vrev8 (vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vrev8 (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vrev8 (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vrev8 (vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vrev8 (vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vrev8 (vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vrev8 (vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vrev8 (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vrev8 (vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vrev8 (vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vrev8 (vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vrev8 (vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vrev8 (vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vrev8 (vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vrev8 (vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vrev8 (vbool8_t vm, vuint64m8_t vs2, size_t vl);
+----
+
+[[overloaded-]]
+==== Vector Basic Bit-manipulation - Count Bits
+
+[,c]
+----
 vuint8mf8_t __riscv_vclz (vuint8mf8_t vs2, size_t vl);
 vuint8mf4_t __riscv_vclz (vuint8mf4_t vs2, size_t vl);
 vuint8mf2_t __riscv_vclz (vuint8mf2_t vs2, size_t vl);
@@ -284,56 +287,57 @@ vuint64m2_t __riscv_vctz (vuint64m2_t vs2, size_t vl);
 vuint64m4_t __riscv_vctz (vuint64m4_t vs2, size_t vl);
 vuint64m8_t __riscv_vctz (vuint64m8_t vs2, size_t vl);
 // masked functions
-vuint8mf8_t __riscv_vclz (vbool64_t mask, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vclz (vbool32_t mask, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vclz (vbool16_t mask, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vclz (vbool8_t mask, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vclz (vbool4_t mask, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vclz (vbool2_t mask, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vclz (vbool1_t mask, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vclz (vbool64_t mask, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vclz (vbool32_t mask, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vclz (vbool16_t mask, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vclz (vbool8_t mask, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vclz (vbool4_t mask, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vclz (vbool2_t mask, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vclz (vbool64_t mask, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vclz (vbool32_t mask, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vclz (vbool16_t mask, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vclz (vbool8_t mask, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vclz (vbool4_t mask, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vclz (vbool64_t mask, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vclz (vbool32_t mask, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vclz (vbool16_t mask, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vclz (vbool8_t mask, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vctz (vbool64_t mask, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vctz (vbool32_t mask, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vctz (vbool16_t mask, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vctz (vbool8_t mask, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vctz (vbool4_t mask, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vctz (vbool2_t mask, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vctz (vbool1_t mask, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vctz (vbool64_t mask, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vctz (vbool32_t mask, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vctz (vbool16_t mask, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vctz (vbool8_t mask, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vctz (vbool4_t mask, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vctz (vbool2_t mask, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vctz (vbool64_t mask, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vctz (vbool32_t mask, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vctz (vbool16_t mask, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vctz (vbool8_t mask, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vctz (vbool4_t mask, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vctz (vbool64_t mask, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vctz (vbool32_t mask, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vctz (vbool16_t mask, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vctz (vbool8_t mask, vuint64m8_t vs2, size_t vl);
-```
-
-### [Vector Bit-manipulation used in Cryptography - Rotate]():
-
-**Prototypes:**
-``` C
+vuint8mf8_t __riscv_vclz (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vclz (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vclz (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vclz (vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vclz (vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vclz (vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vclz (vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vclz (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vclz (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vclz (vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vclz (vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vclz (vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vclz (vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vclz (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vclz (vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vclz (vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vclz (vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vclz (vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vclz (vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vclz (vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vclz (vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vclz (vbool8_t vm, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vctz (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vctz (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vctz (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vctz (vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vctz (vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vctz (vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vctz (vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vctz (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vctz (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vctz (vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vctz (vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vctz (vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vctz (vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vctz (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vctz (vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vctz (vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vctz (vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vctz (vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vctz (vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vctz (vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vctz (vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vctz (vbool8_t vm, vuint64m8_t vs2, size_t vl);
+----
+
+[[overloaded-]]
+==== Vector Bit-manipulation used in Cryptography - Rotate
+
+[,c]
+----
 vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
 vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, size_t rs1, size_t vl);
 vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
@@ -423,100 +427,101 @@ vuint64m4_t __riscv_vror (vuint64m4_t vs2, size_t rs1, size_t vl);
 vuint64m8_t __riscv_vror (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
 vuint64m8_t __riscv_vror (vuint64m8_t vs2, size_t rs1, size_t vl);
 // masked functions
-vuint8mf8_t __riscv_vrol (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vrol (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vrol (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vrol (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vrol (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vrol (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vrol (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vrol (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vrol (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vrol (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vrol (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vrol (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vrol (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vrol (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vrol (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vrol (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vrol (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vrol (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vrol (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vrol (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vrol (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vrol (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vrol (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vrol (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vrol (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vrol (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vrol (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vrol (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vrol (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vrol (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vrol (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vrol (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vrol (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vrol (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vrol (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vrol (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vrol (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vrol (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vrol (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vrol (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vrol (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vrol (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vrol (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vrol (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl);
-vuint8mf8_t __riscv_vror (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vror (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vror (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vror (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vror (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vror (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vror (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vror (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vror (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vror (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vror (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vror (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vror (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vror (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vror (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vror (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vror (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vror (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vror (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vror (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vror (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vror (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vror (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vror (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vror (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vror (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vror (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vror (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vror (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vror (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vror (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vror (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vror (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vror (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vror (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vror (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vror (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vror (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vror (vbool32_t mask,
vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); -``` - -### [Vector Bit-manipulation used in Cryptography - Shift](): - -**Prototypes:** -``` C +vuint8mf8_t __riscv_vrol (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol (vbool4_t vm, 
vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror (vbool8_t vm, vuint32m4_t vs2, size_t rs1, 
size_t vl); +vuint32m8_t __riscv_vror (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); +---- + +[[overloaded-]] +==== Vector Basic Bit-manipulation used - Widening Shift + +[,c] +---- vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, size_t rs1, size_t vl); vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); @@ -548,44 +553,45 @@ vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, size_t rs1, size_t vl); vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll (vbool32_t 
mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); -``` - -## Zvbc - Vector Carryless Multiplication: - -### [Vector Carryless Multiplication](): - -**Prototypes:** -``` C +vuint16mf4_t __riscv_vwsll (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +---- + +=== Zvbc - Vector Carryless Multiplication + +[[overloaded-]] +==== Vector Carryless Multiplication + +[,c] +---- vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, uint64_t rs1, size_t vl); vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); @@ -603,30 +609,31 @@ vuint64m4_t 
__riscv_vclmulh (vuint64m4_t vs2, uint64_t rs1, size_t vl); vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint64m1_t __riscv_vclmul (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); -``` - -## Zvkg - Vector GCM/GMAC: - -### [Vector GCM/GMAC](): - -**Prototypes:** -``` C +vuint64m1_t __riscv_vclmul (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); +---- + +=== Zvkg - Vector GCM/GMAC + +[[overloaded-]] +==== Vector GCM/GMAC + +[,c] +---- vuint32mf2_t __riscv_vghsh (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vghsh (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vghsh (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -637,14 +644,15 @@ vuint32m1_t 
__riscv_vgmul (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vgmul (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vgmul (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vgmul (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -## Zvkned - NIST Suite: Vector AES Block Cipher: +=== Zvkned - NIST Suite: Vector AES Block Cipher -### [Vector AES Encryption](): +[[overloaded-]] +==== Vector AES Encryption -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesef_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -664,7 +672,6 @@ vuint32m4_t __riscv_vaesef_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -684,13 +691,13 @@ vuint32m4_t __riscv_vaesem_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -### [Vector AES Decryption](): +[[overloaded-]] +==== Vector AES Decryption -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesdf_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -710,7 +717,6 @@ vuint32m4_t __riscv_vaesdf_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -730,13 +736,13 @@ vuint32m4_t __riscv_vaesdm_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -### [Vector AES-128 Forward KeySchedule generation](): +[[overloaded-]] +==== Vector AES-128 Forward KeySchedule generation -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaeskf1 (vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vaeskf1 (vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vaeskf1 (vuint32m2_t vs2, size_t uimm, size_t 
vl); @@ -747,12 +753,13 @@ vuint32m1_t __riscv_vaeskf2 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_ vuint32m2_t __riscv_vaeskf2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf2 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- -### [Vector AES round zero](): +[[overloaded-]] +==== Vector AES round zero -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesz (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); @@ -767,15 +774,15 @@ vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: +=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash -### [Vector SHA-2 message schedule](): +[[overloaded-]] +==== Vector SHA-2 message schedule -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsha2ms (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsha2ms (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsha2ms (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -785,12 +792,13 @@ vuint64m1_t __riscv_vsha2ms (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, s vuint64m2_t __riscv_vsha2ms (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); vuint64m4_t __riscv_vsha2ms (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2ms (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -``` +---- -### [Vector SHA-2 two rounds of compression](): +[[overloaded-]] +==== Vector SHA-2 two rounds of compression -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsha2ch (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsha2ch (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsha2ch (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -809,25 +817,27 @@ vuint64m1_t __riscv_vsha2cl (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, s vuint64m2_t __riscv_vsha2cl (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); vuint64m4_t __riscv_vsha2cl (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2cl (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -``` +---- -## Zvksed - ShangMi Suite: SM4 Block Cipher: +=== Zvksed - ShangMi Suite: SM4 Block Cipher -### [Vector SM4 KeyExpansion](): +[[overloaded-]] +==== Vector SM4 KeyExpansion -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm4k (vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vsm4k (vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vsm4k (vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vsm4k (vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vsm4k (vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- -### [Vector SM4 Rounds](): +[[overloaded-]] +==== Vector SM4 Rounds -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm4r_vv (vuint32mf2_t vd, vuint32mf2_t vs2, 
size_t vl); vuint32mf2_t __riscv_vsm4r_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -847,29 +857,30 @@ vuint32m4_t __riscv_vsm4r_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -## Zvksh - ShangMi Suite: SM3 Secure Hash: +=== Zvksh - ShangMi Suite: SM3 Secure Hash -### [Vector SM3 Message Expansion](): +[[overloaded-]] +==== Vector SM3 Message Expansion -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm3me (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsm3me (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsm3me (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); vuint32m4_t __riscv_vsm3me (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); vuint32m8_t __riscv_vsm3me (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -``` +---- -### [Vector SM3 Message Expansion](): +[[overloaded-]] +==== Vector SM3 Compression -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm3c (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vsm3c (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vsm3c (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vsm3c (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vsm3c (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc new file mode 100644 index 000000000..15ebc0022 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -0,0 +1,586 @@ + +=== Zvbb - Vector Bit-manipulation used in Cryptography + +[[overloaded-]] +==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not + +[,c] +---- +vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn (vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn (vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn (vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn (vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t 
vl); +vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn (vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn (vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn (vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn (vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn (vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn (vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn (vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn (vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn (vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn (vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn (vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn (vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn (vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn (vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn (vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn (vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn (vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn (vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn 
(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn (vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn (vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn (vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn (vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn (vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn (vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn (vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn (vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn (vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); +---- + +[[overloaded-]] +==== Vector Basic Bit-manipulation - Reverse Bits in Elements + +[,c] +---- +vuint8mf8_t __riscv_vbrev (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev (vuint64m8_t 
vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8 (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev (vbool64_t vm, vuint32mf2_t vs2, size_t vl); 
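+// Editorial note (illustrative comment, not emitted by the generator):
+// vbrev mirrors all SEW bits of each element, vbrev8 mirrors the bits
+// inside each byte, and vrev8 reverses the byte order of each element.
+// For a 16-bit element holding 0x01A0:
+//   __riscv_vbrev  -> 0x0580  (all 16 bits mirrored)
+//   __riscv_vbrev8 -> 0x8005  (bits mirrored within each byte)
+//   __riscv_vrev8  -> 0xA001  (bytes swapped)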
+vuint32m1_t __riscv_vbrev (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev (vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8 (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8 (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8 (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8 (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8 (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8 (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8 (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8 (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8 (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8 (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8 (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8 (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8 (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8 (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8 (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8 (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8 (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8 (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8 (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8 (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8 (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8 (vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8 (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8 (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8 (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8 (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8 (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8 (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8 (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8 (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8 (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8 (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8 (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8 (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8 (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8 (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8 (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8 (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8 (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8 (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8 (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8 (vbool32_t vm, vuint64m2_t vs2, size_t vl); 
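+// Editorial note (illustrative comment, not emitted by the generator):
+// each masked overload takes the mask as its leading vm operand and
+// computes only the elements whose mask bit is set; masked-off elements
+// follow the implicit agnostic policy of the non-policy API. A minimal
+// sketch, assuming a precomputed mask vm and source vector vs2:
+//   vuint64m2_t r = __riscv_vrev8(vm, vs2, vl);  // byte-swap active elements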
+vuint64m4_t __riscv_vrev8 (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8 (vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[overloaded-]] +==== Vector Basic Bit-manipulation - Count Bits + +[,c] +---- +vuint8mf8_t __riscv_vclz (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t 
__riscv_vclz (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz (vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz (vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[overloaded-]] +==== Vector Bit-manipulation used in Cryptography - Rotate + +[,c] +---- +vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, size_t 
rs1, size_t vl); +vuint16m1_t __riscv_vrol (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol (vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror (vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t 
__riscv_vror (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror (vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror (vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror (vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror (vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror (vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror (vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror (vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol (vbool2_t vm, 
vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror (vbool4_t vm, vuint16m4_t vs2, size_t rs1, 
size_t vl); +vuint16m8_t __riscv_vror (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); +---- + +[[overloaded-]] +==== Vector Basic Bit-manipulation used - Widening Shift + +[,c] +---- +vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll 
(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md deleted file mode 100644 index dfe321e52..000000000 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md +++ /dev/null @@ -1,581 +0,0 @@ - -## Zvbb - Vector Bit-manipulation used in Cryptography: - -### [Vector Bit-manipulation used in Cryptography - 
Bitwise And-Not](): - -**Prototypes:** -``` C -vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn (vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn (vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn (vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn (vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn (vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn (vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn (vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn (vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn (vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn (vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn (vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn (vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn (vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn (vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn (vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn (vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn (vbool64_t mask, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn (vbool32_t mask, 
vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn (vbool16_t mask, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn (vbool8_t mask, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn (vbool4_t mask, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn (vbool2_t mask, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn (vbool1_t mask, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn (vbool64_t mask, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn (vbool32_t mask, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn (vbool16_t mask, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn (vbool8_t mask, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn (vbool4_t mask, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn (vbool2_t mask, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn (vbool64_t mask, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn (vbool32_t mask, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn (vbool16_t mask, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn (vbool8_t mask, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn (vbool4_t mask, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); 
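// A minimal usage sketch of the overloaded vandn above, computing
// out[i] = in[i] & ~rs1 over a strip-mined loop. It assumes
// <riscv_vector.h> and a Zvbb-enabled toolchain (e.g.
// -march=rv64gcv_zvbb); the helper name is illustrative only.
static inline void andn_bytes(uint8_t *out, const uint8_t *in,
                              uint8_t rs1, size_t n) {
  while (n > 0) {
    size_t vl = __riscv_vsetvl_e8m1(n);            // elements this pass
    vuint8m1_t vs2 = __riscv_vle8_v_u8m1(in, vl);  // load one strip
    __riscv_vse8_v_u8m1(out, __riscv_vandn(vs2, rs1, vl), vl);
    in += vl;
    out += vl;
    n -= vl;
  }
}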
-``` - -### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): - -**Prototypes:** -``` C -vuint8mf8_t __riscv_vbrev (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8 (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8 (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8 (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8 (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8 (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8 (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8 (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8 (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8 (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8 (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8 (vuint32m2_t vs2, size_t vl); -vuint32m4_t 
__riscv_vrev8 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8 (vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev (vbool64_t mask, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev (vbool32_t mask, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev (vbool16_t mask, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev (vbool8_t mask, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev (vbool4_t mask, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev (vbool2_t mask, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev (vbool1_t mask, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev (vbool64_t mask, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev (vbool32_t mask, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev (vbool16_t mask, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev (vbool8_t mask, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev (vbool4_t mask, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev (vbool2_t mask, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev (vbool64_t mask, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev (vbool32_t mask, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev (vbool16_t mask, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev (vbool8_t mask, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev (vbool4_t mask, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev (vbool64_t mask, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev (vbool32_t mask, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev (vbool16_t mask, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev (vbool8_t mask, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8 (vbool64_t mask, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8 (vbool32_t mask, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8 (vbool16_t mask, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8 (vbool8_t mask, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8 (vbool4_t mask, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8 (vbool2_t mask, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8 (vbool1_t mask, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8 (vbool64_t mask, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8 (vbool32_t mask, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8 (vbool16_t mask, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8 (vbool8_t mask, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8 (vbool4_t mask, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8 (vbool2_t mask, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8 (vbool64_t mask, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8 (vbool32_t mask, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8 (vbool16_t mask, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8 (vbool8_t mask, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8 (vbool4_t mask, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8 (vbool64_t mask, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8 (vbool32_t mask, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8 (vbool16_t mask, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8 (vbool8_t mask, vuint64m8_t vs2, size_t 
vl); -vuint8mf8_t __riscv_vrev8 (vbool64_t mask, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8 (vbool32_t mask, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8 (vbool16_t mask, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8 (vbool8_t mask, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8 (vbool4_t mask, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8 (vbool2_t mask, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8 (vbool1_t mask, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8 (vbool64_t mask, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8 (vbool32_t mask, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8 (vbool16_t mask, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8 (vbool8_t mask, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8 (vbool4_t mask, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8 (vbool2_t mask, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8 (vbool64_t mask, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8 (vbool32_t mask, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8 (vbool16_t mask, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8 (vbool8_t mask, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8 (vbool4_t mask, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8 (vbool64_t mask, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8 (vbool32_t mask, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8 (vbool16_t mask, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8 (vbool8_t mask, vuint64m8_t vs2, size_t vl); -``` - -### [Vector Bit-manipulation used in Cryptography - Count Bits](): - -**Prototypes:** -``` C -vuint8mf8_t __riscv_vclz (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz (vuint16m2_t vs2, size_t vl); -vuint16m4_t 
__riscv_vctz (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz (vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vclz (vbool64_t mask, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz (vbool32_t mask, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz (vbool16_t mask, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz (vbool8_t mask, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz (vbool4_t mask, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz (vbool2_t mask, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz (vbool1_t mask, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz (vbool64_t mask, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz (vbool32_t mask, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz (vbool16_t mask, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz (vbool8_t mask, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz (vbool4_t mask, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz (vbool2_t mask, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz (vbool64_t mask, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz (vbool32_t mask, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz (vbool16_t mask, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz (vbool8_t mask, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz (vbool4_t mask, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz (vbool64_t mask, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz (vbool32_t mask, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz (vbool16_t mask, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz (vbool8_t mask, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz (vbool64_t mask, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz (vbool32_t mask, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz (vbool16_t mask, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz (vbool8_t mask, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz (vbool4_t mask, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz (vbool2_t mask, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz (vbool1_t mask, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz (vbool64_t mask, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz (vbool32_t mask, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz (vbool16_t mask, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz (vbool8_t mask, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz (vbool4_t mask, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz (vbool2_t mask, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz (vbool64_t mask, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz (vbool32_t mask, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz (vbool16_t mask, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz (vbool8_t mask, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz (vbool4_t mask, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz (vbool64_t mask, vuint64m1_t vs2, size_t vl); 
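// NOTE (per Zvbb): vclz/vctz count leading/trailing zero bits per
// element, and an all-zero element yields SEW; the masked forms
// compute only the elements whose mask bit is set.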
-vuint64m2_t __riscv_vctz (vbool32_t mask, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz (vbool16_t mask, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz (vbool8_t mask, vuint64m8_t vs2, size_t vl); -``` - -### [Vector Bit-manipulation used in Cryptography - Rotate](): - -**Prototypes:** -``` C -vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol (vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol (vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol (vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol (vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol (vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol (vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol (vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol (vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol (vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol (vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol (vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol (vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol (vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol (vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol (vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol (vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, size_t rs1, size_t vl); 
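// NOTE (per Zvbb): vrol/vror rotate each element left/right; only the
// low log2(SEW) bits of vs1[i] or rs1 are used as the rotate amount.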
-vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror (vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror (vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror (vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror (vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror (vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror (vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror (vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror (vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror (vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror (vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror (vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror (vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror (vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror (vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror (vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror (vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol (vbool8_t mask, vuint8m1_t vs2, 
vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); 
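// NOTE: in the masked overloads the vbool type encodes the SEW/LMUL
// ratio of the operand type, e.g. vbool8_t pairs with vuint8m1_t and
// vuint16m2_t alike.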
-vuint8mf2_t __riscv_vror (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror (vbool1_t mask, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror (vbool1_t mask, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror (vbool2_t mask, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror (vbool2_t mask, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror (vbool4_t mask, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror (vbool4_t mask, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror (vbool64_t mask, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror (vbool32_t mask, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror (vbool16_t mask, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror (vbool8_t mask, vuint64m8_t vs2, size_t rs1, size_t vl); -``` - -### [Vector Bit-manipulation used in Cryptography - Shift](): - -**Prototypes:** -``` C -vuint16mf4_t 
__riscv_vwsll (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, size_t rs1, size_t vl); -// masked functions -vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll (vbool64_t mask, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vbool32_t mask, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll (vbool16_t mask, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll (vbool8_t mask, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll (vbool4_t mask, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll (vbool2_t mask, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vbool64_t mask, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll (vbool32_t mask, vuint16mf2_t vs2, size_t rs1, size_t vl); 
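// NOTE (per Zvbb): vwsll zero-extends each source element to 2*SEW
// before the left shift, using only the low log2(2*SEW) bits of
// vs1[i] or rs1; the destination has twice the source EEW and EMUL.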
-vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll (vbool16_t mask, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll (vbool8_t mask, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll (vbool4_t mask, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll (vbool64_t mask, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll (vbool32_t mask, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll (vbool16_t mask, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll (vbool8_t mask, vuint32m4_t vs2, size_t rs1, size_t vl); -``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc new file mode 100644 index 000000000..174233382 --- /dev/null +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc @@ -0,0 +1,42 @@ + +=== Zvbc - Vector Carryless Multiplication + +[[overloaded-]] +==== Vector Carryless Multiplication + +[,c] +---- +vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t 
vs1, size_t vl); +vuint64m8_t __riscv_vclmul (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); +---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md deleted file mode 100644 index df952e521..000000000 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md +++ /dev/null @@ -1,41 +0,0 @@ - -## Zvbc - Vector Carryless Multiplication: - -### [Vector Carryless Multiplication](): - -**Prototypes:** -``` C -vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint64m1_t __riscv_vclmul (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh (vbool64_t mask, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh (vbool64_t mask, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh (vbool32_t mask, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t 
__riscv_vclmulh (vbool32_t mask, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh (vbool16_t mask, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh (vbool16_t mask, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh (vbool8_t mask, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh (vbool8_t mask, vuint64m8_t vs2, uint64_t rs1, size_t vl); -``` diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc similarity index 91% rename from auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md rename to auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc index 0b3bf1254..3b38c6571 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc @@ -1,10 +1,11 @@ -## Zvkg - Vector GCM/GMAC: +=== Zvkg - Vector GCM/GMAC -### [Vector GCM/GMAC](): +[[overloaded-]] +==== Vector GCM/GMAC -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vghsh (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vghsh (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vghsh (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -15,4 +16,4 @@ vuint32m1_t __riscv_vgmul (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vgmul (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vgmul (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vgmul (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc similarity index 91% rename from auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md rename to auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc index b750c129f..407f673d9 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc @@ -1,10 +1,11 @@ -## Zvkned - NIST Suite: Vector AES Block Cipher: +=== Zvkned - NIST Suite: Vector AES Block Cipher -### [Vector AES Encryption](): +[[overloaded-]] +==== Vector AES Encryption -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesef_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -24,7 +25,6 @@ vuint32m4_t __riscv_vaesef_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs 
(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -44,13 +44,13 @@ vuint32m4_t __riscv_vaesem_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -### [Vector AES Decryption](): +[[overloaded-]] +==== Vector AES Decryption -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesdf_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -70,7 +70,6 @@ vuint32m4_t __riscv_vaesdf_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -90,13 +89,13 @@ vuint32m4_t __riscv_vaesdm_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -### [Vector AES-128 Forward KeySchedule generation](): +[[overloaded-]] +==== Vector AES-128 Forward KeySchedule generation -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaeskf1 (vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vaeskf1 (vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vaeskf1 (vuint32m2_t vs2, size_t uimm, size_t vl); @@ -107,12 +106,13 @@ vuint32m1_t __riscv_vaeskf2 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_ vuint32m2_t __riscv_vaeskf2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf2 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- -### [Vector AES round zero](): +[[overloaded-]] +==== Vector AES round zero -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesz (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); @@ -127,5 +127,4 @@ vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md 
b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc similarity index 92% rename from auto-generated/vector-crypto/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md rename to auto-generated/vector-crypto/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc index 2b8a36920..0c818e28d 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc @@ -1,10 +1,11 @@ -## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: +=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash -### [Vector SHA-2 message schedule](): +[[overloaded-]] +==== Vector SHA-2 message schedule -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsha2ms (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsha2ms (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsha2ms (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -14,12 +15,13 @@ vuint64m1_t __riscv_vsha2ms (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, s vuint64m2_t __riscv_vsha2ms (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); vuint64m4_t __riscv_vsha2ms (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2ms (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -``` +---- -### [Vector SHA-2 two rounds of compression](): +[[overloaded-]] +==== Vector SHA-2 two rounds of compression -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsha2ch (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsha2ch (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsha2ch (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -38,4 +40,4 @@ vuint64m1_t __riscv_vsha2cl (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, s vuint64m2_t __riscv_vsha2cl (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); vuint64m4_t __riscv_vsha2cl (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2cl (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc similarity index 88% rename from auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md rename to auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc index 8b9eb1b2a..f5ad8d8fa 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc @@ -1,21 +1,23 @@ -## Zvksed - ShangMi Suite: SM4 Block Cipher: +=== Zvksed - ShangMi Suite: SM4 Block Cipher -### [Vector SM4 KeyExpansion](): +[[overloaded-]] +==== Vector SM4 KeyExpansion -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm4k (vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vsm4k (vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vsm4k (vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vsm4k (vuint32m4_t 
vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vsm4k (vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- -### [Vector SM4 Rounds](): +[[overloaded-]] +==== Vector SM4 Rounds -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm4r_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -35,5 +37,4 @@ vuint32m4_t __riscv_vsm4r_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc similarity index 82% rename from auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md rename to auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc index a904879b0..ddf0b441c 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc @@ -1,24 +1,26 @@ -## Zvksh - ShangMi Suite: SM3 Secure Hash: +=== Zvksh - ShangMi Suite: SM3 Secure Hash -### [Vector SM3 Message Expansion](): +[[overloaded-]] +==== Vector SM3 Message Expansion -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm3me (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsm3me (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsm3me (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); vuint32m4_t __riscv_vsm3me (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); vuint32m8_t __riscv_vsm3me (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -``` +---- -### [Vector SM3 Message Expansion](): +[[overloaded-]] +==== Vector SM3 Compression -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm3c (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vsm3c (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vsm3c (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vsm3c (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vsm3c (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c index 1bc0fc4a4..43eef93e8 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c @@ -1,9 +1,6 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32mf2_tu(vd, vs2, vl); } @@ -79,8 +76,3 @@ vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_ vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl); } -
-vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_vs_u32m8_u32m8_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c index fa4536189..3c1d89651 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c @@ -1,9 +1,6 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32mf2_tu(vd, vs2, vl); } @@ -79,8 +76,3 @@ vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_ vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_vs_u32m8_u32m8_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c index d499b8720..1b82fcd8c 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c @@ -1,9 +1,6 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesef_vv_u32mf2_tu(vd, vs2, vl); } @@ -79,8 +76,3 @@ vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_ vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_vs_u32m8_u32m8_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c index 345b93db5..1db0f1bda 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c @@ -1,9 +1,6 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesem_vv_u32mf2_tu(vd, vs2, vl); } @@ -79,8 +76,3 @@ vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_ vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesem_vs_u32m8_u32m8_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c index 97339218d..4bbd0fb10 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c @@ -1,26 +1,22 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return
__riscv_vaeskf1_vi_u32mf2_tu(maskedoff, vs2, 0, vl); +vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vaeskf1_vi_u32mf2_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vaeskf1_vi_u32m1_tu(maskedoff, vs2, 0, vl); +vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vaeskf1_vi_u32m1_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vaeskf1_vi_u32m2_tu(maskedoff, vs2, 0, vl); +vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vaeskf1_vi_u32m2_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vaeskf1_vi_u32m4_tu(maskedoff, vs2, 0, vl); +vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vaeskf1_vi_u32m4_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vaeskf1_vi_u32m8_tu(maskedoff, vs2, 0, vl); +vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vaeskf1_vi_u32m8_tu(vd, vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c index 2451093d1..30150c660 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c @@ -1,9 +1,6 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32mf2_tu(vd, vs2, 0, vl); } @@ -23,4 +20,3 @@ vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vaeskf2_vi_u32m8_tu(vd, vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c index 57a9822f3..25486191d 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c @@ -1,9 +1,6 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vaesz_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } @@ -59,8 +56,3 @@ vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesz_vs_u32m4_u32m8_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesz_vs_u32m8_u32m8_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c index 6cdb97418..786635b20 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c @@ -1,710 +1,706 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t;
-typedef double float64_t; -vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf8_tu(maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf8_tu(maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf4_tu(maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf4_tu(maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf2_tu(maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf2_tu(maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m1_tu(maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m1_tu(maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m2_tu(maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m2_tu(maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m4_tu(maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t 
rs1, size_t vl) { - return __riscv_vandn_vx_u8m4_tu(maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m8_tu(maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m8_tu(maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return 
__riscv_vandn_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return 
__riscv_vandn_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return 
__riscv_vandn_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - 
return __riscv_vandn_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, 
vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return 
__riscv_vandn_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t vm, 
vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return 
__riscv_vandn_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, 
size_t vl) { - return __riscv_vandn_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return 
 }
-vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8mf2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8mf2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u8m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u8m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16mf2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16mf2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u16m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u16m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32mf2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32mf2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u32m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u32m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c
index f4a0371a9..5a16e6adf 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c
@@ -1,358 +1,354 @@
 #include
 #include
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
-vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf8_tu(maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_tu(vd, vs2, vl);
 }
-vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf4_tu(maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_tu(vd, vs2, vl);
 }
-vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf2_tu(maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_tu(vd, vs2, vl);
 }
-vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m1_tu(maskedoff, vs2, vl);
+vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_tu(vd, vs2, vl);
 }
-vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m2_tu(maskedoff, vs2, vl);
+vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_tu(vd, vs2, vl);
 }
-vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m4_tu(maskedoff, vs2, vl);
+vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_tu(vd, vs2, vl);
 }
-vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m8_tu(maskedoff, vs2, vl);
+vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_tu(vd, vs2, vl);
 }
-vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16mf4_tu(maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_tu(vd, vs2, vl);
 }
-vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16mf2_tu(maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_tu(vd, vs2, vl);
 }
-vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m1_tu(maskedoff, vs2, vl);
+vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_tu(vd, vs2, vl);
 }
-vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m2_tu(maskedoff, vs2, vl);
+vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_tu(vd, vs2, vl);
 }
-vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m4_tu(maskedoff, vs2, vl);
+vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_tu(vd, vs2, vl);
 }
-vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m8_tu(maskedoff, vs2, vl);
+vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_tu(vd, vs2, vl);
 }
-vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32mf2_tu(maskedoff, vs2, vl);
+vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_tu(vd, vs2, vl);
 }
-vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m1_tu(maskedoff, vs2, vl);
+vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_tu(vd, vs2, vl);
 }
-vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m2_tu(maskedoff, vs2, vl);
+vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_tu(vd, vs2, vl);
 }
-vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m4_tu(maskedoff, vs2, vl);
+vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_tu(vd, vs2, vl);
 }
-vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m8_tu(maskedoff, vs2, vl);
+vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_tu(vd, vs2, vl);
 }
-vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m1_tu(maskedoff, vs2, vl);
+vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_tu(vd, vs2, vl);
 }
-vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m2_tu(maskedoff, vs2, vl);
+vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_tu(vd, vs2, vl);
 }
-vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m4_tu(maskedoff, vs2, vl);
+vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_tu(vd, vs2, vl);
 }
-vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m8_tu(maskedoff, vs2, vl);
+vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_tu(vd, vs2, vl);
 }
-vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_tum(vm, vd, vs2, vl);
 }
-vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf4_tum(mask, maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_tum(vm, vd, vs2, vl);
 }
-vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf2_tum(mask, maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_tum(vm, vd, vs2, vl);
 }
-vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m1_tum(mask, maskedoff, vs2, vl);
+vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_tum(vm, vd, vs2, vl);
 }
-vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m2_tum(mask, maskedoff, vs2, vl);
+vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_tum(vm, vd, vs2, vl);
 }
-vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m4_tum(mask, maskedoff, vs2, vl);
+vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_tum(vm, vd, vs2, vl);
 }
-vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m8_tum(mask, maskedoff, vs2, vl);
+vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_tum(vm, vd, vs2, vl);
 }
-vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16mf4_tum(mask, maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_tum(vm, vd, vs2, vl);
 }
-vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16mf2_tum(mask, maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_tum(vm, vd, vs2, vl);
 }
-vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m1_tum(mask, maskedoff, vs2, vl);
+vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_tum(vm, vd, vs2, vl);
 }
-vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m2_tum(mask, maskedoff, vs2, vl);
+vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_tum(vm, vd, vs2, vl);
 }
-vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m4_tum(mask, maskedoff, vs2, vl);
+vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_tum(vm, vd, vs2, vl);
 }
-vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m8_tum(mask, maskedoff, vs2, vl);
+vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_tum(vm, vd, vs2, vl);
 }
-vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32mf2_tum(mask, maskedoff, vs2, vl);
+vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_tum(vm, vd, vs2, vl);
 }
-vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m1_tum(mask, maskedoff, vs2, vl);
+vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_tum(vm, vd, vs2, vl);
 }
-vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m2_tum(mask, maskedoff, vs2, vl);
+vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_tum(vm, vd, vs2, vl);
 }
-vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m4_tum(mask, maskedoff, vs2, vl);
+vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_tum(vm, vd, vs2, vl);
 }
-vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m8_tum(mask, maskedoff, vs2, vl);
+vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_tum(vm, vd, vs2, vl);
 }
-vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m1_tum(mask, maskedoff, vs2, vl);
+vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_tum(vm, vd, vs2, vl);
 }
-vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m2_tum(mask, maskedoff, vs2, vl);
+vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_tum(vm, vd, vs2, vl);
 }
-vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m4_tum(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_tum(vm, vd, vs2, vl);
 }
-vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m8_tum(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_tum(vm, vd, vs2, vl);
 }
-vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf8_tumu(mask, maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_tumu(vm, vd, vs2, vl);
 }
-vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf4_tumu(mask, maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_tumu(vm, vd, vs2, vl);
 }
-vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf2_tumu(mask, maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_tumu(vm, vd, vs2, vl);
 }
-vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m1_tumu(mask, maskedoff, vs2, vl);
+vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_tumu(vm, vd, vs2, vl);
 }
-vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m2_tumu(mask, maskedoff, vs2, vl);
+vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_tumu(vm, vd, vs2, vl);
 }
-vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m4_tumu(mask, maskedoff, vs2, vl);
+vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_tumu(vm, vd, vs2, vl);
 }
-vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m8_tumu(mask, maskedoff, vs2, vl);
+vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_tumu(vm, vd, vs2, vl);
 }
-vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16mf4_tumu(mask, maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_tumu(vm, vd, vs2, vl);
 }
-vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16mf2_tumu(mask, maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_tumu(vm, vd, vs2, vl);
 }
-vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m1_tumu(mask, maskedoff, vs2, vl);
+vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_tumu(vm, vd, vs2, vl);
 }
-vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m2_tumu(mask, maskedoff, vs2, vl);
+vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_tumu(vm, vd, vs2, vl);
 }
-vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m4_tumu(mask, maskedoff, vs2, vl);
+vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_tumu(vm, vd, vs2, vl);
 }
-vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m8_tumu(mask, maskedoff, vs2, vl);
+vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m8_tumu(vm, vd, vs2, vl);
 }
-vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32mf2_tumu(mask, maskedoff, vs2, vl);
+vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_tumu(vm, vd, vs2, vl);
 }
-vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m1_tumu(mask, maskedoff, vs2, vl);
+vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_tumu(vm, vd, vs2, vl);
 }
-vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m2_tumu(mask, maskedoff, vs2, vl);
+vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_tumu(vm, vd, vs2, vl);
 }
-vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m4_tumu(mask, maskedoff, vs2, vl);
+vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_tumu(vm, vd, vs2, vl);
 }
-vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m8_tumu(mask, maskedoff, vs2, vl);
+vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_tumu(vm, vd, vs2, vl);
 }
-vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m1_tumu(mask, maskedoff, vs2, vl);
+vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_tumu(vm, vd, vs2, vl);
 }
-vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m2_tumu(mask, maskedoff, vs2, vl);
+vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_tumu(vm, vd, vs2, vl);
 }
-vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m4_tumu(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_tumu(vm, vd, vs2, vl);
 }
-vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m8_tumu(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_tumu(vm, vd, vs2, vl);
 }
-vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf8_mu(mask, maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_mu(vm, vd, vs2, vl);
 }
-vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf4_mu(mask, maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_mu(vm, vd, vs2, vl);
 }
-vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf2_mu(mask, maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_mu(vm, vd, vs2, vl);
 }
-vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m1_mu(mask, maskedoff, vs2, vl);
+vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_mu(vm, vd, vs2, vl);
 }
-vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m2_mu(mask, maskedoff, vs2, vl);
+vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_mu(vm, vd, vs2, vl);
 }
-vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m4_mu(mask, maskedoff, vs2, vl);
+vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_mu(vm, vd, vs2, vl);
 }
-vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m8_mu(mask, maskedoff, vs2, vl);
+vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m8_mu(vm, vd, vs2, vl);
 }
-vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16mf4_mu(mask, maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf4_mu(vm, vd, vs2, vl);
 }
-vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16mf2_mu(mask, maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16mf2_mu(vm, vd, vs2, vl);
 }
-vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m1_mu(mask, maskedoff, vs2, vl);
+vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m1_mu(vm, vd, vs2, vl);
 }
-vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m2_mu(mask, maskedoff, vs2, vl);
+vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m2_mu(vm, vd, vs2, vl);
 }
-vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m4_mu(mask, maskedoff, vs2, vl);
+vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u16m4_mu(vm, vd, vs2, vl);
 }
-vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u16m8_mu(mask, maskedoff, vs2, vl);
+vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
  + return __riscv_vbrev_v_u16m8_mu(vm, vd, vs2, vl);
 }
-vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32mf2_mu(vm, vd, vs2, vl);
 }
-vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m1_mu(mask, maskedoff, vs2, vl);
+vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m1_mu(vm, vd, vs2, vl);
 }
-vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m2_mu(mask, maskedoff, vs2, vl);
+vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m2_mu(vm, vd, vs2, vl);
 }
-vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m4_mu(mask, maskedoff, vs2, vl);
+vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m4_mu(vm, vd, vs2, vl);
 }
-vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u32m8_mu(mask, maskedoff, vs2, vl);
+vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u32m8_mu(vm, vd, vs2, vl);
 }
-vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m1_mu(mask, maskedoff, vs2, vl);
+vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m1_mu(vm, vd, vs2, vl);
 }
-vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m2_mu(mask, maskedoff, vs2, vl);
+vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m2_mu(vm, vd, vs2, vl);
 }
-vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m4_mu(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_mu(vm, vd, vs2, vl);
 }
-vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m8_mu(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_mu(vm, vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c
index d9a0c3cc2..6186201fc 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c
@@ -1,358 +1,354 @@
 #include
 #include
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
-vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8mf8_tu(maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_tu(vd, vs2, vl);
 }
-vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8mf4_tu(maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_tu(vd, vs2, vl);
 }
-vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8mf2_tu(maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_tu(vd, vs2, vl);
 }
-vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m1_tu(maskedoff, vs2, vl);
+vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_tu(vd, vs2, vl);
 }
-vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m2_tu(maskedoff, vs2, vl);
+vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_tu(vd, vs2, vl);
 }
-vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m4_tu(maskedoff, vs2, vl);
+vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_tu(vd, vs2, vl);
 }
-vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m8_tu(maskedoff, vs2, vl);
+vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_tu(vd, vs2, vl);
 }
-vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16mf4_tu(maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_tu(vd, vs2, vl);
 }
-vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16mf2_tu(maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_tu(vd, vs2, vl);
 }
-vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16m1_tu(maskedoff, vs2, vl);
+vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_tu(vd, vs2, vl);
 }
-vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16m2_tu(maskedoff, vs2, vl);
+vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_tu(vd, vs2, vl);
 }
-vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16m4_tu(maskedoff, vs2, vl);
+vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_tu(vd, vs2, vl);
 }
-vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16m8_tu(maskedoff, vs2, vl);
+vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_tu(vd, vs2, vl);
 }
-vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u32mf2_tu(maskedoff, vs2, vl);
+vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_tu(vd, vs2, vl);
 }
-vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u32m1_tu(maskedoff, vs2, vl);
+vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_tu(vd, vs2, vl);
 }
-vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u32m2_tu(maskedoff, vs2, vl);
+vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_tu(vd, vs2, vl);
 }
-vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u32m4_tu(maskedoff, vs2, vl);
+vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_tu(vd, vs2, vl);
 }
-vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u32m8_tu(maskedoff, vs2, vl);
+vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_tu(vd, vs2, vl);
 }
-vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u64m1_tu(maskedoff, vs2, vl);
+vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_tu(vd, vs2, vl);
 }
-vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u64m2_tu(maskedoff, vs2, vl);
+vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_tu(vd, vs2, vl);
 }
-vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u64m4_tu(maskedoff, vs2, vl);
+vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_tu(vd, vs2, vl);
 }
-vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u64m8_tu(maskedoff, vs2, vl);
+vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_tu(vd, vs2, vl);
 }
-vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_tum(vm, vd, vs2, vl);
 }
-vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_tum(vm, vd, vs2, vl);
 }
-vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_tum(vm, vd, vs2, vl);
 }
-vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m1_tum(mask, maskedoff, vs2, vl);
+vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_tum(vm, vd, vs2, vl);
 }
-vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m2_tum(mask, maskedoff, vs2, vl);
+vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_tum(vm, vd, vs2, vl);
 }
-vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m4_tum(mask, maskedoff, vs2, vl);
+vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_tum(vm, vd, vs2, vl);
 }
-vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m8_tum(mask, maskedoff, vs2, vl);
+vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_tum(vm, vd, vs2, vl);
 }
-vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf4_tum(vm, vd, vs2, vl);
 }
-vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16mf2_tum(vm, vd, vs2, vl);
 }
-vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16m1_tum(mask, maskedoff, vs2, vl);
+vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m1_tum(vm, vd, vs2, vl);
 }
-vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16m2_tum(mask, maskedoff, vs2, vl);
+vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m2_tum(vm, vd, vs2, vl);
 }
-vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16m4_tum(mask, maskedoff, vs2, vl);
+vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
test_vbrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m8_tum(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m1_tum(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m2_tum(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m4_tum(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m8_tum(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m1_tum(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m2_tum(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m4_tum(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m8_tum(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl); +vuint8mf8_t 
test_vbrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl); +vuint16m4_t 
test_vbrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf8_mu(mask, 
maskedoff, vs2, vl); +vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m1_mu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m2_mu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m4_mu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m8_mu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m1_mu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m2_mu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m4_mu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, 
vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m4_mu(vm, vd, vs2, vl);
 }

-vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u16m8_mu(mask, maskedoff, vs2, vl);
+vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u16m8_mu(vm, vd, vs2, vl);
 }

-vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl);
+vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32mf2_mu(vm, vd, vs2, vl);
 }

-vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u32m1_mu(mask, maskedoff, vs2, vl);
+vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m1_mu(vm, vd, vs2, vl);
 }

-vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u32m2_mu(mask, maskedoff, vs2, vl);
+vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m2_mu(vm, vd, vs2, vl);
 }

-vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u32m4_mu(mask, maskedoff, vs2, vl);
+vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m4_mu(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u32m8_mu(mask, maskedoff, vs2, vl);
+vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u32m8_mu(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u64m1_mu(mask, maskedoff, vs2, vl);
+vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m1_mu(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u64m2_mu(mask, maskedoff, vs2, vl);
+vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m2_mu(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u64m4_mu(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_mu(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u64m8_mu(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_mu(vm, vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c
index bc3add0ee..22f2b9b4b 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c
@@ -1,134 +1,130 @@
 #include <stdint.h>
 #include <riscv_vector.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

-vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vclmul_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m1_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m1_tu(vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vclmul_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m2_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m2_tu(vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vclmul_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m4_tu(vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m4_tu(vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vclmul_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m8_tu(vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m8_tu(vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vclmul_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m1_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t
mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t 
rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m4_mu(mask, maskedoff, 
vs2, rs1, vl);
+vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vclmul_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c
index 7ca88e340..a43662a06 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c
@@ -1,134 +1,130 @@
 #include <stdint.h>
 #include <riscv_vector.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

-vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vclmulh_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m1_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmulh_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m1_tu(vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vclmulh_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m2_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmulh_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m2_tu(vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vclmulh_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m4_tu(vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmulh_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m4_tu(vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vclmulh_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m1_tumu(mask, 
maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_mu(vm, vd, vs2, rs1, 
vl);
 }

-vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vclmulh_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m2_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmulh_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vclmulh_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmulh_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vclmulh_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmulh_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmulh_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c
index 48ec0cb4b..731050d9c 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c
@@ -1,9 +1,6 @@
 #include <stdint.h>
 #include <riscv_vector.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
   return __riscv_vghsh_vv_u32mf2_tu(vd, vs2, vs1, vl);
 }
@@ -23,4 +20,3 @@ vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t
 vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
   return __riscv_vghsh_vv_u32m8_tu(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c
index 13b28496d..ed035adf4 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c
@@ -1,9 +1,6 @@
 #include <stdint.h>
 #include <riscv_vector.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

 vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
  return __riscv_vgmul_vv_u32mf2_tu(vd, vs2, vl);
 }
@@ -23,4 +20,3 @@ vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
 vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vgmul_vv_u32m8_tu(vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c
index e5a425b5f..ef1976f3e 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c
@@ -1,358 +1,354 @@
 #include <stdint.h>
 #include <riscv_vector.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

-vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vrev8_v_u8mf8_tu(maskedoff, vs2, vl);
+vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf8_tu(vd, vs2, vl);
 }

-vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vrev8_v_u8mf4_tu(maskedoff, vs2, vl);
+vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf4_tu(vd, vs2, vl);
 }

-vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_v_u8mf2_tu(maskedoff, vs2, vl);
+vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8mf2_tu(vd, vs2, vl);
 }

-vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vrev8_v_u8m1_tu(maskedoff, vs2, vl);
+vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m1_tu(vd, vs2, vl);
 }

-vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vrev8_v_u8m2_tu(maskedoff, vs2, vl);
+vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m2_tu(vd, vs2, vl);
 }

-vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vrev8_v_u8m4_tu(maskedoff, vs2, vl);
+vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m4_tu(vd, vs2, vl);
 }

-vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vrev8_v_u8m8_tu(maskedoff, vs2, vl);
+vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u8m8_tu(vd, vs2, vl);
 }

-vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vrev8_v_u16mf4_tu(maskedoff, vs2, vl);
+vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf4_tu(vd, vs2, vl);
 }

-vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_v_u16mf2_tu(maskedoff, vs2, vl);
+vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16mf2_tu(vd, vs2, vl);
 }

-vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vrev8_v_u16m1_tu(maskedoff, vs2, vl);
+vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_v_u16m1_tu(vd, vs2, vl);
 }

-vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return
__riscv_vrev8_v_u16m2_tu(maskedoff, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_tu(vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m4_tu(maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_tu(vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m8_tu(maskedoff, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32mf2_tu(maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m1_tu(maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m2_tu(maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m4_tu(maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m8_tu(maskedoff, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tu(vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m1_tu(maskedoff, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m2_tu(maskedoff, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m4_tu(maskedoff, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tu(vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m8_tu(maskedoff, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t 
test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m1_tum(mask, maskedoff, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m2_tum(mask, maskedoff, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m4_tum(mask, maskedoff, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m8_tum(mask, maskedoff, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m1_tum(mask, maskedoff, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m2_tum(mask, maskedoff, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m4_tum(mask, maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) 
{ - return __riscv_vrev8_v_u16m8_tum(mask, maskedoff, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m1_tum(mask, maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m2_tum(mask, maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m4_tum(mask, maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m8_tum(mask, maskedoff, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m1_tum(mask, maskedoff, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m2_tum(mask, maskedoff, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m4_tum(mask, maskedoff, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m8_tum(mask, maskedoff, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf4_tumu(mask, 
maskedoff, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl); +vuint16m8_t 
test_vrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl); +vuint8mf4_t 
test_vrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m1_mu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m2_mu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m4_mu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m8_mu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m1_mu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m2_mu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m4_mu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m8_mu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_mu(vm, vd, vs2, vl); 
} -vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m1_mu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m2_mu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m4_mu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m8_mu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m1_mu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m2_mu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m4_mu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m8_mu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_mu(vm, vd, vs2, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c index a023644e3..d630a488c 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c @@ -1,710 +1,706 @@ #include #include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf8_tu(maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return 
__riscv_vrol_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf8_tu(maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf4_tu(maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf4_tu(maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf2_tu(maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf2_tu(maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m1_tu(maskedoff, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m1_tu(maskedoff, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m2_tu(maskedoff, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m2_tu(maskedoff, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m4_tu(maskedoff, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m4_tu(maskedoff, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return 
__riscv_vrol_vv_u8m8_tu(maskedoff, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m8_tu(maskedoff, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +vuint16m4_t 
test_vrol_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + 
return __riscv_vrol_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vrol_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t 
test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tum(vm, vd, 
vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vrol_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { + 
return __riscv_vrol_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t 
rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, 
vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); 
+vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return 
__riscv_vrol_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8mf2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8mf2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u8m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u8m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16mf2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16mf2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u16m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u16m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32mf2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32mf2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u32m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u32m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vrol_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vror.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vror.c
index c94ef3774..f62f3eb6e 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vror.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vror.c
@@ -1,710 +1,706 @@
 #include <stdint.h>
 #include <riscv_vector.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
-vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_tu(vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_tu(vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_tu(vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf4_tu(maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_tu(vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf2_tu(maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_tu(vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf2_tu(maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_tu(vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m1_tu(maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m1_tu(maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_tu(vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m2_tu(maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_tu(vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m2_tu(maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_tu(vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m4_tu(maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_tu(vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m4_tu(maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_tu(vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m8_tu(maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_tu(vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m8_tu(maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_tu(vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_tu(vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_tu(vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_tu(vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_tu(vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_tu(vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_tu(vd, vs2, vs1, vl);
 }
-vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_tu(vd, vs2, rs1, vl);
 }
-vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_tu(vd, vs2, vs1, vl);
 }
-vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_tu(vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_tu(vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_tu(vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_tu(vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_tu(vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_tu(vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_tu(vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
  + return __riscv_vror_vx_u32m2_tu(vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_tu(vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_tu(vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_tu(vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_tu(vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_tu(vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_tu(vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_tu(vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_tu(vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_tu(vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_tu(vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_tu(vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m2_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m4_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m8_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m1_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m1_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m2_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m2_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m4_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u32m8_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u32m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m1_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m1_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m2_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m2_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m4_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u64m8_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u64m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8mf2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8mf2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_vv_u8m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u8m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16mf2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16mf2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_vx_u16m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_vv_u16m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m2_t 
test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vror_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vror_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vror_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vror_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t 
test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vror_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vror_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vror_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vror_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vror_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t 
mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vror_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c index 9940b82c2..1d9b85bc0 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c @@ -1,9 +1,6 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32mf2_tu(vd, vs2, vs1, vl); } @@ -39,4 +36,3 @@ vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_ vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m8_tu(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c index 11360869d..468a4d938 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c @@ -1,9 +1,6 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32mf2_tu(vd, vs2, vs1, vl); } @@ -39,4 +36,3 @@ vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_ vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m8_tu(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c index b9e9f83b2..9ee82d425 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c @@ -1,9 +1,6 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32mf2_tu(vd, vs2, vs1, vl); } @@ -39,4 +36,3 @@ vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_ vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m8_tu(vd, vs2, vs1,
vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c index b0b2246a3..f420557dc 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c @@ -1,9 +1,6 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32mf2_tu(vd, vs2, 0, vl); } @@ -23,4 +20,3 @@ vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c_vi_u32m8_tu(vd, vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c index 3df7ce142..9b635b0d8 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c @@ -1,26 +1,22 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vsm3me_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vsm3me_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vsm3me_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vsm3me_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vsm3me_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vsm3me_vv_u32m8_tu(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c index 05dc6da60..270812106 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c @@ -1,26 +1,22 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vsm4k_vi_u32mf2_tu(maskedoff, vs2, 0, vl); +vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +
return __riscv_vsm4k_vi_u32mf2_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4k_vi_u32m1_tu(maskedoff, vs2, 0, vl); +vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4k_vi_u32m1_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4k_vi_u32m2_tu(maskedoff, vs2, 0, vl); +vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4k_vi_u32m2_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4k_vi_u32m4_tu(maskedoff, vs2, 0, vl); +vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4k_vi_u32m4_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vsm4k_vi_u32m8_tu(maskedoff, vs2, 0, vl); +vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vsm4k_vi_u32m8_tu(vd, vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c index 996ca813c..4c95663f3 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c @@ -1,9 +1,6 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32mf2_tu(vd, vs2, vl); } @@ -79,8 +76,3 @@ vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m8_tu(vd, vs2, vl); } - -vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vsm4r_vs_u32m8_u32m8_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c index ca8992376..56b35568a 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c @@ -1,486 +1,482 @@ #include <riscv_vector.h> #include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t
test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t 
vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return 
__riscv_vwsll_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t vm, 
vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m1_tum(mask, maskedoff, vs2, rs1, 
vl); +vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return 
__riscv_vwsll_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t 
test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t 
vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); 
+vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, 
vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, 
vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, 
vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md index 84544a85a..9e568e634 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -1,1038 +1,1045 @@ -## Zvbb - Vector Bit-manipulation used in Cryptography: +=== Zvbb - Vector Bit-manipulation used in Cryptography -### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): +[[policy-variant-]] +==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not -**Prototypes:** -``` C -vuint8mf8_t __riscv_vandn_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, 
size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t 
vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +[,c] +---- +vuint8mf8_t __riscv_vandn_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t 
__riscv_vandn_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); 
-vuint16m1_t __riscv_vandn_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t 
rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t 
vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); 
-vuint16m2_t __riscv_vandn_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, 
vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tumu 
(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, 
vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -``` +vuint8mf8_t __riscv_vandn_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); 
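+// Illustrative usage sketch for the `_mu` (mask-undisturbed) variants listed
+// here: elements whose `vm` bit is 0 keep the value of the corresponding
+// element of `vd`, while active elements receive vs2 & ~vs1.
+// Hypothetical call, reusing the parameter names from the prototypes:
+//   vuint8m1_t r = __riscv_vandn_vv_u8m1_mu(vm, vd, vs2, vs1, vl);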
+vuint8m1_t __riscv_vandn_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t 
__riscv_vandn_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +---- -### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): +[[policy-variant-]] +==== Vector Basic Bit-manipulation - Reverse Bits in Elements -**Prototypes:** -``` C -vuint8mf8_t __riscv_vbrev_v_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tu 
(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +[,c] +---- +vuint8mf8_t __riscv_vbrev_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t 
__riscv_vbrev_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t 
__riscv_vbrev8_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); 
-vuint32m2_t __riscv_vbrev_v_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, 
size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t 
__riscv_vbrev_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t 
__riscv_vrev8_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, 
size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tumu (vbool32_t mask, 
vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t 
vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tumu (vbool2_t vm, 
vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t 
__riscv_vbrev8_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_mu 
(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -``` +vuint8mf8_t __riscv_vbrev_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t 
vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t 
__riscv_vrev8_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vrev8_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vrev8_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
+----
-### [Vector Bit-manipulation used in Cryptography - Count Bits]():
-This operation don't have Policy Intrinsic Functions.
+[[policy-variant-count-bits]]
+==== Vector Basic Bit-manipulation - Count Bits
+Intrinsics here do not have a policy variant.
-### [Vector Bit-manipulation used in Cryptography - Rotate]():
+[[policy-variant-rotate]]
+==== Vector Bit-manipulation used in Cryptography - Rotate
-**Prototypes:**
-``` C
-vuint8mf8_t __riscv_vrol_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vrol_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vrol_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vrol_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vrol_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vrol_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vrol_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vrol_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vrol_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vrol_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vrol_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vrol_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vrol_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vrol_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vrol_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vrol_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vrol_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vrol_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vrol_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vrol_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vrol_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vrol_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vrol_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vrol_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vrol_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t
__riscv_vrol_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); 
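The `_tu` rotate intrinsics above take the destination operand `vd` first, and elements past `vl` keep the values already in `vd`. A minimal usage sketch, assuming a toolchain with the vector bit-manipulation (`Zvbb`) extension enabled; the function `rol8_words` and its parameters are illustrative, not part of the specification:

[,c]
----
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// Left-rotate each 32-bit element of src by 8 bits into dst, stripmining
// over n elements. With the _tu policy, register tail elements (indices
// >= vl) keep the values of the destination operand vd.
void rol8_words(uint32_t *dst, const uint32_t *src, size_t n) {
  for (size_t i = 0; i < n;) {
    size_t vl = __riscv_vsetvl_e32m1(n - i);
    vuint32m1_t vd = __riscv_vle32_v_u32m1(dst + i, vl);   // prior contents
    vuint32m1_t vs2 = __riscv_vle32_v_u32m1(src + i, vl);  // elements to rotate
    vd = __riscv_vrol_vx_u32m1_tu(vd, vs2, 8, vl);
    __riscv_vse32_v_u32m1(dst + i, vd, vl);
    i += vl;
  }
}
----

The extra load of `dst` exists only to give the `_tu` policy a defined tail; with the plain `__riscv_vrol_vx_u32m1` form the tail is agnostic and that load can be dropped.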
-vuint16mf2_t __riscv_vror_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +[,c] +---- +vuint8mf8_t __riscv_vrol_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t 
vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); 
+vuint64m8_t __riscv_vrol_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t 
__riscv_vror_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t 
vl); -vuint16m4_t __riscv_vrol_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); 
-vuint8m1_t __riscv_vror_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t 
__riscv_vror_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); 
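+// Illustrative usage (a minimal editorial sketch, assuming <riscv_vector.h>
+// and the Zvbb extension are available): the _tum variants are
+// tail-undisturbed and mask-agnostic, so vd supplies the tail elements while
+// vm selects the active ones, e.g.
+//   vuint16m4_t r = __riscv_vrol_vx_u16m4_tum(vm, vd, vs2, 5, vl);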
+vuint16m4_t __riscv_vrol_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tum (vbool2_t vm, 
vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, 
size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, 
vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, 
size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tumu (vbool16_t mask, 
vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tumu (vbool64_t 
vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t 
vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_mu (vbool64_t mask, 
vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, 
size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_mu (vbool32_t mask, 
vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -``` +vuint8mf8_t __riscv_vrol_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t 
vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_mu (vbool8_t vm, 
vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_mu 
(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vror_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vror_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vror_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vror_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vror_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32mf2_t __riscv_vror_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vror_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vror_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vror_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vror_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vror_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vror_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vror_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vror_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vror_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vror_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vror_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vror_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vror_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vror_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vror_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vror_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl);
+----
-### [Vector Bit-manipulation used in Cryptography - Shift]():
+[[policy-variant-]]
+==== Vector Basic Bit-manipulation used - Widening Shift
-**Prototypes:**
-``` C
-vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vv_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vv_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vv_u16m4_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vv_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vv_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vv_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vv_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vv_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vv_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vv_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vv_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vv_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
+[,c]
+----
+vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu (vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu (vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vv_u16m1_tu (vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vv_u16m2_tu (vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vv_u16m4_tu (vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vv_u16m8_tu (vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t vd,
vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu (vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tu (vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tu (vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tu (vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tu (vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tu (vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tu (vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tu (vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tu (vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t 
maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t 
__riscv_vwsll_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, 
vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t 
__riscv_vwsll_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, 
size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -``` +vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, 
vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +---- -## Zvbc - Vector Carryless Multiplication: +=== Zvbc - Vector Carryless Multiplication -### [Vector Carryless Multiplication](): +[[policy-variant-]] +==== Vector Carryless Multiplication -**Prototypes:** -``` C -vuint64m1_t __riscv_vclmul_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2_tu 
(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +[,c] +---- +vuint64m1_t __riscv_vclmul_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); 
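// Illustrative usage sketch, assuming the Zvbc extension and <riscv_vector.h>:
// the tail-undisturbed vclmul/vclmulh pair listed above composes into one
// 64x64 -> 128-bit carryless multiply per element, the building block of
// GHASH- and CRC-style kernels. The helper name and its operands (a, b, and
// the destination operands vd_lo/vd_hi) are hypothetical, not part of the
// generated listing.
static inline void clmul128_u64m1_tu (vuint64m1_t vd_lo, vuint64m1_t vd_hi,
                                      vuint64m1_t a, vuint64m1_t b, size_t vl,
                                      vuint64m1_t *lo, vuint64m1_t *hi) {
  *lo = __riscv_vclmul_vv_u64m1_tu (vd_lo, a, b, vl);  // low 64 bits of each product
  *hi = __riscv_vclmulh_vv_u64m1_tu (vd_hi, a, b, vl); // high 64 bits of each product
}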
-vuint64m2_t __riscv_vclmulh_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t 
__riscv_vclmulh_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t 
__riscv_vclmul_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -``` +vuint64m1_t __riscv_vclmul_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +---- -## Zvkg - Vector GCM/GMAC: +=== Zvkg - Vector GCM/GMAC -### [Vector GCM/GMAC](): +[[policy-variant-]] +==== Vector GCM/GMAC -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vghsh_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vghsh_vv_u32m1_tu (vuint32m1_t vd, 
vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vghsh_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -1043,14 +1050,15 @@ vuint32m1_t __riscv_vgmul_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t v vuint32m2_t __riscv_vgmul_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vgmul_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vgmul_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -## Zvkned - NIST Suite: Vector AES Block Cipher: +=== Zvkned - NIST Suite: Vector AES Block Cipher -### [Vector AES Encryption](): +[[policy-variant-]] +==== Vector AES Encryption -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -1070,7 +1078,6 @@ vuint32m4_t __riscv_vaesef_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -1090,13 +1097,13 @@ vuint32m4_t __riscv_vaesem_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -### [Vector AES Decryption](): +[[policy-variant-]] +==== Vector AES Decryption -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -1116,7 +1123,6 @@ vuint32m4_t __riscv_vaesdf_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -1136,29 +1142,30 @@ vuint32m4_t __riscv_vaesdm_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); 
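// Illustrative round-sequencing sketch, assuming Zvkned, <riscv_vector.h>, and
// the u32m1 variants of the same intrinsic families listed here: the AES-128
// inverse cipher is a fixed sequence over these primitives -- vaesz adds the
// last round key, vaesdm runs the nine middle rounds, and vaesdf applies the
// final round. The helper name and the pre-expanded key schedule rk[0..10]
// are hypothetical.
static inline vuint32m1_t aes128_decrypt_u32m1_tu (vuint32m1_t state,
                                                   const vuint32m1_t rk[11],
                                                   size_t vl) {
  state = __riscv_vaesz_vs_u32m1_u32m1_tu (state, rk[10], vl);  // initial AddRoundKey
  for (int r = 9; r >= 1; --r)
    state = __riscv_vaesdm_vv_u32m1_tu (state, rk[r], vl);      // middle rounds
  return __riscv_vaesdf_vv_u32m1_tu (state, rk[0], vl);         // final round
}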
vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -### [Vector AES-128 Forward KeySchedule generation](): +[[policy-variant-]] +==== Vector AES-128 Forward KeySchedule generation -**Prototypes:** -``` C -vuint32mf2_t __riscv_vaeskf1_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +[,c] +---- +vuint32mf2_t __riscv_vaeskf1_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- -### [Vector AES round zero](): +[[policy-variant-]] +==== Vector AES round zero -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); @@ -1173,15 +1180,15 @@ vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, si vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: +=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash -### [Vector SHA-2 message schedule](): +[[policy-variant-]] +==== Vector SHA-2 message schedule -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsha2ms_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsha2ms_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsha2ms_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -1191,12 +1198,13 @@ vuint64m1_t __riscv_vsha2ms_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint6 vuint64m2_t 
__riscv_vsha2ms_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); vuint64m4_t __riscv_vsha2ms_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2ms_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -``` +---- -### [Vector SHA-2 two rounds of compression](): +[[policy-variant-]] +==== Vector SHA-2 two rounds of compression -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsha2ch_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsha2ch_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsha2ch_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -1215,25 +1223,27 @@ vuint64m1_t __riscv_vsha2cl_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint6 vuint64m2_t __riscv_vsha2cl_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); vuint64m4_t __riscv_vsha2cl_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2cl_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -``` +---- -## Zvksed - ShangMi Suite: SM4 Block Cipher: +=== Zvksed - ShangMi Suite: SM4 Block Cipher -### [Vector SM4 KeyExpansion](): +[[policy-variant-]] +==== Vector SM4 KeyExpansion -**Prototypes:** -``` C -vuint32mf2_t __riscv_vsm4k_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm4k_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm4k_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm4k_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +[,c] +---- +vuint32mf2_t __riscv_vsm4k_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +---- -### [Vector SM4 Rounds](): +[[policy-variant-]] +==== Vector SM4 Rounds -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -1253,29 +1263,30 @@ vuint32m4_t __riscv_vsm4r_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t v vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -## Zvksh - ShangMi Suite: SM3 Secure Hash: +=== Zvksh - ShangMi Suite: SM3 Secure Hash -### [Vector SM3 Message Expansion](): +[[policy-variant-]] +==== Vector SM3 Message Expansion -**Prototypes:** -``` C -vuint32mf2_t __riscv_vsm3me_vv_u32mf2_tu 
(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsm3me_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsm3me_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsm3me_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsm3me_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -``` +[,c] +---- +vuint32mf2_t __riscv_vsm3me_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsm3me_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsm3me_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsm3me_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsm3me_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +---- -### [Vector SM3 Message Expansion](): +[[policy-variant-]] +==== Vector SM3 Compression -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm3c_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vsm3c_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vsm3c_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vsm3c_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vsm3c_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc new file mode 100644 index 000000000..b261c089c --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -0,0 +1,958 @@ + +=== Zvbb - Vector Bit-manipulation used in Cryptography + +[[policy-variant-]] +==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not + +[,c] +---- +vuint8mf8_t __riscv_vandn_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t 
vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t 
__riscv_vandn_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t 
__riscv_vandn_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, 
vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t 
__riscv_vandn_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_mu 
(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vandn_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vandn_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vandn_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vandn_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vandn_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+----
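+
+Below is a non-normative usage sketch of the `_tu` (tail-undisturbed) variant of `vandn` listed above. The helper name, loop structure, and `-march` string are illustrative assumptions rather than part of this specification.
+
+[,c]
+----
+// Hypothetical example: clear the bits selected by `mask_bits` in
+// src[0..n), assuming a toolchain with Zvkb support
+// (e.g. compiled with -march=rv64gcv_zvkb).
+#include <riscv_vector.h>
+#include <stddef.h>
+#include <stdint.h>
+
+void clear_bits_tu(uint8_t *dst, const uint8_t *src, uint8_t mask_bits, size_t n) {
+  for (size_t vl; n > 0; n -= vl, src += vl, dst += vl) {
+    vl = __riscv_vsetvl_e8m1(n);
+    vuint8m1_t vd = __riscv_vle8_v_u8m1(dst, vl);
+    vuint8m1_t vs2 = __riscv_vle8_v_u8m1(src, vl);
+    // vandn computes vs2 & ~rs1 element-wise; the _tu suffix keeps the
+    // tail elements of vd (the loaded destination) undisturbed.
+    vd = __riscv_vandn_vx_u8m1_tu(vd, vs2, mask_bits, vl);
+    __riscv_vse8_v_u8m1(dst, vd, vl);
+  }
+}
+----
+
+[[policy-variant-]]
+==== Vector Basic Bit-manipulation - Reverse Bits in Elements
+
+[,c]
+----
+vuint8mf8_t __riscv_vbrev_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vbrev_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vbrev_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vbrev_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vbrev_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vbrev_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vbrev_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vbrev_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vbrev_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vbrev_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vbrev_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vbrev_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vbrev_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vbrev_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vbrev_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vbrev_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vbrev_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vbrev_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vbrev_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vbrev_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vbrev_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vbrev_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vbrev8_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vbrev8_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vbrev8_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vbrev8_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vbrev8_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vbrev8_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vbrev8_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vbrev8_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vbrev8_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vbrev8_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vbrev8_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl);
+vuint16m4_t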
__riscv_vbrev8_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, 
size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tum (vbool8_t vm, vuint64m8_t 
vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tumu 
(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t 
vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t 
vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t 
__riscv_vrev8_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vrev8_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vrev8_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vrev8_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vrev8_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vrev8_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vrev8_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vrev8_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vrev8_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vrev8_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vrev8_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
+----
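+
+Below is a non-normative usage sketch of the `_tu` (tail-undisturbed) variant of `vrev8` listed above. The helper name and loop structure are illustrative assumptions rather than part of this specification.
+
+[,c]
+----
+// Hypothetical example: byte-swap the 32-bit words of src[0..n)
+// (an endianness conversion), assuming a Zvkb-enabled toolchain.
+#include <riscv_vector.h>
+#include <stddef.h>
+#include <stdint.h>
+
+void bswap32_tu(uint32_t *dst, const uint32_t *src, size_t n) {
+  for (size_t vl; n > 0; n -= vl, src += vl, dst += vl) {
+    vl = __riscv_vsetvl_e32m1(n);
+    vuint32m1_t vd = __riscv_vle32_v_u32m1(dst, vl);
+    vuint32m1_t vs2 = __riscv_vle32_v_u32m1(src, vl);
+    // vrev8 reverses the byte order within each 32-bit element; the
+    // _tu suffix keeps the tail elements of vd undisturbed.
+    vd = __riscv_vrev8_v_u32m1_tu(vd, vs2, vl);
+    __riscv_vse32_v_u32m1(dst, vd, vl);
+  }
+}
+----
+
+[[policy-variant-]]
+==== Vector Basic Bit-manipulation - Count Bits
+Intrinsics here don't have a policy variant.
+
+[[policy-variant-]]
+==== Vector Bit-manipulation used in Cryptography - Rotate
+
+[,c]
+----
+vuint8mf8_t __riscv_vrol_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vrol_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vrol_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vrol_vx_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vrol_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vrol_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vrol_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vrol_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vrol_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vrol_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint8m4_t __riscv_vrol_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vrol_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vrol_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vrol_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vrol_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint16mf4_t __riscv_vrol_vx_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vrol_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint16mf2_t __riscv_vrol_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vrol_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vrol_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vrol_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vrol_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vrol_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t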
__riscv_vrol_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tu 
(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t 
__riscv_vrol_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, 
vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t 
__riscv_vror_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tumu (vbool1_t 
vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tumu 
(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, 
vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); 
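+// Editor's illustrative sketch, not part of the generated listing: a minimal
+// use of the mask-undisturbed (_mu) rotate variants listed here. Inactive
+// elements (vm bit clear) keep the value already held in vd; active elements
+// receive vs2 rotated left by rs1. The wrapper name and the shift amount are
+// hypothetical; assumes <riscv_vector.h> and a toolchain with Zvbb enabled.
+static inline vuint8m1_t
+rotl3_active_lanes (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl)
+{
+  return __riscv_vrol_vx_u8m1_mu (vm, vd, vs2, 3, vl);
+}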
+vuint16m1_t __riscv_vrol_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t 
vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_mu 
(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +---- + +[[policy-variant-]] +==== Vector Basic Bit-manipulation used - Widening Shift + +[,c] +---- +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu (vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu (vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tu (vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tu (vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tu (vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tu (vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu (vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tu (vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tu (vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tu (vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tu (vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tu (vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tu (vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tu (vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); 
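+// Editor's illustrative sketch, not part of the generated listing: vwsll
+// widens each source element to twice its width before shifting, so 16-bit
+// inputs yield 32-bit results, and the tail-undisturbed (_tu) policy keeps
+// elements past vl unchanged from vd. The wrapper name and its parameters are
+// hypothetical; assumes <riscv_vector.h> and a toolchain with Zvbb enabled.
+static inline vuint32m2_t
+widening_shift_left_tu (vuint32m2_t vd, vuint16m1_t vs2, size_t shamt, size_t vl)
+{
+  return __riscv_vwsll_vx_u32m2_tu (vd, vs2, shamt, vl);
+}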
+vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tu (vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t 
__riscv_vwsll_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, 
vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, 
vuint32m4_t vs2, size_t rs1, size_t vl); +---- diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md deleted file mode 100644 index 0031d9a2d..000000000 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md +++ /dev/null @@ -1,953 +0,0 @@ - -## Zvbb - Vector Bit-manipulation used in Cryptography: - -### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): - -**Prototypes:** -``` C -vuint8mf8_t __riscv_vandn_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); 
-vuint32mf2_t __riscv_vandn_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); 
-vuint16mf4_t __riscv_vandn_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tum (vbool8_t 
mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, 
vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_mu (vbool2_t 
mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t 
__riscv_vandn_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -``` - -### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): - -**Prototypes:** -``` C -vuint8mf8_t __riscv_vbrev_v_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tu (vuint16m2_t maskedoff, 
vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tum (vbool4_t mask, vuint8m2_t 
maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_tum (vbool16_t mask, 
vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tumu 
(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); 
-vuint32m2_t __riscv_vbrev8_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_mu (vbool8_t mask, 
vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, 
vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev8_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev8_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev8_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev8_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev8_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev8_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev8_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vrev8_v_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vrev8_v_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vrev8_v_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vrev8_v_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vrev8_v_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vrev8_v_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vrev8_v_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vrev8_v_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vrev8_v_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vrev8_v_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vrev8_v_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vrev8_v_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vrev8_v_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vrev8_v_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vrev8_v_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vrev8_v_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vrev8_v_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vrev8_v_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vrev8_v_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vrev8_v_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vrev8_v_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vrev8_v_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
-```
-
-### [Vector Bit-manipulation used in Cryptography - Count Bits]():
-These operations do not have Policy Intrinsic Functions.
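To make the policy suffixes in the listings above concrete, here is a minimal usage sketch of the tail-undisturbed (`_tu`) variant of `vrev8`. It is illustrative only and not part of this patch; it assumes a toolchain providing `<riscv_vector.h>` with the Zvbb/Zvkb vector bit-manipulation extensions enabled, and the function name `bswap32_head` is invented for the example.

``` C
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// Byte-reverse (bswap) up to one register's worth of 32-bit words loaded
// from `src`, merging the result into `dest`: with the _tu policy, result
// elements [0, vl) hold the byte-reversed loaded values, while tail
// elements [vl, VLMAX) keep the values of `dest` (the maskedoff operand)
// instead of being left agnostic.
vuint32m1_t bswap32_head(vuint32m1_t dest, const uint32_t *src, size_t n) {
  size_t vl = __riscv_vsetvl_e32m1(n);              // vl = min(n, VLMAX)
  vuint32m1_t vs2 = __riscv_vle32_v_u32m1(src, vl); // load vl words
  return __riscv_vrev8_v_u32m1_tu(dest, vs2, vl);   // bswap, tail undisturbed
}
```

The masked variants (`_tum`, `_tumu`, `_mu`) follow the same pattern with a leading mask operand, selecting undisturbed or agnostic behavior independently for tail and inactive elements.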
- -### [Vector Bit-manipulation used in Cryptography - Rotate](): - -**Prototypes:** -``` C -vuint8mf8_t __riscv_vrol_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, 
size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, 
vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, 
vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, 
vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, 
size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, 
size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t 
maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t 
maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_mu (vbool4_t 
mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t 
vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vrol_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vrol_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vrol_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vrol_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vrol_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vrol_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl);
-vuint8mf8_t __riscv_vror_vv_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vror_vx_u8mf8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vror_vv_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vror_vx_u8mf4_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vror_vv_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vror_vx_u8mf2_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vror_vv_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vror_vx_u8m1_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vror_vv_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vror_vx_u8m2_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vror_vv_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vror_vx_u8m4_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vror_vv_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vror_vx_u8m8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vror_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vror_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vror_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vror_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vror_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vror_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vror_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vror_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vror_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vror_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vror_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vror_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vror_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vror_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vror_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vror_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vror_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vror_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vror_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vror_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vror_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vror_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vror_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vror_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vror_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vror_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vror_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vror_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vror_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vror_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl);
-```
-
-### [Vector Bit-manipulation used in Cryptography - Shift]():
-
-**Prototypes:**
-``` C
-vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vv_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vv_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vv_u16m4_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vv_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vv_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vv_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vv_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vv_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vv_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vv_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vv_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vv_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
-// masked functions
-vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
-// masked functions
-vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vx_u16m1_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vv_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
-// masked functions
-vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vx_u16m8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
-```
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
new file mode 100644
index 000000000..559ba54e5
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
@@ -0,0 +1,76 @@
+
+=== Zvbc - Vector Carryless Multiplication
+
+[[policy-variant-]]
+==== Vector Carryless Multiplication
+
+[,c]
+----
+vuint64m1_t __riscv_vclmul_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+// masked functions
+vuint64m1_t __riscv_vclmul_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+// masked functions
+vuint64m1_t __riscv_vclmul_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+// masked functions
+vuint64m1_t __riscv_vclmul_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+----
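As a point of reference, the sketch below shows how the `_tu` (tail-undisturbed) variants above might be used; the kernel name, buffer layout, and strip-mining pattern are illustrative assumptions, not part of the generated listing.

[,c]
----
#include <riscv_vector.h>

// Hypothetical kernel: carryless-multiply two u64 arrays elementwise,
// passing the old destination values as vd so the _tu policy keeps
// tail elements of the destination register undisturbed.
void clmul_lo(uint64_t *dst, const uint64_t *a, const uint64_t *b, size_t n) {
  for (size_t vl; n > 0; n -= vl, a += vl, b += vl, dst += vl) {
    vl = __riscv_vsetvl_e64m1(n);
    vuint64m1_t vd = __riscv_vle64_v_u64m1(dst, vl);  // previous contents
    vuint64m1_t va = __riscv_vle64_v_u64m1(a, vl);
    vuint64m1_t vb = __riscv_vle64_v_u64m1(b, vl);
    vd = __riscv_vclmul_vv_u64m1_tu(vd, va, vb, vl);  // low half of clmul
    __riscv_vse64_v_u64m1(dst, vd, vl);
  }
}
----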
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md
deleted file mode 100644
index 7e7effc48..000000000
--- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md
+++ /dev/null
@@ -1,75 +0,0 @@
-
-## Zvbc - Vector Carryless Multiplication:
-
-### [Vector Carryless Multiplication]():
-
-**Prototypes:**
-``` C
-vuint64m1_t __riscv_vclmul_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vv_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vx_u64m1_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vv_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vx_u64m2_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vv_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vx_u64m4_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vv_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vx_u64m8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-// masked functions
-vuint64m1_t __riscv_vclmul_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vv_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vx_u64m1_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vv_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vx_u64m2_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vv_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vx_u64m4_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vv_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vx_u64m8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-// masked functions
-vuint64m1_t __riscv_vclmul_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vv_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vx_u64m1_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vv_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vx_u64m2_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vv_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vx_u64m4_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vv_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vx_u64m8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-// masked functions
-vuint64m1_t __riscv_vclmul_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vv_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vx_u64m1_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vv_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vx_u64m2_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vv_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vx_u64m4_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vv_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vx_u64m8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-```
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
similarity index 91%
rename from auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md
rename to auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
index 0cd0c65e3..cf2c6a401 100644
--- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md
+++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
@@ -1,10 +1,11 @@
-## Zvkg - Vector GCM/GMAC:
+=== Zvkg - Vector GCM/GMAC
-### [Vector GCM/GMAC]():
+[[policy-variant-]]
+==== Vector GCM/GMAC
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vghsh_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
 vuint32m1_t __riscv_vghsh_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
 vuint32m2_t __riscv_vghsh_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
@@ -15,4 +16,4 @@ vuint32m1_t __riscv_vgmul_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t v
 vuint32m2_t __riscv_vgmul_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
 vuint32m4_t __riscv_vgmul_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vgmul_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc
similarity index 87%
rename from auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md
rename to auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc
index 978cdee59..29d2463a1 100644
--- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md
+++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc
@@ -1,10 +1,11 @@
-## Zvkned - NIST Suite: Vector AES Block Cipher:
+=== Zvkned - NIST Suite: Vector AES Block Cipher
-### [Vector AES Encryption]():
+[[policy-variant-]]
+==== Vector AES Encryption
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -24,7 +25,6 @@ vuint32m4_t __riscv_vaesef_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t
 vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesef_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -44,13 +44,13 @@ vuint32m4_t __riscv_vaesem_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t
 vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesem_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
-### [Vector AES Decryption]():
+[[policy-variant-]]
+==== Vector AES Decryption
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -70,7 +70,6 @@ vuint32m4_t __riscv_vaesdf_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t
 vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesdf_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -90,29 +89,30 @@ vuint32m4_t __riscv_vaesdm_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t
 vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesdm_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
-### [Vector AES-128 Forward KeySchedule generation]():
+[[policy-variant-]]
+==== Vector AES-128 Forward KeySchedule generation
-**Prototypes:**
-``` C
-vuint32mf2_t __riscv_vaeskf1_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl);
+[,c]
+----
+vuint32mf2_t __riscv_vaeskf1_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
 vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
 vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
 vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
 vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
 vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
-```
+----
-### [Vector AES round zero]():
+[[policy-variant-]]
+==== Vector AES round zero
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
@@ -127,5 +127,4 @@ vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, si
 vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
 vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
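As a concrete illustration of the Zvkned prototypes above, here is a minimal sketch of one middle AES-128 encryption round. It assumes VLEN >= 128 so one 128-bit block fits in four SEW=32 elements at LMUL=1; the function name and array layout are illustrative assumptions.

[,c]
----
#include <riscv_vector.h>

// Hypothetical single vaesem round: vd carries the AES state and vs2 the
// round key; the _tu variant leaves tail elements of vd undisturbed.
void aes128_middle_round(uint32_t state[4], const uint32_t round_key[4]) {
  size_t vl = __riscv_vsetvl_e32m1(4);
  vuint32m1_t vs = __riscv_vle32_v_u32m1(state, vl);
  vuint32m1_t vk = __riscv_vle32_v_u32m1(round_key, vl);
  vs = __riscv_vaesem_vv_u32m1_tu(vs, vk, vl);  // AddRoundKey + full round
  __riscv_vse32_v_u32m1(state, vs, vl);
}
----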
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc
similarity index 93%
rename from auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md
rename to auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc
index c6a2a611f..2aec4fd51 100644
--- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md
+++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc
@@ -1,10 +1,11 @@
-## Zvknh - NIST Suite: Vector SHA-2 Secure Hash:
+=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash
-### [Vector SHA-2 message schedule]():
+[[policy-variant-]]
+==== Vector SHA-2 message schedule
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vsha2ms_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
 vuint32m1_t __riscv_vsha2ms_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
 vuint32m2_t __riscv_vsha2ms_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
@@ -14,12 +15,13 @@ vuint64m1_t __riscv_vsha2ms_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint6
 vuint64m2_t __riscv_vsha2ms_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
 vuint64m4_t __riscv_vsha2ms_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
 vuint64m8_t __riscv_vsha2ms_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-```
+----
-### [Vector SHA-2 two rounds of compression]():
+[[policy-variant-]]
+==== Vector SHA-2 two rounds of compression
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vsha2ch_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
 vuint32m1_t __riscv_vsha2ch_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
 vuint32m2_t __riscv_vsha2ch_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
@@ -38,4 +40,4 @@ vuint64m1_t __riscv_vsha2cl_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint6
 vuint64m2_t __riscv_vsha2cl_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
 vuint64m4_t __riscv_vsha2cl_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
 vuint64m8_t __riscv_vsha2cl_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-```
+----
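For the Zvknh prototypes just listed, the sketch below shows the call shape only; which groups of message words belong in `vd`, `vs2`, and `vs1` is defined by the Zvknh operand convention and is not restated here, and the wrapper name and argument names are illustrative assumptions.

[,c]
----
#include <riscv_vector.h>

// Hypothetical wrapper around one SHA-256 message-schedule step; the
// three operands carry groups of prior message words per the Zvknh spec.
vuint32m1_t sha256_sched_step(vuint32m1_t w_dst, vuint32m1_t w_mid,
                              vuint32m1_t w_hi, size_t vl) {
  return __riscv_vsha2ms_vv_u32m1_tu(w_dst, w_mid, w_hi, vl);
}
----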
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc
similarity index 68%
rename from auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md
rename to auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc
index 49419391a..95d0f470f 100644
--- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md
+++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc
@@ -1,21 +1,23 @@
-## Zvksed - ShangMi Suite: SM4 Block Cipher:
+=== Zvksed - ShangMi Suite: SM4 Block Cipher
-### [Vector SM4 KeyExpansion]():
+[[policy-variant-]]
+==== Vector SM4 KeyExpansion
-**Prototypes:**
-``` C
-vuint32mf2_t __riscv_vsm4k_vi_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vsm4k_vi_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vsm4k_vi_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vsm4k_vi_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl);
-```
+[,c]
+----
+vuint32mf2_t __riscv_vsm4k_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm4k_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vsm4k_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vsm4k_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+----
-### [Vector SM4 Rounds]():
+[[policy-variant-]]
+==== Vector SM4 Rounds
-**Prototypes:**
-``` C
+[,c]
+----
 vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
 vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -35,5 +37,4 @@ vuint32m4_t __riscv_vsm4r_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t v
 vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 vuint32m8_t __riscv_vsm4r_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vs_u32m8_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc
new file mode 100644
index 000000000..589216717
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc
@@ -0,0 +1,26 @@
+
+=== Zvksh - ShangMi Suite: SM3 Secure Hash
+
+[[policy-variant-]]
+==== Vector SM3 Message Expansion
+
+[,c]
+----
+vuint32mf2_t __riscv_vsm3me_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vsm3me_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vsm3me_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vsm3me_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vsm3me_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+----
+
+[[policy-variant-]]
+==== Vector SM3 Compression
+
+[,c]
+----
+vuint32mf2_t __riscv_vsm3c_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm3c_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vsm3c_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vsm3c_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vsm3c_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+----
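Similarly, a minimal call sketch for the SM3 intrinsics just listed; the round-group index, word layout, and wrapper name are illustrative assumptions rather than a worked SM3 schedule.

[,c]
----
#include <riscv_vector.h>

// Hypothetical SM3 step: one message-expansion call feeding one
// compression-round call (uimm selects the round group, here 0).
vuint32m1_t sm3_step(vuint32m1_t state, vuint32m1_t w0, vuint32m1_t w1,
                     size_t vl) {
  vuint32m1_t w2 = __riscv_vsm3me_vv_u32m1_tu(w0, w1, w0, vl);
  return __riscv_vsm3c_vi_u32m1_tu(state, w2, 0, vl);
}
----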
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
deleted file mode 100644
index afc57afff..000000000
--- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-## Zvksh - ShangMi Suite: SM3 Secure Hash:
-
-### [Vector SM3 Message Expansion]():
-
-**Prototypes:**
-``` C
-vuint32mf2_t __riscv_vsm3me_vv_u32mf2_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vsm3me_vv_u32m1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vsm3me_vv_u32m2_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vsm3me_vv_u32m4_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vsm3me_vv_u32m8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-```
-
-### [Vector SM3 Message Expansion]():
-
-**Prototypes:**
-``` C
-vuint32mf2_t __riscv_vsm3c_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vsm3c_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vsm3c_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vsm3c_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vsm3c_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
-```
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c
index 7bf1af06b..990433721 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -87,8 +87,3 @@ vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_
 vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesdf_vs_u32m8_u32m8_tu(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c
index 856cc6350..80a243721 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -87,8 +87,3 @@ vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_
 vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesdm_vs_u32m8_u32m8_tu(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c
index a7ee09719..224ac4953 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -87,8 +87,3 @@ vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_
 vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesef_vs_u32m8_u32m8_tu(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c
index 3b398d5cc..fa0a10105 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -87,8 +87,3 @@ vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_
 vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesem_vs_u32m8_u32m8_tu(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c
index 3a70a8170..cc4667e80 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c
@@ -1,34 +1,33 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 #include <riscv_vector.h>
-vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaeskf1_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32mf2_tu(vd, vs2, 0, vl);
 }
-vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaeskf1_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m1_tu(vd, vs2, 0, vl);
 }
-vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaeskf1_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m2_tu(vd, vs2, 0, vl);
 }
-vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaeskf1_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m4_tu(vd, vs2, 0, vl);
 }
-vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaeskf1_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vaeskf1_vi_u32m8_tu(vd, vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c
index 583d8cf43..7f05b473c 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -31,4 +31,3 @@ vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl)
   return __riscv_vaeskf2_vi_u32m4_tu(vd, vs2, 0, vl);
 }
 vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vaeskf2_vi_u32m8_tu(vd, vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c
index d5e3ba40f..f50cae600 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -67,8 +67,3 @@ vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t
 vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
   return __riscv_vaesz_vs_u32m4_u32m8_tu(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesz_vs_u32m8_u32m8_tu(vd, vs2, vl);
-}
-
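Before the `vandn` test changes that follow, a minimal sketch of the operation those tests exercise; `vandn` computes `vs2 & ~rs1`, and the mask value and function name below are illustrative assumptions.

[,c]
----
#include <riscv_vector.h>

// Hypothetical use of vandn.vx with the _tu policy: clear the low bit of
// every element (vd = vs2 & ~0x01), tail elements of vd undisturbed.
vuint8m1_t clear_low_bit(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
  return __riscv_vandn_vx_u8m1_tu(vd, vs2, 0x01, vl);
}
----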
maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf4_tu(maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf4_tu(maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf2_tu(maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf2_tu(maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m1_tu(maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m1_tu(maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m2_tu(maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m2_tu(maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m4_tu(maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m4_tu(maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m8_tu(maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m8_tu(maskedoff, vs2, rs1, vl); +vuint8m8_t 
test_vandn_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +vuint16m8_t 
test_vandn_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +vuint32m8_t 
test_vandn_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return 
__riscv_vandn_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return 
__riscv_vandn_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t 
test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t 
rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t 
test_vandn_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - 
return __riscv_vandn_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, 
vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t 
vl) { + return __riscv_vandn_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m1_tumu(mask, maskedoff, vs2, 
vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t 
vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return 
__riscv_vandn_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, 
vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t 
test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_vx_u64m4_mu(vm, vd, 
vs2, rs1, vl);
 }
 
-vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vandn_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c
index fb68a2faf..1faa2260e 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c
@@ -1,366 +1,365 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf8_tu(maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf8_tu(vd, vs2, vl);
 }
 
-vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf4_tu(maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf4_tu(vd, vs2, vl);
 }
 
-vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8mf2_tu(maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8mf2_tu(vd, vs2, vl);
 }
 
-vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m1_tu(maskedoff, vs2, vl);
+vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m1_tu(vd, vs2, vl);
 }
 
-vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m2_tu(maskedoff, vs2, vl);
+vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m2_tu(vd, vs2, vl);
 }
 
-vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u8m4_tu(maskedoff, vs2, vl);
+vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u8m4_tu(vd, vs2, vl);
 }
 
-vuint8m8_t
test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m8_tu(maskedoff, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf4_tu(maskedoff, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf2_tu(maskedoff, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_tu(vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m1_tu(maskedoff, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_tu(vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m2_tu(maskedoff, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_tu(vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m4_tu(maskedoff, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_tu(vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m8_tu(maskedoff, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32mf2_tu(maskedoff, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m1_tu(maskedoff, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m2_tu(maskedoff, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m4_tu(maskedoff, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m8_tu(maskedoff, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_tu(vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m1_tu(maskedoff, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return 
__riscv_vbrev_v_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m2_tu(maskedoff, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m4_tu(maskedoff, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_tu(vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m8_tu(maskedoff, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf8_tum(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf4_tum(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf2_tum(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m1_tum(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m2_tum(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m4_tum(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m8_tum(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf4_tum(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf2_tum(mask, maskedoff, vs2, vl); +vuint16mf2_t 
test_vbrev_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m1_tum(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m2_tum(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m4_tum(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m8_tum(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32mf2_tum(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m1_tum(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m2_tum(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m4_tum(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m8_tum(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m1_tum(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m2_tum(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, 
vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m4_tum(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m8_tum(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf8_tumu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf4_tumu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf2_tumu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m1_tumu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m2_tumu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m4_tumu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m8_tumu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf4_tumu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf2_tumu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return 
__riscv_vbrev_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m1_tumu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m2_tumu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m4_tumu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m8_tumu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32mf2_tumu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m1_tumu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m2_tumu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m4_tumu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m8_tumu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m1_tumu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m2_tumu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return 
__riscv_vbrev_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m4_tumu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m8_tumu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf8_mu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf4_mu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u8mf2_mu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m1_mu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m2_mu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m4_mu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u8m8_mu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf4_mu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16mf2_mu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t 
maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m1_mu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m2_mu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m4_mu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u16m8_mu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32mf2_mu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m1_mu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m2_mu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m4_mu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev_v_u32m8_mu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m1_mu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m2_mu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev_v_u64m4_mu(mask, maskedoff, vs2, 
vl);
+vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m4_mu(vm, vd, vs2, vl);
 }
 
-vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev_v_u64m8_mu(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_v_u64m8_mu(vm, vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c
index ad555f360..737992ff9 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c
@@ -1,366 +1,365 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8mf8_tu(maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf8_tu(vd, vs2, vl);
 }
 
-vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8mf4_tu(maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf4_tu(vd, vs2, vl);
 }
 
-vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8mf2_tu(maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8mf2_tu(vd, vs2, vl);
 }
 
-vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m1_tu(maskedoff, vs2, vl);
+vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m1_tu(vd, vs2, vl);
 }
 
-vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m2_tu(maskedoff, vs2, vl);
+vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m2_tu(vd, vs2, vl);
 }
 
-vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m4_tu(maskedoff, vs2, vl);
+vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m4_tu(vd, vs2, vl);
 }
 
-vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u8m8_tu(maskedoff, vs2, vl);
+vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u8m8_tu(vd, vs2, vl);
} -vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf4_tu(maskedoff, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf2_tu(maskedoff, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_tu(vd, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m1_tu(maskedoff, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_tu(vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m2_tu(maskedoff, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_tu(vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m4_tu(maskedoff, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_tu(vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m8_tu(maskedoff, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32mf2_tu(maskedoff, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m1_tu(maskedoff, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m2_tu(maskedoff, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m4_tu(maskedoff, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m8_tu(maskedoff, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_tu(vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m1_tu(maskedoff, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m2_tu(maskedoff, vs2, vl); +vuint64m2_t 
test_vbrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m4_tu(maskedoff, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_tu(vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m8_tu(maskedoff, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m1_tum(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m2_tum(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m4_tum(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m8_tum(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_tum(vm, vd, vs2, vl); } 
-vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m1_tum(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m2_tum(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m4_tum(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m8_tum(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m1_tum(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m2_tum(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m4_tum(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m8_tum(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m1_tum(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m2_tum(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t 
test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m4_tum(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m8_tum(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t 
test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m2_tumu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return 
__riscv_vbrev8_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m8_tumu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m1_mu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m2_mu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m4_mu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u8m8_mu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t 
test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m1_mu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m2_mu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m4_mu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u16m8_mu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m1_mu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m2_mu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m4_mu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_v_u32m8_mu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m1_mu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_v_u64m2_mu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t 
maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u64m4_mu(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m4_mu(vm, vd, vs2, vl);
 }
 
-vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_v_u64m8_mu(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_v_u64m8_mu(vm, vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c
index 0f6ab5547..c776dacad 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c
@@ -1,142 +1,141 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vclmul_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m1_tu(vd, vs2, rs1, vl);
 }
 
-vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vclmul_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m2_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_vx_u64m2_tu(vd, vs2, rs1, vl);
 }
 
-vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vclmul_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_vv_u64m4_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+vuint64m4_t
test_vclmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); 
+vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t 
mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c index 0c9384bef..94df486ca 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c @@ -1,142 +1,141 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh 
-disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, 
vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t 
vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, 
size_t vl) { - return __riscv_vclmulh_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c index a503594d6..e3f7395a9 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s @@ -31,4 +31,3 @@ vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m8_tu(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c index aec4008fb..e4920e5d1 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck 
--check-prefix=CHECK-RV64 %s @@ -31,4 +31,3 @@ vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vgmul_vv_u32m8_tu(vd, vs2, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c index 552b08f8e..61471ea81 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c @@ -1,366 +1,365 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf8_tu(maskedoff, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_tu(vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf4_tu(maskedoff, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_tu(vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf2_tu(maskedoff, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_tu(vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m1_tu(maskedoff, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_tu(vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m2_tu(maskedoff, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_tu(vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m4_tu(maskedoff, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_tu(vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m8_tu(maskedoff, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf4_tu(maskedoff, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + 
return __riscv_vrev8_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf2_tu(maskedoff, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_tu(vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m1_tu(maskedoff, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_tu(vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m2_tu(maskedoff, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_tu(vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m4_tu(maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_tu(vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m8_tu(maskedoff, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32mf2_tu(maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m1_tu(maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m2_tu(maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m4_tu(maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m8_tu(maskedoff, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tu(vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m1_tu(maskedoff, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m2_tu(maskedoff, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m4_tu(maskedoff, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t vd, 
vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tu(vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m8_tu(maskedoff, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf8_tum(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf4_tum(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf2_tum(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m1_tum(mask, maskedoff, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m2_tum(mask, maskedoff, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m4_tum(mask, maskedoff, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m8_tum(mask, maskedoff, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf4_tum(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf2_tum(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m1_tum(mask, maskedoff, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t 
mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m2_tum(mask, maskedoff, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m4_tum(mask, maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m8_tum(mask, maskedoff, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32mf2_tum(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m1_tum(mask, maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m2_tum(mask, maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m4_tum(mask, maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m8_tum(mask, maskedoff, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m1_tum(mask, maskedoff, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m2_tum(mask, maskedoff, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m4_tum(mask, maskedoff, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) 
{ - return __riscv_vrev8_v_u64m8_tum(mask, maskedoff, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf8_tumu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf4_tumu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf2_tumu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m1_tumu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m2_tumu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m4_tumu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m8_tumu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf4_tumu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf2_tumu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m1_tumu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m2_tumu(mask, 
maskedoff, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m4_tumu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m8_tumu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32mf2_tumu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m1_tumu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m2_tumu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m4_tumu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m8_tumu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m1_tumu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m2_tumu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m4_tumu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m8_tumu(mask, maskedoff, 
vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf8_mu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf4_mu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8mf2_mu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m1_mu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m2_mu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m4_mu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u8m8_mu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf4_mu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16mf2_mu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m1_mu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m2_mu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return 
__riscv_vrev8_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m4_mu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u16m8_mu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32mf2_mu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m1_mu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m2_mu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m4_mu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u32m8_mu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m1_mu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m2_mu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m4_mu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_v_u64m8_mu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_v_u64m8_mu(vm, vd, vs2, vl); } - diff --git 
a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c index cea862e0f..0dacd5b3e 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c @@ -1,718 +1,717 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf8_tu(maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf8_tu(maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf4_tu(maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf4_tu(maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf2_tu(maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf2_tu(maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m1_tu(maskedoff, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m1_tu(maskedoff, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t vd, 
vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m2_tu(maskedoff, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m2_tu(maskedoff, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m4_tu(maskedoff, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m4_tu(maskedoff, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m8_tu(maskedoff, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m8_tu(maskedoff, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t 
test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m8_tu(maskedoff, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - 
return __riscv_vrol_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t vd, 
vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } 
-vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t 
test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tum(vm, vd, vs2, 
rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_tum(vm, 
vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vrol_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_tumu(vm, vd, vs2, rs1, 
vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, 
vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, 
vl); +vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return 
__riscv_vrol_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl); 
+vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t 
test_vrol_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t 
test_vrol_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vrol_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t 
vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
}

-vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
- return __riscv_vrol_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
}

-vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
- return __riscv_vrol_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
}

-vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
- return __riscv_vrol_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+ return __riscv_vrol_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
}

-vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
- return __riscv_vrol_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vrol_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
}
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c
index 62ced63e1..c28fb02ee 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c
@@ -1,718 +1,717 @@
// REQUIRES: riscv-registered-target
// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
// RUN: FileCheck --check-prefix=CHECK-RV64 %s

#include <riscv_vector.h>

-vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
- return __riscv_vror_vv_u8mf8_tu(maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+ return __riscv_vror_vv_u8mf8_tu(vd, vs2, vs1, vl);
}

-vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
- return __riscv_vror_vx_u8mf8_tu(maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+ return __riscv_vror_vx_u8mf8_tu(vd, vs2, rs1, vl);
}

-vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
- return __riscv_vror_vv_u8mf4_tu(maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t vd,
vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf4_tu(maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf2_tu(maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf2_tu(maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vror_vv_u8m1_tu(maskedoff, vs2, vs1, vl); +vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m1_tu(maskedoff, vs2, rs1, vl); +vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vror_vv_u8m2_tu(maskedoff, vs2, vs1, vl); +vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m2_tu(maskedoff, vs2, rs1, vl); +vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vror_vv_u8m4_tu(maskedoff, vs2, vs1, vl); +vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m4_tu(maskedoff, vs2, rs1, vl); +vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vror_vv_u8m8_tu(maskedoff, vs2, vs1, vl); +vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m8_tu(maskedoff, vs2, rs1, vl); +vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, 
size_t vl) { - return __riscv_vror_vv_u16mf4_tu(maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf4_tu(maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u16mf2_tu(maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf2_tu(maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vror_vv_u16m1_tu(maskedoff, vs2, vs1, vl); +vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m1_tu(maskedoff, vs2, rs1, vl); +vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vror_vv_u16m2_tu(maskedoff, vs2, vs1, vl); +vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m2_tu(maskedoff, vs2, rs1, vl); +vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vror_vv_u16m4_tu(maskedoff, vs2, vs1, vl); +vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m4_tu(maskedoff, vs2, rs1, vl); +vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vror_vv_u16m8_tu(maskedoff, vs2, vs1, vl); +vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m8_tu(maskedoff, vs2, rs1, vl); 
+vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u32mf2_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32mf2_tu(maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vror_vv_u32m1_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m1_tu(maskedoff, vs2, rs1, vl); +vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vror_vv_u32m2_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m2_tu(maskedoff, vs2, rs1, vl); +vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vror_vv_u32m4_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m4_tu(maskedoff, vs2, rs1, vl); +vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vror_vv_u32m8_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m8_tu(maskedoff, vs2, rs1, vl); +vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vror_vv_u64m1_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, 
size_t vl) { + return __riscv_vror_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m1_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vror_vv_u64m2_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m2_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vror_vv_u64m4_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m4_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vror_vv_u64m8_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m8_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf8_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf8_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf4_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf4_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t 
test_vror_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vror_vv_u8m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vror_vv_u8m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vror_vv_u8m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vror_vv_u8m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, 
size_t vl) { + return __riscv_vror_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vror_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vror_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vror_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vror_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t 
test_vror_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vror_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vror_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vror_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vror_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m4_tum(mask, 
maskedoff, vs2, rs1, vl); +vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vror_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vror_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vror_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vror_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vror_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return 
__riscv_vror_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vror_vv_u8m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vror_vv_u8m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - 
return __riscv_vror_vx_u8m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vror_vv_u8m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vror_vv_u8m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vror_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vror_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, 
vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vror_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vror_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vror_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vror_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } 
-vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vror_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vror_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vror_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vror_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vror_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return 
__riscv_vror_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vror_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vror_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf8_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf8_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf4_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf4_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u8mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t 
vl) { + return __riscv_vror_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vror_vv_u8m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { + return __riscv_vror_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vror_vv_u8m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { + return __riscv_vror_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vror_vv_u8m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { + return __riscv_vror_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vror_vv_u8m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { + return __riscv_vror_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u8m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vror_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t 
mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vror_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { + return __riscv_vror_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vror_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { + return __riscv_vror_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vror_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { + return __riscv_vror_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vror_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { + return __riscv_vror_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, 
vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vror_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { + return __riscv_vror_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vror_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { + return __riscv_vror_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vror_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { + return __riscv_vror_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vror_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { + return __riscv_vror_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vror_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { + return __riscv_vror_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, 
vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vror_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { + return __riscv_vror_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vror_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { + return __riscv_vror_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vror_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { + return __riscv_vror_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vror_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { + return __riscv_vror_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c index 7b0921a22..97c413c75 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c @@ -1,12 +1,12 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature 
+v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -47,4 +47,3 @@ vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_
 vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
   return __riscv_vsha2ch_vv_u64m8_tu(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c
index 8920a97a6..8f43c4416 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -47,4 +47,3 @@ vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_
 vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
   return __riscv_vsha2cl_vv_u64m8_tu(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c
index 9e7df01ff..bb48799a5 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -47,4 +47,3 @@ vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_
 vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
   return __riscv_vsha2ms_vv_u64m8_tu(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c
index 0cda20a97..ccf8caa8b 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -31,4 +31,3 @@ vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
 vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm3c_vi_u32m8_tu(vd, vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c
index a3687efb9..3ebf605aa 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c
@@ -1,34 +1,33 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vsm3me_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32mf2_tu(vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vsm3me_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vsm3me_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m2_tu(vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vsm3me_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m4_tu(vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vsm3me_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsm3me_vv_u32m8_tu(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c
index 9c03f0061..8f353c311 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c
@@ -1,34 +1,33 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vsm4k_vi_u32mf2_tu(maskedoff, vs2, 0, vl);
+vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32mf2_tu(vd, vs2, 0, vl);
 }

-vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vsm4k_vi_u32m1_tu(maskedoff, vs2, 0, vl);
+vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m1_tu(vd, vs2, 0, vl);
 }

-vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vsm4k_vi_u32m2_tu(maskedoff, vs2, 0, vl);
+vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m2_tu(vd, vs2, 0, vl);
 }

-vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vsm4k_vi_u32m4_tu(maskedoff, vs2, 0, vl);
+vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m4_tu(vd, vs2, 0, vl);
 }

-vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vsm4k_vi_u32m8_tu(maskedoff, vs2, 0, vl);
+vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4k_vi_u32m8_tu(vd, vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c
index 749e2f687..06f9b3ffc 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c
@@ -1,12 +1,12 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
@@ -87,8 +87,3 @@ vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t
 vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm4r_vv_u32m8_tu(vd, vs2, vl);
 }
-
-vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vsm4r_vs_u32m8_u32m8_tu(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c
index 390decdcb..63da91ed1 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c
@@ -1,494 +1,493 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16mf4_tu(maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_tu(vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16mf4_tu(maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_tu(vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16mf2_tu(maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_tu(vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16mf2_tu(maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_tu(vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m1_tu(maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_tu(vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m1_tu(maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_tu(vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m2_tu(maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_tu(vd, vs2, vs1, vl);
 }

-vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m2_tu(maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_tu(vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m4_tu(maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_tu(vd, vs2, vs1, vl);
 }

-vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m4_tu(maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_tu(vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m8_tu(maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_tu(vd, vs2, vs1, vl);
 }

-vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m8_tu(maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_tu(vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32mf2_tu(maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_tu(vd, vs2, vs1, vl);
 }

-vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32mf2_tu(maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_tu(vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m1_tu(maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_tu(vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m1_tu(maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_tu(vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m2_tu(maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_tu(vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m2_tu(maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_tu(vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m4_tu(maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_tu(vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m4_tu(maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_tu(vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m8_tu(maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_tu(vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m8_tu(maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_tu(vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m1_tu(maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m1_tu(maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_tu(vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m2_tu(maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m2_tu(maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_tu(vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m4_tu(maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_tu(vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m4_tu(maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_tu(vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m8_tu(maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_tu(vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m8_tu(maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_tu(vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16mf4_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16mf4_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16mf2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16mf2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m1_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m1_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m4_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m4_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m8_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m8_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32mf2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32mf2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m1_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m1_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m4_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m4_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m8_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m8_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m1_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m1_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m2_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m2_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m4_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m4_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m8_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m8_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16mf4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16mf4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m1_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m1_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m8_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32mf2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32mf2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m1_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m1_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
  return __riscv_vwsll_vx_u32m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m8_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m1_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m1_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m2_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m2_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m4_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m4_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m8_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m8_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16mf4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf4_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16mf4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16mf2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16mf2_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16mf2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16mf2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m1_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m1_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m2_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m4_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u16m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u16m8_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u16m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u16m8_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32mf2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32mf2_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32mf2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32mf2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m1_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m1_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m2_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m4_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u32m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u32m8_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u32m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u32m8_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m1_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m1_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m1_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m1_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m2_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m2_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m2_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m4_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m4_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vwsll_vv_u64m8_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+  return __riscv_vwsll_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vwsll_vx_u64m8_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vwsll_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c
index 33cff9e27..6d2504f27 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c
@@ -1,94 +1,108 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                      size_t vl) {
   return __riscv_vaesdf_vv_tu(vd, vs2, vl);
 }

-vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                             size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                    size_t vl) {
   return __riscv_vaesdf_vv_tu(vd, vs2, vl);
 }

-vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                          size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2,
+                                          size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2,
+                                          size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2,
+                                          size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                    size_t vl) {
   return __riscv_vaesdf_vv_tu(vd, vs2, vl);
 }

-vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                          size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2,
+                                          size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2,
+                                          size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                    size_t vl) {
   return __riscv_vaesdf_vv_tu(vd, vs2, vl);
 }

-vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                          size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2,
+                                          size_t vl) {
   return __riscv_vaesdf_vs_tu(vd, vs2, vl);
 }

-vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                    size_t vl) {
   return __riscv_vaesdf_vv_tu(vd, vs2, vl);
 }
-
-vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaesdf_vs_tu(vd, vs2, vl);
-}
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c
index eb5e53de7..1fa488b00 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c
@@ -1,94 +1,108 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                      size_t vl) {
   return __riscv_vaesdm_vv_tu(vd, vs2, vl);
 }

-vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                             size_t vl) {
   return __riscv_vaesdm_vs_tu(vd, vs2, vl);
 }

-vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vaesdm_vs_tu(vd, vs2, vl);
 }

-vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vaesdm_vs_tu(vd, vs2, vl);
 }

-vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vaesdm_vs_tu(vd, vs2, vl);
 }

-vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vaesdm_vs_tu(vd, vs2, vl);
 }

-vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesdm_vv_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_vs_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c index 00327588f..5635721bb 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c @@ -1,94 +1,108 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: 
-target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t 
vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesef_vv_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_vs_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c index fc86d2d5c..a7f05f2b8 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c @@ -1,94 +1,108 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t 
test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesem_vv_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesem_vs_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c 
b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c
index 8ffb3eaf4..c3f94c976 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c
@@ -1,34 +1,38 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                       size_t vl) {
+  return __riscv_vaeskf1_tu(vd, vs2, 0, vl);
 }
 
-vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                     size_t vl) {
+  return __riscv_vaeskf1_tu(vd, vs2, 0, vl);
 }
 
-vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                     size_t vl) {
+  return __riscv_vaeskf1_tu(vd, vs2, 0, vl);
 }
 
-vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                     size_t vl) {
+  return __riscv_vaeskf1_tu(vd, vs2, 0, vl);
 }
 
-vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl);
+vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                     size_t vl) {
+  return __riscv_vaeskf1_tu(vd, vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c
index 3da580d32..2df41ac05 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c
@@ -1,34 +1,38 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
 }
 
-vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                     size_t vl) {
   return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
 }
 
-vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                     size_t vl) {
   return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
 }
 
-vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                     size_t vl) {
   return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
 }
 
-vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                     size_t vl) {
   return __riscv_vaeskf2_tu(vd, vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c
index 352ea15d6..877402aee 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c
@@ -1,74 +1,83 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                            size_t vl) {
   return __riscv_vaesz_tu(vd, vs2, vl);
 }
 
-vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2,
+                                          size_t vl) {
   return __riscv_vaesz_tu(vd, vs2, vl);
 }
 
-vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2,
+                                          size_t vl) {
   return __riscv_vaesz_tu(vd, vs2, vl);
 }
 
-vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2,
+                                          size_t vl) {
   return __riscv_vaesz_tu(vd, vs2, vl);
 }
 
-vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2,
+                                          size_t vl) {
   return
__riscv_vaesz_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesz_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c index de8cd112d..af084405c 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c @@ -1,718 +1,950 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t vd, 
vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8m8_t 
test_vandn_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, 
vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { + 
return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t 
test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); 
+vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t 
vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t 
vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t 
test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, 
size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return 
__riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, 
maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t 
test_vandn_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + 
return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - 
return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t vm, 
vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } 
-vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, vuint64m2_t vs1,
+                                   size_t vl) {
+  return __riscv_vandn_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, vuint64m4_t vs1,
+                                   size_t vl) {
+  return __riscv_vandn_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vandn_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   vuint64m8_t vs1, size_t vl) {
+  return __riscv_vandn_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   uint64_t rs1, size_t vl) {
+  return __riscv_vandn_mu(vm, vd, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c
index f8c14b057..a9c542556 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c
@@ -1,366 +1,434 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev_tu(maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev_tu(vd, vs2, vl);
 }

-vuint8mf4_t
test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { 
+ return __riscv_vbrev_tu(vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t vm, 
vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - 
return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint16mf4_t 
test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); 
+vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return 
__riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_mu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { + return __riscv_vbrev_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t 
maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c
index 83967d94f..a986b5ece 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c
@@ -1,366 +1,434 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }
-vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                     size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                     size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                     size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return
__riscv_vbrev8_tu(maskedoff, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_tu(vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_tu(maskedoff, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_tu(vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t 
maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + 
vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t 
maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8mf8_t 
test_vbrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - 
return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c index e20f8bff1..11f24f0b9 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c @@ -1,142 +1,189 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature 
+experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                    vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmul_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                    uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tu(vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                    vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmul_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                    uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tu(vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                    vuint64m4_t vs1, size_t vl) {
+  return __riscv_vclmul_tu(vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                    uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tu(vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                    vuint64m8_t vs1, size_t vl) {
+  return __riscv_vclmul_tu(vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                    uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_tu(vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                     vuint64m1_t vs2, vuint64m1_t vs1,
+                                     size_t vl) {
+  return __riscv_vclmul_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
-  return 
__riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return 
__riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, 
vs2, rs1, vl);
+vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, vuint64m8_t vs1,
+                                    size_t vl) {
+  return __riscv_vclmul_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+  return __riscv_vclmul_mu(vm, vd, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c
index 87cb1377d..f9a4a8af7 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c
@@ -1,142 +1,193 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                     vuint64m1_t vs1, size_t vl) {
+  return __riscv_vclmulh_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                     uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tu(vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                     vuint64m2_t vs1, size_t vl) {
+  return __riscv_vclmulh_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
-  return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                     uint64_t rs1, size_t vl) {
+  return __riscv_vclmulh_tu(vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t vd, 
vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t 
test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return 
__riscv_vclmulh_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c index 526c3a33d..5a1670759 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c @@ -1,34 +1,38 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: 
-target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                     vuint32mf2_t vs1, size_t vl) {
   return __riscv_vghsh_tu(vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                   vuint32m1_t vs1, size_t vl) {
   return __riscv_vghsh_tu(vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                   vuint32m2_t vs1, size_t vl) {
   return __riscv_vghsh_tu(vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                   vuint32m4_t vs1, size_t vl) {
   return __riscv_vghsh_tu(vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                   vuint32m8_t vs1, size_t vl) {
   return __riscv_vghsh_tu(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c
index c5ba6e721..995625243 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c
@@ -1,18 +1,19 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                     size_t vl) {
   return __riscv_vgmul_tu(vd, vs2, vl);
 }

@@ -31,4 +32,3 @@ vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
 vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vgmul_tu(vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c
index a723aa6de..62c1e3e1e 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c
@@ -1,366 +1,434 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }

-vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }

-vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }

-vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }

-vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }

-vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }

-vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }

-vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }

-vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }

-vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }

-vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }

-vuint16m4_t 
test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - 
return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint32m1_t 
test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_tum(mask, maskedoff, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vrev8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t 
vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, 
size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { + return __riscv_vrev8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint8m4_t 
test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { + return __riscv_vrev8_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vrev8_mu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return 
__riscv_vrev8_mu(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c
index aa8ba847d..6617d9830 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c
@@ -1,718 +1,926 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }

-vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }

-vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+ vuint8mf4_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return 
__riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t 
vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t 
test_vrol_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl) { + return __riscv_vrol_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t vm, 
vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, 
vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t 
test_vrol_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, 
vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, 
size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { + 
return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t 
test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, 
vuint32m8_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, vuint32m8_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, vuint64m1_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, vuint64m2_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, vuint64m4_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, vuint64m8_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                    vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                    vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                    vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                    vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                  vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                  vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                  vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                  vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                    vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                    vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                  vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                  vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c
index e3545be3c..0fb6a2d3f 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c
@@ -1,718 +1,926 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:   -target-feature +experimental-zvbb \
-// RUN:   -target-feature +experimental-zvbc \
-// RUN:   -target-feature +experimental-zvkg \
-// RUN:   -target-feature +experimental-zvkned \
-// RUN:   -target-feature +experimental-zvknhb \
-// RUN:   -target-feature +experimental-zvksed \
-// RUN:   -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    size_t rs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    size_t rs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+                                  vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2,
+                                  vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2,
+                                  vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2,
+                                  vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    size_t rs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                  vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                  vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                  vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                  vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                  vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                  vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                  vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                  vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }

-vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, vuint8mf8_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, vuint8mf4_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, vuint8mf2_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                     size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                     size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, vuint16m1_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                     size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, vuint32m1_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, vuint32m2_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, vuint64m1_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, vuint64m2_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, vuint64m4_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                    vuint8mf8_t vs2, vuint8mf8_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                    vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                    vuint8mf4_t vs2, vuint8mf4_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                    vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                    vuint8mf2_t vs2, vuint8mf2_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                    vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                      size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                      size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, vuint16m1_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, vuint16m2_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, vuint16m4_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, vuint16m8_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                      size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, vuint32m1_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                    vuint32m2_t vs2, vuint32m2_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                    vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+                                    vuint32m4_t vs2, vuint32m4_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+                                    vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, vuint32m8_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, vuint64m1_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, vuint64m2_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, vuint64m4_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, vuint64m8_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                    vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                    vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                    vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                    vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                  vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                  vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                  vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                  vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                    vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                    vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                  vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                  vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t vm, vuint32m8_t
vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c index d7e8bf814..e61e23e6d 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c @@ -1,50 +1,58 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature 
+experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                       vuint32mf2_t vs1, size_t vl) {
   return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                     vuint32m1_t vs1, size_t vl) {
   return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                     vuint32m2_t vs1, size_t vl) {
   return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                     vuint32m4_t vs1, size_t vl) {
   return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                     vuint32m8_t vs1, size_t vl) {
   return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                     vuint64m1_t vs1, size_t vl) {
   return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                     vuint64m2_t vs1, size_t vl) {
   return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                     vuint64m4_t vs1, size_t vl) {
   return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                     vuint64m8_t vs1, size_t vl) {
   return __riscv_vsha2ch_tu(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c
index 0dc6ff651..5ca7969f5 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c
@@ -1,50 +1,58 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                       vuint32mf2_t vs1, size_t vl) {
   return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                     vuint32m1_t vs1, size_t vl) {
   return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                     vuint32m2_t vs1, size_t vl) {
   return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                     vuint32m4_t vs1, size_t vl) {
   return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                     vuint32m8_t vs1, size_t vl) {
   return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                     vuint64m1_t vs1, size_t vl) {
   return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                     vuint64m2_t vs1, size_t vl) {
   return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                     vuint64m4_t vs1, size_t vl) {
   return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                     vuint64m8_t vs1, size_t vl) {
   return __riscv_vsha2cl_tu(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c
index b2193b03d..ef3478429 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c
@@ -1,50 +1,58 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                       vuint32mf2_t vs1, size_t vl) {
   return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                     vuint32m1_t vs1, size_t vl) {
   return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                     vuint32m2_t vs1, size_t vl) {
   return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                     vuint32m4_t vs1, size_t vl) {
   return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                     vuint32m8_t vs1, size_t vl) {
   return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                     vuint64m1_t vs1, size_t vl) {
   return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                     vuint64m2_t vs1, size_t vl) {
   return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                     vuint64m4_t vs1, size_t vl) {
   return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                     vuint64m8_t vs1, size_t vl) {
   return __riscv_vsha2ms_tu(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c
index f32bff343..3bc96a360 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c
@@ -1,18 +1,19 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                     size_t vl) {
   return __riscv_vsm3c_tu(vd, vs2, 0, vl);
 }

@@ -31,4 +32,3 @@ vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
 vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
   return __riscv_vsm3c_tu(vd, vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c
index 657a2aed2..2fd5ab2ed 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c
@@ -1,34 +1,38 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                      vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vsm3me_tu(vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                    vuint32m1_t vs1, size_t vl) {
+  return __riscv_vsm3me_tu(vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                    vuint32m2_t vs1, size_t vl) {
+  return __riscv_vsm3me_tu(vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                    vuint32m4_t vs1, size_t vl) {
+  return __riscv_vsm3me_tu(vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                    vuint32m8_t vs1, size_t vl) {
+  return __riscv_vsm3me_tu(vd, vs2, vs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c
index f241b53f5..acf15ab27 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c
@@ -1,34 +1,34 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN: -target-feature +zvksed \
+// RUN: -target-feature +zvksh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

 #include <riscv_vector.h>

-vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                     size_t vl) {
+  return __riscv_vsm4k_tu(vd, vs2, 0, vl);
 }

-vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vsm4k_tu(vd, vs2, 0, vl);
 }

-vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vsm4k_tu(vd, vs2, 0, vl);
 }

-vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vsm4k_tu(vd, vs2, 0, vl);
 }

-vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl);
+vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vsm4k_tu(vd, vs2, 0, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c
index 8d08ad373..e8ba1fd59 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c
@@ -1,38 +1,44 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +experimental-zvbb \
-// RUN: -target-feature +experimental-zvbc \
-// RUN: -target-feature +experimental-zvkg \
-// RUN: -target-feature +experimental-zvkned \
-// RUN: -target-feature +experimental-zvknhb \
-// RUN: -target-feature +experimental-zvksed \
-// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zvbc \
+// RUN: -target-feature +zvkg \
+// RUN: -target-feature +zvkned \
+// RUN: -target-feature +zvknhb \
+// RUN:
-target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -40,19 +46,23 @@ vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -60,15 +70,18 @@ vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -76,19 +89,16 @@ vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m4_t 
test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } - -vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vsm4r_vs_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c index c6772c8c7..49b8ee5a0 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c @@ -1,494 +1,650 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +experimental-zvbb \ -// RUN: -target-feature +experimental-zvbc \ -// RUN: -target-feature +experimental-zvkg \ -// RUN: -target-feature +experimental-zvkned \ -// RUN: -target-feature +experimental-zvknhb \ -// RUN: -target-feature +experimental-zvksed \ -// RUN: -target-feature +experimental-zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s #include -vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return 
__riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint32m2_t 
test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t 
vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, 
vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, 
vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t 
vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl) { + return 
__riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); 
} -vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, 
vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); 
+vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { + return 
__riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c index 7e2d582b1..1c1f98128 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdf.c @@ -1,86 +1,97 @@ -#include #include +#include -typedef _Float16 float16_t; 
-typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, 
vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesdf_vv_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesdf_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdf_vs_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c index 191885804..2eb7f3517 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesdm.c @@ -1,86 +1,97 @@ -#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + 
size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesdm_vv_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesdm_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesdm_vs_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c index 2230a962c..bd17e9ddc 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesef.c @@ -1,86 +1,97 @@ -#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return 
__riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesef_vv_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesef_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesef_vs_tu(vd, vs2, vl); -} - diff --git 
a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c index f0fff627e..fdbb66b41 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesem.c @@ -1,86 +1,97 @@ -#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + 
size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesem_vv_tu(vd, vs2, vl); } - -vuint32m8_t test_vaesem_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vaesem_vs_tu(vd, vs2, vl); -} - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c index f531dd6af..8f11194f5 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf1.c @@ -1,26 +1,27 @@ -#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); +vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { + return __riscv_vaeskf1_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); +vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { + return __riscv_vaeskf1_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); +vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { + return __riscv_vaeskf1_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); +vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vaeskf1_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vaeskf1_tu(maskedoff, vs2, 0, vl); +vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vaeskf1_tu(vd, vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c index ef989068e..2f71a4d13 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaeskf2.c @@ -1,26 +1,27 @@ 
-#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaeskf2_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaeskf2_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaeskf2_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaeskf2_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaeskf2_tu(vd, vs2, 0, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c index b4364cdd2..6687ccb2c 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vaesz.c @@ -1,66 +1,72 @@ -#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { - return __riscv_vaesz_tu(vd, vs2, vl); -} - -vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return 
__riscv_vaesz_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz_tu(vd, vs2, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vandn.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vandn.c index 1ecbcdfa9..73315e18a 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vandn.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vandn.c @@ -1,710 +1,939 @@ -#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t vd, 
vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16mf2_t 
test_vandn_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t maskedoff, 
vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return 
__riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vandn_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t 
test_vandn_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { + return 
__riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t 
test_vandn_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, 
vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, 
maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, 
+ size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { 
+ return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { + return 
__riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } 
-vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t 
test_vandn_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { + return 
__riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t mask, vuint32m4_t 
maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, vs1, vl); 
+vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vandn_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vandn_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev.c index c0c9eb726..b46e2114c 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev.c @@ -1,358 +1,423 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl);
} -vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev_tu(maskedoff, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev_tu(vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, 
maskedoff, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t 
maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev_tum(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vbrev_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + 
return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                    vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+                                    vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev_tumu(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev_tumu(vm, vd, vs2, vl);
 }

-vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                    vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                    vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                    vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }
-vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev_mu(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev_mu(vm, vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev8.c
index c375826e5..03f632695 100644
--- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev8.c
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vbrev8.c
@@ -1,358 +1,423 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>

-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;

-vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8m1_t test_vbrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8m2_t test_vbrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8m4_t test_vbrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                     size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                     size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16m1_t test_vbrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16m2_t test_vbrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16m4_t test_vbrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                     size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint32m1_t test_vbrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint32m2_t test_vbrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint32m4_t test_vbrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint32m8_t test_vbrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint64m1_t test_vbrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint64m2_t test_vbrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint64m4_t test_vbrev8_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tu(maskedoff, vs2, vl);
+vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tu(vd, vs2, vl);
 }

-vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                    vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                    vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                    vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                    vuint32m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd,
+                                    vuint32m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tum(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tum(vm, vd, vs2, vl);
 }

-vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                     vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(vm, vd, vs2, vl);
 }

-vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                     vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(vm, vd, vs2, vl);
 }

-vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                     vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(vm, vd, vs2, vl);
 }

-vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                   size_t vl) {
+  return __riscv_vbrev8_tumu(vm, vd, vs2, vl);
 }

-vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                   size_t vl) {
+  return __riscv_vbrev8_tumu(vm, vd, vs2, vl);
 }

-vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                   size_t vl) {
+  return __riscv_vbrev8_tumu(vm, vd, vs2, vl);
 }

-vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                   size_t vl) {
+  return __riscv_vbrev8_tumu(vm, vd, vs2, vl);
 }

-vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                       vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(vm, vd, vs2, vl);
 }

-vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                       vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vbrev8_tumu(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl);
+vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return 
__riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_tumu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { + return __riscv_vbrev8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16m2_t 
test_vbrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) { - return __riscv_vbrev8_mu(mask, maskedoff, vs2, vl); +vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vbrev8_mu(vm, vd, vs2, vl); } - diff --git 
a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmul.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmul.c index 3fe950acd..488ab2300 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmul.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmul.c @@ -1,134 +1,178 @@ -#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vclmul_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vclmul_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vclmul_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vclmul_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vclmul_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vclmul_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vclmul_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vclmul_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t 
vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + 
size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmul_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return 
__riscv_vclmul_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmul_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmul_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmulh.c index cb04c9935..06a287746 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmulh.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclmulh.c @@ -1,134 +1,182 @@ -#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tu(maskedoff, 
vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { + return __riscv_vclmulh_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, 
rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { + return __riscv_vclmulh_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl) { - return 
__riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl) { - return __riscv_vclmulh_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { + return __riscv_vclmulh_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vghsh.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vghsh.c index eeb1718a4..a346e788f 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vghsh.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vghsh.c @@ -1,26 +1,27 @@ -#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vghsh_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vghsh_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vghsh_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vghsh_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vghsh_tu(vd, vs2, vs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vgmul.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vgmul.c index a50b7e4a9..fc282ac60 100644 --- 
a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vgmul.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vgmul.c @@ -1,10 +1,8 @@ -#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vgmul_tu(vd, vs2, vl); } @@ -23,4 +21,3 @@ vuint32m4_t test_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vgmul_tu(vd, vs2, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrev8.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrev8.c index 55f6bf42e..d56e3555e 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrev8.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrev8.c @@ -1,358 +1,423 @@ -#include #include +#include -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint8mf4_t test_vrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint8mf2_t test_vrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint8m1_t test_vrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint8m2_t test_vrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint8m4_t test_vrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) { - return __riscv_vrev8_tu(maskedoff, vs2, vl); +vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { + return __riscv_vrev8_tu(vd, vs2, vl); } -vuint16m1_t 
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint16m1_t test_vrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint16m2_t test_vrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint16m4_t test_vrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint32m1_t test_vrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint32m2_t test_vrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint32m4_t test_vrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint32m8_t test_vrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint64m1_t test_vrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint64m2_t test_vrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint64m4_t test_vrev8_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tu(maskedoff, vs2, vl);
+vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tu(vd, vs2, vl);
 }
 
-vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tum(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   size_t vl) {
+  return __riscv_vrev8_tum(vm, vd, vs2, vl);
 }
 
-vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                    vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                    vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                    vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                    vuint32m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+                                    vuint32m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vrev8_tumu(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, size_t vl) {
+  return __riscv_vrev8_tumu(vm, vd, vs2, vl);
 }
 
-vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                    vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                    vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                    vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
 
-vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl) {
-  return __riscv_vrev8_mu(mask, maskedoff, vs2, vl);
+vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vrev8_mu(vm, vd, vs2, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrol.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrol.c
index 8b7154ede..0f9405dcd 100644
--- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrol.c
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vrol.c
@@ -1,710 +1,915 @@
-#include <stdint.h>
 #include <riscv_vector.h>
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
+#include <stdint.h>
 
-vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1,
+                                size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1,
+                                size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1,
+                                size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1,
+                                size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+                                  vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2,
+                                  vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2,
+                                  vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2,
+                                  vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    size_t rs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                  vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                  vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                  vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                  vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                  vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                  vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                  vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                  vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tu(maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, vuint8mf8_t vs1,
+                                   size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, vuint8mf4_t vs1,
+                                   size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, vuint8mf2_t vs1,
+                                   size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                     size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                     size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, vuint16m1_t vs1,
+                                   size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                     size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, vuint32m1_t vs1,
+                                   size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, vuint32m2_t vs1,
+                                   size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, vuint64m1_t vs1,
+                                   size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, vuint64m2_t vs1,
+                                   size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, vuint64m4_t vs1,
+                                   size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vrol_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                    vuint8mf8_t vs2, vuint8mf8_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                    vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                    vuint8mf4_t vs2, vuint8mf4_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                    vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                    vuint8mf2_t vs2, vuint8mf2_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                    vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                      size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                      size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, vuint16m1_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, vuint16m2_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, vuint16m4_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, vuint16m8_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                      size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, vuint32m1_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                    vuint32m2_t vs2, vuint32m2_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                    vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+                                    vuint32m4_t vs2, vuint32m4_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+                                    vuint32m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, vuint32m8_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, vuint64m1_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, vuint64m2_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, vuint64m4_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, vuint64m8_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                vuint8m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                vuint8m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                vuint8m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                vuint8m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                    vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                    vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                    vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                    vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                  vuint16m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                  vuint16m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                  vuint16m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                  vuint16m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                    vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                    size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                    vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                  vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                  vuint32m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  vuint32m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  vuint32m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  vuint64m1_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  vuint64m2_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  vuint64m4_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  vuint64m8_t vs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vrol_mu(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vrol_mu(vm, vd, vs2, rs1, vl);
 }
-
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vror.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vror.c
index b2856896f..6f97b5a85 100644
--- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vror.c
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vror.c
@@ -1,710 +1,915 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
 
-typedef _Float16 float16_t;
-typedef float float32_t;
-typedef double float64_t;
-
-vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  vuint8mf8_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  vuint8mf4_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  vuint8mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1,
+                                size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    vuint16mf4_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    size_t rs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    vuint16mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    size_t rs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+                                  vuint16m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2,
+                                  vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2,
+                                  vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2,
+                                  vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    vuint32mf2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    size_t rs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                  vuint32m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                  vuint32m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                  vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                  vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                  vuint64m1_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                  vuint64m2_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                  vuint64m4_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                  vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_tu(vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tu(maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1,
+                                  size_t vl) {
+  return __riscv_vror_tu(vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, vuint8mf8_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, vuint8mf4_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, vuint8mf2_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                     size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                     size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, vuint16m1_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   vuint16m2_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   vuint16m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   vuint16m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                     size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, vuint32m1_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, vuint32m2_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   vuint32m4_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   vuint32m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, vuint64m1_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, vuint64m2_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, vuint64m4_t vs1,
+                                   size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, vs1, vl);
+vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   vuint64m8_t vs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tum(mask, maskedoff, vs2, rs1, vl);
+vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   size_t rs1, size_t vl) {
+  return __riscv_vror_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                    vuint8mf8_t vs2, vuint8mf8_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                    vuint8mf8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                    vuint8mf4_t vs2, vuint8mf4_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                    vuint8mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                    vuint8mf2_t vs2, vuint8mf2_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                    vuint8mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  vuint8m1_t vs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  vuint8m2_t vs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  vuint8m4_t vs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  vuint8m8_t vs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                      size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                      size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, vuint16m1_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, vuint16m2_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, vuint16m4_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, vuint16m8_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                      size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl);
+vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, vuint32m1_t vs1,
+                                    size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) {
-  return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl);
+vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_tumu(vm, vd, vs2, rs1, vl);
__riscv_vror_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t mask, 
vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m1_t 
test_vror_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, 
vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + 
vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, 
vuint64m2_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl) { - return __riscv_vror_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vror_mu(vm, vd, vs2, rs1, vl); } -
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ch.c index cf1afc07f..eb2435d9c 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ch.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ch.c @@ -1,42 +1,47 @@ -#include #include + -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ch_tu(vd, vs2, vs1, vl); } -
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2cl.c index a385bfd49..f657a7901 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2cl.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2cl.c @@ -1,42 +1,47 @@ -#include #include + -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2cl_tu(vd, vs2, vs1, vl); } -
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ms.c index ae5e74fff..349f16c5b 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ms.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsha2ms.c @@ -1,42 +1,47 @@ -#include #include + -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ms_tu(vd, vs2, vs1, vl); } -
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c index b784b6537..1778de96a 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3c.c @@ -1,10 +1,8 @@ -#include #include + -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm3c_tu(vd, vs2, 0, vl); } @@ -23,4 +21,3 @@ vuint32m4_t test_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { vuint32m8_t test_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm3c_tu(vd, vs2, 0, vl); } -
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3me.c index 46ddc2d66..f4536867d 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3me.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm3me.c @@ -1,26 +1,27 @@ -#include #include + -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vsm3me_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { + return __riscv_vsm3me_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { + return __riscv_vsm3me_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { + return __riscv_vsm3me_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { - return __riscv_vsm3me_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { + return __riscv_vsm3me_tu(vd, vs2, vs1, vl); } -
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c index e1f938477..ac789ac44 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4k.c @@ -1,26 +1,23 @@ -#include #include + -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl) { - return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl); +vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { + return __riscv_vsm4k_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl) { - return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl); +vuint32m1_t test_vsm4k_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vsm4k_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl) { - return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl); +vuint32m2_t test_vsm4k_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vsm4k_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl) { - return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl); +vuint32m4_t test_vsm4k_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vsm4k_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl) { - return __riscv_vsm4k_tu(maskedoff, vs2, 0, vl); +vuint32m8_t test_vsm4k_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vsm4k_tu(vd, vs2, 0, vl); } -
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c index bf509ae52..46cf176d3 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vsm4r.c @@ -1,30 +1,33 @@ -#include #include + -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -32,19 +35,23 @@ vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -52,15 +59,18 @@ vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } @@ -68,19 +78,16 @@ vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_tu(vd, vs2, vl); } vuint32m8_t test_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { return __riscv_vsm4r_vv_tu(vd, vs2, vl); } - -vuint32m8_t test_vsm4r_vs_u32m8_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { - return __riscv_vsm4r_vs_tu(vd, vs2, vl); -} -
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c index e316013f9..e3736d299 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c +++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vwsll.c @@ -1,486 +1,639 @@ -#include #include + -typedef _Float16 float16_t; -typedef float float32_t; -typedef double float64_t; -vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t maskedoff,
vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, 
vl); } -vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, 
size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tu(maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, 
vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); 
+vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t 
vs1, + size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tum(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, 
vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t 
test_vwsll_vx_u32m4_tumu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t mask, vuint64m8_t 
maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_tumu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m4_t 
test_vwsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, 
vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, vs1, vl); +vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl) { - return __riscv_vwsll_mu(mask, maskedoff, vs2, rs1, vl); +vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { + return __riscv_vwsll_mu(vm, vd, vs2, rs1, vl); } - diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md 
b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md index a1769992b..bb908fb39 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -1,1038 +1,1045 @@ -## Zvbb - Vector Bit-manipulation used in Cryptography: +=== Zvbb - Vector Bit-manipulation used in Cryptography -### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not](): +[[policy-variant-overloaded]] +==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not -**Prototypes:** -``` C -vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t 
__riscv_vandn_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +[,c] +---- +vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tu (vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tu (vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tu (vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tu (vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tu (vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tu (vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tu (vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); 
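// Editor's note (non-normative sketch): in the overloaded policy API the `_tu`
// suffix selects the tail-undisturbed policy, and the concrete prototype is
// resolved from the argument types alone. Per the Zvbb definition, vandn
// computes vs2 & ~vs1 (vector-vector) or vs2 & ~rs1 (vector-scalar), so for a
// hypothetical call
//   vuint16m1_t r = __riscv_vandn_tu(vd, vs2, vs1, vl);
// r[i] = vs2[i] & ~vs1[i] for i < vl, and r[i] = vd[i] for i >= vl
// (the tail elements are taken undisturbed from the destination operand vd).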
+vuint16m8_t __riscv_vandn_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tu (vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tu (vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tu (vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tu (vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tu (vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); 
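// Editor's note: this hunk renames the mask operand `mask` to `vm` and the
// merge/inactive source `maskedoff` to `vd`, aligning the C parameter names
// with the assembly operand names. For the `_tum` (tail-undisturbed, masked)
// variants being rewritten here, a hedged summary of the element semantics:
//   i <  vl, vm[i] == 1 : result[i] = vs2[i] & ~vs1[i]   // active element
//   i <  vl, vm[i] == 0 : result[i] is mask-agnostic (vd[i] or all ones)
//   i >= vl             : result[i] = vd[i]              // tail undisturbed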
-vuint16mf4_t __riscv_vandn_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, 
size_t vl); +vuint8mf4_t __riscv_vandn_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tum (vbool4_t vm, vuint32m8_t 
vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, 
size_t vl); -vuint16m4_t __riscv_vandn_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu (vbool2_t vm, 
vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu (vbool8_t vm, vuint64m8_t 
vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_mu (vbool16_t mask, vuint32m2_t maskedoff, 
vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -``` +vuint8mf8_t __riscv_vandn_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_mu (vbool16_t vm, 
vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +---- -### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): +[[policy-variant-overloaded]] +==== Vector Basic Bit-manipulation - Reverse Bits in Elements -**Prototypes:** -``` C -vuint8mf8_t __riscv_vbrev_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tu 
(vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); 
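// Editor's note (hedged summary of the Zvbb definitions, for the three
// reversal families listed in this hunk): vbrev reverses the bit order within
// each SEW-wide element, vbrev8 reverses the bit order within each byte, and
// vrev8 reverses the byte order within each element. For SEW=16:
//   vbrev  : 0x0001 -> 0x8000
//   vbrev8 : 0x0001 -> 0x0080
//   vrev8  : 0x1234 -> 0x3412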
-vuint16mf2_t __riscv_vrev8_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +[,c] +---- +vuint8mf8_t __riscv_vbrev_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t 
__riscv_vbrev8_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t 
vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tum 
(vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tum (vbool16_t 
vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tum (vbool8_t vm, 
vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tumu (vbool16_t mask, vuint8mf2_t maskedoff, 
vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t 
vl); -vuint32m4_t __riscv_vrev8_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t 
__riscv_vbrev8_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_mu 
(vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t 
__riscv_vbrev8_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -``` +vuint8mf8_t __riscv_vbrev_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t 
__riscv_vbrev_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_mu (vbool1_t vm, vuint8m8_t vd, 
vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +---- -### [Vector Bit-manipulation used in Cryptography - Count Bits](): -This operation don't have Policy Intrinsic Functions. +[[policy-variant-overloaded]] +==== Vector Basic Bit-manipulation - Count Bits +Intrinsics here do not have policy variants. -### [Vector Bit-manipulation used in Cryptography - Rotate](): +[[policy-variant-overloaded]] +==== Vector Bit-manipulation used in Cryptography - Rotate -**Prototypes:** -``` C -vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t
__riscv_vrol_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); 
-vuint8m4_t __riscv_vror_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +[,c] +---- +vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, 
size_t vl); +vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tu (vuint64m8_t vd, 
vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t 
__riscv_vror_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_tum (vbool64_t mask, 
vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); 
-vuint16mf4_t __riscv_vror_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum (vbool32_t vm, 
vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tum (vbool64_t vm, vuint64m1_t vd, 
vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, 
vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_tumu (vbool64_t 
mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_tumu (vbool32_t 
mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tumu (vbool8_t mask, vuint32m4_t 
maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu (vbool4_t vm, vuint16m4_t vd, 
vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tumu (vbool1_t vm, vuint8m8_t vd, 
vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, 
size_t vl); -vuint8mf4_t __riscv_vrol_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_mu (vbool8_t mask, vuint32m4_t maskedoff, 
vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_mu (vbool8_t mask, 
vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -``` +vuint8mf8_t __riscv_vrol_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t 
vl); +vuint8m4_t __riscv_vrol_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_mu (vbool64_t 
vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, 
size_t vl);
+vuint32m8_t __riscv_vror_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vror_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vror_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vror_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vror_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vror_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vror_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vror_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vror_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl);
+----
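+A minimal usage sketch of the overloaded policy intrinsics above (illustrative
+only: the two helper functions are hypothetical, and the sketch assumes a
+toolchain exposing the Zvbb intrinsics through `<riscv_vector.h>`):
+[,c]
+----
+#include <riscv_vector.h>
+
+// Tail-undisturbed rotate-left: for i < vl the result is vs2[i] rotated
+// left by vs1[i]; elements at i >= vl are taken unchanged from vd.
+vuint32m1_t rotl_tu_example(vuint32m1_t vd, vuint32m1_t vs2,
+                            vuint32m1_t vs1, size_t vl) {
+  return __riscv_vrol_tu(vd, vs2, vs1, vl);
+}
+
+// Mask-undisturbed rotate-right by a scalar amount: elements whose mask
+// bit in vm is clear keep the value already present in vd.
+vuint32m1_t rotr_mu_example(vbool32_t vm, vuint32m1_t vd,
+                            vuint32m1_t vs2, size_t rs1, size_t vl) {
+  return __riscv_vror_mu(vm, vd, vs2, rs1, vl);
+}
+----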
vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +[,c] +---- +vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tu (vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tu (vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tu (vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tu (vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tu (vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tu (vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tu (vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tu (vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tu (vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tu (vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tu (vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tu (vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tu (vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tu (vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tu (vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tu (vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tu (vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tu (vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tu (vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tu (vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tu (vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tu (vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tu (vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tu (vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, 
vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t 
vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t 
maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t 
vl); +vuint16m4_t __riscv_vwsll_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, 
vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
-```
+vuint16mf4_t __riscv_vwsll_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
+----
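+
+A minimal usage sketch (illustrative, not part of the generated
+listing): `vwsll` widens as it shifts, so a `vuint8m1_t` source yields a
+`vuint16m2_t` result, and the `_tu` variant leaves the tail elements of
+`vd` undisturbed.
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Hypothetical helper: shift 8-bit lanes left by `shamt` into 16-bit
+// lanes, preserving the tail of `acc` (tail-undisturbed policy).
+vuint16m2_t widen_shift_acc(vuint16m2_t acc, vuint8m1_t src,
+                            size_t shamt, size_t vl) {
+  return __riscv_vwsll_tu(acc, src, shamt, vl);
+}
+----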
-## Zvbc - Vector Carryless Multiplication:
+=== Zvbc - Vector Carryless Multiplication
-### [Vector Carryless Multiplication]():
+[[policy-variant-overloaded]]
+==== Vector Carryless Multiplication
-**Prototypes:**
-``` C
-vuint64m1_t __riscv_vclmul_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t
__riscv_vclmulh_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +[,c] +---- +vuint64m1_t __riscv_vclmul_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, 
vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); 
+vuint64m1_t __riscv_vclmul_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
// masked functions
-vuint64m1_t __riscv_vclmul_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-```
+vuint64m1_t __riscv_vclmul_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+----
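+
+As a sketch of intended use (an assumption, not taken from the listing):
+pairing `vclmul` and `vclmulh` yields the full 128-bit carryless product
+that CRC folding and GHASH-style reductions are built on.
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Illustrative 64x64 carryless multiply by a constant `k`, producing
+// the low and high 64 bits of each product; names are hypothetical.
+void clmul128(vuint64m1_t a, uint64_t k, size_t vl,
+              vuint64m1_t *lo, vuint64m1_t *hi) {
+  *lo = __riscv_vclmul_tu(*lo, a, k, vl);   // low half of each product
+  *hi = __riscv_vclmulh_tu(*hi, a, k, vl);  // high half of each product
+}
+----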
-## Zvkg - Vector GCM/GMAC:
+=== Zvkg - Vector GCM/GMAC
-### [Vector GCM/GMAC]():
+[[policy-variant-overloaded]]
+==== Vector GCM/GMAC
-**Prototypes:**
-``` C
+[,c]
+----
vuint32mf2_t __riscv_vghsh_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
vuint32m1_t __riscv_vghsh_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
vuint32m2_t __riscv_vghsh_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
@@ -1043,14 +1050,15 @@ vuint32m1_t __riscv_vgmul_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
vuint32m2_t __riscv_vgmul_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
vuint32m4_t __riscv_vgmul_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vgmul_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
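+
+A hedged example of how these are chained (operand roles per the Zvkg
+specification: `vd` holds the running hash, `vs2` the hash subkey H,
+`vs1` the next block):
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Illustrative GHASH step: fold one ciphertext block `x` into the
+// running hash `y` under subkey `h`; names are hypothetical.
+vuint32m1_t ghash_step(vuint32m1_t y, vuint32m1_t h, vuint32m1_t x,
+                       size_t vl) {
+  return __riscv_vghsh_tu(y, h, x, vl);
+}
+----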
-## Zvkned - NIST Suite: Vector AES Block Cipher:
+=== Zvkned - NIST Suite: Vector AES Block Cipher
-### [Vector AES Encryption]():
+[[policy-variant-overloaded]]
+==== Vector AES Encryption
-**Prototypes:**
-``` C
+[,c]
+----
vuint32mf2_t __riscv_vaesef_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
vuint32mf2_t __riscv_vaesef_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -1070,7 +1078,6 @@ vuint32m4_t __riscv_vaesef_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vaesef_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
vuint32mf2_t __riscv_vaesem_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
vuint32mf2_t __riscv_vaesem_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -1090,13 +1097,13 @@ vuint32m4_t __riscv_vaesem_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vaesem_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
-### [Vector AES Decryption]():
+[[policy-variant-overloaded]]
+==== Vector AES Decryption
-**Prototypes:**
-``` C
+[,c]
+----
vuint32mf2_t __riscv_vaesdf_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
vuint32mf2_t __riscv_vaesdf_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -1116,7 +1123,6 @@ vuint32m4_t __riscv_vaesdf_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vaesdf_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
vuint32mf2_t __riscv_vaesdm_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
vuint32mf2_t __riscv_vaesdm_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -1136,29 +1142,30 @@ vuint32m4_t __riscv_vaesdm_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vaesdm_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
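+
+A sketch of the round structure these intrinsics are meant for (an
+assumed AES-128 flow; the round-key array `rk` and all names are
+hypothetical, and `vaesz` appears in the round-zero listing below):
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Illustrative AES-128 block encryption: round-0 key addition, nine
+// middle rounds, then the final round, all tail-undisturbed.
+vuint32m1_t aes128_encrypt(vuint32m1_t state, const vuint32m1_t rk[11],
+                           size_t vl) {
+  state = __riscv_vaesz_tu(state, rk[0], vl);        // AddRoundKey
+  for (int r = 1; r < 10; ++r)
+    state = __riscv_vaesem_vv_tu(state, rk[r], vl);  // middle rounds
+  return __riscv_vaesef_vv_tu(state, rk[10], vl);    // final round
+}
+----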
-### [Vector AES-128 Forward KeySchedule generation]():
+[[policy-variant-overloaded]]
+==== Vector AES-128 Forward KeySchedule generation
-**Prototypes:**
-``` C
-vuint32mf2_t __riscv_vaeskf1_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vaeskf1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vaeskf1_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vaeskf1_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vaeskf1_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl);
+[,c]
+----
+vuint32mf2_t __riscv_vaeskf1_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vaeskf1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vaeskf1_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vaeskf1_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vaeskf1_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
vuint32mf2_t __riscv_vaeskf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
vuint32m1_t __riscv_vaeskf2_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
vuint32m2_t __riscv_vaeskf2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
vuint32m4_t __riscv_vaeskf2_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
-```
+----
-### [Vector AES round zero]():
+[[policy-variant-overloaded]]
+==== Vector AES round zero
-**Prototypes:**
-``` C
+[,c]
+----
vuint32mf2_t __riscv_vaesz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
@@ -1173,15 +1180,15 @@ vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
-## Zvknh - NIST Suite: Vector SHA-2 Secure Hash:
+=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash
-### [Vector SHA-2 message schedule]():
+[[policy-variant-overloaded]]
+==== Vector SHA-2 message schedule
-**Prototypes:**
-``` C
+[,c]
+----
vuint32mf2_t __riscv_vsha2ms_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
vuint32m1_t __riscv_vsha2ms_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
vuint32m2_t __riscv_vsha2ms_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
@@ -1191,12 +1198,13 @@ vuint64m1_t __riscv_vsha2ms_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1
vuint64m2_t __riscv_vsha2ms_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
vuint64m4_t __riscv_vsha2ms_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
vuint64m8_t __riscv_vsha2ms_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-```
+----
-### [Vector SHA-2 two rounds of compression]():
+[[policy-variant-overloaded]]
+==== Vector SHA-2 two rounds of compression
-**Prototypes:**
-``` C
+[,c]
+----
vuint32mf2_t __riscv_vsha2ch_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
vuint32m1_t __riscv_vsha2ch_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
vuint32m2_t __riscv_vsha2ch_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
@@ -1215,25 +1223,27 @@ vuint64m1_t __riscv_vsha2cl_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1
vuint64m2_t __riscv_vsha2cl_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
vuint64m4_t __riscv_vsha2cl_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
vuint64m8_t __riscv_vsha2cl_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-```
+----
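+
+One hedged example of the compression pairing (operand packing follows
+the Zvknh specification, with `{a,b,e,f}` and `{c,d,g,h}` kept in
+separate register groups; all names are hypothetical):
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Illustrative SHA-256 quad-round: two rounds via vsha2cl, then two
+// via vsha2ch, with `kw` holding message words plus round constants.
+void sha256_quad_round(vuint32m1_t *abef, vuint32m1_t *cdgh,
+                       vuint32m1_t kw, size_t vl) {
+  *cdgh = __riscv_vsha2cl_tu(*cdgh, *abef, kw, vl);  // low two rounds
+  *abef = __riscv_vsha2ch_tu(*abef, *cdgh, kw, vl);  // high two rounds
+}
+----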
-## Zvksed - ShangMi Suite: SM4 Block Cipher:
+=== Zvksed - ShangMi Suite: SM4 Block Cipher
-### [Vector SM4 KeyExpansion]():
+[[policy-variant-overloaded]]
+==== Vector SM4 KeyExpansion
-**Prototypes:**
-``` C
-vuint32mf2_t __riscv_vsm4k_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vsm4k_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vsm4k_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vsm4k_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl);
-```
+[,c]
+----
+vuint32mf2_t __riscv_vsm4k_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm4k_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vsm4k_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vsm4k_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+----
-### [Vector SM4 Rounds]():
+[[policy-variant-overloaded]]
+==== Vector SM4 Rounds
-**Prototypes:**
-``` C
+[,c]
+----
vuint32mf2_t __riscv_vsm4r_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
vuint32mf2_t __riscv_vsm4r_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
@@ -1253,29 +1263,30 @@ vuint32m4_t __riscv_vsm4r_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
vuint32m8_t __riscv_vsm4r_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-```
+----
-## Zvksh - ShangMi Suite: SM3 Secure Hash:
+=== Zvksh - ShangMi Suite: SM3 Secure Hash
-### [Vector SM3 Message Expansion]():
+[[policy-variant-overloaded]]
+==== Vector SM3 Message Expansion
-**Prototypes:**
-``` C
-vuint32mf2_t __riscv_vsm3me_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vsm3me_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vsm3me_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vsm3me_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vsm3me_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-```
+[,c]
+----
+vuint32mf2_t __riscv_vsm3me_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vsm3me_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vsm3me_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vsm3me_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vsm3me_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+----
-### [Vector SM3 Message Expansion]():
+[[policy-variant-overloaded]]
+==== Vector SM3 Compression
-**Prototypes:**
-``` C
+[,c]
+----
vuint32mf2_t __riscv_vsm3c_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
vuint32m1_t __riscv_vsm3c_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
vuint32m2_t __riscv_vsm3c_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
vuint32m4_t __riscv_vsm3c_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
vuint32m8_t __riscv_vsm3c_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
-```
+----
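+
+A brief illustrative sequence for the ShangMi sections above (an
+assumed flow; `0` is the round-group index, which must be a constant):
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Hypothetical sketch: derive SM4 round-key group 0 from the cipher
+// key, then run four encryption rounds on `state` with it.
+vuint32m1_t sm4_step(vuint32m1_t state, vuint32m1_t key, size_t vl) {
+  vuint32m1_t rk = __riscv_vsm4k_tu(key, key, 0, vl);  // key expansion
+  return __riscv_vsm4r_vv_tu(state, rk, vl);           // four rounds
+}
+----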
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc
new file mode 100644
index 000000000..1ad1d2345
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc
@@ -0,0 +1,958 @@
+
+=== Zvbb - Vector Bit-manipulation used in Cryptography
+
+[[policy-variant-overloaded]]
+==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not
+
+[,c]
+----
+vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl);
+vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl);
+vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl);
+vuint8m1_t __riscv_vandn_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vandn_tu (vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl);
+vuint8m2_t __riscv_vandn_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vandn_tu (vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl);
+vuint8m4_t __riscv_vandn_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vandn_tu (vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl);
+vuint8m8_t __riscv_vandn_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vandn_tu (vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl);
+vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl);
+vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl);
+vuint16m1_t __riscv_vandn_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vandn_tu (vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl);
+vuint16m2_t __riscv_vandn_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vandn_tu (vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl);
+vuint16m4_t __riscv_vandn_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vandn_tu (vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl);
+vuint16m8_t __riscv_vandn_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vandn_tu (vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl);
+vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl);
+vuint32m1_t __riscv_vandn_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vandn_tu (vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl);
+vuint32m2_t __riscv_vandn_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vandn_tu (vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl);
+vuint32m4_t __riscv_vandn_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vandn_tu (vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl);
+vuint32m8_t __riscv_vandn_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vandn_tu (vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl);
+vuint64m1_t __riscv_vandn_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vandn_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vandn_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vandn_tu (vuint64m2_t
vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tum (vbool32_t vm, vuint32m1_t 
vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t 
vs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); 
+vuint8m2_t __riscv_vandn_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t 
rs1, size_t vl); +vuint64m8_t __riscv_vandn_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector Basic Bit-manipulation - Reverse Bits in Elements + +[,c] +---- +vuint8mf8_t __riscv_vbrev_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); 
+vuint64m4_t __riscv_vbrev8_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, 
size_t vl); +vuint64m2_t __riscv_vbrev_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t 
vl); +vuint32m1_t __riscv_vrev8_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tumu (vbool32_t vm, vuint16mf2_t 
vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t 
__riscv_vbrev_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_mu (vbool16_t vm, vuint64m4_t vd, 
vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector Basic Bit-manipulation - Count Bits + +The intrinsics in this subsection do not have policy variants.
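+
+While the intrinsics in this subsection have no policy variants, the
+preceding subsections do. Below is a minimal usage sketch (not part of the
+specification) of how the `_tu` and `_tumu` overloads above compose. It
+assumes a toolchain with a vector bit-manipulation extension such as `zvbb`
+enabled; the helper name `clear_then_reverse` is hypothetical.
+
+[,c]
+----
+#include <riscv_vector.h>
+#include <stdint.h>
+
+// Hypothetical helper: clear the bits of `bits` selected by `clear_mask`
+// (vandn computes vs2 & ~rs1), then bit-reverse only the active elements.
+// `_tu` keeps the tail elements of `vd` undisturbed; `_tumu` keeps both
+// tail and masked-off elements of `vd` undisturbed.
+static vuint32m1_t clear_then_reverse(vbool32_t vm, vuint32m1_t vd,
+                                      vuint32m1_t bits, uint32_t clear_mask,
+                                      size_t vl) {
+  vuint32m1_t cleared = __riscv_vandn_tu(vd, bits, clear_mask, vl);
+  return __riscv_vbrev_tumu(vm, vd, cleared, vl);
+}
+----
+
+The same calling pattern applies to the rotate intrinsics in the next
+subsection, where the scalar rotate amount is passed as a `size_t`.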
+ +[[policy-variant-overloaded]] +==== Vector Bit-manipulation used in Cryptography - Rotate + +[,c] +---- +vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tu (vuint64m2_t vd, 
vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t 
__riscv_vror_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tum (vbool2_t vm, vuint16m8_t vd, 
vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, 
size_t vl); +vuint16mf2_t __riscv_vror_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu (vbool16_t vm, vuint8mf2_t vd, 
vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu (vbool32_t vm, 
vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tumu 
(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vror_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vror_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vror_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vror_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vror_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vror_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vror_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vror_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vror_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vror_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vror_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vror_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vror_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vror_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl);
+// masked functions
+vuint8mf8_t __riscv_vrol_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vrol_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vrol_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vrol_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vrol_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vrol_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vrol_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vrol_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vrol_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vrol_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint8m4_t __riscv_vrol_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vrol_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vrol_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vrol_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vrol_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint16mf4_t __riscv_vrol_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vrol_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint16mf2_t __riscv_vrol_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vrol_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vrol_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vrol_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vrol_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vrol_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vrol_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vrol_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vrol_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vrol_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32mf2_t __riscv_vrol_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vrol_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vrol_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vrol_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vrol_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vrol_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vrol_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vrol_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vrol_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vrol_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vrol_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vrol_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vrol_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vrol_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vrol_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vrol_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vrol_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl);
+vuint8mf8_t __riscv_vror_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vror_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vror_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vror_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vror_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vror_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vror_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vror_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vror_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vror_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl);
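+// NOTE (editorial sketch, not part of the generated listing): with the
+// "_mu" (mask-undisturbed) variants listed here, elements whose mask bit
+// in vm is 0 keep their value from vd. Assuming a Zvbb-enabled toolchain
+// and <riscv_vector.h>, a call matching the vuint32m1_t overload above
+// would look like:
+//   vuint32m1_t r = __riscv_vrol_mu(vm, vd, vs2, 8, vl);
+// where r[i] == vd[i] for masked-off lanes, and r[i] is vs2[i] rotated
+// left by 8 bit positions for active lanes.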
+vuint8m4_t __riscv_vror_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vror_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vror_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vror_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vror_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint16mf4_t __riscv_vror_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vror_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint16mf2_t __riscv_vror_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vror_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vror_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vror_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vror_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vror_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vror_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vror_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vror_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vror_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32mf2_t __riscv_vror_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vror_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vror_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vror_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vror_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vror_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vror_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vror_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vror_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vror_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vror_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vror_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vror_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vror_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vror_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vror_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vror_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl);
+----
+
+[[policy-variant-overloaded]]
+==== Vector Basic Bit-manipulation used - Widening Shift
+
+[,c]
+----
+vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_tu (vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_tu (vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_tu (vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_tu (vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_tu (vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_tu (vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_tu (vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_tu (vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll_tu (vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll_tu (vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll_tu (vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll_tu (vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll_tu (vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll_tu (vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll_tu (vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll_tu (vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll_tu (vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll_tu (vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll_tu (vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll_tu (vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll_tu (vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll_tu (vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_tu (vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll_tu (vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
+// masked functions
+vuint16mf4_t __riscv_vwsll_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
+// masked functions
+vuint16mf4_t __riscv_vwsll_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
+// masked functions
+vuint16mf4_t __riscv_vwsll_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
+----
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md
deleted file mode 100644
index 4bcf7ffbd..000000000
--- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.md
+++ /dev/null
@@ -1,953 +0,0 @@
-
-## Zvbb - Vector Bit-manipulation used in Cryptography:
-
-### [Vector Bit-manipulation used in Cryptography - Bitwise And-Not]():
-
-**Prototypes:**
-``` C
-vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl);
-vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl);
-vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl);
-vuint8m1_t __riscv_vandn_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vandn_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl);
-vuint8m2_t __riscv_vandn_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vandn_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl);
-vuint8m4_t __riscv_vandn_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vandn_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl);
-vuint8m8_t __riscv_vandn_tu (vuint8m8_t maskedoff,
vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_tum (vbool32_t mask, vuint8mf4_t maskedoff, 
vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_tum (vbool4_t mask, vuint32m8_t maskedoff, 
vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_tumu 
(vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, uint8_t rs1, 
size_t vl); -vuint8m2_t __riscv_vandn_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); 
-vuint64m2_t __riscv_vandn_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -``` - -### [Vector Bit-manipulation used in Cryptography - Reverse Bits](): - -**Prototypes:** -``` C -vuint8mf8_t __riscv_vbrev_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tu (vuint16m8_t maskedoff, 
vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tum 
(vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tum (vbool64_t mask, vuint8mf8_t maskedoff, 
vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t 
__riscv_vbrev_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tumu 
(vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vrev8_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vrev8_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vrev8_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vrev8_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vrev8_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vrev8_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vrev8_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vrev8_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vrev8_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vrev8_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vrev8_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vrev8_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vrev8_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vrev8_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vrev8_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vrev8_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
-// masked functions
-vuint8mf8_t __riscv_vbrev_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vbrev8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev8_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev8_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev8_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev8_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev8_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev8_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev8_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev8_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev8_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev8_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev8_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev8_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev8_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev8_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev8_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev8_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev8_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vrev8_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vrev8_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vrev8_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vrev8_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vrev8_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vrev8_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vrev8_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vrev8_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vrev8_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vrev8_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vrev8_mu (vbool8_t mask, vuint16m2_t maskedoff,
vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vrev8_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vrev8_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vrev8_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vrev8_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vrev8_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vrev8_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vrev8_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vrev8_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vrev8_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vrev8_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vrev8_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t vl);
-```
-
-### [Vector Bit-manipulation used in Cryptography - Count Bits]():
-This operation don't have Policy Intrinsic Functions.
-### [Vector Bit-manipulation used in Cryptography - Rotate]():
-
-**Prototypes:**
-``` C
-vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vrol_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vrol_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vrol_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vrol_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vrol_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vrol_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vrol_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vrol_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vrol_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vrol_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vrol_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vrol_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vrol_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vrol_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_tu (vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_tu (vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tu (vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tu (vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tu (vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_tu (vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tu (vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_tu (vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); 
-vuint16mf2_t __riscv_vror_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tu (vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tu (vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tu (vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tu (vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_tu (vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t 
__riscv_vrol_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_tum (vbool32_t mask, vuint64m2_t maskedoff, 
vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_tum (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_tum (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tum (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tum (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tum (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_tum (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tum (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t 
__riscv_vror_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_tumu (vbool1_t 
mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_tumu (vbool64_t 
mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_tumu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_tumu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tumu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tumu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tumu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_tumu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tumu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tumu (vbool16_t mask, vuint32m2_t 
maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t 
vl); -vuint16m1_t __riscv_vrol_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_mu (vbool64_t mask, vuint8mf8_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_mu (vbool32_t mask, vuint8mf4_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_mu (vbool16_t mask, vuint8mf2_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_mu (vbool8_t mask, vuint8m1_t maskedoff, 
vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_mu (vbool8_t mask, vuint8m1_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_mu (vbool4_t mask, vuint8m2_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_mu (vbool2_t mask, vuint8m4_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_mu (vbool1_t mask, vuint8m8_t maskedoff, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_mu (vbool32_t mask, 
vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vror_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vror_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vror_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vror_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vror_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, size_t rs1, size_t vl);
-```
-
-### [Vector Bit-manipulation used in Cryptography - Shift]():
-
-**Prototypes:**
-``` C
-vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_tu (vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_tu (vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_tu (vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_tu (vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_tu (vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_tu (vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_tu (vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_tu (vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_tu (vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_tu (vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_tu (vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_tu (vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
-// masked functions
-vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_tum (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_tum (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_tum (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_tum (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_tum (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_tum (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_tum (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_tum (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_tum (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_tum (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_tum (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
-// masked functions
-vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_tumu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_tumu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_tumu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_tumu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_tumu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_tumu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_tumu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_tumu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_tumu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_tumu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_tumu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
-// masked functions
-vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_mu (vbool64_t mask, vuint16mf4_t maskedoff, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_mu (vbool32_t mask, vuint16mf2_t maskedoff, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_mu (vbool16_t mask, vuint16m1_t maskedoff, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_mu (vbool8_t mask, vuint16m2_t maskedoff, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_mu (vbool4_t mask, vuint16m4_t maskedoff, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_mu (vbool2_t mask, vuint16m8_t maskedoff, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_mu (vbool64_t mask, vuint32mf2_t maskedoff, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_mu (vbool32_t mask, vuint32m1_t maskedoff, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_mu (vbool16_t mask, vuint32m2_t maskedoff, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_mu (vbool8_t mask, vuint32m4_t maskedoff, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_mu (vbool4_t mask, vuint32m8_t maskedoff, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint32m4_t vs2, size_t rs1, size_t vl);
-```
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
new file mode 100644
index 000000000..98ab2a820
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
@@ -0,0 +1,76 @@
+
+=== Zvbc - Vector Carryless Multiplication
+
+[[policy-variant-overloaded]]
+==== Vector Carryless Multiplication
+
+[,c]
+----
+vuint64m1_t __riscv_vclmul_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+// masked functions
+vuint64m1_t __riscv_vclmul_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+// masked functions
+vuint64m1_t __riscv_vclmul_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+// masked functions
+vuint64m1_t __riscv_vclmul_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+----
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md
deleted file mode 100644
index 6d12267b2..000000000
--- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.md
+++ /dev/null
@@ -1,75 +0,0 @@
-
-## Zvbc - Vector Carryless Multiplication:
-
-### [Vector Carryless Multiplication]():
-
-**Prototypes:**
-``` C
-vuint64m1_t __riscv_vclmul_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-// masked functions
-vuint64m1_t __riscv_vclmul_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_tum (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_tum (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tum (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tum (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint64m1_t __riscv_vclmul_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tumu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tumu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tumu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tumu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint64m1_t __riscv_vclmul_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); 
-vuint64m1_t __riscv_vclmulh_mu (vbool64_t mask, vuint64m1_t maskedoff, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_mu (vbool32_t mask, vuint64m2_t maskedoff, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_mu (vbool16_t mask, vuint64m4_t maskedoff, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_mu (vbool8_t mask, vuint64m8_t maskedoff, vuint64m8_t vs2, uint64_t rs1, size_t vl); -``` diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc similarity index 90% rename from auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md rename to auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc index 0f44b8ea2..36e253baf 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc @@ -1,10 +1,11 @@ -## Zvkg - Vector GCM/GMAC: +=== Zvkg - Vector GCM/GMAC -### [Vector GCM/GMAC](): +[[policy-variant-overloaded]] +==== Vector GCM/GMAC -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vghsh_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vghsh_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vghsh_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -15,4 +16,4 @@ vuint32m1_t __riscv_vgmul_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); vuint32m2_t __riscv_vgmul_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vgmul_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vgmul_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc similarity index 86% rename from auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md rename to auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc index 6b16a5a48..46b66b36f 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc @@ -1,10 +1,11 @@ -## Zvkned - NIST Suite: Vector AES Block Cipher: +=== Zvkned - NIST Suite: Vector AES Block Cipher -### [Vector AES Encryption](): +[[policy-variant-overloaded]] +==== Vector AES Encryption -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesef_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesef_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); 
vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -24,7 +25,6 @@ vuint32m4_t __riscv_vaesef_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesef_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesem_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -44,13 +44,13 @@ vuint32m4_t __riscv_vaesem_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesem_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -### [Vector AES Decryption](): +[[policy-variant-overloaded]] +==== Vector AES Decryption -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesdf_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdf_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -70,7 +70,6 @@ vuint32m4_t __riscv_vaesdf_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdf_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vaesdm_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -90,29 +89,30 @@ vuint32m4_t __riscv_vaesdm_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesdm_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- -### [Vector AES-128 Forward KeySchedule generation](): +[[policy-variant-overloaded]] +==== Vector AES-128 Forward KeySchedule generation -**Prototypes:** -``` C -vuint32mf2_t __riscv_vaeskf1_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf1_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf1_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf1_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf1_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); +[,c] +---- +vuint32mf2_t __riscv_vaeskf1_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, 
size_t vl); +vuint32m4_t __riscv_vaeskf1_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); vuint32mf2_t __riscv_vaeskf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); vuint32m1_t __riscv_vaeskf2_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); vuint32m2_t __riscv_vaeskf2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); vuint32m4_t __riscv_vaeskf2_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +---- -### [Vector AES round zero](): +[[policy-variant-overloaded]] +==== Vector AES round zero -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vaesz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); @@ -127,5 +127,4 @@ vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc similarity index 92% rename from auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md rename to auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc index 7f060208e..118223db5 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc @@ -1,10 +1,11 @@ -## Zvknh - NIST Suite: Vector SHA-2 Secure Hash: +=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash -### [Vector SHA-2 message schedule](): +[[policy-variant-overloaded]] +==== Vector SHA-2 message schedule -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsha2ms_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsha2ms_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); vuint32m2_t __riscv_vsha2ms_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -14,12 +15,13 @@ vuint64m1_t __riscv_vsha2ms_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1 vuint64m2_t __riscv_vsha2ms_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); vuint64m4_t __riscv_vsha2ms_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2ms_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -``` +---- -### [Vector SHA-2 two rounds of compression](): +[[policy-variant-overloaded]] +==== Vector SHA-2 two rounds of compression -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsha2ch_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); vuint32m1_t __riscv_vsha2ch_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); 
vuint32m2_t __riscv_vsha2ch_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); @@ -38,4 +40,4 @@ vuint64m1_t __riscv_vsha2cl_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1 vuint64m2_t __riscv_vsha2cl_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); vuint64m4_t __riscv_vsha2cl_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); vuint64m8_t __riscv_vsha2cl_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc similarity index 67% rename from auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md rename to auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc index 3129cb528..304925935 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc @@ -1,21 +1,23 @@ -## Zvksed - ShangMi Suite: SM4 Block Cipher: +=== Zvksed - ShangMi Suite: SM4 Block Cipher -### [Vector SM4 KeyExpansion](): +[[policy-variant-overloaded]] +==== Vector SM4 KeyExpansion -**Prototypes:** -``` C -vuint32mf2_t __riscv_vsm4k_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm4k_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm4k_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm4k_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, size_t uimm, size_t vl); -``` +[,c] +---- +vuint32mf2_t __riscv_vsm4k_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +---- -### [Vector SM4 Rounds](): +[[policy-variant-overloaded]] +==== Vector SM4 Rounds -**Prototypes:** -``` C +[,c] +---- vuint32mf2_t __riscv_vsm4r_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32mf2_t __riscv_vsm4r_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); @@ -35,5 +37,4 @@ vuint32m4_t __riscv_vsm4r_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); vuint32m8_t __riscv_vsm4r_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -``` +---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc new file mode 100644 index 000000000..b907f2879 --- 
/dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc @@ -0,0 +1,26 @@ + +=== Zvksh - ShangMi Suite: SM3 Secure Hash + +[[policy-variant-overloaded]] +==== Vector SM3 Message Expansion + +[,c] +---- +vuint32mf2_t __riscv_vsm3me_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsm3me_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsm3me_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsm3me_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsm3me_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector SM3 Compression + +[,c] +---- +vuint32mf2_t __riscv_vsm3c_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm3c_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm3c_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm3c_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm3c_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md deleted file mode 100644 index cb93f408d..000000000 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.md +++ /dev/null @@ -1,24 +0,0 @@ - -## Zvksh - ShangMi Suite: SM3 Secure Hash: - -### [Vector SM3 Message Expansion](): - -**Prototypes:** -``` C -vuint32mf2_t __riscv_vsm3me_tu (vuint32mf2_t maskedoff, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsm3me_tu (vuint32m1_t maskedoff, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsm3me_tu (vuint32m2_t maskedoff, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsm3me_tu (vuint32m4_t maskedoff, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsm3me_tu (vuint32m8_t maskedoff, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -``` - -### [Vector SM3 Message Expansion](): - -**Prototypes:** -``` C -vuint32mf2_t __riscv_vsm3c_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm3c_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm3c_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm3c_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm3c_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -``` From 71582dca04762f3e24e6fc52c50dc84804884b03 Mon Sep 17 00:00:00 2001 From: Brandon Wu Date: Tue, 21 May 2024 09:28:04 -0700 Subject: [PATCH 082/151] Change the description of bit/byte reverse --- auto-generated/vector-crypto/intrinsic_funcs.md | 2 +- .../00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc | 2 +- auto-generated/vector-crypto/overloaded_intrinsic_funcs.md | 2 +- .../00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc | 2 +- auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md | 2 +- .../00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc | 2 +- .../vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md | 2 +- 
.../00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc | 2 +- rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py | 2 +- 9 files changed, 9 insertions(+), 9 deletions(-) diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md index 4b6c01fc4..993aa0ff3 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/intrinsic_funcs.md @@ -98,7 +98,7 @@ vuint64m8_t __riscv_vandn_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1 ---- [[]] -==== Vector Basic Bit-manipulation - Reverse Bits in Elements +==== Vector Basic Bit-manipulation - Reverse [,c] ---- diff --git a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc index 25b9c4d67..e5ac8dce7 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc +++ b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -98,7 +98,7 @@ vuint64m8_t __riscv_vandn_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1 ---- [[]] -==== Vector Basic Bit-manipulation - Reverse Bits in Elements +==== Vector Basic Bit-manipulation - Reverse [,c] ---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md index 63ef95b6a..00ace0fa3 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md @@ -98,7 +98,7 @@ vuint64m8_t __riscv_vandn (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl ---- [[overloaded-]] -==== Vector Basic Bit-manipulation - Reverse Bits in Elements +==== Vector Basic Bit-manipulation - Reverse [,c] ---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc index 15ebc0022..44331cd43 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -98,7 +98,7 @@ vuint64m8_t __riscv_vandn (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl ---- [[overloaded-]] -==== Vector Basic Bit-manipulation - Reverse Bits in Elements +==== Vector Basic Bit-manipulation - Reverse [,c] ---- diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md index 9e568e634..80603a138 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -188,7 +188,7 @@ vuint64m8_t __riscv_vandn_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t ---- [[policy-variant-]] -==== Vector Basic Bit-manipulation - Reverse Bits in Elements +==== Vector Basic Bit-manipulation - Reverse [,c] ---- diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc index b261c089c..b44053593 100644 --- 
a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -188,7 +188,7 @@ vuint64m8_t __riscv_vandn_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t ---- [[policy-variant-]] -==== Vector Basic Bit-manipulation - Reverse Bits in Elements +==== Vector Basic Bit-manipulation - Reverse [,c] ---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md index bb908fb39..559a5b80c 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -188,7 +188,7 @@ vuint64m8_t __riscv_vandn_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint ---- [[policy-variant-overloaded]] -==== Vector Basic Bit-manipulation - Reverse Bits in Elements +==== Vector Basic Bit-manipulation - Reverse [,c] ---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc index 1ad1d2345..eef98e779 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -188,7 +188,7 @@ vuint64m8_t __riscv_vandn_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint ---- [[policy-variant-overloaded]] -==== Vector Basic Bit-manipulation - Reverse Bits in Elements +==== Vector Basic Bit-manipulation - Reverse [,c] ---- diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py index 210209181..1877e46b9 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py @@ -24,7 +24,7 @@ def gen(g): g.function_group( vector_crypto_template, - "Vector Basic Bit-manipulation - Reverse Bits in Elements", + "Vector Basic Bit-manipulation - Reverse", "", # FIXME: We probably have a separate document for vector-crypto ["vbrev", "vbrev8", "vrev8"], UITYPE, From a1769f81198d08bd24850882449d1e3660b2545a Mon Sep 17 00:00:00 2001 From: Brandon Wu Date: Mon, 3 Jun 2024 22:56:20 -0700 Subject: [PATCH 083/151] Add missing vcpop, vclz, vctz in api tests --- .../vector-crypto/api-testing/vcpop.c | 178 +++++++ .../vector-crypto/intrinsic_funcs.md | 52 +++ ...bit-manipulation_used_in_cryptography.adoc | 52 +++ .../vector-crypto/llvm-api-tests/vcpop.c | 182 ++++++++ .../llvm-overloaded-tests/vcpop.c | 182 ++++++++ .../overloaded-api-testing/vcpop.c | 178 +++++++ .../overloaded_intrinsic_funcs.md | 52 +++ ...bit-manipulation_used_in_cryptography.adoc | 52 +++ .../policy_funcs/api-testing/vclz.c | 354 ++++++++++++++ .../policy_funcs/api-testing/vcpop.c | 354 ++++++++++++++ .../policy_funcs/api-testing/vctz.c | 354 ++++++++++++++ .../policy_funcs/intrinsic_funcs.md | 282 +++++++++++- ...bit-manipulation_used_in_cryptography.adoc | 282 +++++++++++- .../policy_funcs/llvm-api-tests/vclz.c | 365 +++++++++++++++ .../policy_funcs/llvm-api-tests/vcpop.c | 358 +++++++++++++++ 
.../policy_funcs/llvm-api-tests/vctz.c | 365 +++++++++++++++ .../policy_funcs/llvm-overloaded-tests/vclz.c | 434 ++++++++++++++++++ .../llvm-overloaded-tests/vcpop.c | 427 +++++++++++++++++ .../policy_funcs/llvm-overloaded-tests/vctz.c | 434 ++++++++++++++++++ .../overloaded-api-testing/vclz.c | 423 +++++++++++++++++ .../overloaded-api-testing/vcpop.c | 423 +++++++++++++++++ .../overloaded-api-testing/vctz.c | 423 +++++++++++++++++ .../overloaded_intrinsic_funcs.md | 282 +++++++++++- ...bit-manipulation_used_in_cryptography.adoc | 282 +++++++++++- .../rvv_intrinsic_gen/vector_crypto_inst.py | 12 +- 25 files changed, 6777 insertions(+), 5 deletions(-) create mode 100644 auto-generated/vector-crypto/api-testing/vcpop.c create mode 100644 auto-generated/vector-crypto/llvm-api-tests/vcpop.c create mode 100644 auto-generated/vector-crypto/llvm-overloaded-tests/vcpop.c create mode 100644 auto-generated/vector-crypto/overloaded-api-testing/vcpop.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vclz.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vcpop.c create mode 100644 auto-generated/vector-crypto/policy_funcs/api-testing/vctz.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclz.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vcpop.c create mode 100644 auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vctz.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclz.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vcpop.c create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vctz.c diff --git a/auto-generated/vector-crypto/api-testing/vcpop.c b/auto-generated/vector-crypto/api-testing/vcpop.c new file mode 100644 index 000000000..d3c52d8fd --- /dev/null +++ b/auto-generated/vector-crypto/api-testing/vcpop.c @@ -0,0 +1,178 @@ +#include +#include + +vuint8mf8_t test_vcpop_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8(vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4(vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2(vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1(vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2(vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4(vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8(vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4(vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2(vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1(vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2(vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4(vs2, vl); +} + 
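+// Editor's note (illustrative comment, not part of the auto-generated
+// output): each __riscv_vcpop_v_* intrinsic corresponds to the Zvbb
+// vcpop.v instruction, which returns, for each of the first vl elements,
+// the number of set bits in the corresponding element of vs2.
+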
+vuint16m8_t test_vcpop_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8(vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2(vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1(vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2(vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4(vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8(vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1(vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2(vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4(vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8(vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8_m(vm, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4_m(vm, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2_m(vm, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1_m(vm, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2_m(vm, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4_m(vm, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8_m(vm, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4_m(vm, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2_m(vm, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1_m(vm, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2_m(vm, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4_m(vm, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8_m(vm, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2_m(vm, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1_m(vm, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2_m(vm, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4_m(vm, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8_m(vm, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1_m(vm, vs2, vl); +} + +vuint64m2_t 
test_vcpop_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2_m(vm, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4_m(vm, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8_m(vm, vs2, vl); +} diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md index 993aa0ff3..5e8c4df54 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/intrinsic_funcs.md @@ -333,6 +333,58 @@ vuint64m4_t __riscv_vctz_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); vuint64m8_t __riscv_vctz_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); ---- +[[]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8 (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); 
+vuint32m2_t __riscv_vcpop_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + [[]] ==== Vector Bit-manipulation used in Cryptography - Rotate diff --git a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc index e5ac8dce7..be1bbf32e 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc +++ b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -333,6 +333,58 @@ vuint64m4_t __riscv_vctz_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); vuint64m8_t __riscv_vctz_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); ---- +[[]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop_v_u8mf8 (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4 (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2 (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1 (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2 (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4 (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8 (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4 (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2 (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1 (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2 (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4 (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8 (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2 (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1 (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2 (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4 (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8 (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1 (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2 (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4 (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8 (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); 
+vuint16m1_t __riscv_vcpop_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + [[]] ==== Vector Bit-manipulation used in Cryptography - Rotate diff --git a/auto-generated/vector-crypto/llvm-api-tests/vcpop.c b/auto-generated/vector-crypto/llvm-api-tests/vcpop.c new file mode 100644 index 000000000..1061c2222 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-api-tests/vcpop.c @@ -0,0 +1,182 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint8mf8_t test_vcpop_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8(vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4(vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2(vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1(vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2(vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4(vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8(vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4(vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2(vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1(vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2(vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4(vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8(vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2(vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1(vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2(vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4(vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8(vs2, vl); +} + 
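+// Editor's note (illustrative comment, not part of the auto-generated
+// output): the overloaded alias __riscv_vcpop resolves on its operand
+// types; the masked forms further below take the vbool*_t mask as the
+// first argument, and masked-off elements are agnostic (carry no
+// guaranteed value) in these non-policy variants.
+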
+vuint64m1_t test_vcpop_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1(vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2(vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4(vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8(vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8_m(vm, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4_m(vm, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2_m(vm, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1_m(vm, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2_m(vm, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4_m(vm, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8_m(vm, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4_m(vm, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2_m(vm, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1_m(vm, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2_m(vm, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4_m(vm, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8_m(vm, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2_m(vm, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1_m(vm, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2_m(vm, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4_m(vm, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8_m(vm, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1_m(vm, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2_m(vm, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4_m(vm, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8_m(vm, vs2, vl); +} diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vcpop.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vcpop.c new file mode 100644 index 000000000..aee4aff80 --- /dev/null +++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vcpop.c @@ -0,0 +1,182 @@ 
+// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vuint8mf8_t test_vcpop_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8(vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_m(vbool16_t vm, 
vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vcpop.c b/auto-generated/vector-crypto/overloaded-api-testing/vcpop.c new file mode 100644 index 000000000..cf5ec1edd --- /dev/null +++ b/auto-generated/vector-crypto/overloaded-api-testing/vcpop.c @@ -0,0 +1,178 @@ +#include +#include + +vuint8mf8_t test_vcpop_v_u8mf8(vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4(vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2(vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1(vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2(vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4(vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8(vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4(vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2(vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1(vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2(vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4(vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8(vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2(vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1(vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2(vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4(vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8(vuint32m8_t 
vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1(vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2(vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4(vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8(vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop(vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop(vm, vs2, vl); +} diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md index 00ace0fa3..fe4429338 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md @@ -333,6 +333,58 @@ vuint64m4_t __riscv_vctz (vbool16_t vm, vuint64m4_t vs2, size_t vl); vuint64m8_t __riscv_vctz (vbool8_t vm, vuint64m8_t vs2, size_t vl); ---- 
+[[overloaded-]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop (vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + [[overloaded-]] ==== Vector Bit-manipulation used in Cryptography - Rotate diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc index 44331cd43..c32b967ed 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -333,6 +333,58 @@ 
vuint64m4_t __riscv_vctz (vbool16_t vm, vuint64m4_t vs2, size_t vl); vuint64m8_t __riscv_vctz (vbool8_t vm, vuint64m8_t vs2, size_t vl); ---- +[[overloaded-]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop (vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop (vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop (vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop (vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop (vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop (vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop (vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop (vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop (vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop (vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop (vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop (vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop (vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop (vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop (vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop (vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop (vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop (vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop (vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop (vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop (vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop (vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop (vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop (vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop (vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop (vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop (vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop (vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop (vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop (vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop (vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop (vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop (vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop (vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop (vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop (vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop (vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop (vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop (vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop (vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop (vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop (vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop (vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop (vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + [[overloaded-]] ==== Vector Bit-manipulation used in Cryptography - Rotate diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vclz.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vclz.c new file mode 100644 index 000000000..6e3e1120f --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vclz.c @@ -0,0 +1,354 @@ +#include +#include + +vuint8mf8_t test_vclz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return 
__riscv_vclz_v_u8mf8_tu(vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4_tu(vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2_tu(vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1_tu(vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2_tu(vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4_tu(vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8_tu(vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4_tu(vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2_tu(vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1_tu(vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2_tu(vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4_tu(vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2_tu(vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_v_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8_tu(vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1_tu(vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2_tu(vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4_tu(vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf8_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4_tum(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2_tum(vm, vd, vs2, vl); 
+} + +vuint8m4_t test_vclz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2_tum(vm, vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2_tum(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_v_u32m2_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf8_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2_tumu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4_tumu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, 
vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4_tumu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4_tumu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_v_u32m2_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8_tumu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf8_mu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4_mu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2_mu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1_mu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2_mu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4_mu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8_mu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return 
__riscv_vclz_v_u16mf4_mu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2_mu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1_mu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2_mu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4_mu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8_mu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2_mu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1_mu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_v_u32m2_mu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4_mu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8_mu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1_mu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2_mu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4_mu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8_mu(vm, vd, vs2, vl); +} diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vcpop.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vcpop.c new file mode 100644 index 000000000..7dbb9b78c --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vcpop.c @@ -0,0 +1,354 @@ +#include +#include + +vuint8mf8_t test_vcpop_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8_tu(vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4_tu(vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2_tu(vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1_tu(vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2_tu(vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4_tu(vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8_tu(vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4_tu(vd, vs2, vl); +} + +vuint16mf2_t 
test_vcpop_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2_tu(vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1_tu(vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2_tu(vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4_tu(vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2_tu(vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8_tu(vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1_tu(vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2_tu(vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4_tu(vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4_tum(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2_tum(vm, vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2_tum(vm, vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2_tum(vm, vd, vs2, vl); +} + 
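+// NOTE: clarifying annotation, not generator output; a minimal usage sketch.
+// The `_tum` variants in this file apply the tail-undisturbed policy under a
+// mask: within the first `vl` elements, active elements receive the
+// per-element population count of `vs2`, while tail elements keep their
+// previous values from `vd`. Operand values below are hypothetical:
+//   vuint16m2_t counts = __riscv_vcpop_v_u16m2_tum(vm, vd, vs2, vl);
+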
+vuint16m4_t test_vcpop_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2_tumu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4_tumu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4_tumu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4_tumu(vm, vd, vs2, vl); +} + 
+vuint16m8_t test_vcpop_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8_tumu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8_mu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4_mu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2_mu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1_mu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2_mu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4_mu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8_mu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4_mu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2_mu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1_mu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2_mu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4_mu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8_mu(vm, vd, vs2, vl); +} + +vuint32mf2_t 
test_vcpop_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2_mu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1_mu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2_mu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4_mu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8_mu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1_mu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2_mu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4_mu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8_mu(vm, vd, vs2, vl); +} diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vctz.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vctz.c new file mode 100644 index 000000000..b191067e8 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vctz.c @@ -0,0 +1,354 @@ +#include +#include + +vuint8mf8_t test_vctz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_tu(vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_tu(vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_tu(vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_tu(vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_tu(vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4_tu(vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8_tu(vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_tu(vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_tu(vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_tu(vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_tu(vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_tu(vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_tu(vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tu(vuint32m1_t vd, 
vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_tu(vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_tu(vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2_tu(vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_tu(vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_tum(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_tum(vm, vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_tum(vm, vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_tum(vm, vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return 
__riscv_vctz_v_u32m4_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_tumu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4_tumu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_tumu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_tumu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_tumu(vm, vd, vs2, 
vl); +} + +vuint64m1_t test_vctz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_mu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_mu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_mu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_mu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_mu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4_mu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8_mu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_mu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_mu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_mu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_mu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_mu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_mu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_mu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1_mu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_mu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4_mu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_mu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_mu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { 
+ return __riscv_vctz_v_u64m2_mu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_mu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8_mu(vm, vd, vs2, vl); +} diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md index 80603a138..444080442 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md @@ -463,7 +463,287 @@ vuint64m8_t __riscv_vrev8_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t v [[policy-variant-]] ==== Vector Basic Bit-manipulation - Count Bits -Intrinsics here don't have a policy variant. + +[,c] +---- +vuint8mf8_t __riscv_vclz_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t 
__riscv_vctz_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t 
__riscv_vctz_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t 
__riscv_vclz_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t 
__riscv_vclz_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +---- + +[[policy-variant-]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t 
__riscv_vcpop_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); 
+vuint64m2_t __riscv_vcpop_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_mu (vbool8_t vm, 
vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +---- [[policy-variant-]] ==== Vector Bit-manipulation used in Cryptography - Rotate diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc index b44053593..4433b14fb 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -463,7 +463,287 @@ vuint64m8_t __riscv_vrev8_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t v [[policy-variant-]] ==== Vector Basic Bit-manipulation - Count Bits -Intrinsics here don't have a policy variant. 
+ +[,c] +---- +vuint8mf8_t __riscv_vclz_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tu (vuint64m4_t vd, 
vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t 
__riscv_vctz_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t 
__riscv_vctz_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_mu 
(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +---- + +[[policy-variant-]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, 
size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, 
size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, 
vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vcpop_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vcpop_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
+----
 
 [[policy-variant-]]
 ==== Vector Bit-manipulation used in Cryptography - Rotate
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c
new file mode 100644
index 000000000..d9c132cd7
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c
@@ -0,0 +1,365 @@
+// REQUIRES: riscv-registered-target
+// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN:   -target-feature +zvbb \
+// RUN:   -target-feature +zvbc \
+// RUN:   -target-feature +zvkg \
+// RUN:   -target-feature +zvkned \
+// RUN:   -target-feature +zvknhb \
+// RUN:   -target-feature +zvksed \
+// RUN:   -target-feature +zvksh -disable-O0-optnone \
+// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
+// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
+
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vclz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf8_tu(vd, vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf4_tu(vd, vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8mf2_tu(vd, vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m1_tu(vd, vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m2_tu(vd, vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m4_tu(vd, vs2, vl);
+}
+
+vuint8m8_t test_vclz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u8m8_tu(vd, vs2, vl);
+}
+
+vuint16mf4_t test_vclz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16mf4_tu(vd, vs2, vl);
+}
+
+vuint16mf2_t test_vclz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16mf2_tu(vd, vs2, vl);
+}
+
+vuint16m1_t test_vclz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m1_tu(vd, vs2, vl);
+}
+
+vuint16m2_t test_vclz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m2_tu(vd, vs2, vl);
+}
+
+vuint16m4_t test_vclz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m4_tu(vd, vs2, vl);
+}
+
+vuint16m8_t test_vclz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u16m8_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vclz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32mf2_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vclz_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m1_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vclz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m2_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m4_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+  return __riscv_vclz_v_u32m8_tu(vd, vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+
return __riscv_vclz_v_u64m1_tu(vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2_tu(vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4_tu(vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf8_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4_tum(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2_tum(vm, vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2_tum(vm, vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2_tum(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_v_u32m2_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, 
size_t vl) { + return __riscv_vclz_v_u64m4_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf8_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2_tumu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4_tumu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4_tumu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4_tumu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_v_u32m2_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8_tumu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return 
__riscv_vclz_v_u64m8_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf8_mu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf4_mu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u8mf2_mu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_v_u8m1_mu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_v_u8m2_mu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_v_u8m4_mu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_v_u8m8_mu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf4_mu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u16mf2_mu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_v_u16m1_mu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_v_u16m2_mu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_v_u16m4_mu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_v_u16m8_mu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_v_u32mf2_mu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_v_u32m1_mu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_v_u32m2_mu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_v_u32m4_mu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_v_u32m8_mu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_v_u64m1_mu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_v_u64m2_mu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_v_u64m4_mu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_v_u64m8_mu(vm, vd, vs2, vl); +} diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c new file mode 100644 index 000000000..2f89711dc --- /dev/null +++ 
b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c @@ -0,0 +1,358 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vcpop_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8_tu(vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4_tu(vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2_tu(vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1_tu(vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2_tu(vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4_tu(vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8_tu(vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4_tu(vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2_tu(vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1_tu(vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2_tu(vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4_tu(vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2_tu(vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8_tu(vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1_tu(vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2_tu(vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4_tu(vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4_tum(vm, vd, 
vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2_tum(vm, vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2_tum(vm, vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2_tum(vm, vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2_tumu(vm, vd, vs2, vl); +} + 
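+// A brief note on what these generated checks exercise: vcpop.v (from the +// Zvbb extension) writes, per element, the count of set bits in vs2. In the +// _tumu variants below, vd supplies both the tail elements (i >= vl) and the +// masked-off elements, since the tail and mask policies are both +// "undisturbed". Minimal usage sketch with hypothetical operands: +//   vuint8m1_t bits = __riscv_vcpop_v_u8m1_tumu(vm, vd, vs2, vl); +//   // bits[i] = popcount(vs2[i]) for active i < vl; otherwise bits[i] = vd[i]. + 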
+vuint8m1_t test_vcpop_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4_tumu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4_tumu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4_tumu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8_tumu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf8_mu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf4_mu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8mf2_mu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m1_mu(vm, vd, vs2, vl); +} + 
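+// The _mu variants in this block keep only the mask policy: masked-off +// elements are still taken from vd (mask undisturbed), while elements past +// vl follow the tail-agnostic rule rather than being preserved. Sketch with +// hypothetical operands: +//   vuint8m2_t r = __riscv_vcpop_v_u8m2_mu(vm, vd, vs2, vl); +//   // r[i] = popcount(vs2[i]) for active i < vl; r[i] = vd[i] where vm[i] +//   // is clear; tail elements are left agnostic. + 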
+vuint8m2_t test_vcpop_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m2_mu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m4_mu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u8m8_mu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf4_mu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16mf2_mu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m1_mu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m2_mu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m4_mu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u16m8_mu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32mf2_mu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m1_mu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m2_mu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m4_mu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u32m8_mu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m1_mu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m2_mu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m4_mu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_v_u64m8_mu(vm, vd, vs2, vl); +} diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c new file mode 100644 index 000000000..54d7ee887 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c @@ -0,0 +1,365 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t 
test_vctz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_tu(vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_tu(vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_tu(vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_tu(vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_tu(vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4_tu(vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8_tu(vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_tu(vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_tu(vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_tu(vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_tu(vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_tu(vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_tu(vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1_tu(vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_tu(vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4_tu(vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_tu(vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_tu(vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2_tu(vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_tu(vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_tum(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, 
vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_tum(vm, vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_tum(vm, vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_tum(vm, vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_tumu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4_tumu(vm, 
vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_tumu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_tumu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_tumu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf8_mu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf4_mu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u8mf2_mu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_v_u8m1_mu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_v_u8m2_mu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_v_u8m4_mu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_v_u8m8_mu(vm, vd, vs2, vl); +} + +vuint16mf4_t 
test_vctz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf4_mu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u16mf2_mu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_v_u16m1_mu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_v_u16m2_mu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_v_u16m4_mu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_v_u16m8_mu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_v_u32mf2_mu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_v_u32m1_mu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_v_u32m2_mu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_v_u32m4_mu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_v_u32m8_mu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_v_u64m1_mu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_v_u64m2_mu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_v_u64m4_mu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_v_u64m8_mu(vm, vd, vs2, vl); +} diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclz.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclz.c new file mode 100644 index 000000000..e93b008a3 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclz.c @@ -0,0 +1,434 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vclz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + 
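+// Unlike the non-overloaded tests above, these calls drop the type portion +// of the intrinsic name: only the policy suffix (_tu) stays explicit, and +// the element width and LMUL are deduced from the operand types, so +// __riscv_vclz_tu here resolves to the same vclz.v operation as the +// non-overloaded __riscv_vclz_v_u8m2_tu. Sketch with hypothetical operands: +//   vuint8m2_t lz = __riscv_vclz_tu(vd, vs2, vl); +//   // lz[i] = number of leading zero bits in vs2[i] for i < vl; tail +//   // elements keep vd[i] (tail undisturbed). + 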
+vuint8m2_t test_vclz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16m1_t 
test_vclz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_tumu(vbool4_t vm, 
vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return 
__riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vcpop.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vcpop.c new file mode 100644 index 000000000..4eb8efa2b --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vcpop.c @@ -0,0 +1,427 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vcpop_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_tu(vuint32m1_t vd, 
vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t 
vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + 
vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { + return __riscv_vcpop_mu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return 
__riscv_vcpop_mu(vm, vd, vs2, vl); +} diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vctz.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vctz.c new file mode 100644 index 000000000..8cecc11d2 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vctz.c @@ -0,0 +1,434 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zvbc \ +// RUN: -target-feature +zvkg \ +// RUN: -target-feature +zvkned \ +// RUN: -target-feature +zvknhb \ +// RUN: -target-feature +zvksed \ +// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include <riscv_vector.h> + +vuint8mf8_t test_vctz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return 
__riscv_vctz_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t 
test_vctz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_mu(vbool4_t vm, 
vuint8m2_t vd, vuint8m2_t vs2,
+                               size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint8m4_t test_vctz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                               size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint8m8_t test_vctz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                               size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint16mf4_t test_vctz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                   vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint16mf2_t test_vctz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                   vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint16m1_t test_vctz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint16m2_t test_vctz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint16m4_t test_vctz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint16m8_t test_vctz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint32mf2_t test_vctz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                   vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint32m1_t test_vctz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint32m2_t test_vctz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint32m4_t test_vctz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint32m8_t test_vctz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint64m1_t test_vctz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint64m2_t test_vctz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint64m4_t test_vctz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
+
+vuint64m8_t test_vctz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                 size_t vl) {
+  return __riscv_vctz_mu(vm, vd, vs2, vl);
+}
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclz.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclz.c
new file mode 100644
index 000000000..2d8b78be7
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vclz.c
@@ -0,0 +1,423 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vclz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vclz_tu(vd, vs2, vl);
+}
+
+vuint8mf4_t test_vclz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vclz_tu(vd, vs2, vl);
+}
+
+vuint8mf2_t test_vclz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vclz_tu(vd, vs2, vl);
+}
+
+vuint8m1_t test_vclz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vclz_tu(vd, vs2, vl);
+}
+
+vuint8m2_t test_vclz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vclz_tu(vd, vs2, vl);
+}
+
+vuint8m4_t test_vclz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2,
size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vclz_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_tum(vbool8_t vm, 
vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vclz_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t 
vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vclz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vclz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vclz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vclz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vclz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vclz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vclz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vclz_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vclz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vclz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vclz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vclz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vclz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vclz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vclz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vclz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vclz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vclz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vclz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vclz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vclz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vclz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vclz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { + return __riscv_vclz_mu(vm, vd, vs2, vl); +} + +vuint32m2_t 
test_vclz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                 size_t vl) {
+  return __riscv_vclz_mu(vm, vd, vs2, vl);
+}
+
+vuint32m4_t test_vclz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                 size_t vl) {
+  return __riscv_vclz_mu(vm, vd, vs2, vl);
+}
+
+vuint32m8_t test_vclz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                 size_t vl) {
+  return __riscv_vclz_mu(vm, vd, vs2, vl);
+}
+
+vuint64m1_t test_vclz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                 size_t vl) {
+  return __riscv_vclz_mu(vm, vd, vs2, vl);
+}
+
+vuint64m2_t test_vclz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                 size_t vl) {
+  return __riscv_vclz_mu(vm, vd, vs2, vl);
+}
+
+vuint64m4_t test_vclz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                 size_t vl) {
+  return __riscv_vclz_mu(vm, vd, vs2, vl);
+}
+
+vuint64m8_t test_vclz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                 size_t vl) {
+  return __riscv_vclz_mu(vm, vd, vs2, vl);
+}
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vcpop.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vcpop.c
new file mode 100644
index 000000000..10f897107
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vcpop.c
@@ -0,0 +1,423 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vcpop_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint8mf4_t test_vcpop_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint8mf2_t test_vcpop_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint8m1_t test_vcpop_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint8m2_t test_vcpop_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint8m4_t test_vcpop_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint8m8_t test_vcpop_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint16mf4_t test_vcpop_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint16mf2_t test_vcpop_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint16m1_t test_vcpop_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint16m2_t test_vcpop_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint16m4_t test_vcpop_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint16m8_t test_vcpop_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint32mf2_t test_vcpop_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint32m1_t test_vcpop_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint32m2_t test_vcpop_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint32m4_t test_vcpop_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+  return __riscv_vcpop_tu(vd, vs2, vl);
+}
+
+vuint32m8_t test_vcpop_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl)
{ + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tum(vbool16_t vm, 
vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vcpop_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vcpop_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vcpop_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vcpop_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vcpop_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vcpop_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vcpop_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vcpop_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vcpop_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vcpop_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vcpop_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vcpop_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vcpop_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vcpop_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vcpop_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vcpop_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vcpop_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vcpop_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vcpop_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vcpop_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vcpop_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vcpop_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vcpop_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { + return __riscv_vcpop_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t 
test_vcpop_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint8mf4_t test_vcpop_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint8mf2_t test_vcpop_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint8m1_t test_vcpop_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint8m2_t test_vcpop_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint8m4_t test_vcpop_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint8m8_t test_vcpop_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint16mf4_t test_vcpop_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                    vuint16mf4_t vs2, size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint16mf2_t test_vcpop_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                    vuint16mf2_t vs2, size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint16m1_t test_vcpop_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint16m2_t test_vcpop_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint16m4_t test_vcpop_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint16m8_t test_vcpop_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint32mf2_t test_vcpop_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                    vuint32mf2_t vs2, size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint32m1_t test_vcpop_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint32m2_t test_vcpop_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint32m4_t test_vcpop_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint32m8_t test_vcpop_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint64m1_t test_vcpop_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint64m2_t test_vcpop_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint64m4_t test_vcpop_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
+
+vuint64m8_t test_vcpop_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  size_t vl) {
+  return __riscv_vcpop_mu(vm, vd, vs2, vl);
+}
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vctz.c b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vctz.c
new file mode 100644
index 000000000..3e0bce679
--- /dev/null
+++ b/auto-generated/vector-crypto/policy_funcs/overloaded-api-testing/vctz.c
@@ -0,0 +1,423 @@
+#include <stdint.h>
+#include <riscv_vector.h>
+
+vuint8mf8_t
test_vctz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { + return __riscv_vctz_tu(vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return 
__riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vctz_tum(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + 
+vuint16mf2_t test_vctz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vctz_tumu(vm, vd, vs2, vl); +} + +vuint8mf8_t test_vctz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint8mf4_t test_vctz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint8mf2_t test_vctz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint8m1_t test_vctz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint8m2_t test_vctz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint8m4_t test_vctz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint8m8_t test_vctz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint16mf4_t test_vctz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint16mf2_t test_vctz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint16m1_t test_vctz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint16m2_t test_vctz_v_u16m2_mu(vbool8_t vm, 
vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint16m4_t test_vctz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint16m8_t test_vctz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint32mf2_t test_vctz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint32m1_t test_vctz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint32m2_t test_vctz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint32m4_t test_vctz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint32m8_t test_vctz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint64m1_t test_vctz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint64m2_t test_vctz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint64m4_t test_vctz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} + +vuint64m8_t test_vctz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { + return __riscv_vctz_mu(vm, vd, vs2, vl); +} diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md index 559a5b80c..f2f92ae9f 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md @@ -463,7 +463,287 @@ vuint64m8_t __riscv_vrev8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size [[policy-variant-overloaded]] ==== Vector Basic Bit-manipulation - Count Bits -Intrinsics here don't have a policy variant. 
+ +[,c] +---- +vuint8mf8_t __riscv_vclz_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_tum (vbool16_t vm, 
vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t 
__riscv_vctz_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t 
__riscv_vctz_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); 
+vuint16m4_t __riscv_vctz_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_tum (vbool32_t vm, 
vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); 
+vuint8m1_t __riscv_vcpop_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +---- [[policy-variant-overloaded]] ==== Vector Bit-manipulation used in Cryptography - Rotate diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc index eef98e779..a4d961b88 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -463,7 +463,287 @@ vuint64m8_t __riscv_vrev8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size [[policy-variant-overloaded]] ==== Vector Basic Bit-manipulation - Count Bits -Intrinsics here don't have a policy variant. 
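+NOTE: Below is a minimal usage sketch of how the policy suffixes compose for
+the count-bits intrinsics listed in this section. It assumes a toolchain that
+accepts `-march=rv64gcv_zvbb`; the function name `count_bits_sketch` is
+illustrative and not part of the generated listing.
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// _tu: unmasked; tail elements (indices >= vl) keep the value of vd.
+// _tumu: masked; tail elements and inactive (masked-off) elements keep vd.
+vuint32m1_t count_bits_sketch(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                              size_t vl) {
+  vuint32m1_t clz = __riscv_vclz_tu(vd, vs2, vl);
+  vuint32m1_t ctz = __riscv_vctz_tumu(vm, vd, vs2, vl);
+  return __riscv_vadd(clz, ctz, vl);  // overloaded forms resolve by operand type
+}
+----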
+ +[,c] +---- +vuint8mf8_t __riscv_vclz_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_tum (vbool16_t vm, 
vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t 
__riscv_vctz_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t 
__riscv_vctz_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); 
+vuint16m4_t __riscv_vctz_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_tum (vbool32_t vm, 
vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); 
+vuint8m1_t __riscv_vcpop_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +---- [[policy-variant-overloaded]] ==== Vector Bit-manipulation used in Cryptography - Rotate diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py index 1877e46b9..7635912e1 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py @@ -40,7 +40,17 @@ def gen(g): UITYPE, SEWS, LMULS, - decorators.has_masking_no_maskedoff) + decorators.has_masking_maskedoff_policy) + + g.function_group( + vector_crypto_template, + "Vector Basic Bit-manipulation - Vector Population Count", + "", # FIXME: We probably have a separate document for vector-crypto + ["vcpop"], + UITYPE, + SEWS, + LMULS, + decorators.has_masking_maskedoff_policy) g.function_group( vector_crypto_template, From 0a3e7911dde793b8807dca489b04289afa80c67b Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Tue, 11 Jun 2024 18:03:24 +0800 Subject: [PATCH 084/151] ci: don't build PDF on pull request Signed-off-by: Jerry Zhang Jian --- .github/workflows/build-pdf.yml | 1 - 1 file changed, 1 deletion(-) diff --git a/.github/workflows/build-pdf.yml b/.github/workflows/build-pdf.yml index d5ac7d630..bde60768f 100644 --- a/.github/workflows/build-pdf.yml +++ b/.github/workflows/build-pdf.yml @@ -4,7 +4,6 @@ on: push: branches: - main - pull_request: release: types: - created From 7ea247483195838fb6f8306838e148f3e03c37c8 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Tue, 11 Jun 2024 18:04:12 +0800 Subject: [PATCH 085/151] ci: test generator on pull request Signed-off-by: Jerry Zhang Jian --- .github/workflows/generator.yml | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/.github/workflows/generator.yml b/.github/workflows/generator.yml index 8a64c3121..8e24d7119 100644 --- 
a/.github/workflows/generator.yml +++ b/.github/workflows/generator.yml @@ -1,6 +1,12 @@ name: rvv-intrinsic-generator -on: [push] +on: + push: + branches: + - main + pull_request: + branches: + - main jobs: build: From 724691e872615c1f1e597ca67fe0c2a021415259 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Tue, 11 Jun 2024 18:04:35 +0800 Subject: [PATCH 086/151] ci: drop py 3.7 and 3.8, add 3.11 Signed-off-by: Jerry Zhang Jian --- .github/workflows/generator.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/generator.yml b/.github/workflows/generator.yml index 8e24d7119..c00aba842 100644 --- a/.github/workflows/generator.yml +++ b/.github/workflows/generator.yml @@ -13,7 +13,7 @@ jobs: runs-on: ubuntu-latest strategy: matrix: - python-version: ["3.7", "3.8", "3.9", "3.10"] + python-version: ["3.9", "3.10", "3.11"] steps: - uses: actions/checkout@v3 - name: Set up Python ${{ matrix.python-version }} From ae46b81a3face11003f5180e7c35be921b1b8cb4 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 12 Jun 2024 14:45:16 +0800 Subject: [PATCH 087/151] deps: bump pylint from 2.14.1 to 3.2.3 Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/rvv-intrinsic-generator/requirements.txt b/rvv-intrinsic-generator/requirements.txt index 3299a96b8..9e069ba22 100644 --- a/rvv-intrinsic-generator/requirements.txt +++ b/rvv-intrinsic-generator/requirements.txt @@ -1,4 +1,4 @@ junitparser==2.6.0 -pylint==2.14.1 +pylint==3.2.3 yapf pytype From 65a541feb4869bc9802ffa1283573cecee12f54b Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 12 Jun 2024 14:47:30 +0800 Subject: [PATCH 088/151] [NFC] fix C0325: Unnecessary parens after '=' Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index c2e27f798..5407ec0cb 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -486,7 +486,7 @@ def write_file_header(self, has_float_type, has_bfloat16_type, name): """) - vector_crypto_llvm_header = (r"""// REQUIRES: riscv-registered-target + vector_crypto_llvm_header = r"""// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ // RUN: -target-feature +zvbb \ // RUN: -target-feature +zvbc \ @@ -498,7 +498,7 @@ def write_file_header(self, has_float_type, has_bfloat16_type, name): // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s -""") +""" def is_vector_crypto_inst(name): vector_crypto_inst = [ From d7e7d8545c38d80af75cb5db81919a3bcff8be82 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 12 Jun 2024 14:53:26 +0800 Subject: [PATCH 089/151] [NFC] fix vector crypto type-check errors by adding assertion Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/templates/vector_crypto_template.py | 1 + 1 file changed, 1 insertion(+) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py index 28b5a466a..02644ca97 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py +++ 
b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py @@ -82,6 +82,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): for decorator in decorator_list: decorator.write_text_header(G) for args in prod(OP=op_list, TYPE=type_list, SEW=sew_list, LMUL=lmul_list): + assert args["OP"] is not None op = args["OP"] for operand_mnemonic in operand_mnemonic_dict[op]: if operand_mnemonic in ("vv", "vs"): From 08ef88e5e805c76420108ebd51280019ba79cad2 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 12 Jun 2024 15:30:43 +0800 Subject: [PATCH 090/151] [NFC] fix W0719: Raising too general exception: Exception Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 4 ++-- .../rvv_intrinsic_gen/templates/binary_op_template.py | 2 +- .../templates/get_set_diff_lmul_op_template.py | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 5407ec0cb..e88a1c8d7 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -315,7 +315,7 @@ def __init__(self, f, is_all_in_one, has_tail_policy): if not os.path.exists(self.folder): os.makedirs(self.folder) if not os.path.isdir(self.folder): - raise Exception("%s not dir, but it must be a dir.") + raise FileNotFoundError(f"{self.folder} not dir, but it must be a dir.") self.group_counter = 0 self.fd = None @@ -437,7 +437,7 @@ def __init__(self, f, is_overloaded, toolchain_type, has_tail_policy): if not os.path.exists(self.folder): os.makedirs(self.folder) if not os.path.isdir(self.folder): - raise Exception("%s not dir, but it must be a dir.") + raise FileNotFoundError(f"{self.folder} not dir, but it must be a dir.") self.fd = None self.test_files = [] # test file name candidates which are declared in inst.py, it could have diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py index 444d6e4c5..3410a7d53 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py @@ -104,7 +104,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): elif args["OP2"] == "f": inst_info = inst_info_vf else: - raise Exception("Unknown op2 type.") + raise ValueError("Unknown op2 type.") if op in ["ssra", "sra", "ssrl", "srl", "sll"]: if args["OP2"] == "v": diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py index c43a27ad0..2f4f10638 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py @@ -65,7 +65,7 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): constraint = vset_constraint vget = False else: - raise Exception("Unknown operation") + raise ValueError("Unknown operation") for args in prod( OP=op_list, From a7127a7371caac3a6c76094fe4efb48cb560340a Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 12 Jun 2024 15:35:24 +0800 Subject: [PATCH 091/151] pylintrc: fix runtime warnings Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/.pylintrc | 6 +++--- 1 file 
changed, 3 insertions(+), 3 deletions(-) diff --git a/rvv-intrinsic-generator/.pylintrc b/rvv-intrinsic-generator/.pylintrc index ae0b04986..721bd54fb 100644 --- a/rvv-intrinsic-generator/.pylintrc +++ b/rvv-intrinsic-generator/.pylintrc @@ -424,7 +424,7 @@ valid-metaclass-classmethod-first-arg=mcs # Exceptions that will emit a warning when being caught. Defaults to # "Exception" -overgeneral-exceptions=StandardError, - Exception, - BaseException +overgeneral-exceptions=builtins.StandardError, + builtins.Exception, + builtins.BaseException From 660753335b523eae72ba4548330a233f385957b0 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 12 Jun 2024 16:37:23 +0800 Subject: [PATCH 092/151] makefile: change doc file extension from .md to .adoc Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/Makefile | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/rvv-intrinsic-generator/Makefile b/rvv-intrinsic-generator/Makefile index c789e419a..3e80ab39e 100644 --- a/rvv-intrinsic-generator/Makefile +++ b/rvv-intrinsic-generator/Makefile @@ -166,8 +166,8 @@ gen-gnu-test: gnu-overloaded-test gnu-non-overloaded-test # Generate all-in-one document for non-overloaded intrinsics non-overloaded-doc: - $(call gen_doc,$(DIR),intrinsic_funcs.md,$@,$(EXTRA_FLAG)) - $(call gen_doc,$(POLICY_DIR),intrinsic_funcs.md,$@,--has-policy $(EXTRA_FLAG)) + $(call gen_doc,$(DIR),intrinsic_funcs.adoc,$@,$(EXTRA_FLAG)) + $(call gen_doc,$(POLICY_DIR),intrinsic_funcs.adoc,$@,--has-policy $(EXTRA_FLAG)) # Generate grouped documents for non-overloaded intrinsics non-overloaded-docs: @@ -176,8 +176,8 @@ non-overloaded-docs: # Generate all-in-one document for overloaded intrinsics overloaded-doc: - $(call gen_doc,$(DIR),overloaded_intrinsic_funcs.md,$@,$(EXTRA_FLAG)) - $(call gen_doc,$(POLICY_DIR),overloaded_intrinsic_funcs.md,$@,--has-policy $(EXTRA_FLAG)) + $(call gen_doc,$(DIR),overloaded_intrinsic_funcs.adoc,$@,$(EXTRA_FLAG)) + $(call gen_doc,$(POLICY_DIR),overloaded_intrinsic_funcs.adoc,$@,--has-policy $(EXTRA_FLAG)) # Generate grouped documents for overloaded intrinsics overloaded-docs: From 3d41a151d52136f022a3cc7fc1fe4abf9459801a Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Thu, 13 Jun 2024 01:07:02 -0700 Subject: [PATCH 093/151] [Auto-gen] Remove Markdown files under vector-crypto Signed-off-by: Jerry Zhang Jian --- .../vector-crypto/intrinsic_funcs.md | 938 ---------- .../overloaded_intrinsic_funcs.md | 938 ---------- .../policy_funcs/intrinsic_funcs.md | 1572 ----------------- .../overloaded_intrinsic_funcs.md | 1572 ----------------- 4 files changed, 5020 deletions(-) delete mode 100644 auto-generated/vector-crypto/intrinsic_funcs.md delete mode 100644 auto-generated/vector-crypto/overloaded_intrinsic_funcs.md delete mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md delete mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md diff --git a/auto-generated/vector-crypto/intrinsic_funcs.md b/auto-generated/vector-crypto/intrinsic_funcs.md deleted file mode 100644 index 5e8c4df54..000000000 --- a/auto-generated/vector-crypto/intrinsic_funcs.md +++ /dev/null @@ -1,938 +0,0 @@ - -=== Zvbb - Vector Bit-manipulation used in Cryptography - -[[]] -==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not - -[,c] ----- -vuint8mf8_t __riscv_vandn_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8 (vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4 
(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4 (vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2 (vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1 (vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2 (vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4 (vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8 (vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4 (vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2 (vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1 (vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2 (vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4 (vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8 (vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2 (vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1 (vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2 (vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4 (vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8 (vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_m (vbool64_t vm, 
vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_m (vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t 
__riscv_vandn_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); ----- - -[[]] -==== Vector Basic Bit-manipulation - Reverse - -[,c] ----- -vuint8mf8_t __riscv_vbrev_v_u8mf8 (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2 (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2 (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1 (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2 (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8 (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8 (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2 (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2 (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1 (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2 (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8 (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8 
(vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2 (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2 (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1 (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2 (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8 (vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_m (vbool4_t 
vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); ----- - -[[]] -==== Vector Basic Bit-manipulation - Count Bits - -[,c] ----- -vuint8mf8_t __riscv_vclz_v_u8mf8 (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_v_u8mf4 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_v_u8mf2 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_v_u8m1 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_v_u8m2 
(vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_v_u8m4 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_v_u8m8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_v_u16mf4 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_v_u16mf2 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_v_u16m1 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_v_u16m2 (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_v_u16m4 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_v_u16m8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_v_u32mf2 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_v_u32m1 (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_v_u32m2 (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_v_u32m4 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_v_u32m8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_v_u64m1 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_v_u64m2 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_v_u64m4 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_v_u64m8 (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_v_u8mf8 (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_v_u8mf4 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_v_u8mf2 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_v_u8m1 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_v_u8m2 (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_v_u8m4 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_v_u8m8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_v_u16mf4 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_v_u16mf2 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_v_u16m1 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_v_u16m2 (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_v_u16m4 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_v_u16m8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_v_u32mf2 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_v_u32m1 (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_v_u32m2 (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_v_u32m4 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_v_u32m8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_v_u64m1 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_v_u64m2 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_v_u64m4 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_v_u64m8 (vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vclz_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_v_u16m8_m (vbool2_t vm, 
vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); ----- - -[[]] -==== Vector Basic Bit-manipulation - Vector Population Count - -[,c] ----- -vuint8mf8_t __riscv_vcpop_v_u8mf8 (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_v_u8mf4 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_v_u8mf2 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_v_u8m1 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_v_u8m2 (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop_v_u8m4 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_v_u8m8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_v_u16mf4 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_v_u16mf2 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_v_u16m1 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_v_u16m2 (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_v_u16m4 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_v_u16m8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_v_u32mf2 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_v_u32m1 (vuint32m1_t vs2, size_t 
vl); -vuint32m2_t __riscv_vcpop_v_u32m2 (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_v_u32m4 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_v_u32m8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_v_u64m1 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_v_u64m2 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_v_u64m4 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop_v_u64m8 (vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vcpop_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl); ----- - -[[]] -==== Vector Bit-manipulation used in Cryptography - Rotate - -[,c] ----- -vuint8mf8_t __riscv_vrol_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4 
(vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t 
__riscv_vror_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); 
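// A minimal usage sketch (an illustration, not part of the deleted listing
// above), assuming a Zvbb-enabled toolchain and <riscv_vector.h>; the names
// src, dst, and n are hypothetical:
//
//   size_t vl = __riscv_vsetvl_e32m1(n);
//   vuint32m1_t v   = __riscv_vle32_v_u32m1(src, vl);
//   vuint32m1_t cnt = __riscv_vcpop_v_u32m1(v, vl);    // per-element popcount
//   vuint32m1_t rot = __riscv_vrol_vx_u32m1(v, 8, vl); // rotate each element left by 8
//   __riscv_vse32_v_u32m1(dst, rot, vl);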
-vuint16mf4_t __riscv_vrol_vv_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t 
__riscv_vror_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); ----- - -[[]] -==== Vector Basic Bit-manipulation used - 
Widening Shift - -[,c] ----- -vuint16mf4_t __riscv_vwsll_vv_u16mf4 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4 (vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2 (vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1 (vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2 (vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4 (vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8 (vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2 (vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1 (vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2 (vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4 (vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8 (vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1 (vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2 (vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4 (vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8 (vuint32m4_t vs2, size_t rs1, size_t vl); -// masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t 
__riscv_vwsll_vx_u16m8_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); ----- - -=== Zvbc - Vector Carryless Multiplication - -[[]] -==== Vector Carryless Multiplication - -[,c] ----- -vuint64m1_t __riscv_vclmul_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint64m1_t __riscv_vclmul_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, 
vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); ----- - -=== Zvkg - Vector GCM/GMAC - -[[]] -==== Vector GCM/GMAC - -[,c] ----- -vuint32mf2_t __riscv_vghsh_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vghsh_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vghsh_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vghsh_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vghsh_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32mf2_t __riscv_vgmul_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vgmul_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vgmul_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vgmul_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vgmul_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ----- - -=== Zvkned - NIST Suite: Vector AES Block Cipher - -[[]] -==== Vector AES Encryption - -[,c] ----- -vuint32mf2_t __riscv_vaesef_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t 
vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ----- - -[[]] -==== Vector AES Decryption - -[,c] ----- -vuint32mf2_t __riscv_vaesdf_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t 
__riscv_vaesdf_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ----- - -[[]] -==== Vector AES-128 Forward KeySchedule generation - -[,c] ----- -vuint32mf2_t __riscv_vaeskf1_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf1_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf1_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf1_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf1_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2_vi_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ----- - -[[]] -==== Vector AES round zero - -[,c] ----- -vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8 (vuint32m8_t vd, 
vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-----
-
-=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash
-
-[[]]
-==== Vector SHA-2 message schedule
-
-[,c]
-----
-vuint32mf2_t __riscv_vsha2ms_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vsha2ms_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vsha2ms_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vsha2ms_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vsha2ms_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint64m1_t __riscv_vsha2ms_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vsha2ms_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vsha2ms_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vsha2ms_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-----
-
-[[]]
-==== Vector SHA-2 two rounds of compression
-
-[,c]
-----
-vuint32mf2_t __riscv_vsha2ch_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vsha2ch_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vsha2ch_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vsha2ch_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vsha2ch_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint64m1_t __riscv_vsha2ch_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vsha2ch_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vsha2ch_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vsha2ch_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint32mf2_t __riscv_vsha2cl_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vsha2cl_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vsha2cl_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vsha2cl_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vsha2cl_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint64m1_t __riscv_vsha2cl_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vsha2cl_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
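// A hedged sketch (an illustration, not part of the deleted listing above) of
// how these compression intrinsics are typically chained for SHA-256,
// assuming Zvknha support, a vl covering one four-word element group, and
// hypothetical variables: abef and cdgh hold the split working state
// {a,b,e,f} / {c,d,g,h}, and kw holds message-schedule words already added to
// the round constants:
//
//   cdgh = __riscv_vsha2cl_vv_u32m1(cdgh, abef, kw, vl);  // two low rounds
//   abef = __riscv_vsha2ch_vv_u32m1(abef, cdgh, kw, vl);  // two high rounds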
-vuint64m4_t __riscv_vsha2cl_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vsha2cl_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-----
-
-=== Zvksed - ShangMi Suite: SM4 Block Cipher
-
-[[]]
-==== Vector SM4 KeyExpansion
-
-[,c]
-----
-vuint32mf2_t __riscv_vsm4k_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vsm4k_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vsm4k_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vsm4k_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vsm4k_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl);
-----
-
-[[]]
-==== Vector SM4 Rounds
-
-[,c]
-----
-vuint32mf2_t __riscv_vsm4r_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vsm4r_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vsm4r_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vsm4r_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vsm4r_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-----
-
-=== Zvksh - ShangMi Suite: SM3 Secure Hash
-
-[[]]
-==== Vector SM3 Message Expansion
-
-[,c]
-----
-vuint32mf2_t __riscv_vsm3me_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vsm3me_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vsm3me_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vsm3me_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vsm3me_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-----
-
-[[]]
-==== Vector SM3 Compression
-
-[,c]
-----
-vuint32mf2_t __riscv_vsm3c_vi_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vsm3c_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vsm3c_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vsm3c_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vsm3c_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
-----
diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md
deleted file mode 100644
index fe4429338..000000000
--- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.md
+++ /dev/null
@@ -1,938 +0,0 @@
-
-=== Zvbb - Vector Bit-manipulation used in Cryptography
-
-[[overloaded-]]
-==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not
-
-[,c]
-----
-vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, uint8_t rs1, size_t vl);
-vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, uint8_t rs1, size_t vl);
-vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, uint8_t rs1, size_t vl);
-vuint8m1_t __riscv_vandn (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vandn (vuint8m1_t vs2, uint8_t rs1, size_t vl);
-vuint8m2_t __riscv_vandn (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vandn (vuint8m2_t vs2, uint8_t rs1, size_t vl);
-vuint8m4_t __riscv_vandn (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vandn (vuint8m4_t vs2, uint8_t rs1, size_t vl);
-vuint8m8_t __riscv_vandn (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vandn (vuint8m8_t vs2, uint8_t rs1, size_t vl);
-vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, uint16_t rs1, size_t vl);
-vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, uint16_t rs1, size_t vl);
-vuint16m1_t __riscv_vandn (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vandn (vuint16m1_t vs2, uint16_t rs1, size_t vl);
-vuint16m2_t __riscv_vandn (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vandn (vuint16m2_t vs2, uint16_t rs1, size_t vl);
-vuint16m4_t __riscv_vandn (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vandn (vuint16m4_t vs2, uint16_t rs1, size_t vl);
-vuint16m8_t __riscv_vandn (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vandn (vuint16m8_t vs2, uint16_t rs1, size_t vl);
-vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, uint32_t rs1, size_t vl);
-vuint32m1_t __riscv_vandn (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vandn (vuint32m1_t vs2, uint32_t rs1, size_t vl);
-vuint32m2_t __riscv_vandn (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vandn (vuint32m2_t vs2, uint32_t rs1, size_t vl);
-vuint32m4_t __riscv_vandn (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vandn (vuint32m4_t vs2, uint32_t rs1, size_t vl);
-vuint32m8_t __riscv_vandn (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vandn (vuint32m8_t vs2, uint32_t rs1, size_t vl);
-vuint64m1_t __riscv_vandn (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vandn (vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vandn (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vandn (vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vandn (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vandn (vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vandn (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t 
__riscv_vandn (vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn (vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn (vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn (vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn (vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn (vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn (vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn (vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn (vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn (vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn (vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn (vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn (vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn (vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn (vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn (vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn (vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn (vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn (vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn (vbool16_t vm, 
vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); ----- - -[[overloaded-]] -==== Vector Basic Bit-manipulation - Reverse - -[,c] ----- -vuint8mf8_t __riscv_vbrev (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8 (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8 (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8 (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8 (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8 (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8 (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8 (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8 (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8 (vuint16m2_t vs2, size_t vl); -vuint16m4_t 
__riscv_vrev8 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8 (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8 (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8 (vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev (vbool8_t vm, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8 (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8 (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8 (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8 (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8 (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8 (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8 (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8 (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8 (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8 (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8 (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8 (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8 (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8 (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8 (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8 (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8 (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8 (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8 (vbool64_t vm, vuint64m1_t vs2, size_t vl); 
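// A minimal usage sketch for the byte-reverse intrinsic listed above, assuming
// <riscv_vector.h>, a Zvbb-enabled toolchain (e.g. -march=rv64gcv_zvbb), and the
// base RVV load/store/vsetvl intrinsics; the helper name bswap32_buf is
// illustrative. __riscv_vrev8 reverses the bytes within each element, which
// converts 32-bit lanes between little- and big-endian layouts.
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

static void bswap32_buf(uint32_t *dst, const uint32_t *src, size_t n) {
  while (n > 0) {
    size_t vl = __riscv_vsetvl_e32m4(n);            // strip-mine over n elements
    vuint32m4_t v = __riscv_vle32_v_u32m4(src, vl);
    v = __riscv_vrev8(v, vl);                       // swap bytes within each lane
    __riscv_vse32(dst, v, vl);
    src += vl; dst += vl; n -= vl;
  }
}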
-vuint64m2_t __riscv_vbrev8 (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8 (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8 (vbool8_t vm, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8 (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8 (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8 (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8 (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8 (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8 (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8 (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8 (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8 (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8 (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8 (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8 (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8 (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8 (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8 (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8 (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8 (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8 (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8 (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8 (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8 (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8 (vbool8_t vm, vuint64m8_t vs2, size_t vl); ----- - -[[overloaded-]] -==== Vector Basic Bit-manipulation - Count Bits - -[,c] ----- -vuint8mf8_t __riscv_vclz (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz 
(vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz (vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vclz (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz (vbool8_t vm, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz (vbool4_t vm, vuint32m8_t vs2, size_t 
vl); -vuint64m1_t __riscv_vctz (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz (vbool8_t vm, vuint64m8_t vs2, size_t vl); ----- - -[[overloaded-]] -==== Vector Basic Bit-manipulation - Vector Population Count - -[,c] ----- -vuint8mf8_t __riscv_vcpop (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop (vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vcpop (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop (vbool8_t vm, vuint64m8_t vs2, size_t vl); ----- - -[[overloaded-]] -==== Vector Bit-manipulation used in Cryptography - Rotate - -[,c] ----- -vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol 
(vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol (vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol (vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol (vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol (vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol (vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol (vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol (vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol (vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol (vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol (vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol (vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol (vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol (vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol (vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol (vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol (vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror (vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror (vuint8m2_t vs2, size_t rs1, size_t vl); 
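// A minimal sketch of the vector-scalar rotate overload above, assuming Zvbb
// and <riscv_vector.h>; rotl32_lanes is an illustrative helper. Lane-wise
// rotates like this are the core primitive of ARX ciphers such as ChaCha20,
// where each 32-bit word is rotated left by a fixed amount.
#include <riscv_vector.h>

static inline vuint32m1_t rotl32_lanes(vuint32m1_t v, size_t amount, size_t vl) {
  // Per lane: (v << amount) | (v >> (32 - amount))
  return __riscv_vrol(v, amount, vl);
}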
-vuint8m4_t __riscv_vror (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror (vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror (vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror (vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror (vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror (vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror (vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror (vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror (vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror (vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror (vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror (vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror (vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror (vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror (vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol (vbool1_t vm, vuint8m8_t vs2, 
size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); 
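// A sketch of the masked overloads above, assuming Zvbb and <riscv_vector.h>;
// ror_where is an illustrative name. The mask is the leading vbool operand:
// only active lanes are rotated, and masked-off lanes are agnostic in this
// non-policy form (the _tum/_tumu policy variants preserve them instead).
#include <riscv_vector.h>

static inline vuint32m2_t ror_where(vbool16_t vm, vuint32m2_t v, size_t vl) {
  return __riscv_vror(vm, v, (size_t)1, vl);  // rotate right by 1 where vm is set
}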
-vuint8m8_t __riscv_vror (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); ----- - -[[overloaded-]] -==== Vector Basic Bit-manipulation used - Widening Shift - -[,c] ----- -vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, 
vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, size_t rs1, size_t vl); -// masked functions -vuint16mf4_t __riscv_vwsll (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); 
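// A sketch of the widening shift above, assuming Zvbb and <riscv_vector.h>;
// scale_u8_to_u16 is an illustrative helper. vwsll zero-extends each element
// to 2*SEW before shifting, so no separate widening step is needed and the
// result type is one EMUL step larger than the input.
#include <riscv_vector.h>

static inline vuint16m2_t scale_u8_to_u16(vuint8m1_t bytes, size_t shift, size_t vl) {
  return __riscv_vwsll(bytes, shift, vl);  // per lane: (uint16_t)bytes[i] << shift
}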
-vuint64m2_t __riscv_vwsll (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); ----- - -=== Zvbc - Vector Carryless Multiplication - -[[overloaded-]] -==== Vector Carryless Multiplication - -[,c] ----- -vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint64m1_t __riscv_vclmul (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); ----- - -=== Zvkg - Vector GCM/GMAC - -[[overloaded-]] -==== Vector GCM/GMAC - -[,c] ----- -vuint32mf2_t __riscv_vghsh (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vghsh (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vghsh (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vghsh 
(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vghsh (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32mf2_t __riscv_vgmul (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vgmul (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vgmul (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vgmul (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vgmul (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ----- - -=== Zvkned - NIST Suite: Vector AES Block Cipher - -[[overloaded-]] -==== Vector AES Encryption - -[,c] ----- -vuint32mf2_t __riscv_vaesef_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vv (vuint32m8_t 
vd, vuint32m8_t vs2, size_t vl); ----- - -[[overloaded-]] -==== Vector AES Decryption - -[,c] ----- -vuint32mf2_t __riscv_vaesdf_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ----- - -[[overloaded-]] -==== Vector AES-128 Forward KeySchedule generation - -[,c] ----- -vuint32mf2_t __riscv_vaeskf1 (vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf1 (vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf1 (vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf1 (vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf1 (vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2 (vuint32mf2_t vd, 
vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ----- - -[[overloaded-]] -==== Vector AES round zero - -[,c] ----- -vuint32mf2_t __riscv_vaesz (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); ----- - -=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash - -[[overloaded-]] -==== Vector SHA-2 message schedule - -[,c] ----- -vuint32mf2_t __riscv_vsha2ms (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2ms (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2ms (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2ms (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2ms (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2ms (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2ms (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2ms (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2ms (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); ----- - -[[overloaded-]] -==== Vector SHA-2 two rounds of compression - -[,c] ----- -vuint32mf2_t __riscv_vsha2ch (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2ch (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2ch (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2ch (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2ch (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2ch (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2ch (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2ch (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2ch (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint32mf2_t __riscv_vsha2cl (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2cl (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); 
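// A sketch of how the two compression intrinsics above pair up for SHA-256,
// assuming Zvknha/Zvknhb and <riscv_vector.h>. The exact element grouping of
// the working state across the {a,b,e,f} and {c,d,g,h} register groups follows
// the Zvknh specification and is only summarized here; sha256_two_rounds is an
// illustrative helper, and msg_plus_k holds message words with the round
// constants already added.
#include <riscv_vector.h>

static void sha256_two_rounds(vuint32m1_t *abef, vuint32m1_t *cdgh,
                              vuint32m1_t msg_plus_k, size_t vl) {
  *cdgh = __riscv_vsha2cl(*cdgh, *abef, msg_plus_k, vl);  // low two rounds
  *abef = __riscv_vsha2ch(*abef, *cdgh, msg_plus_k, vl);  // high two rounds
}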
-vuint32m2_t __riscv_vsha2cl (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2cl (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2cl (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2cl (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2cl (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2cl (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2cl (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); ----- - -=== Zvksed - ShangMi Suite: SM4 Block Cipher - -[[overloaded-]] -==== Vector SM4 KeyExpansion - -[,c] ----- -vuint32mf2_t __riscv_vsm4k (vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm4k (vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm4k (vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm4k (vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm4k (vuint32m8_t vs2, size_t uimm, size_t vl); ----- - -[[overloaded-]] -==== Vector SM4 Rounds - -[,c] ----- -vuint32mf2_t __riscv_vsm4r_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vsm4r_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ----- - -=== Zvksh - ShangMi Suite: SM3 Secure Hash - -[[overloaded-]] -==== Vector SM3 Message Expansion - -[,c] ----- -vuint32mf2_t __riscv_vsm3me (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsm3me (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsm3me (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsm3me (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsm3me (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); ----- - -[[overloaded-]] -==== Vector SM3 Compression - -[,c] ----- -vuint32mf2_t __riscv_vsm3c (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm3c (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm3c (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm3c (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); 
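// A sketch of the SM3 compression step above, assuming Zvksh and
// <riscv_vector.h>; sm3_round_pair is an illustrative helper. Each vsm3c call
// performs two of the 64 SM3 compression rounds, with uimm selecting which
// consecutive round pair is applied (per the Zvksh specification).
#include <riscv_vector.h>

static inline vuint32m1_t sm3_round_pair(vuint32m1_t state, vuint32m1_t msg_words,
                                         size_t vl) {
  return __riscv_vsm3c(state, msg_words, 0 /* rounds 0-1 (uimm) */, vl);
}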
-vuint32m8_t __riscv_vsm3c (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ----- diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md deleted file mode 100644 index 444080442..000000000 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.md +++ /dev/null @@ -1,1572 +0,0 @@ - -=== Zvbb - Vector Bit-manipulation used in Cryptography - -[[policy-variant-]] -==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not - -[,c] ----- -vuint8mf8_t __riscv_vandn_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); 
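// A sketch of the _tu (tail-undisturbed) policy variants above, assuming Zvbb
// and <riscv_vector.h>; andn_keep_tail is an illustrative helper. The policy
// variants take the destination vd explicitly: lanes at indices >= vl keep
// vd's previous values rather than becoming agnostic, which is useful when
// accumulating results into one register across strip-mined loop iterations.
#include <riscv_vector.h>

static inline vuint32m1_t andn_keep_tail(vuint32m1_t vd, vuint32m1_t vs2,
                                         uint32_t rs1, size_t vl) {
  return __riscv_vandn_vx_u32m1_tu(vd, vs2, rs1, vl);  // lane i < vl: vs2[i] & ~rs1
}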
-vuint32m2_t __riscv_vandn_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, 
vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t 
__riscv_vandn_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, 
size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, 
size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); ----- - -[[policy-variant-]] -==== Vector Basic Bit-manipulation - Reverse - -[,c] ----- -vuint8mf8_t __riscv_vbrev_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tu (vuint16m8_t 
vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tu (vuint16m2_t vd, 
vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t 
__riscv_vbrev8_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); 
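// ---------------------------------------------------------------------------
// Editorial sketch (not part of the auto-generated listing): byte-swapping
// 32-bit lanes with the tail-undisturbed `_tu` vrev8 variant declared earlier
// in this listing, e.g. to convert big-endian words to host order. Assumes a
// Zvbb-enabled toolchain and `#include <riscv_vector.h>`; vrev8 reverses the
// byte order within each element.
static inline vuint32m1_t bswap32_u32m1_tu(vuint32m1_t vd,
                                           vuint32m1_t be_words, size_t vl) {
  // Bytes of each 32-bit element are reversed for the first vl elements;
  // elements at indices >= vl keep the value they had in vd.
  return __riscv_vrev8_v_u32m1_tu(vd, be_words, vl);
}
// ---------------------------------------------------------------------------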
-vuint64m1_t __riscv_vrev8_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tumu (vbool16_t vm, 
vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t 
vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t 
__riscv_vbrev8_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); ----- - -[[policy-variant-]] -==== Vector Basic Bit-manipulation - Count Bits - -[,c] ----- -vuint8mf8_t __riscv_vclz_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, 
size_t vl); -vuint16mf2_t __riscv_vclz_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vclz_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_v_u8m4_tum (vbool2_t vm, 
vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t 
vl); -vuint64m4_t __riscv_vctz_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vclz_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t 
vl); -vuint16m8_t __riscv_vctz_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vclz_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_v_u8m2_mu 
(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); ----- - -[[policy-variant-]] -==== Vector Basic Bit-manipulation - Vector Population Count - -[,c] ----- -vuint8mf8_t __riscv_vcpop_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, 
size_t vl); -vuint64m8_t __riscv_vcpop_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vcpop_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vcpop_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_v_u16m8_tumu (vbool2_t vm, vuint16m8_t 
vd, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vcpop_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vcpop_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vcpop_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vcpop_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vcpop_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vcpop_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vcpop_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vcpop_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vcpop_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
-// masked functions
-vuint8mf8_t __riscv_vcpop_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vcpop_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vcpop_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vcpop_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vcpop_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vcpop_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vcpop_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vcpop_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vcpop_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vcpop_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vcpop_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vcpop_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vcpop_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vcpop_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vcpop_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vcpop_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vcpop_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vcpop_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vcpop_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vcpop_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vcpop_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vcpop_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
----
-
-[[policy-variant-]]
-==== Vector Bit-manipulation used in Cryptography - Rotate
-
-[,c]
----
-vuint8mf8_t __riscv_vrol_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vrol_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vx_u8mf4_tu (vuint8mf4_t vd,
vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, 
size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, 
size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, 
vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t 
__riscv_vror_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tum (vbool32_t vm, 
vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, 
vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); 
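// ---------------------------------------------------------------------------
// Editor's note: the following usage sketch is illustrative only and is not
// part of the generated prototype listing in this patch. It assumes a
// toolchain with Zvbb/Zvkb support; the helper name and the exact -march
// string (e.g. rv64gcv_zvbb) are the editor's assumptions.
// ---------------------------------------------------------------------------
#include <riscv_vector.h>

// Rotate each *active* element of vs2 right by 8 bits. Under the `tumu`
// (tail-undisturbed, mask-undisturbed) policy used by the intrinsics in this
// block, tail elements and masked-off elements keep the values of vd.
static inline vuint32m1_t rotate_r8_masked(vbool32_t vm, vuint32m1_t vd,
                                           vuint32m1_t vs2, size_t vl) {
  return __riscv_vror_vx_u32m1_tumu(vm, vd, vs2, 8, vl);
}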
-vuint16mf4_t __riscv_vror_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t 
vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_mu (vbool8_t vm, 
vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_mu 
(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vror_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vror_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vror_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vror_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vror_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vror_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vror_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vror_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vror_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vror_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vror_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vror_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vror_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vror_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vror_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vror_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vror_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vror_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vror_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vror_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vror_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vror_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vror_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl);
----
-
-[[policy-variant-]]
-==== Vector Basic Bit-manipulation used - Widening Shift
-
-[,c]
----
-vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu (vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu (vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vv_u16m1_tu (vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vv_u16m2_tu (vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t vd, vuint8m1_t vs2, size_t rs1,
size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_tu (vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_tu (vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu (vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_tu (vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_tu (vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_tu (vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tu (vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tu (vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tu (vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tu (vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tu (vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -// masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t vm, 
vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -// masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t 
__riscv_vwsll_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -// masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, 
vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
----
-
-=== Zvbc - Vector Carryless Multiplication
-
-[[policy-variant-]]
-==== Vector Carryless Multiplication
-
-[,c]
----
-vuint64m1_t __riscv_vclmul_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t
__riscv_vclmulh_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint64m1_t __riscv_vclmul_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint64m1_t __riscv_vclmul_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t 
-vuint64m4_t __riscv_vclmulh_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-// masked functions
-vuint64m1_t __riscv_vclmul_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
----
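NOTE: A minimal usage sketch of the `_tu` carryless-multiply intrinsics listed above (the function name, the `poly` constant, and the strip-mining loop are illustrative, not part of the generated API; it assumes a toolchain providing `<riscv_vector.h>` with Zvbc enabled). `vclmul` yields the low and `vclmulh` the high 64 bits of each 128-bit carryless product, the building block of CRC- and GHASH-style reductions.

[,c]
----
#include <riscv_vector.h>

// Sketch: carrylessly multiply each 64-bit element by a fixed polynomial.
// The _tu (tail-undisturbed) variants leave destination elements at
// indices >= vl unchanged.
static void clmul_lo_hi(const uint64_t *src, uint64_t *lo, uint64_t *hi,
                        uint64_t poly, size_t n) {
  for (size_t vl; n > 0; n -= vl, src += vl, lo += vl, hi += vl) {
    vl = __riscv_vsetvl_e64m1(n);
    vuint64m1_t vs2 = __riscv_vle64_v_u64m1(src, vl);
    vuint64m1_t vlo = __riscv_vle64_v_u64m1(lo, vl);
    vuint64m1_t vhi = __riscv_vle64_v_u64m1(hi, vl);
    vlo = __riscv_vclmul_vx_u64m1_tu(vlo, vs2, poly, vl);   // low halves
    vhi = __riscv_vclmulh_vx_u64m1_tu(vhi, vs2, poly, vl);  // high halves
    __riscv_vse64_v_u64m1(lo, vlo, vl);
    __riscv_vse64_v_u64m1(hi, vhi, vl);
  }
}
----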
-
-=== Zvkg - Vector GCM/GMAC
-
-[[policy-variant-]]
-==== Vector GCM/GMAC
-
-[,c]
----
-vuint32mf2_t __riscv_vghsh_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vghsh_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vghsh_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vghsh_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vghsh_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32mf2_t __riscv_vgmul_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vgmul_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vgmul_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vgmul_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vgmul_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
----
-
-=== Zvkned - NIST Suite: Vector AES Block Cipher
-
-[[policy-variant-]]
-==== Vector AES Encryption
-
-[,c]
----
-vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesef_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesef_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesef_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesef_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesef_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesem_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesem_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesem_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesem_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
----
-
-[[policy-variant-]]
-==== Vector AES Decryption
-
-[,c]
----
-vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdf_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdf_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdf_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdm_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdm_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdm_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
----
-
-[[policy-variant-]]
-==== Vector AES-128 Forward KeySchedule generation
-
-[,c]
----
-vuint32mf2_t __riscv_vaeskf1_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
-vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
----
-
-[[policy-variant-]]
-==== Vector AES round zero
-
-[,c]
----
-vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
----
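NOTE: A sketch of how the Zvkned policy intrinsics above chain into AES-128 block encryption (the function and parameter names are illustrative; `rk` is assumed to hold the 44 expanded round-key words, and `n_words` a multiple of 4, since each 128-bit block is an element group of four 32-bit elements; only a single strip is shown):

[,c]
----
#include <riscv_vector.h>

// Sketch: AES-128 ECB encryption. The .vs forms broadcast round-key
// element group 0 of vs2 to every block held in vd.
static void aes128_encrypt_blocks(uint32_t *state, const uint32_t *rk,
                                  size_t n_words) {
  size_t vl = __riscv_vsetvl_e32m1(n_words);
  vuint32m1_t blk = __riscv_vle32_v_u32m1(state, vl);
  vuint32m1_t key = __riscv_vle32_v_u32m1(rk, 4);
  blk = __riscv_vaesz_vs_u32m1_u32m1_tu(blk, key, vl);     // round 0 key add
  for (int r = 1; r <= 9; ++r) {                           // middle rounds
    key = __riscv_vle32_v_u32m1(rk + 4 * r, 4);
    blk = __riscv_vaesem_vs_u32m1_u32m1_tu(blk, key, vl);
  }
  key = __riscv_vle32_v_u32m1(rk + 40, 4);
  blk = __riscv_vaesef_vs_u32m1_u32m1_tu(blk, key, vl);    // final round
  __riscv_vse32_v_u32m1(state, blk, vl);
}
----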
-
-=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash
-
-[[policy-variant-]]
-==== Vector SHA-2 message schedule
-
-[,c]
----
-vuint32mf2_t __riscv_vsha2ms_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vsha2ms_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vsha2ms_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vsha2ms_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vsha2ms_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint64m1_t __riscv_vsha2ms_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vsha2ms_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vsha2ms_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vsha2ms_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
----
-
-[[policy-variant-]]
-==== Vector SHA-2 two rounds of compression
-
-[,c]
----
-vuint32mf2_t __riscv_vsha2ch_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vsha2ch_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vsha2ch_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vsha2ch_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vsha2ch_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint64m1_t __riscv_vsha2ch_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vsha2ch_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vsha2ch_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vsha2ch_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint32mf2_t __riscv_vsha2cl_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vsha2cl_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vsha2cl_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vsha2cl_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vsha2cl_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint64m1_t __riscv_vsha2cl_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vsha2cl_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vsha2cl_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vsha2cl_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
----
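NOTE: The two compression intrinsics are meant to be used in pairs. A sketch of one SHA-256 round quad follows; the packing of the working state into `{a,b,e,f}`/`{c,d,g,h}` element groups and the surrounding message-schedule updates are only summarized here (see the Zvknh specification for the exact layout), and the helper name is illustrative:

[,c]
----
#include <riscv_vector.h>

// Sketch: two "low" rounds (vsha2cl) then two "high" rounds (vsha2ch) of
// the SHA-256 compression function. kw holds the four current
// message-schedule words already added to their round constants.
// Assumes VLEN >= 128 so that vl = 4 fits in one m1 register.
void sha256_quad_step(uint32_t state[8], const uint32_t kw4[4]) {
  size_t vl = 4;
  vuint32m1_t ab_ef = __riscv_vle32_v_u32m1(state, vl);      // {a,b,e,f}
  vuint32m1_t cd_gh = __riscv_vle32_v_u32m1(state + 4, vl);  // {c,d,g,h}
  vuint32m1_t kw = __riscv_vle32_v_u32m1(kw4, vl);
  cd_gh = __riscv_vsha2cl_vv_u32m1_tu(cd_gh, ab_ef, kw, vl); // rounds 0-1
  ab_ef = __riscv_vsha2ch_vv_u32m1_tu(ab_ef, cd_gh, kw, vl); // rounds 2-3
  __riscv_vse32_v_u32m1(state, ab_ef, vl);
  __riscv_vse32_v_u32m1(state + 4, cd_gh, vl);
}
----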
-
-=== Zvksed - ShangMi Suite: SM4 Block Cipher
-
-[[policy-variant-]]
-==== Vector SM4 KeyExpansion
-
-[,c]
----
-vuint32mf2_t __riscv_vsm4k_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vsm4k_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vsm4k_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vsm4k_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
----
-
-[[policy-variant-]]
-==== Vector SM4 Rounds
-
-[,c]
----
-vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vsm4r_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vsm4r_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vsm4r_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vsm4r_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vsm4r_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
----
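NOTE: Each `vsm4r` call performs four SM4 rounds, so eight calls cover the full 32 rounds. A usage sketch (names are illustrative; `rk` is assumed to hold the 32 expanded round-key words, for example produced with `vsm4k`, and `n_words` a multiple of 4):

[,c]
----
#include <riscv_vector.h>

// Sketch: SM4 encryption rounds, with each round-key group broadcast
// from element group 0 of vs2 (the .vs form). Single strip shown.
static void sm4_encrypt_rounds(uint32_t *state, const uint32_t *rk,
                               size_t n_words) {
  size_t vl = __riscv_vsetvl_e32m1(n_words);
  vuint32m1_t blk = __riscv_vle32_v_u32m1(state, vl);
  for (int g = 0; g < 8; ++g) {
    vuint32m1_t key = __riscv_vle32_v_u32m1(rk + 4 * g, 4);
    blk = __riscv_vsm4r_vs_u32m1_u32m1_tu(blk, key, vl);   // four rounds
  }
  __riscv_vse32_v_u32m1(state, blk, vl);
}
----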
-
-=== Zvksh - ShangMi Suite: SM3 Secure Hash
-
-[[policy-variant-]]
-==== Vector SM3 Message Expansion
-
-[,c]
----
-vuint32mf2_t __riscv_vsm3me_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vsm3me_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vsm3me_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vsm3me_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vsm3me_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
----
-
-[[policy-variant-]]
-==== Vector SM3 Compression
-
-[,c]
----
-vuint32mf2_t __riscv_vsm3c_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vsm3c_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vsm3c_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vsm3c_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vsm3c_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
----
diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md
deleted file mode 100644
index f2f92ae9f..000000000
--- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.md
+++ /dev/null
@@ -1,1572 +0,0 @@
-
-=== Zvbb - Vector Bit-manipulation used in Cryptography
-
-[[policy-variant-overloaded]]
-==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not
-
-[,c]
----
-vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl);
-vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl);
-vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl);
-vuint8m1_t __riscv_vandn_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vandn_tu (vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl);
-vuint8m2_t __riscv_vandn_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vandn_tu (vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl);
-vuint8m4_t __riscv_vandn_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vandn_tu (vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl);
-vuint8m8_t __riscv_vandn_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vandn_tu (vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl);
-vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl);
-vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl);
-vuint16m1_t __riscv_vandn_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vandn_tu (vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl);
-vuint16m2_t __riscv_vandn_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vandn_tu (vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl);
-vuint16m4_t __riscv_vandn_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vandn_tu (vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl);
-vuint16m8_t __riscv_vandn_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vandn_tu (vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl);
-vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl);
-vuint32m1_t __riscv_vandn_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vandn_tu (vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl);
-vuint32m2_t __riscv_vandn_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vandn_tu (vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl);
-vuint32m4_t __riscv_vandn_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vandn_tu (vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl);
-vuint32m8_t __riscv_vandn_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vandn_tu (vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl);
-vuint64m1_t __riscv_vandn_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vandn_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vandn_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vandn_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vandn_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vandn_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vandn_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vandn_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-// masked functions
-vuint8mf8_t __riscv_vandn_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vandn_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl);
-vuint8mf4_t __riscv_vandn_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vandn_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl);
-vuint8mf2_t __riscv_vandn_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vandn_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl);
-vuint8m1_t __riscv_vandn_tum (vbool8_t vm, vuint8m1_t
vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t 
__riscv_vandn_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_tumu 
(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vandn_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t 
vs1, size_t vl); -vuint16m1_t __riscv_vandn_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); ----- - -[[policy-variant-overloaded]] -==== Vector Basic Bit-manipulation - Reverse - -[,c] ----- -vuint8mf8_t __riscv_vbrev_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tu (vuint16m2_t vd, 
vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t 
__riscv_vrev8_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tum (vbool16_t vm, 
vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tumu 
(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t 
vl); -vuint64m8_t __riscv_vbrev8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vbrev_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t 
vl); -vuint32m4_t __riscv_vbrev_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_mu (vbool4_t vm, 
vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vrev8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vrev8_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vrev8_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vrev8_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vrev8_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vrev8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vrev8_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vrev8_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vrev8_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vrev8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
-----
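NOTE: As a brief illustration of the masked policy variants listed above (a minimal sketch, assuming the `Zvbb` extension is available and using the overloaded API), a byte-reversal that leaves inactive elements of `vd` undisturbed could be written as:

[,c]
----
#include <riscv_vector.h>

// Reverse the byte order of the active elements only; elements whose mask
// bit is clear keep the value already held in vd (_mu policy).
vuint32m1_t swap_active_bytes (vbool32_t vm, vuint32m1_t vd,
                               vuint32m1_t vs2, size_t vl) {
  return __riscv_vrev8_mu (vm, vd, vs2, vl);
}
----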
-
-[[policy-variant-overloaded]]
-==== Vector Basic Bit-manipulation - Count Bits
-
-[,c]
-----
-vuint8mf8_t __riscv_vclz_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vclz_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vclz_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vclz_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vclz_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vclz_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vclz_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vclz_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vclz_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vclz_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vclz_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vclz_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vclz_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vclz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vclz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vclz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vclz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vclz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vclz_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vclz_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vclz_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vclz_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vctz_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vctz_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vctz_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vctz_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vctz_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vctz_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vctz_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vctz_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vctz_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vctz_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vctz_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vctz_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vctz_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vctz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vctz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vctz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vctz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vctz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vctz_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vctz_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vctz_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vctz_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
-// masked functions
-vuint8mf8_t __riscv_vclz_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vclz_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vclz_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vclz_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vclz_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vclz_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vclz_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vclz_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vclz_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vclz_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vclz_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vclz_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vclz_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vclz_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vclz_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vclz_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vclz_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vclz_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vclz_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vclz_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vclz_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vclz_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vctz_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vctz_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vctz_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vctz_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vctz_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vctz_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vctz_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vctz_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vctz_tum (vbool32_t vm, vuint16mf2_t vd,
vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vclz_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, 
size_t vl); -vuint8m2_t __riscv_vctz_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -// masked functions -vuint8mf8_t __riscv_vclz_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_mu 
(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vctz_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vctz_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vctz_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vctz_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vctz_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vctz_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vctz_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vctz_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vctz_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vctz_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vctz_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vctz_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vctz_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vctz_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vctz_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vctz_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vctz_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vctz_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vctz_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vctz_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vctz_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vctz_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
-----
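NOTE: For instance (a minimal sketch under the same assumptions as above), counting leading zeros only in the active elements while keeping both the tail elements and the masked-off elements from `vd` might be written as:

[,c]
----
#include <riscv_vector.h>

// Count leading zeros per element; tail elements and masked-off elements
// keep the value already held in vd (tail-undisturbed, mask-undisturbed).
vuint16m2_t clz_active (vbool8_t vm, vuint16m2_t vd,
                        vuint16m2_t vs2, size_t vl) {
  return __riscv_vclz_tum (vm, vd, vs2, vl);
}
----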
-
-[[policy-variant-overloaded]]
-==== Vector Basic Bit-manipulation - Vector Population Count
-
-[,c]
-----
-vuint8mf8_t __riscv_vcpop_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vcpop_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vcpop_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vcpop_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vcpop_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vcpop_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vcpop_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vcpop_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vcpop_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vcpop_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vcpop_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vcpop_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vcpop_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vcpop_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vcpop_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vcpop_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vcpop_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vcpop_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vcpop_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vcpop_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vcpop_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vcpop_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
-// masked functions
-vuint8mf8_t __riscv_vcpop_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vcpop_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vcpop_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vcpop_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vcpop_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vcpop_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vcpop_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vcpop_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vcpop_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vcpop_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vcpop_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vcpop_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vcpop_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vcpop_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vcpop_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vcpop_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vcpop_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vcpop_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vcpop_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vcpop_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vcpop_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vcpop_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
-// masked functions
-vuint8mf8_t __riscv_vcpop_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vcpop_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vcpop_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vcpop_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vcpop_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vcpop_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vcpop_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vcpop_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vcpop_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vcpop_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vcpop_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vcpop_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vcpop_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vcpop_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vcpop_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vcpop_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vcpop_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vcpop_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vcpop_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vcpop_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vcpop_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vcpop_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
-// masked functions
-vuint8mf8_t __riscv_vcpop_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vcpop_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vcpop_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vcpop_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vcpop_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vcpop_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vcpop_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vcpop_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vcpop_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vcpop_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vcpop_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vcpop_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vcpop_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vcpop_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vcpop_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vcpop_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vcpop_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vcpop_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vcpop_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vcpop_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vcpop_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vcpop_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl);
-----
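NOTE: A minimal sketch of the tail-undisturbed variant (same assumptions as the earlier examples):

[,c]
----
#include <riscv_vector.h>

// Per-element population count; tail elements keep the value already held
// in vd (tail-undisturbed policy).
vuint8m1_t popcount_elements (vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
  return __riscv_vcpop_tu (vd, vs2, vl);
}
----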
-
-[[policy-variant-overloaded]]
-==== Vector Bit-manipulation used in Cryptography - Rotate
-
-[,c]
-----
-vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vrol_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vrol_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vrol_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vrol_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vrol_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vrol_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vrol_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vrol_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vrol_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vrol_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vrol_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vrol_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vrol_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vrol_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vrol_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vrol_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vrol_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vrol_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vrol_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vrol_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vrol_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vrol_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vrol_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vrol_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vrol_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vrol_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vrol_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vrol_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vrol_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vrol_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vrol_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vrol_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl);
-vuint8mf8_t __riscv_vror_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vror_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vror_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vror_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t
rs1, size_t vl); -vuint8mf2_t __riscv_vror_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tu (vuint64m8_t vd, 
vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_tum (vbool8_t vm, vuint32m4_t vd, 
vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, 
size_t vl); -vuint16m8_t __riscv_vror_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, 
size_t vl); -vuint16mf4_t __riscv_vrol_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_tumu (vbool32_t vm, 
vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tumu 
(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -// masked functions -vuint8mf8_t __riscv_vrol_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_mu (vbool64_t vm, vuint32mf2_t 
vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t 
__riscv_vror_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vror_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vror_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vror_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vror_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vror_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vror_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vror_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vror_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vror_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vror_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vror_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vror_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vror_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vror_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vror_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vror_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vror_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vror_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vror_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vror_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vror_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vror_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vror_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vror_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vror_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl);
-----
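NOTE: A minimal sketch of a masked rotate by a common scalar amount (same assumptions as the earlier examples); the overloaded API also accepts a per-element `vs1` vector of rotate amounts in place of `rs1`:

[,c]
----
#include <riscv_vector.h>

// Rotate each active element right by rs1 bit positions; masked-off
// elements keep the value already held in vd (mask-undisturbed policy).
vuint32m1_t ror_active (vbool32_t vm, vuint32m1_t vd,
                        vuint32m1_t vs2, size_t rs1, size_t vl) {
  return __riscv_vror_mu (vm, vd, vs2, rs1, vl);
}
----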
(vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tu (vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tu (vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tu (vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tu (vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tu (vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tu (vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tu (vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_tu (vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tu (vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tu (vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tu (vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tu (vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tu (vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tu (vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tu (vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tu (vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tu (vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tu (vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tu (vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -// masked functions -vuint16mf4_t __riscv_vwsll_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, 
vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -// masked functions -vuint16mf4_t __riscv_vwsll_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); 
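-// --- Illustrative usage (editorial sketch, not part of the generated listing).
-// Assuming <riscv_vector.h> on a Zvbb-enabled target, the overloaded _tumu
-// variant widens each u16 element into a u32 element while shifting left;
-// tail elements and masked-off elements keep the value of `vd`.
-static inline vuint32m2_t example_vwsll_tumu(vbool16_t vm, vuint32m2_t vd,
-                                             vuint16m1_t vs2, size_t vl) {
-  // Matches the scalar-shift overload listed above.
-  return __riscv_vwsll_tumu (vm, vd, vs2, 3, vl); // active lanes: vs2[i] << 3
-}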
-vuint32m4_t __riscv_vwsll_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -// masked functions -vuint16mf4_t __riscv_vwsll_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_mu (vbool4_t vm, vuint32m8_t vd, 
vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); ----- - -=== Zvbc - Vector Carryless Multiplication - -[[policy-variant-overloaded]] -==== Vector Carryless Multiplication - -[,c] ----- -vuint64m1_t __riscv_vclmul_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint64m1_t __riscv_vclmul_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tum (vbool64_t vm, vuint64m1_t 
vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint64m1_t __riscv_vclmul_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -// masked functions -vuint64m1_t __riscv_vclmul_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); 
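-// --- Illustrative usage (editorial sketch, not part of the generated listing).
-// Assuming <riscv_vector.h> on a Zvbc-enabled target, pairing vclmul with
-// vclmulh yields the low and high 64-bit halves of the 128-bit carryless
-// product, the building block of GHASH- and CRC-style reductions.
-static inline void example_vclmul_pair_mu(vbool64_t vm, vuint64m1_t vd,
-                                          vuint64m1_t a, uint64_t b, size_t vl,
-                                          vuint64m1_t *lo, vuint64m1_t *hi) {
-  *lo = __riscv_vclmul_mu (vm, vd, a, b, vl);  // low half of clmul(a[i], b)
-  *hi = __riscv_vclmulh_mu (vm, vd, a, b, vl); // high half of clmul(a[i], b)
-}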
-vuint64m2_t __riscv_vclmulh_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); ----- - -=== Zvkg - Vector GCM/GMAC - -[[policy-variant-overloaded]] -==== Vector GCM/GMAC - -[,c] ----- -vuint32mf2_t __riscv_vghsh_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vghsh_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vghsh_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vghsh_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vghsh_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32mf2_t __riscv_vgmul_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vgmul_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vgmul_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vgmul_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vgmul_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ----- - -=== Zvkned - NIST Suite: Vector AES Block Cipher - -[[policy-variant-overloaded]] -==== Vector AES Encryption - -[,c] ----- -vuint32mf2_t __riscv_vaesef_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_tu 
(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ----- - -[[policy-variant-overloaded]] -==== Vector AES Decryption - -[,c] ----- -vuint32mf2_t __riscv_vaesdf_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_tu 
(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ----- - -[[policy-variant-overloaded]] -==== Vector AES-128 Forward KeySchedule generation - -[,c] ----- -vuint32mf2_t __riscv_vaeskf1_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf1_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf1_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf1_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ----- - -[[policy-variant-overloaded]] -==== Vector AES round zero - -[,c] ----- -vuint32mf2_t __riscv_vaesz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); ----- - -=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash - -[[policy-variant-overloaded]] -==== Vector SHA-2 message schedule - -[,c] ----- -vuint32mf2_t __riscv_vsha2ms_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2ms_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2ms_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2ms_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); 
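-// --- Illustrative usage (editorial sketch, not part of the generated listing).
-// Assuming <riscv_vector.h> and a Zvknha/Zvknhb target with VLEN >= 128, one
-// vsha2ms step derives the next four SHA-256 message-schedule words; per our
-// reading of the vector-crypto specification, vd holds {W[3:0]}, vs2 holds
-// {W[11],W[10],W[9],W[4]}, and vs1 holds {W[15:12]}.
-static inline vuint32m1_t example_vsha2ms(vuint32m1_t w3_0, vuint32m1_t w_mid,
-                                          vuint32m1_t w15_12, size_t vl) {
-  return __riscv_vsha2ms_tu (w3_0, w_mid, w15_12, vl); // returns {W[19:16]}
-}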
-vuint32m8_t __riscv_vsha2ms_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2ms_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2ms_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2ms_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2ms_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); ----- - -[[policy-variant-overloaded]] -==== Vector SHA-2 two rounds of compression - -[,c] ----- -vuint32mf2_t __riscv_vsha2ch_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2ch_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2ch_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2ch_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2ch_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2ch_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2ch_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2ch_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2ch_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint32mf2_t __riscv_vsha2cl_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2cl_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2cl_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2cl_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2cl_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2cl_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2cl_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2cl_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2cl_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); ----- - -=== Zvksed - ShangMi Suite: SM4 Block Cipher - -[[policy-variant-overloaded]] -==== Vector SM4 KeyExpansion - -[,c] ----- -vuint32mf2_t __riscv_vsm4k_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm4k_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm4k_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm4k_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ----- - -[[policy-variant-overloaded]] -==== Vector SM4 Rounds - -[,c] ----- -vuint32mf2_t __riscv_vsm4r_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vsm4r_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_tu 
(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ----- - -=== Zvksh - ShangMi Suite: SM3 Secure Hash - -[[policy-variant-overloaded]] -==== Vector SM3 Message Expansion - -[,c] ----- -vuint32mf2_t __riscv_vsm3me_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsm3me_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsm3me_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsm3me_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsm3me_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); ----- - -[[policy-variant-overloaded]] -==== Vector SM3 Compression - -[,c] ----- -vuint32mf2_t __riscv_vsm3c_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm3c_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm3c_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm3c_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm3c_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); ----- From b8d797cf7a537c27ff091071cb68f94d08542b67 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Thu, 13 Jun 2024 19:55:37 -0700 Subject: [PATCH 094/151] makefile: add vector-crypto build targets Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/Makefile | 90 ++++++++++++++++++++++++++++---- 1 file changed, 80 insertions(+), 10 deletions(-) diff --git a/rvv-intrinsic-generator/Makefile b/rvv-intrinsic-generator/Makefile index 3e80ab39e..a8701114f 100644 --- a/rvv-intrinsic-generator/Makefile +++ b/rvv-intrinsic-generator/Makefile @@ -68,6 +68,10 @@ POLICY_DIR := $(DIR)/policy_funcs BF16_DIR := $(DIR)/bfloat16 # Output directory for bfloat16 policy intrinsics BF16_POLICY_DIR := $(BF16_DIR)/policy_funcs +# Output directory for vector-crypto non-policy intrinsics +VECTOR_CRYPTO_DIR := $(DIR)/vector-crypto +# Output directory for vector-crypto policy intrinsics +VECTOR_CRYPTO_POLICY_DIR := $(VECTOR_CRYPTO_DIR)/policy_funcs # Directory that stores the v0.10 unit tests LEGACY_API_TESTS_DIR := $(abspath ../legacy-api-unit-tests) # Derived variable to trigger option --vendor-inst @@ -148,20 +152,24 @@ endef # If VENDOR_GENERATOR_SCRIPT is defined, also trigger it in all. 
# NOTE: A possible enhancement to this is to allow multiple targets to be added here ifdef VENDOR_GENERATOR_SCRIPT -all: gen-document gen-test gen-compatible-header bf16-all vendor-generator +all: gen-document gen-test gen-compatible-header bf16-all vector-crypto-all vendor-generator else -all: gen-document gen-test gen-compatible-header bf16-all +all: gen-document gen-test gen-compatible-header bf16-all vector-crypto-all endif bf16-all: gen-bf16-document gen-bf16-test +vector-crypto-all: gen-vector-crypto-document gen-vector-crypto-test gen-document: non-overloaded-doc non-overloaded-docs overloaded-doc overloaded-docs gen-bf16-document: bf16-non-overloaded-doc bf16-non-overloaded-docs bf16-overloaded-doc bf16-overloaded-docs +gen-vector-crypto-document: vector-crypto-non-overloaded-doc vector-crypto-non-overloaded-docs vector-crypto-overloaded-doc vector-crypto-overloaded-docs gen-test: non-overloaded-test overloaded-test gen-llvm-test gen-gnu-test gen-bf16-test: bf16-non-overloaded-test bf16-overloaded-test gen-bf16-llvm-test +gen-vector-crypto-test: vector-crypto-non-overloaded-test vector-crypto-overloaded-test gen-vector-crypto-llvm-test gen-compatible-header: non-policy-compatible-header policy-compatible-header non-policy-overloaded-compatible-header policy-overloaded-compatible-header gen-llvm-test: llvm-non-overloaded-test llvm-overloaded-test gen-bf16-llvm-test: bf16-llvm-non-overloaded-test bf16-llvm-overloaded-test +gen-vector-crypto-llvm-test: vector-crypto-llvm-non-overloaded-test vector-crypto-llvm-overloaded-test gen-gnu-test: gnu-overloaded-test gnu-non-overloaded-test # Generate all-in-one document for non-overloaded intrinsics @@ -280,6 +288,62 @@ bf16-llvm-overloaded-test: clang-format -i $(BF16_DIR)/llvm-overloaded-tests/* clang-format -i $(BF16_POLICY_DIR)/llvm-overloaded-tests/* +# Vector crypto documents +vector-crypto-non-overloaded-doc: + $(call gen_doc,$(VECTOR_CRYPTO_DIR),intrinsic_funcs.adoc,non-overloaded-doc,--gen-vector-crypto $(EXTRA_FLAG)) + $(call gen_doc,$(VECTOR_CRYPTO_POLICY_DIR),intrinsic_funcs.adoc,non-overloaded-doc,--gen-vector-crypto --has-policy $(EXTRA_FLAG)) + $(call clang_format_adoc, --file, $(VECTOR_CRYPTO_DIR)/intrinsic_funcs.adoc) + $(call clang_format_adoc, --file, $(VECTOR_CRYPTO_POLICY_DIR)/intrinsic_funcs.adoc) + +vector-crypto-non-overloaded-docs: + $(call gen_doc,$(VECTOR_CRYPTO_DIR),intrinsic_funcs,non-overloaded-docs,--gen-vector-crypto $(EXTRA_FLAG)) + $(call gen_doc,$(VECTOR_CRYPTO_POLICY_DIR),intrinsic_funcs,non-overloaded-docs,--gen-vector-crypto --has-policy $(EXTRA_FLAG)) + $(call clang_format_adoc, --folder, $(VECTOR_CRYPTO_DIR)/intrinsic_funcs) + $(call clang_format_adoc, --folder, $(VECTOR_CRYPTO_POLICY_DIR)/intrinsic_funcs) + +vector-crypto-overloaded-doc: + $(call gen_doc,$(VECTOR_CRYPTO_DIR),overloaded_intrinsic_funcs.adoc,overloaded-doc,--gen-vector-crypto $(EXTRA_FLAG)) + $(call gen_doc,$(VECTOR_CRYPTO_POLICY_DIR),overloaded_intrinsic_funcs.adoc,overloaded-doc,--gen-vector-crypto --has-policy $(EXTRA_FLAG)) + $(call clang_format_adoc, --file, $(VECTOR_CRYPTO_DIR)/overloaded_intrinsic_funcs.adoc) + $(call clang_format_adoc, --file, $(VECTOR_CRYPTO_POLICY_DIR)/overloaded_intrinsic_funcs.adoc) + +vector-crypto-overloaded-docs: + $(call gen_doc,$(VECTOR_CRYPTO_DIR),overloaded_intrinsic_funcs,overloaded-docs,--gen-vector-crypto $(EXTRA_FLAG)) + $(call gen_doc,$(VECTOR_CRYPTO_POLICY_DIR),overloaded_intrinsic_funcs,overloaded-docs,--gen-vector-crypto --has-policy $(EXTRA_FLAG)) + $(call clang_format_adoc, --folder,
$(VECTOR_CRYPTO_DIR)/overloaded_intrinsic_funcs) + $(call clang_format_adoc, --folder, $(VECTOR_CRYPTO_POLICY_DIR)/overloaded_intrinsic_funcs) + +# Vector-crypto tests +vector-crypto-non-overloaded-test: + $(call gen_tests,$(VECTOR_CRYPTO_DIR)/api-testing,non-overloaded-test,--gen-vector-crypto $(EXTRA_FLAG)) + $(call gen_tests,$(VECTOR_CRYPTO_POLICY_DIR)/api-testing,non-overloaded-test,--gen-vector-crypto --has-policy $(EXTRA_FLAG)) + clang-format -i $(VECTOR_CRYPTO_DIR)/api-testing/* + clang-format -i $(VECTOR_CRYPTO_POLICY_DIR)/api-testing/* + +vector-crypto-overloaded-test: + $(call gen_tests,$(VECTOR_CRYPTO_DIR)/overloaded-api-testing,overloaded-test,--gen-vector-crypto $(EXTRA_FLAG)) + $(call gen_tests,$(VECTOR_CRYPTO_POLICY_DIR)/overloaded-api-testing,overloaded-test,--gen-vector-crypto --has-policy $(EXTRA_FLAG)) + clang-format -i $(VECTOR_CRYPTO_DIR)/overloaded-api-testing/* + clang-format -i $(VECTOR_CRYPTO_POLICY_DIR)/overloaded-api-testing/* + +vector-crypto-llvm-non-overloaded-test: + $(call gen_tests,$(VECTOR_CRYPTO_DIR)/llvm-api-tests,non-overloaded-test,--toolchain-type llvm --gen-vector-crypto $(EXTRA_FLAG)) + $(call gen_tests,$(VECTOR_CRYPTO_POLICY_DIR)/llvm-api-tests,non-overloaded-test,--toolchain-type llvm --gen-vector-crypto --has-policy $(EXTRA_FLAG)) + $(call replace_float, $(VECTOR_CRYPTO_DIR)/llvm-api-tests) + $(call replace_float, $(VECTOR_CRYPTO_POLICY_DIR)/llvm-api-tests) + clang-format -i $(VECTOR_CRYPTO_DIR)/llvm-api-tests/* + clang-format -i $(VECTOR_CRYPTO_POLICY_DIR)/llvm-api-tests/* + +vector-crypto-llvm-overloaded-test: + $(call gen_tests,$(VECTOR_CRYPTO_DIR)/llvm-overloaded-tests,overloaded-test,--toolchain-type llvm --gen-vector-crypto $(EXTRA_FLAG)) + $(call gen_tests,$(VECTOR_CRYPTO_POLICY_DIR)/llvm-overloaded-tests,overloaded-test,--toolchain-type llvm --gen-vector-crypto --has-policy $(EXTRA_FLAG)) + $(call replace_float, $(VECTOR_CRYPTO_DIR)/llvm-overloaded-tests) + $(call replace_float, $(VECTOR_CRYPTO_POLICY_DIR)/llvm-overloaded-tests) + clang-format -i $(VECTOR_CRYPTO_DIR)/llvm-overloaded-tests/* + clang-format -i $(VECTOR_CRYPTO_POLICY_DIR)/llvm-overloaded-tests/* + +############################################################################### + # Generate the adaptor header for v0.10 non-policy-compatible-header: $(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,non-policy.h,non-overloaded-compatible-header,$(EXTRA_FLAG)) @@ -311,6 +375,10 @@ git-commit-bf16-all: make git-commit-autogen-bf16-doc OUTPUT_DIR=${OUTPUT_DIR} make git-commit-autogen-bf16-test OUTPUT_DIR=${OUTPUT_DIR} +git-commit-vector-crypto-all: + make git-commit-autogen-vector-crypto-doc OUTPUT_DIR=${OUTPUT_DIR} + make git-commit-autogen-vector-crypto-test OUTPUT_DIR=${OUTPUT_DIR} + # Update and commit all documents under auto-generated git-commit-autogen-doc: make gen-document OUTPUT_DIR=${OUTPUT_DIR} @@ -322,6 +390,11 @@ git-commit-autogen-bf16-doc: git add ${BF16_DIR}/* git commit -m "[Auto-gen] Update bfloat16 documents under ${OUTPUT_DIR}. (make git-commit-autogen-bf16-doc)" +git-commit-autogen-vector-crypto-doc: + make gen-vector-crypto-document OUTPUT_DIR=${OUTPUT_DIR} + git add ${VECTOR_CRYPTO_DIR}/* + git commit -m "[Auto-gen] Update vector crypto documents under ${OUTPUT_DIR}. 
(make git-commit-autogen-vector-crypto-doc)" + # Update and commit all testing C source files under auto-generated git-commit-autogen-test: make gen-test @@ -333,6 +406,11 @@ git-commit-autogen-bf16-test: git add ${BF16_DIR}/* git commit -m "[Auto-gen] Update bfloat16 tests under ${OUTPUT_DIR}. (make git-commit-autogen-bf16-test)" +git-commit-autogen-vector-crypto-test: + make gen-vector-crypto-test + git add ${VECTOR_CRYPTO_DIR}/* + git commit -m "[Auto-gen] Update vector crypto tests under ${OUTPUT_DIR}. (make git-commit-autogen-vector-crypto-test)" + # Update and commit compatible headers under auto-generated git-commit-autogen-compatible-header: make gen-compatible-header @@ -346,14 +424,6 @@ diff-autogen: $(call check_defined, TEST_DIR, output directory for documents/tests generation) rm -rf ${abspath ${TEST_DIR}} make OUTPUT_DIR=${TEST_DIR} - make EXTRA_FLAG=--gen-vector-crypto OUTPUT_DIR=${TEST_DIR}/vector-crypto - -# Remove redundant folder created for vector crypto. The reason this line is -# needed is because the targets in this Makefile to generate compatible header -# creates a folder in prior before running the script. The vector crypto, -# however, does not need compatible header because it does not exist before -# v0.10. - rm -rf ${TEST_DIR}/vector-crypto/rvv-v0p10-compatible-headers diff -qr ${TEST_DIR} ${GOLDEN_DIR} From 30f874580425c10d5918262b4cf6f1f0e4f539d7 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Thu, 13 Jun 2024 19:56:14 -0700 Subject: [PATCH 095/151] [Auto-gen] Update vector crypto documents under ../auto-generated. (make git-commit-autogen-vector-crypto-doc) --- .../vector-crypto/intrinsic_funcs.adoc | 1288 ++++++ ...bit-manipulation_used_in_cryptography.adoc | 1364 +++--- ...vbc_-_vector_carryless_multiplication.adoc | 88 +- .../02_zvkg_-_vector_gcm_gmac.adoc | 26 +- ...-_nist_suite:_vector_aes_block_cipher.adoc | 280 +- ..._nist_suite:_vector_sha-2_secure_hash.adoc | 81 +- ...sed_-_shangmi_suite:_sm4_block_cipher.adoc | 63 +- ...vksh_-_shangmi_suite:_sm3_secure_hash.adoc | 30 +- .../overloaded_intrinsic_funcs.adoc | 1098 +++++ ...bit-manipulation_used_in_cryptography.adoc | 1278 +++--- ...vbc_-_vector_carryless_multiplication.adoc | 80 +- .../02_zvkg_-_vector_gcm_gmac.adoc | 25 +- ...-_nist_suite:_vector_aes_block_cipher.adoc | 205 +- ..._nist_suite:_vector_sha-2_secure_hash.adoc | 81 +- ...sed_-_shangmi_suite:_sm4_block_cipher.adoc | 48 +- ...vksh_-_shangmi_suite:_sm3_secure_hash.adoc | 25 +- .../policy_funcs/intrinsic_funcs.adoc | 3245 ++++++++++++++ ...bit-manipulation_used_in_cryptography.adoc | 3742 +++++++++++------ ...vbc_-_vector_carryless_multiplication.adoc | 240 +- .../02_zvkg_-_vector_gcm_gmac.adoc | 30 +- ...-_nist_suite:_vector_aes_block_cipher.adoc | 300 +- ..._nist_suite:_vector_sha-2_secure_hash.adoc | 81 +- ...sed_-_shangmi_suite:_sm4_block_cipher.adoc | 72 +- ...vksh_-_shangmi_suite:_sm3_secure_hash.adoc | 30 +- .../overloaded_intrinsic_funcs.adoc | 2737 ++++++++++++ ...bit-manipulation_used_in_cryptography.adoc | 3396 +++++++++------ ...vbc_-_vector_carryless_multiplication.adoc | 192 +- .../02_zvkg_-_vector_gcm_gmac.adoc | 25 +- ...-_nist_suite:_vector_aes_block_cipher.adoc | 210 +- ..._nist_suite:_vector_sha-2_secure_hash.adoc | 81 +- ...sed_-_shangmi_suite:_sm4_block_cipher.adoc | 53 +- ...vksh_-_shangmi_suite:_sm3_secure_hash.adoc | 30 +- 32 files changed, 16120 insertions(+), 4404 deletions(-) create mode 100644 auto-generated/vector-crypto/intrinsic_funcs.adoc create mode 100644 
auto-generated/vector-crypto/overloaded_intrinsic_funcs.adoc create mode 100644 auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.adoc create mode 100644 auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.adoc diff --git a/auto-generated/vector-crypto/intrinsic_funcs.adoc b/auto-generated/vector-crypto/intrinsic_funcs.adoc new file mode 100644 index 000000000..cb14f0942 --- /dev/null +++ b/auto-generated/vector-crypto/intrinsic_funcs.adoc @@ -0,0 +1,1288 @@ + +=== Zvbb - Vector Bit-manipulation used in Cryptography + +[[]] +==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not + +[,c] +---- +vuint8mf8_t __riscv_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t 
__riscv_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, + size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, + size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, + size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, + size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, + size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, + size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, + size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, + uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, + size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, + size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, + size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, + 
uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, + uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, + size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, + size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl); +---- + +[[]] +==== Vector Basic Bit-manipulation - Reverse + +[,c] +---- +vuint8mf8_t __riscv_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl); +vuint16m2_t 
__riscv_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl); 
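+// --- Illustrative usage (editorial sketch, not part of the generated listing).
+// Assuming <riscv_vector.h> on a Zvbb-enabled target, the masked variant
+// reverses the bit order inside each active byte element (e.g. 0x01 -> 0x80);
+// masked-off elements are agnostic here, since the non-policy _m variant
+// carries no vd operand.
+static inline vuint8m1_t example_vbrev_m(vbool8_t vm, vuint8m1_t vs2,
+                                         size_t vl) {
+  return __riscv_vbrev_v_u8m1_m(vm, vs2, vl);
+}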
+vuint32m8_t __riscv_vbrev_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t 
__riscv_vrev8_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[]] +==== Vector Basic Bit-manipulation - Count Bits + +[,c] +---- +vuint8mf8_t __riscv_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8(vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8(vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, 
size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop_v_u8mf8(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2(vuint8m2_t vs2, size_t vl); 
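+// NOTE: vcpop.v counts the set bits of every element independently, unlike
+// the __riscv_vcpop_m_b* mask intrinsics, which return a single scalar count.
+// A minimal sketch with hypothetical operands v and vl:
+//   vuint32m1_t c = __riscv_vcpop_v_u32m1(v, vl); // c[i] = popcount(v[i])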
+vuint8m4_t __riscv_vcpop_v_u8m4(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8(vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[]] +==== Vector Bit-manipulation used in Cryptography - Rotate + +[,c] +---- +vuint8mf8_t __riscv_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1(vuint8m1_t vs2, 
size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); 
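+// NOTE: the rotate amount (vs1 or rs1) is used modulo SEW, i.e. only its low
+// log2(SEW) bits take effect. A minimal sketch with hypothetical x and vl:
+//   x = __riscv_vror_vx_u32m1(x, 8, vl); // rotate each element right by 8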
+vuint8m2_t __riscv_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t 
__riscv_vrol_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, + size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, + size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, + size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); 
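+// NOTE: operand order mirrors the assembly form vrol.vv/vror.vv vd, vs2, vs1:
+// vs2 carries the data being rotated, while vs1 (or the scalar rs1) supplies
+// the rotate amount.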
+vuint8mf8_t __riscv_vror_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, + size_t vl); 
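+// A minimal masked-rotate sketch (hypothetical mask m, data x, and vl); the
+// _m variants perform the rotate only for elements whose bit in vm is set:
+//   vuint64m1_t r = __riscv_vror_vx_u64m1_m(m, x, 1, vl);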
+vuint64m2_t __riscv_vror_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, + size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, + size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, + size_t vl); +---- + +[[]] +==== Vector Basic Bit-manipulation - Widening Shift + +[,c] +---- +vuint16mf4_t __riscv_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_m(vbool16_t vm, vuint8mf2_t
vs2, + vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_m(vbool8_t vm, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_m(vbool4_t vm, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_m(vbool2_t vm, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_m(vbool8_t vm, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_m(vbool4_t vm, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_m(vbool8_t vm, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl); +---- + +=== Zvbc - Vector Carryless Multiplication + +[[]] +==== Vector Carryless Multiplication + +[,c] +---- +vuint64m1_t __riscv_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2(vuint64m2_t vs2, uint64_t 
rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +---- + +=== Zvkg - Vector GCM/GMAC + +[[]] +==== Vector GCM/GMAC + +[,c] +---- +vuint32mf2_t __riscv_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vgmul_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +=== Zvkned - NIST Suite: Vector AES Block Cipher + +[[]] +==== Vector AES Encryption + +[,c] +---- +vuint32mf2_t __riscv_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t 
__riscv_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +[[]] +==== Vector AES Decryption + +[,c] +---- +vuint32mf2_t __riscv_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t 
__riscv_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +[[]] +==== Vector AES-128 Forward KeySchedule generation + +[,c] +---- +vuint32mf2_t __riscv_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2(vuint32m2_t vd, 
vuint32m2_t vs2, + size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + size_t uimm, size_t vl); +---- + +[[]] +==== Vector AES round zero + +[,c] +---- +vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +---- + +=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash + +[[]] +==== Vector SHA-2 message schedule + +[,c] +---- +vuint32mf2_t __riscv_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +---- + +[[]] +==== Vector SHA-2 two rounds of compression + +[,c] +---- +vuint32mf2_t __riscv_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t 
vs2, + vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +---- + +=== Zvksed - ShangMi Suite: SM4 Block Cipher + +[[]] +==== Vector SM4 KeyExpansion + +[,c] +---- +vuint32mf2_t __riscv_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl); +---- + +[[]] +==== Vector SM4 Rounds + +[,c] +---- +vuint32mf2_t __riscv_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +=== Zvksh - ShangMi Suite: SM3 Secure Hash + +[[]] +==== Vector SM3 Message Expansion + +[,c] +---- +vuint32mf2_t __riscv_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vsm3me_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsm3me_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); 
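+// NOTE: the SM3 intrinsics operate on element groups of eight 32-bit
+// elements (EGW=256, EGS=8), so vl and vstart must be multiples of 8; see
+// the RISC-V vector crypto specification for the element-group rules.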
+vuint32m4_t __riscv_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1,
+                                    size_t vl);
+vuint32m8_t __riscv_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1,
+                                    size_t vl);
+----
+
+[[]]
+==== Vector SM3 Compression
+
+[,c]
+----
+vuint32mf2_t __riscv_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                     size_t uimm, size_t vl);
+vuint32m1_t __riscv_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm,
+                                   size_t vl);
+vuint32m2_t __riscv_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm,
+                                   size_t vl);
+vuint32m4_t __riscv_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm,
+                                   size_t vl);
+vuint32m8_t __riscv_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm,
+                                   size_t vl);
+----
diff --git a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc
index be1bbf32e..3ea6f28c5 100644
--- a/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc
+++ b/auto-generated/vector-crypto/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc
@@ -6,95 +6,142 @@
 
 [,c]
 ----
-vuint8mf8_t __riscv_vandn_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vandn_vx_u8mf8 (vuint8mf8_t vs2, uint8_t rs1, size_t vl);
-vuint8mf4_t __riscv_vandn_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vandn_vx_u8mf4 (vuint8mf4_t vs2, uint8_t rs1, size_t vl);
-vuint8mf2_t __riscv_vandn_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vandn_vx_u8mf2 (vuint8mf2_t vs2, uint8_t rs1, size_t vl);
-vuint8m1_t __riscv_vandn_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vandn_vx_u8m1 (vuint8m1_t vs2, uint8_t rs1, size_t vl);
-vuint8m2_t __riscv_vandn_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vandn_vx_u8m2 (vuint8m2_t vs2, uint8_t rs1, size_t vl);
-vuint8m4_t __riscv_vandn_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vandn_vx_u8m4 (vuint8m4_t vs2, uint8_t rs1, size_t vl);
-vuint8m8_t __riscv_vandn_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vandn_vx_u8m8 (vuint8m8_t vs2, uint8_t rs1, size_t vl);
-vuint16mf4_t __riscv_vandn_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vandn_vx_u16mf4 (vuint16mf4_t vs2, uint16_t rs1, size_t vl);
-vuint16mf2_t __riscv_vandn_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vandn_vx_u16mf2 (vuint16mf2_t vs2, uint16_t rs1, size_t vl);
-vuint16m1_t __riscv_vandn_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vandn_vx_u16m1 (vuint16m1_t vs2, uint16_t rs1, size_t vl);
-vuint16m2_t __riscv_vandn_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vandn_vx_u16m2 (vuint16m2_t vs2, uint16_t rs1, size_t vl);
-vuint16m4_t __riscv_vandn_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vandn_vx_u16m4 (vuint16m4_t vs2, uint16_t rs1, size_t vl);
-vuint16m8_t __riscv_vandn_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vandn_vx_u16m8 (vuint16m8_t vs2, uint16_t rs1, size_t vl);
-vuint32mf2_t __riscv_vandn_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vandn_vx_u32mf2 (vuint32mf2_t vs2, uint32_t rs1, size_t vl);
-vuint32m1_t __riscv_vandn_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vandn_vx_u32m1 (vuint32m1_t vs2, uint32_t rs1, size_t vl);
-vuint32m2_t __riscv_vandn_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vandn_vx_u32m2 (vuint32m2_t vs2, uint32_t rs1, size_t vl);
-vuint32m4_t __riscv_vandn_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vandn_vx_u32m4 (vuint32m4_t vs2, uint32_t rs1, size_t vl);
-vuint32m8_t __riscv_vandn_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vandn_vx_u32m8 (vuint32m8_t vs2, uint32_t rs1, size_t vl);
-vuint64m1_t __riscv_vandn_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vandn_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vandn_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vandn_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vandn_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vandn_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vandn_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vandn_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint8mf8_t __riscv_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vandn_vx_u8mf8(vuint8mf8_t vs2, uint8_t rs1, size_t vl);
+vuint8mf4_t __riscv_vandn_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vandn_vx_u8mf4(vuint8mf4_t vs2, uint8_t rs1, size_t vl);
+vuint8mf2_t __riscv_vandn_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vandn_vx_u8mf2(vuint8mf2_t vs2, uint8_t rs1, size_t vl);
+vuint8m1_t __riscv_vandn_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vandn_vx_u8m1(vuint8m1_t vs2, uint8_t rs1, size_t vl);
+vuint8m2_t __riscv_vandn_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vandn_vx_u8m2(vuint8m2_t vs2, uint8_t rs1, size_t vl);
+vuint8m4_t __riscv_vandn_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vandn_vx_u8m4(vuint8m4_t vs2, uint8_t rs1, size_t vl);
+vuint8m8_t __riscv_vandn_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl);
+vuint16mf4_t __riscv_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                     size_t vl);
+vuint16mf4_t __riscv_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl);
+vuint16mf2_t __riscv_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                     size_t vl);
+vuint16mf2_t __riscv_vandn_vx_u16mf2(vuint16mf2_t vs2, uint16_t rs1, size_t vl);
+vuint16m1_t __riscv_vandn_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vandn_vx_u16m1(vuint16m1_t vs2, uint16_t rs1, size_t vl);
+vuint16m2_t __riscv_vandn_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vandn_vx_u16m2(vuint16m2_t vs2, uint16_t rs1, size_t vl);
+vuint16m4_t __riscv_vandn_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vandn_vx_u16m4(vuint16m4_t vs2, uint16_t rs1, size_t vl);
+vuint16m8_t __riscv_vandn_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl);
+vuint32mf2_t __riscv_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                     size_t vl);
+vuint32mf2_t __riscv_vandn_vx_u32mf2(vuint32mf2_t vs2, uint32_t rs1, size_t vl);
+vuint32m1_t __riscv_vandn_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vandn_vx_u32m1(vuint32m1_t vs2, uint32_t rs1, size_t vl);
+vuint32m2_t __riscv_vandn_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vandn_vx_u32m2(vuint32m2_t vs2, uint32_t rs1, size_t vl);
+vuint32m4_t __riscv_vandn_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vandn_vx_u32m4(vuint32m4_t vs2, uint32_t rs1, size_t vl);
+vuint32m8_t __riscv_vandn_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vandn_vx_u32m8(vuint32m8_t vs2, uint32_t rs1, size_t vl);
+vuint64m1_t __riscv_vandn_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vandn_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vandn_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vandn_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vandn_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vandn_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vandn_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl);
 // masked functions
-vuint8mf8_t __riscv_vandn_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vandn_vx_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, size_t vl);
-vuint8mf4_t __riscv_vandn_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vandn_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl);
-vuint8mf2_t __riscv_vandn_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vandn_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl);
-vuint8m1_t __riscv_vandn_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vandn_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl);
-vuint8m2_t __riscv_vandn_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vandn_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl);
-vuint8m4_t __riscv_vandn_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vandn_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl);
-vuint8m8_t __riscv_vandn_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vandn_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl);
-vuint16mf4_t __riscv_vandn_vv_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vandn_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl);
-vuint16mf2_t __riscv_vandn_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vandn_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl);
-vuint16m1_t __riscv_vandn_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vandn_vx_u16m1_m (vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl);
-vuint16m2_t __riscv_vandn_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vandn_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl);
-vuint16m4_t __riscv_vandn_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vandn_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl);
-vuint16m8_t __riscv_vandn_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vandn_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl);
-vuint32mf2_t __riscv_vandn_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vandn_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, size_t vl);
-vuint32m1_t __riscv_vandn_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vandn_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl);
-vuint32m2_t __riscv_vandn_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vandn_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl);
-vuint32m4_t __riscv_vandn_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vandn_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl);
-vuint32m8_t __riscv_vandn_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vandn_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl);
-vuint64m1_t __riscv_vandn_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vandn_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vandn_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vandn_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vandn_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vandn_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vandn_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vandn_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint8mf8_t __riscv_vandn_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2,
+                                     vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vandn_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1,
+                                     size_t vl);
+vuint8mf4_t __riscv_vandn_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2,
+                                     vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vandn_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1,
+                                     size_t vl);
+vuint8mf2_t __riscv_vandn_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2,
+                                     vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vandn_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1,
+                                     size_t vl);
+vuint8m1_t __riscv_vandn_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1,
+                                   size_t vl);
+vuint8m1_t __riscv_vandn_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1,
+                                   size_t vl);
+vuint8m2_t __riscv_vandn_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1,
+                                   size_t vl);
+vuint8m2_t __riscv_vandn_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1,
+                                   size_t vl);
+vuint8m4_t __riscv_vandn_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1,
+                                   size_t vl);
+vuint8m4_t __riscv_vandn_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1,
+                                   size_t vl);
+vuint8m8_t __riscv_vandn_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1,
+                                   size_t vl);
+vuint8m8_t __riscv_vandn_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1,
+                                   size_t vl);
+vuint16mf4_t __riscv_vandn_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2,
+                                       vuint16mf4_t vs1, size_t vl);
+vuint16mf4_t __riscv_vandn_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2,
+                                       uint16_t rs1, size_t vl);
+vuint16mf2_t __riscv_vandn_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2,
+                                       vuint16mf2_t vs1, size_t vl);
+vuint16mf2_t __riscv_vandn_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2,
+                                       uint16_t rs1, size_t vl);
+vuint16m1_t __riscv_vandn_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2,
+                                     vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vandn_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2,
+                                     uint16_t rs1, size_t vl);
+vuint16m2_t __riscv_vandn_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2,
+                                     vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vandn_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1,
+                                     size_t vl);
+vuint16m4_t __riscv_vandn_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2,
+                                     vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vandn_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1,
+                                     size_t vl);
+vuint16m8_t __riscv_vandn_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2,
+                                     vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vandn_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1,
+                                     size_t vl);
+vuint32mf2_t __riscv_vandn_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2,
+                                       vuint32mf2_t vs1, size_t vl);
+vuint32mf2_t __riscv_vandn_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2,
+                                       uint32_t rs1, size_t vl);
+vuint32m1_t __riscv_vandn_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2,
+                                     vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vandn_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2,
+                                     uint32_t rs1, size_t vl);
+vuint32m2_t __riscv_vandn_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2,
+                                     vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vandn_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2,
+                                     uint32_t rs1, size_t vl);
+vuint32m4_t __riscv_vandn_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2,
+                                     vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vandn_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1,
+                                     size_t vl);
+vuint32m8_t __riscv_vandn_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2,
+                                     vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vandn_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1,
+                                     size_t vl);
+vuint64m1_t __riscv_vandn_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2,
+                                     vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vandn_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2,
+                                     uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vandn_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2,
+                                     vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vandn_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2,
+                                     uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vandn_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2,
+                                     vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vandn_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2,
+                                     uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vandn_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2,
+                                     vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vandn_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1,
+                                     size_t vl);
 ----
 
 [[]]
@@ -102,139 +149,148 @@ vuint64m8_t __riscv_vandn_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1
 
 [,c]
 ----
-vuint8mf8_t __riscv_vbrev_v_u8mf8 (vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev_v_u8mf4 (vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev_v_u8mf2 (vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev_v_u8m1 (vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev_v_u8m2 (vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev_v_u8m4 (vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev_v_u8m8 (vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev_v_u16mf4 (vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev_v_u16mf2 (vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev_v_u16m1 (vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev_v_u16m2 (vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev_v_u16m4 (vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev_v_u16m8 (vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev_v_u32mf2 (vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev_v_u32m1 (vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev_v_u32m2 (vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev_v_u32m4 (vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev_v_u32m8 (vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev_v_u64m1 (vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev_v_u64m2 (vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev_v_u64m4 (vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev_v_u64m8 (vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vbrev8_v_u8mf8 (vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev8_v_u8m1 (vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev8_v_u8m2 (vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev8_v_u8m4 (vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev8_v_u8m8 (vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev8_v_u16m1 (vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev8_v_u16m2 (vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev8_v_u16m4 (vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev8_v_u16m8 (vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev8_v_u32m1 (vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev8_v_u32m2 (vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev8_v_u32m4 (vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev8_v_u32m8 (vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev8_v_u64m1 (vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev8_v_u64m2 (vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev8_v_u64m4 (vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev8_v_u64m8 (vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vrev8_v_u8mf8 (vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vrev8_v_u8mf4 (vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vrev8_v_u8mf2 (vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vrev8_v_u8m1 (vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vrev8_v_u8m2 (vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vrev8_v_u8m4 (vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vrev8_v_u8m8 (vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vrev8_v_u16mf4 (vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vrev8_v_u16mf2 (vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vrev8_v_u16m1 (vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vrev8_v_u16m2 (vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vrev8_v_u16m4 (vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vrev8_v_u16m8 (vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vrev8_v_u32mf2 (vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vrev8_v_u32m1 (vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vrev8_v_u32m2 (vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vrev8_v_u32m4 (vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vrev8_v_u32m8 (vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vrev8_v_u64m1 (vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vrev8_v_u64m2 (vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vrev8_v_u64m4 (vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vrev8_v_u64m8 (vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vbrev_v_u8mf4(vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vbrev_v_u8mf2(vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vbrev_v_u8m1(vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vbrev_v_u8m2(vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vbrev_v_u8m4(vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vbrev_v_u8m8(vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vbrev_v_u16mf4(vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vbrev_v_u16mf2(vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vbrev_v_u16m1(vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vbrev_v_u16m2(vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vbrev_v_u16m4(vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vbrev_v_u16m8(vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vbrev_v_u32mf2(vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vbrev_v_u32m1(vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vbrev_v_u32m2(vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vbrev_v_u32m4(vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vbrev_v_u32m8(vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vbrev_v_u64m1(vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vbrev_v_u64m2(vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vbrev_v_u64m4(vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vbrev_v_u64m8(vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vbrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vbrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vbrev8_v_u8m1(vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vbrev8_v_u8m2(vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vbrev8_v_u8m4(vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vbrev8_v_u8m8(vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vbrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vbrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vbrev8_v_u16m1(vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vbrev8_v_u16m2(vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vbrev8_v_u16m4(vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vbrev8_v_u16m8(vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vbrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vbrev8_v_u32m1(vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vbrev8_v_u32m2(vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vbrev8_v_u32m4(vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vbrev8_v_u32m8(vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vbrev8_v_u64m1(vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vbrev8_v_u64m2(vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vbrev8_v_u64m4(vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vbrev8_v_u64m8(vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vrev8_v_u8mf4(vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vrev8_v_u8mf2(vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vrev8_v_u8m1(vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vrev8_v_u8m2(vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vrev8_v_u8m4(vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vrev8_v_u8m8(vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vrev8_v_u16mf4(vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vrev8_v_u16mf2(vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vrev8_v_u16m1(vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vrev8_v_u16m2(vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vrev8_v_u16m4(vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vrev8_v_u16m8(vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vrev8_v_u32mf2(vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vrev8_v_u32m1(vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vrev8_v_u32m2(vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vrev8_v_u32m4(vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vrev8_v_u32m8(vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vrev8_v_u64m1(vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vrev8_v_u64m2(vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vrev8_v_u64m4(vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vrev8_v_u64m8(vuint64m8_t vs2, size_t vl);
 // masked functions
-vuint8mf8_t __riscv_vbrev_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vbrev8_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vbrev8_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vbrev8_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vbrev8_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vbrev8_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vbrev8_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vbrev8_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vbrev8_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vbrev8_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vbrev8_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vbrev8_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vbrev8_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vbrev8_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vbrev8_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vbrev8_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vbrev8_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vbrev8_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vbrev8_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vbrev8_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vbrev8_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vbrev8_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vbrev8_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vrev8_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vrev8_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vrev8_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vrev8_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vrev8_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vrev8_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vrev8_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vrev8_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vrev8_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vrev8_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vrev8_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vrev8_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vrev8_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vrev8_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vrev8_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vrev8_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vrev8_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vrev8_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vrev8_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vrev8_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vrev8_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vrev8_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vbrev_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vbrev_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vbrev_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vbrev_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vbrev_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vbrev_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vbrev_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vbrev_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2,
+                                      size_t vl);
+vuint16mf2_t __riscv_vbrev_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2,
+                                      size_t vl);
+vuint16m1_t __riscv_vbrev_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vbrev_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vbrev_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vbrev_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vbrev_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2,
+                                      size_t vl);
+vuint32m1_t __riscv_vbrev_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vbrev_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vbrev_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vbrev_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vbrev_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vbrev_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vbrev_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vbrev_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vbrev8_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vbrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vbrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vbrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vbrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vbrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vbrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vbrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2,
+                                       size_t vl);
+vuint16mf2_t __riscv_vbrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2,
+                                       size_t vl);
+vuint16m1_t __riscv_vbrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vbrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vbrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vbrev8_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vbrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2,
+                                       size_t vl);
+vuint32m1_t __riscv_vbrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vbrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vbrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vbrev8_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vbrev8_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vbrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vbrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vbrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vrev8_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vrev8_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vrev8_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vrev8_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vrev8_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vrev8_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vrev8_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vrev8_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2,
+                                      size_t vl);
+vuint16mf2_t __riscv_vrev8_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2,
+                                      size_t vl);
+vuint16m1_t __riscv_vrev8_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vrev8_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vrev8_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vrev8_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vrev8_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2,
+                                      size_t vl);
+vuint32m1_t __riscv_vrev8_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vrev8_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vrev8_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vrev8_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vrev8_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vrev8_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vrev8_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vrev8_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl);
 ----
 
 [[]]
@@ -242,95 +298,95 @@ vuint64m8_t __riscv_vrev8_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl);
 
 [,c]
 ----
-vuint8mf8_t __riscv_vclz_v_u8mf8 (vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vclz_v_u8mf4 (vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vclz_v_u8mf2 (vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vclz_v_u8m1 (vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vclz_v_u8m2 (vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vclz_v_u8m4 (vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vclz_v_u8m8 (vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vclz_v_u16mf4 (vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vclz_v_u16mf2 (vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vclz_v_u16m1 (vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vclz_v_u16m2 (vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vclz_v_u16m4 (vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vclz_v_u16m8 (vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vclz_v_u32mf2 (vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vclz_v_u32m1 (vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vclz_v_u32m2 (vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vclz_v_u32m4 (vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vclz_v_u32m8 (vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vclz_v_u64m1 (vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vclz_v_u64m2 (vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vclz_v_u64m4 (vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vclz_v_u64m8 (vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vctz_v_u8mf8 (vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vctz_v_u8mf4 (vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vctz_v_u8mf2 (vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vctz_v_u8m1 (vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vctz_v_u8m2 (vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vctz_v_u8m4 (vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vctz_v_u8m8 (vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vctz_v_u16mf4 (vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vctz_v_u16mf2 (vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vctz_v_u16m1 (vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vctz_v_u16m2 (vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vctz_v_u16m4 (vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vctz_v_u16m8 (vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vctz_v_u32mf2 (vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vctz_v_u32m1 (vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vctz_v_u32m2 (vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vctz_v_u32m4 (vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vctz_v_u32m8 (vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vctz_v_u64m1 (vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vctz_v_u64m2 (vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vctz_v_u64m4 (vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vctz_v_u64m8 (vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vclz_v_u8mf4(vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vclz_v_u8mf2(vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vclz_v_u8m1(vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vclz_v_u8m2(vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vclz_v_u8m4(vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vclz_v_u8m8(vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vclz_v_u16mf4(vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vclz_v_u16mf2(vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vclz_v_u16m1(vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vclz_v_u16m2(vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vclz_v_u16m4(vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vclz_v_u16m8(vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vclz_v_u32mf2(vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vclz_v_u32m1(vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vclz_v_u32m2(vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vclz_v_u32m4(vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vclz_v_u32m8(vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vclz_v_u64m1(vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vclz_v_u64m2(vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vclz_v_u64m4(vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vclz_v_u64m8(vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vctz_v_u8mf4(vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vctz_v_u8mf2(vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vctz_v_u8m1(vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vctz_v_u8m2(vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vctz_v_u8m4(vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vctz_v_u8m8(vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vctz_v_u16mf4(vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vctz_v_u16mf2(vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vctz_v_u16m1(vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vctz_v_u16m2(vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vctz_v_u16m4(vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vctz_v_u16m8(vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vctz_v_u32mf2(vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vctz_v_u32m1(vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vctz_v_u32m2(vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vctz_v_u32m4(vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vctz_v_u32m8(vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vctz_v_u64m1(vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vctz_v_u64m2(vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vctz_v_u64m4(vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vctz_v_u64m8(vuint64m8_t vs2, size_t vl);
 // masked functions
-vuint8mf8_t __riscv_vclz_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vclz_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vclz_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vclz_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vclz_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vclz_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vclz_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vclz_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vclz_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vclz_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vclz_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vclz_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vclz_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vclz_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vclz_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vclz_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vclz_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vclz_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vclz_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vclz_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vclz_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vclz_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl);
-vuint8mf8_t __riscv_vctz_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vctz_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vctz_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vctz_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vctz_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vctz_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vctz_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vctz_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vctz_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vctz_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vctz_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vctz_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vctz_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vctz_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vctz_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vctz_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vctz_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vctz_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vctz_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vctz_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vctz_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vctz_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vclz_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vclz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vclz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vclz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vclz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vclz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vclz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vclz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vclz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vclz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vclz_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vclz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vclz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vclz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vclz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vclz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vclz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vclz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vclz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vclz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vclz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vclz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vctz_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vctz_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vctz_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vctz_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vctz_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vctz_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vctz_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vctz_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vctz_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vctz_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vctz_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vctz_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vctz_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vctz_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vctz_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vctz_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vctz_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vctz_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vctz_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vctz_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vctz_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vctz_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl);
 ----
 
 [[]]
@@ -338,51 +394,54 @@ vuint64m8_t __riscv_vctz_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl);
 
 [,c]
 ----
-vuint8mf8_t __riscv_vcpop_v_u8mf8 (vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vcpop_v_u8mf4 (vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vcpop_v_u8mf2 (vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vcpop_v_u8m1 (vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vcpop_v_u8m2 (vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vcpop_v_u8m4 (vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vcpop_v_u8m8 (vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vcpop_v_u16mf4 (vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vcpop_v_u16mf2 (vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vcpop_v_u16m1 (vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vcpop_v_u16m2 (vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vcpop_v_u16m4 (vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vcpop_v_u16m8 (vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vcpop_v_u32mf2 (vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vcpop_v_u32m1 (vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vcpop_v_u32m2 (vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vcpop_v_u32m4 (vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vcpop_v_u32m8 (vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vcpop_v_u64m1 (vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vcpop_v_u64m2 (vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vcpop_v_u64m4 (vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vcpop_v_u64m8 (vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vcpop_v_u8mf8(vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vcpop_v_u8mf4(vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vcpop_v_u8mf2(vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vcpop_v_u8m1(vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vcpop_v_u8m2(vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vcpop_v_u8m4(vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vcpop_v_u8m8(vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vcpop_v_u16mf4(vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vcpop_v_u16mf2(vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vcpop_v_u16m1(vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vcpop_v_u16m2(vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vcpop_v_u16m4(vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vcpop_v_u16m8(vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vcpop_v_u32mf2(vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vcpop_v_u32m1(vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vcpop_v_u32m2(vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vcpop_v_u32m4(vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vcpop_v_u32m8(vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vcpop_v_u64m1(vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vcpop_v_u64m2(vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vcpop_v_u64m4(vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vcpop_v_u64m8(vuint64m8_t vs2, size_t vl);
 // masked functions
-vuint8mf8_t __riscv_vcpop_v_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vcpop_v_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vcpop_v_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vcpop_v_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vcpop_v_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vcpop_v_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vcpop_v_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vcpop_v_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vcpop_v_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vcpop_v_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vcpop_v_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vcpop_v_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vcpop_v_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vcpop_v_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vcpop_v_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vcpop_v_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vcpop_v_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vcpop_v_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vcpop_v_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vcpop_v_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vcpop_v_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vcpop_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vcpop_v_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vcpop_v_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vcpop_v_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vcpop_v_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vcpop_v_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vcpop_v_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vcpop_v_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vcpop_v_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2,
+                                      size_t vl);
+vuint16mf2_t __riscv_vcpop_v_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2,
+                                      size_t vl);
+vuint16m1_t __riscv_vcpop_v_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vcpop_v_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vcpop_v_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vcpop_v_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vcpop_v_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2,
+                                      size_t vl);
+vuint32m1_t __riscv_vcpop_v_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vcpop_v_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vcpop_v_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vcpop_v_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vcpop_v_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vcpop_v_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vcpop_v_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vcpop_v_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t vl);
 ----
 
 [[]]
@@ -390,183 +449,277 @@ vuint64m8_t __riscv_vcpop_v_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t vl);
 
 [,c]
 ----
-vuint8mf8_t __riscv_vrol_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vrol_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vrol_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vrol_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vrol_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vrol_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vrol_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vrol_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vrol_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vrol_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vrol_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vrol_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vrol_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vrol_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vrol_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vrol_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vrol_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vrol_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vrol_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vrol_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vrol_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vrol_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vrol_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vrol_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vrol_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vrol_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vrol_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vrol_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vrol_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vrol_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vrol_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vrol_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vrol_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vrol_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vrol_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vrol_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vrol_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vrol_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vrol_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vrol_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vrol_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vrol_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl);
-vuint8mf8_t __riscv_vror_vv_u8mf8 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vror_vx_u8mf8 (vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vror_vv_u8mf4 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vror_vx_u8mf4 (vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vror_vv_u8mf2 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vror_vx_u8mf2 (vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vror_vv_u8m1 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vror_vx_u8m1 (vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vror_vv_u8m2 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vror_vx_u8m2 (vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vror_vv_u8m4 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vror_vx_u8m4 (vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vror_vv_u8m8 (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vror_vx_u8m8 (vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vror_vv_u16mf4 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vror_vx_u16mf4 (vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vror_vv_u16mf2 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vror_vx_u16mf2 (vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vror_vv_u16m1 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vror_vx_u16m1 (vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vror_vv_u16m2 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vror_vx_u16m2 (vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vror_vv_u16m4 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vror_vx_u16m4 (vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vror_vv_u16m8 (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vror_vx_u16m8 (vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vror_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vror_vx_u32mf2 (vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vror_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vror_vx_u32m1 (vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vror_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vror_vx_u32m2 (vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vror_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vror_vx_u32m4 (vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vror_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vror_vx_u32m8 (vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vror_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vror_vx_u64m1 (vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vror_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vror_vx_u64m2 (vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vror_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vror_vx_u64m4 (vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vror_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vror_vx_u64m8 (vuint64m8_t vs2, size_t rs1, size_t vl);
+vuint8mf8_t __riscv_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vrol_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vrol_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vrol_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vrol_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vrol_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vrol_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vrol_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vrol_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vrol_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint8m4_t __riscv_vrol_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vrol_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vrol_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                    size_t vl);
+vuint16mf4_t __riscv_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                    size_t vl);
+vuint16mf2_t __riscv_vrol_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vrol_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vrol_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vrol_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vrol_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vrol_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vrol_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vrol_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                    size_t vl);
+vuint32mf2_t __riscv_vrol_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vrol_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vrol_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vrol_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vrol_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vrol_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vrol_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vrol_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vrol_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vrol_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vrol_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vrol_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vrol_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vrol_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vrol_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vrol_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl);
+vuint8mf8_t __riscv_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vror_vx_u8mf8(vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vror_vv_u8mf4(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vror_vx_u8mf4(vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vror_vv_u8mf2(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vror_vx_u8mf2(vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vror_vv_u8m1(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vror_vx_u8m1(vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vror_vv_u8m2(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vror_vx_u8m2(vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint8m4_t __riscv_vror_vv_u8m4(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vror_vx_u8m4(vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vror_vv_u8m8(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                    size_t vl);
+vuint16mf4_t __riscv_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                    size_t vl);
+vuint16mf2_t __riscv_vror_vx_u16mf2(vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vror_vv_u16m1(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vror_vx_u16m1(vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vror_vv_u16m2(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vror_vx_u16m2(vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vror_vv_u16m4(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vror_vx_u16m4(vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vror_vv_u16m8(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                    size_t vl);
+vuint32mf2_t __riscv_vror_vx_u32mf2(vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vror_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vror_vx_u32m1(vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vror_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vror_vx_u32m2(vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vror_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vror_vx_u32m4(vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vror_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vror_vx_u32m8(vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vror_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vror_vx_u64m1(vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vror_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vror_vx_u64m2(vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vror_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vror_vx_u64m4(vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vror_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl);
 // masked functions
-vuint8mf8_t __riscv_vrol_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vrol_vx_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vrol_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vrol_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vrol_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vrol_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vrol_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vrol_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vrol_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vrol_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vrol_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vrol_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vrol_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vrol_vv_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vrol_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vrol_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vrol_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vrol_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vrol_vx_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vrol_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vrol_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vrol_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vrol_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vrol_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vrol_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vrol_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vrol_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vrol_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vrol_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vrol_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vrol_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vrol_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vrol_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vrol_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vrol_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vrol_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vrol_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vrol_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vrol_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vrol_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vrol_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vrol_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vrol_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl);
-vuint8mf8_t __riscv_vror_vv_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vror_vx_u8mf8_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vror_vv_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vror_vx_u8mf4_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vror_vv_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vror_vx_u8mf2_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vror_vv_u8m1_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vror_vx_u8m1_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vror_vv_u8m2_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vror_vx_u8m2_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t 
__riscv_vror_vv_u8m4_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_m (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_m (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_m (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_m (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_m (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_m (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_m (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t 
__riscv_vrol_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, + size_t vl); +vuint64m4_t 
__riscv_vrol_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, + size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, + size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t 
__riscv_vror_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, + size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, + size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, + size_t vl); ---- [[]] @@ -574,65 +727,100 @@ vuint64m8_t __riscv_vror_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, size_t rs1, s [,c] ---- -vuint16mf4_t __riscv_vwsll_vv_u16mf4 (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4 (vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2 (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2 (vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1 (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1 (vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2 (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2 (vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4 (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4 (vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8 (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8 (vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2 (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2 (vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1 (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1 (vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2 (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2 (vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4 (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4 (vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8 (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8 (vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1 (vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2 (vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4 (vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8 
(vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4(vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2(vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2(vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1(vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2(vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4(vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2(vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1(vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2(vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4(vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8(vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1(vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2(vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4(vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_m (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_m (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_m (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_m (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_m (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_m (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_m (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_m (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_m (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_m (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_m (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t 
__riscv_vwsll_vx_u16m8_m (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_m (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_m (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_m (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_m (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_m (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_m (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_m (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_m (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_m (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_m (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_m (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_m (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_m (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_m (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_m (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_m (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_m (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_m (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_m(vbool8_t vm, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_m(vbool4_t vm, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_m(vbool2_t vm, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_m(vbool16_t vm, vuint16m1_t vs2, 
size_t rs1, + size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_m(vbool8_t vm, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_m(vbool4_t vm, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_m(vbool8_t vm, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl); ---- diff --git a/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc b/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc index 6e9c0a1b9..c241d9c9f 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc +++ b/auto-generated/vector-crypto/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc @@ -6,37 +6,61 @@ [,c] ---- -vuint64m1_t __riscv_vclmul_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1 (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1 (vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2 (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2 (vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4 (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4 (vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8 (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8 (vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, + 
size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1(vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2(vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2(vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4(vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4(vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8(vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1_m (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_m (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_m (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2_m (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4_m (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4_m (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8_m (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_m (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + uint64_t rs1, size_t 
vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + uint64_t rs1, size_t vl); ---- diff --git a/auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc b/auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc index 83f9816cb..6dc612c83 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc +++ b/auto-generated/vector-crypto/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc @@ -6,14 +6,20 @@ [,c] ---- -vuint32mf2_t __riscv_vghsh_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vghsh_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vghsh_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vghsh_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vghsh_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32mf2_t __riscv_vgmul_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vgmul_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vgmul_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vgmul_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vgmul_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vgmul_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ---- diff --git a/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc index 929328cba..58b7cee3b 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc +++ b/auto-generated/vector-crypto/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc @@ -6,44 +6,74 @@ [,c] ---- -vuint32mf2_t __riscv_vaesef_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32mf2_u32m4 (vuint32m4_t vd, 
vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, 
size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ---- [[]] @@ -51,44 +81,74 @@ vuint32m8_t __riscv_vaesem_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl) [,c] ---- -vuint32mf2_t __riscv_vaesdf_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1 
(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4(vuint32m4_t 
vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ---- [[]] @@ -96,16 +156,22 @@ vuint32m8_t __riscv_vaesdm_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl) [,c] ---- -vuint32mf2_t __riscv_vaeskf1_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf1_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf1_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf1_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf1_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2_vi_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t 
__riscv_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vaeskf1_vi_u32m1(vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + size_t uimm, size_t vl); ---- [[]] @@ -113,18 +179,32 @@ vuint32m8_t __riscv_vaeskf2_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t ui [,c] ---- -vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); ---- diff --git 
a/auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc b/auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc index 6ce0c9cf6..1e4e030fc 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc +++ b/auto-generated/vector-crypto/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc @@ -6,15 +6,24 @@ [,c] ---- -vuint32mf2_t __riscv_vsha2ms_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2ms_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2ms_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2ms_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2ms_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2ms_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2ms_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2ms_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2ms_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); ---- [[]] @@ -22,22 +31,40 @@ vuint64m8_t __riscv_vsha2ms_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8 [,c] ---- -vuint32mf2_t __riscv_vsha2ch_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2ch_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2ch_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2ch_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2ch_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2ch_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2ch_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2ch_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2ch_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint32mf2_t __riscv_vsha2cl_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2cl_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); 
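+// Editorial sketch, not part of the generated listing: the calling shape of
+// these destructive SHA-2 intrinsics, where vd carries the running state and
+// is returned updated. The {a,b,e,f}/{c,d,g,h} element-group layout and the
+// message-schedule-plus-constant operand follow the Zvknh spec; the names
+// abef, cdgh, msg_k, w0, w_mid, and w3 below are illustrative only.
+//
+//   abef = __riscv_vsha2ch_vv_u32m1(abef, cdgh, msg_k, vl); // two rounds
+//   cdgh = __riscv_vsha2cl_vv_u32m1(cdgh, abef, msg_k, vl); // two rounds
+//   w0   = __riscv_vsha2ms_vv_u32m1(w0, w_mid, w3, vl);     // extend sched.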
-vuint32m2_t __riscv_vsha2cl_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2cl_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2cl_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2cl_vv_u64m1 (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2cl_vv_u64m2 (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2cl_vv_u64m4 (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2cl_vv_u64m8 (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); ---- diff --git a/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc b/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc index 55a267250..799221682 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc +++ b/auto-generated/vector-crypto/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc @@ -6,11 +6,11 @@ [,c] ---- -vuint32mf2_t __riscv_vsm4k_vi_u32mf2 (vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm4k_vi_u32m1 (vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm4k_vi_u32m2 (vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm4k_vi_u32m4 (vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm4k_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_vi_u32m1(vuint32m1_t vs2, 
size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_vi_u32m2(vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_vi_u32m4(vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_vi_u32m8(vuint32m8_t vs2, size_t uimm, size_t vl); ---- [[]] @@ -18,23 +18,38 @@ vuint32m8_t __riscv_vsm4k_vi_u32m8 (vuint32m8_t vs2, size_t uimm, size_t vl); [,c] ---- -vuint32mf2_t __riscv_vsm4r_vv_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1 (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2 (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4 (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8 (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vv_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m1_u32m2 (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4 (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8 (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vv_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4 (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8 (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vv_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8 (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vv_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t 
__riscv_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ---- diff --git a/auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc b/auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc index a83f0b809..d9f4983af 100644 --- a/auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc +++ b/auto-generated/vector-crypto/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc @@ -6,11 +6,16 @@ [,c] ---- -vuint32mf2_t __riscv_vsm3me_vv_u32mf2 (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsm3me_vv_u32m1 (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsm3me_vv_u32m2 (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsm3me_vv_u32m4 (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsm3me_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vsm3me_vv_u32m1(vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsm3me_vv_u32m2(vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsm3me_vv_u32m4(vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsm3me_vv_u32m8(vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); ---- [[]] @@ -18,9 +23,14 @@ vuint32m8_t __riscv_vsm3me_vv_u32m8 (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl [,c] ---- -vuint32mf2_t __riscv_vsm3c_vi_u32mf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm3c_vi_u32m1 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm3c_vi_u32m2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm3c_vi_u32m4 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm3c_vi_u32m8 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm3c_vi_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t __riscv_vsm3c_vi_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vsm3c_vi_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vsm3c_vi_u32m8(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); ---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs.adoc b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.adoc new file mode 100644 index 000000000..0dc87cdce --- /dev/null +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs.adoc @@ -0,0 +1,1098 @@ + +=== Zvbb - Vector Bit-manipulation used in Cryptography + +[[overloaded-]] +==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not + +[,c] +---- +vuint8mf8_t __riscv_vandn(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn(vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn(vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn(vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn(vuint8m1_t vs2, uint8_t 
rs1, size_t vl); +vuint8m2_t __riscv_vandn(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn(vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn(vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn(vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn(vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn(vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn(vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn(vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn(vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn(vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn(vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn(vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn(vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn(vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn(vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn(vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn(vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn(vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn(vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vandn(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, + size_t vl); +vuint8mf4_t __riscv_vandn(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vandn(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, + size_t vl); +vuint8mf2_t __riscv_vandn(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vandn(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, + size_t vl); +vuint8m1_t __riscv_vandn(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vandn(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vandn(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); 
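+// Illustrative sketch, not part of the generated listing: `vandn` computes
+// vs2 & ~vs1 (vs2 & ~rs1 for the scalar-operand form), and the masked
+// overloads above take the mask as the leading `vm` argument. The buffer
+// `src`, element count `n`, and mask `vm` below are assumptions made only
+// for this example:
+//   size_t vl = __riscv_vsetvl_e8m1(n);
+//   vuint8m1_t v = __riscv_vle8_v_u8m1(src, vl);
+//   v = __riscv_vandn(vm, v, (uint8_t)0x0F, vl); // v & ~0x0F where vm is set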
+vuint8m4_t __riscv_vandn(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vandn(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vandn(vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, + size_t vl); +vuint16mf2_t __riscv_vandn(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vandn(vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, + size_t vl); +vuint16m1_t __riscv_vandn(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vandn(vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, + size_t vl); +vuint16m2_t __riscv_vandn(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vandn(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, + size_t vl); +vuint16m4_t __riscv_vandn(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vandn(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, + size_t vl); +vuint16m8_t __riscv_vandn(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vandn(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, + size_t vl); +vuint32mf2_t __riscv_vandn(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vandn(vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, + size_t vl); +vuint32m1_t __riscv_vandn(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vandn(vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, + size_t vl); +vuint32m2_t __riscv_vandn(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vandn(vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, + size_t vl); +vuint32m4_t __riscv_vandn(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vandn(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, + size_t vl); +vuint32m8_t __riscv_vandn(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vandn(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, + size_t vl); +vuint64m1_t __riscv_vandn(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vandn(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vandn(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vandn(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vandn(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vandn(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vandn(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vandn(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl); +---- + +[[overloaded-]] +==== Vector Basic Bit-manipulation - Reverse + +[,c] +---- +vuint8mf8_t __riscv_vbrev(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev(vuint16m2_t 
vs2, size_t vl); +vuint16m4_t __riscv_vbrev(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev(vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8(vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8(vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev(vbool2_t vm, vuint8m4_t vs2, size_t vl); 
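+// Quick semantic reference (illustrative comment, not part of the generated
+// listing): for a 16-bit element holding 0x1234,
+//   vbrev  reverses all bits of the element       -> 0x2C48
+//   vbrev8 reverses the bits within each byte     -> 0x482C
+//   vrev8  reverses the bytes within the element  -> 0x3412 (byte swap)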
+vuint8m8_t __riscv_vbrev(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev(vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8(vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8(vbool2_t vm, vuint16m8_t vs2, 
size_t vl); +vuint32mf2_t __riscv_vrev8(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8(vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[overloaded-]] +==== Vector Basic Bit-manipulation - Count Bits + +[,c] +---- +vuint8mf8_t __riscv_vclz(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz(vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz(vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz(vbool2_t vm, vuint8m4_t 
vs2, size_t vl); +vuint8m8_t __riscv_vclz(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz(vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz(vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[overloaded-]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop(vuint32m1_t vs2, 
size_t vl); +vuint32m2_t __riscv_vcpop(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop(vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop(vbool8_t vm, vuint64m8_t vs2, size_t vl); +---- + +[[overloaded-]] +==== Vector Bit-manipulation used in Cryptography - Rotate + +[,c] +---- +vuint8mf8_t __riscv_vrol(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol(vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol(vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol(vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol(vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol(vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol(vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol(vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol(vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol(vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol(vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol(vuint16m2_t vs2, size_t 
rs1, size_t vl); +vuint16m4_t __riscv_vrol(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol(vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol(vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol(vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol(vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol(vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol(vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol(vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol(vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol(vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol(vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol(vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror(vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror(vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror(vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror(vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror(vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror(vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror(vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror(vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror(vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror(vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror(vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror(vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror(vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror(vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror(vuint32m1_t vs2, vuint32m1_t vs1, 
size_t vl); +vuint32m1_t __riscv_vror(vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror(vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror(vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror(vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror(vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror(vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror(vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror(vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vrol(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vrol(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vrol(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vrol(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vrol(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vrol(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vrol(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vrol(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vrol(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vrol(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vrol(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vrol(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vrol(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vrol(vbool32_t vm, 
vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vrol(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vrol(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vrol(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vrol(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vrol(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vrol(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vrol(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vror(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vror(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vror(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint8m2_t __riscv_vror(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint8m4_t __riscv_vror(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint8m8_t __riscv_vror(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vror(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vror(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vror(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vror(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vror(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vror(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vror(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vror(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vror(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + 
size_t vl); +vuint32m1_t __riscv_vror(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vror(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vror(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vror(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vror(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vror(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vror(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vror(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vror(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl); +---- + +[[overloaded-]] +==== Vector Basic Bit-manipulation - Widening Shift + +[,c] +---- +vuint16mf4_t __riscv_vwsll(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll(vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll(vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll(vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll(vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll(vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll(vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll(vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll(vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll(vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll(vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll(vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll(vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll(vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll(vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll(vuint32m4_t vs2, 
size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint16mf4_t __riscv_vwsll(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vwsll(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint16mf2_t __riscv_vwsll(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vwsll(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint16m1_t __riscv_vwsll(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint16m2_t __riscv_vwsll(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint16m4_t __riscv_vwsll(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint16m8_t __riscv_vwsll(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint32mf2_t __riscv_vwsll(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vwsll(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vwsll(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vwsll(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vwsll(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vwsll(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vwsll(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint64m1_t __riscv_vwsll(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vwsll(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vwsll(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vwsll(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vwsll(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl); +---- + +=== Zvbc - Vector Carryless Multiplication + +[[overloaded-]] +==== Vector Carryless Multiplication + +[,c] +---- +vuint64m1_t __riscv_vclmul(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul(vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul(vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul(vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul(vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh(vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t 
__riscv_vclmulh(vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh(vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh(vuint64m8_t vs2, uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmul(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmul(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmul(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmul(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmul(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmul(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmul(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl); +vuint64m1_t __riscv_vclmulh(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmulh(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmulh(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmulh(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmulh(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmulh(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmulh(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmulh(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl); +---- + +=== Zvkg - Vector GCM/GMAC + +[[overloaded-]] +==== Vector GCM/GMAC + +[,c] +---- +vuint32mf2_t __riscv_vghsh(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vghsh(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vghsh(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vghsh(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vghsh(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32mf2_t __riscv_vgmul(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vgmul(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +=== Zvkned - NIST Suite: Vector AES Block Cipher + +[[overloaded-]] +==== Vector AES Encryption + +[,c] +---- +vuint32mf2_t __riscv_vaesef_vv(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vv(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t 
__riscv_vaesef_vs(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vv(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vv(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vv(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vv(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vv(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vv(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vv(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +[[overloaded-]] +==== Vector AES Decryption + +[,c] +---- +vuint32mf2_t __riscv_vaesdf_vv(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vv(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vv(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vv(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs(vuint32m4_t vd, vuint32m4_t 
vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vv(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vv(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vv(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vv(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vv(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +[[overloaded-]] +==== Vector AES-128 Forward KeySchedule generation + +[,c] +---- +vuint32mf2_t __riscv_vaeskf1(vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1(vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1(vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1(vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1(vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vaeskf2(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t __riscv_vaeskf2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vaeskf2(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vaeskf2(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); +---- + +[[overloaded-]] +==== Vector AES round zero + +[,c] +---- +vuint32mf2_t __riscv_vaesz(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t 
__riscv_vaesz(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +---- + +=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash + +[[overloaded-]] +==== Vector SHA-2 message schedule + +[,c] +---- +vuint32mf2_t __riscv_vsha2ms(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsha2ms(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsha2ms(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsha2ms(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint64m1_t __riscv_vsha2ms(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vsha2ms(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vsha2ms(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vsha2ms(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +---- + +[[overloaded-]] +==== Vector SHA-2 two rounds of compression + +[,c] +---- +vuint32mf2_t __riscv_vsha2ch(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsha2ch(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsha2ch(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsha2ch(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint64m1_t __riscv_vsha2ch(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vsha2ch(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vsha2ch(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vsha2ch(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint32mf2_t __riscv_vsha2cl(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsha2cl(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsha2cl(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsha2cl(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint64m1_t __riscv_vsha2cl(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vsha2cl(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vsha2cl(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vsha2cl(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +---- + +=== Zvksed - ShangMi Suite: SM4 Block Cipher + +[[overloaded-]] +==== Vector SM4 KeyExpansion + +[,c] +---- +vuint32mf2_t __riscv_vsm4k(vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k(vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k(vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k(vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k(vuint32m8_t vs2, size_t uimm, size_t vl); +---- + +[[overloaded-]] +==== Vector SM4 Rounds + +[,c] +---- +vuint32mf2_t __riscv_vsm4r_vv(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); 
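+// The `.vs` overloads that follow mirror the Zvksed definition of
+// `vsm4r.vs`: the round key is taken from the first element group of `vs2`
+// and applied to every element group of `vd`, which is why each `vd` type
+// pairs with every equal-or-narrower `vs2` type. A minimal usage sketch,
+// assuming `state` and `rkey` are vuint32m1_t values that each hold one
+// element group and `vl` covers it:
+//   state = __riscv_vsm4r_vs(state, rkey, vl);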
+vuint32m2_t __riscv_vsm4r_vs(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +=== Zvksh - ShangMi Suite: SM3 Secure Hash + +[[overloaded-]] +==== Vector SM3 Message Expansion + +[,c] +---- +vuint32mf2_t __riscv_vsm3me(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsm3me(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsm3me(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsm3me(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsm3me(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +---- + +[[overloaded-]] +==== Vector SM3 Compression + +[,c] +---- +vuint32mf2_t __riscv_vsm3c(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vsm3c(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t __riscv_vsm3c(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vsm3c(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vsm3c(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); +---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc index c32b967ed..48d6c78bd 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -6,95 +6,135 @@ [,c] ---- -vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn (vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn (vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn (vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn (vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn (vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t 
__riscv_vandn (vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn (vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn (vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn (vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn (vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn (vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn (vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn (vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn (vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn (vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn (vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn (vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn (vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn (vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn (vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn (vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn (vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn(vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn(vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn(vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn(vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn(vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn(vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn(vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn(vuint16mf4_t vs2, uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn(vuint16mf2_t vs2, 
vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn(vuint16mf2_t vs2, uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn(vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn(vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn(vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn(vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn(vuint32mf2_t vs2, uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn(vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn(vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn(vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn(vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn(vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn(vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn(vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn(vuint64m8_t vs2, uint64_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vandn (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn (vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn (vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn (vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn (vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn (vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn (vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn (vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn (vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn (vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn 
(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn (vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn (vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn (vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn (vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn (vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn (vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn (vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn (vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn (vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vandn(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, + size_t vl); +vuint8mf4_t __riscv_vandn(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vandn(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, + size_t vl); +vuint8mf2_t __riscv_vandn(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vandn(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, + size_t vl); +vuint8m1_t __riscv_vandn(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vandn(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vandn(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vandn(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vandn(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vandn(vbool64_t vm, vuint16mf4_t vs2, 
uint16_t rs1, + size_t vl); +vuint16mf2_t __riscv_vandn(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vandn(vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, + size_t vl); +vuint16m1_t __riscv_vandn(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vandn(vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, + size_t vl); +vuint16m2_t __riscv_vandn(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vandn(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, + size_t vl); +vuint16m4_t __riscv_vandn(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vandn(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, + size_t vl); +vuint16m8_t __riscv_vandn(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vandn(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, + size_t vl); +vuint32mf2_t __riscv_vandn(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vandn(vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, + size_t vl); +vuint32m1_t __riscv_vandn(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vandn(vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, + size_t vl); +vuint32m2_t __riscv_vandn(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vandn(vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, + size_t vl); +vuint32m4_t __riscv_vandn(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vandn(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, + size_t vl); +vuint32m8_t __riscv_vandn(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vandn(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, + size_t vl); +vuint64m1_t __riscv_vandn(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vandn(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vandn(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vandn(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vandn(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vandn(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vandn(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vandn(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl); ---- [[overloaded-]] @@ -102,139 +142,139 @@ vuint64m8_t __riscv_vandn (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl [,c] ---- -vuint8mf8_t __riscv_vbrev (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev 
(vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8 (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8 (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8 (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8 (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8 (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8 (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8 (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8 (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8 (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8 (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8 (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8 (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8 (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8 (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8 (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8 (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8 (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8 (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8 (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8 (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8 (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8 (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8 (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8 (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8 (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8 (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8 (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8 (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev(vuint16m8_t vs2, 
size_t vl); +vuint32mf2_t __riscv_vbrev(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev(vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8(vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8(vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev (vbool64_t vm, 
vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev (vbool8_t vm, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8 (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8 (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8 (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8 (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8 (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8 (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8 (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8 (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8 (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8 (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8 (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8 (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8 (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8 (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8 (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8 (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8 (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8 (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8 (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8 (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8 (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8 (vbool8_t vm, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8 (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8 (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8 (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8 (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8 (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8 (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8 (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8 (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8 (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8 (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8 (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8 (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8 (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8 (vbool64_t vm, 
vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8 (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8 (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8 (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8 (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8 (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8 (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8 (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8 (vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev(vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t 
__riscv_vbrev8(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8(vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8(vbool8_t vm, vuint64m8_t vs2, size_t vl); ---- [[overloaded-]] @@ -242,95 +282,95 @@ vuint64m8_t __riscv_vrev8 (vbool8_t vm, vuint64m8_t vs2, size_t vl); [,c] ---- -vuint8mf8_t __riscv_vclz (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz (vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz (vuint16m1_t vs2, 
size_t vl); -vuint16m2_t __riscv_vctz (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vclz(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz(vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz(vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz(vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz(vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz(vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz(vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz(vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz(vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz(vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz(vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz(vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz(vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz(vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz(vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz(vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz(vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz(vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz(vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz(vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz(vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz(vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz(vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vclz (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t 
__riscv_vclz (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz (vbool8_t vm, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz (vbool64_t vm, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz (vbool32_t vm, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz (vbool16_t vm, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz (vbool8_t vm, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz (vbool4_t vm, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz (vbool2_t vm, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz (vbool1_t vm, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz (vbool64_t vm, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz (vbool32_t vm, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz (vbool16_t vm, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz (vbool8_t vm, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz (vbool4_t vm, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz (vbool2_t vm, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz (vbool64_t vm, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz (vbool32_t vm, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz (vbool16_t vm, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz (vbool8_t vm, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz (vbool4_t vm, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz (vbool64_t vm, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz (vbool32_t vm, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz (vbool16_t vm, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz (vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vclz(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t 
__riscv_vclz(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz(vbool8_t vm, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz(vbool64_t vm, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz(vbool32_t vm, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz(vbool16_t vm, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz(vbool8_t vm, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz(vbool4_t vm, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz(vbool2_t vm, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz(vbool1_t vm, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz(vbool64_t vm, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz(vbool32_t vm, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz(vbool16_t vm, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz(vbool8_t vm, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz(vbool4_t vm, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz(vbool2_t vm, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz(vbool64_t vm, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz(vbool32_t vm, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz(vbool16_t vm, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz(vbool8_t vm, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz(vbool4_t vm, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz(vbool64_t vm, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz(vbool32_t vm, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz(vbool16_t vm, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz(vbool8_t vm, vuint64m8_t vs2, size_t vl); ---- [[overloaded-]] @@ -338,51 +378,51 @@ vuint64m8_t __riscv_vctz (vbool8_t vm, vuint64m8_t vs2, size_t vl); [,c] ---- -vuint8mf8_t __riscv_vcpop (vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop (vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop (vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop (vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop (vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop (vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop (vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop (vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop (vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop (vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop (vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop (vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop (vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop (vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop (vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop (vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop (vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop (vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop (vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop (vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop (vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop (vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vcpop(vuint8mf8_t vs2, size_t vl); +vuint8mf4_t 
__riscv_vcpop(vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vcpop(vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vcpop(vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vcpop(vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vcpop(vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vcpop(vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vcpop(vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vcpop(vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vcpop(vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vcpop(vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vcpop(vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vcpop(vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vcpop(vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vcpop(vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vcpop(vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vcpop(vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vcpop(vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vcpop(vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vcpop(vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vcpop(vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vcpop(vuint64m8_t vs2, size_t vl);
 // masked functions
-vuint8mf8_t __riscv_vcpop (vbool64_t vm, vuint8mf8_t vs2, size_t vl);
-vuint8mf4_t __riscv_vcpop (vbool32_t vm, vuint8mf4_t vs2, size_t vl);
-vuint8mf2_t __riscv_vcpop (vbool16_t vm, vuint8mf2_t vs2, size_t vl);
-vuint8m1_t __riscv_vcpop (vbool8_t vm, vuint8m1_t vs2, size_t vl);
-vuint8m2_t __riscv_vcpop (vbool4_t vm, vuint8m2_t vs2, size_t vl);
-vuint8m4_t __riscv_vcpop (vbool2_t vm, vuint8m4_t vs2, size_t vl);
-vuint8m8_t __riscv_vcpop (vbool1_t vm, vuint8m8_t vs2, size_t vl);
-vuint16mf4_t __riscv_vcpop (vbool64_t vm, vuint16mf4_t vs2, size_t vl);
-vuint16mf2_t __riscv_vcpop (vbool32_t vm, vuint16mf2_t vs2, size_t vl);
-vuint16m1_t __riscv_vcpop (vbool16_t vm, vuint16m1_t vs2, size_t vl);
-vuint16m2_t __riscv_vcpop (vbool8_t vm, vuint16m2_t vs2, size_t vl);
-vuint16m4_t __riscv_vcpop (vbool4_t vm, vuint16m4_t vs2, size_t vl);
-vuint16m8_t __riscv_vcpop (vbool2_t vm, vuint16m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vcpop (vbool64_t vm, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vcpop (vbool32_t vm, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vcpop (vbool16_t vm, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vcpop (vbool8_t vm, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vcpop (vbool4_t vm, vuint32m8_t vs2, size_t vl);
-vuint64m1_t __riscv_vcpop (vbool64_t vm, vuint64m1_t vs2, size_t vl);
-vuint64m2_t __riscv_vcpop (vbool32_t vm, vuint64m2_t vs2, size_t vl);
-vuint64m4_t __riscv_vcpop (vbool16_t vm, vuint64m4_t vs2, size_t vl);
-vuint64m8_t __riscv_vcpop (vbool8_t vm, vuint64m8_t vs2, size_t vl);
+vuint8mf8_t __riscv_vcpop(vbool64_t vm, vuint8mf8_t vs2, size_t vl);
+vuint8mf4_t __riscv_vcpop(vbool32_t vm, vuint8mf4_t vs2, size_t vl);
+vuint8mf2_t __riscv_vcpop(vbool16_t vm, vuint8mf2_t vs2, size_t vl);
+vuint8m1_t __riscv_vcpop(vbool8_t vm, vuint8m1_t vs2, size_t vl);
+vuint8m2_t __riscv_vcpop(vbool4_t vm, vuint8m2_t vs2, size_t vl);
+vuint8m4_t __riscv_vcpop(vbool2_t vm, vuint8m4_t vs2, size_t vl);
+vuint8m8_t __riscv_vcpop(vbool1_t vm, vuint8m8_t vs2, size_t vl);
+vuint16mf4_t __riscv_vcpop(vbool64_t vm, vuint16mf4_t vs2, size_t vl);
+vuint16mf2_t __riscv_vcpop(vbool32_t vm, vuint16mf2_t vs2, size_t vl);
+vuint16m1_t __riscv_vcpop(vbool16_t vm, vuint16m1_t vs2, size_t vl);
+vuint16m2_t __riscv_vcpop(vbool8_t vm, vuint16m2_t vs2, size_t vl);
+vuint16m4_t __riscv_vcpop(vbool4_t vm, vuint16m4_t vs2, size_t vl);
+vuint16m8_t __riscv_vcpop(vbool2_t vm, vuint16m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vcpop(vbool64_t vm, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vcpop(vbool32_t vm, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vcpop(vbool16_t vm, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vcpop(vbool8_t vm, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vcpop(vbool4_t vm, vuint32m8_t vs2, size_t vl);
+vuint64m1_t __riscv_vcpop(vbool64_t vm, vuint64m1_t vs2, size_t vl);
+vuint64m2_t __riscv_vcpop(vbool32_t vm, vuint64m2_t vs2, size_t vl);
+vuint64m4_t __riscv_vcpop(vbool16_t vm, vuint64m4_t vs2, size_t vl);
+vuint64m8_t __riscv_vcpop(vbool8_t vm, vuint64m8_t vs2, size_t vl);
 ----
 [[overloaded-]]
@@ -390,183 +430,225 @@ vuint64m8_t __riscv_vcpop (vbool8_t vm, vuint64m8_t vs2, size_t vl);
 [,c]
 ----
-vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vrol (vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vrol (vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vrol (vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vrol (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vrol (vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vrol (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vrol (vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vrol (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vrol (vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vrol (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vrol (vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vrol (vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vrol (vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vrol (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vrol (vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vrol (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vrol (vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vrol (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vrol (vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vrol (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vrol (vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vrol (vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vrol (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vrol (vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vrol (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vrol (vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vrol (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vrol (vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vrol (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vrol (vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vrol (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vrol (vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vrol (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vrol (vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vrol (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vrol (vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vrol (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vrol (vuint64m8_t vs2, size_t rs1, size_t vl);
-vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vror (vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vror (vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vror (vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vror (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vror (vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vror (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vror (vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vror (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vror (vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vror (vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vror (vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vror (vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vror (vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vror (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vror (vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vror (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vror (vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vror (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vror (vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vror (vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vror (vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vror (vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vror (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vror (vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vror (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vror (vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vror (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vror (vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vror (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vror (vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vror (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vror (vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vror (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vror (vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vror (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vror (vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vror (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vror (vuint64m8_t vs2, size_t rs1, size_t vl);
+vuint8mf8_t __riscv_vrol(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vrol(vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vrol(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vrol(vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vrol(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vrol(vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vrol(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vrol(vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vrol(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vrol(vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint8m4_t __riscv_vrol(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vrol(vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vrol(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vrol(vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vrol(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint16mf4_t __riscv_vrol(vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vrol(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint16mf2_t __riscv_vrol(vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vrol(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vrol(vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vrol(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vrol(vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vrol(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vrol(vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vrol(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vrol(vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vrol(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32mf2_t __riscv_vrol(vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vrol(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vrol(vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vrol(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vrol(vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vrol(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vrol(vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vrol(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vrol(vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vrol(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vrol(vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vrol(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vrol(vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vrol(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vrol(vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vrol(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vrol(vuint64m8_t vs2, size_t rs1, size_t vl);
+vuint8mf8_t __riscv_vror(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint8mf8_t __riscv_vror(vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vror(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint8mf4_t __riscv_vror(vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vror(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint8mf2_t __riscv_vror(vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vror(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vror(vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vror(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vror(vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint8m4_t __riscv_vror(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vror(vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vror(vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vror(vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vror(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint16mf4_t __riscv_vror(vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vror(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint16mf2_t __riscv_vror(vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vror(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint16m1_t __riscv_vror(vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vror(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint16m2_t __riscv_vror(vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vror(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint16m4_t __riscv_vror(vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vror(vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
+vuint16m8_t __riscv_vror(vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vror(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint32mf2_t __riscv_vror(vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vror(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint32m1_t __riscv_vror(vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vror(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint32m2_t __riscv_vror(vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vror(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint32m4_t __riscv_vror(vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vror(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
+vuint32m8_t __riscv_vror(vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vror(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vror(vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vror(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vror(vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vror(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vror(vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vror(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vror(vuint64m8_t vs2, size_t rs1, size_t vl);
 // masked functions
-vuint8mf8_t __riscv_vrol (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vrol (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vrol (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vrol (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vrol (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vrol (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vrol (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vrol (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vrol (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vrol (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vrol (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vrol (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vrol (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vrol (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vrol (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vrol (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vrol (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vrol (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vrol (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vrol (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vrol (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vrol (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vrol (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vrol (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vrol (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vrol (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vrol (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vrol (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vrol (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vrol (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vrol (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vrol (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vrol (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vrol (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vrol (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vrol (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vrol (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vrol (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vrol (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vrol (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vrol (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vrol (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vrol (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vrol (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl);
-vuint8mf8_t __riscv_vror (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint8mf8_t __riscv_vror (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint8mf4_t __riscv_vror (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint8mf4_t __riscv_vror (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint8mf2_t __riscv_vror (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint8mf2_t __riscv_vror (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint8m1_t __riscv_vror (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint8m1_t __riscv_vror (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint8m2_t __riscv_vror (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint8m2_t __riscv_vror (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint8m4_t __riscv_vror (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint8m4_t __riscv_vror (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint8m8_t __riscv_vror (vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
-vuint8m8_t __riscv_vror (vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl);
-vuint16mf4_t __riscv_vror (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint16mf4_t __riscv_vror (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vror (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint16mf2_t __riscv_vror (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vror (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint16m1_t __riscv_vror (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vror (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint16m2_t __riscv_vror (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vror (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint16m4_t __riscv_vror (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vror (vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl);
-vuint16m8_t __riscv_vror (vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vror (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32mf2_t __riscv_vror (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vror (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m1_t __riscv_vror (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vror (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m2_t __riscv_vror (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vror (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m4_t __riscv_vror (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vror (vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32m8_t __riscv_vror (vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vror (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vror (vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vror (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vror (vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vror (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vror (vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vror (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vror (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl);
+vuint8mf8_t __riscv_vrol(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1,
+                         size_t vl);
+vuint8mf8_t __riscv_vrol(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vrol(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1,
+                         size_t vl);
+vuint8mf4_t __riscv_vrol(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vrol(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1,
+                         size_t vl);
+vuint8mf2_t __riscv_vrol(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vrol(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vrol(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vrol(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vrol(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint8m4_t __riscv_vrol(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vrol(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vrol(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vrol(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vrol(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1,
+                          size_t vl);
+vuint16mf4_t __riscv_vrol(vbool64_t vm, vuint16mf4_t vs2, size_t rs1,
+                          size_t vl);
+vuint16mf2_t __riscv_vrol(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1,
+                          size_t vl);
+vuint16mf2_t __riscv_vrol(vbool32_t vm, vuint16mf2_t vs2, size_t rs1,
+                          size_t vl);
+vuint16m1_t __riscv_vrol(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1,
+                         size_t vl);
+vuint16m1_t __riscv_vrol(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vrol(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1,
+                         size_t vl);
+vuint16m2_t __riscv_vrol(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vrol(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1,
+                         size_t vl);
+vuint16m4_t __riscv_vrol(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vrol(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1,
+                         size_t vl);
+vuint16m8_t __riscv_vrol(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vrol(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1,
+                          size_t vl);
+vuint32mf2_t __riscv_vrol(vbool64_t vm, vuint32mf2_t vs2, size_t rs1,
+                          size_t vl);
+vuint32m1_t __riscv_vrol(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1,
+                         size_t vl);
+vuint32m1_t __riscv_vrol(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vrol(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1,
+                         size_t vl);
+vuint32m2_t __riscv_vrol(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vrol(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1,
+                         size_t vl);
+vuint32m4_t __riscv_vrol(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vrol(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1,
+                         size_t vl);
+vuint32m8_t __riscv_vrol(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vrol(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1,
+                         size_t vl);
+vuint64m1_t __riscv_vrol(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vrol(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1,
+                         size_t vl);
+vuint64m2_t __riscv_vrol(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vrol(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1,
+                         size_t vl);
+vuint64m4_t __riscv_vrol(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vrol(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1,
+                         size_t vl);
+vuint64m8_t __riscv_vrol(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl);
+vuint8mf8_t __riscv_vror(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1,
+                         size_t vl);
+vuint8mf8_t __riscv_vror(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint8mf4_t __riscv_vror(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1,
+                         size_t vl);
+vuint8mf4_t __riscv_vror(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint8mf2_t __riscv_vror(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1,
+                         size_t vl);
+vuint8mf2_t __riscv_vror(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint8m1_t __riscv_vror(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint8m1_t __riscv_vror(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint8m2_t __riscv_vror(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint8m2_t __riscv_vror(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint8m4_t __riscv_vror(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint8m4_t __riscv_vror(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint8m8_t __riscv_vror(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl);
+vuint8m8_t __riscv_vror(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vror(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1,
+                          size_t vl);
+vuint16mf4_t __riscv_vror(vbool64_t vm, vuint16mf4_t vs2, size_t rs1,
+                          size_t vl);
+vuint16mf2_t __riscv_vror(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1,
+                          size_t vl);
+vuint16mf2_t __riscv_vror(vbool32_t vm, vuint16mf2_t vs2, size_t rs1,
+                          size_t vl);
+vuint16m1_t __riscv_vror(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1,
+                         size_t vl);
+vuint16m1_t __riscv_vror(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vror(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1,
+                         size_t vl);
+vuint16m2_t __riscv_vror(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vror(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1,
+                         size_t vl);
+vuint16m4_t __riscv_vror(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vror(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1,
+                         size_t vl);
+vuint16m8_t __riscv_vror(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vror(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1,
+                          size_t vl);
+vuint32mf2_t __riscv_vror(vbool64_t vm, vuint32mf2_t vs2, size_t rs1,
+                          size_t vl);
+vuint32m1_t __riscv_vror(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1,
+                         size_t vl);
+vuint32m1_t __riscv_vror(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vror(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1,
+                         size_t vl);
+vuint32m2_t __riscv_vror(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vror(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1,
+                         size_t vl);
+vuint32m4_t __riscv_vror(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vror(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1,
+                         size_t vl);
+vuint32m8_t __riscv_vror(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vror(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1,
+                         size_t vl);
+vuint64m1_t __riscv_vror(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vror(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1,
+                         size_t vl);
+vuint64m2_t __riscv_vror(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vror(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1,
+                         size_t vl);
+vuint64m4_t __riscv_vror(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vror(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1,
+                         size_t vl);
+vuint64m8_t __riscv_vror(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl);
 ----
 [[overloaded-]]
@@ -574,65 +656,85 @@ vuint64m8_t __riscv_vror (vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl);
 [,c]
 ----
-vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll (vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll (vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll (vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll (vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll (vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll (vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll (vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll (vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll (vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll (vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll (vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll (vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll (vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll (vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll (vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vwsll(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll(vuint8mf8_t vs2, size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll(vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
+vuint16mf2_t __riscv_vwsll(vuint8mf4_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vwsll(vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
+vuint16m1_t __riscv_vwsll(vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll(vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
+vuint16m2_t __riscv_vwsll(vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll(vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
+vuint16m4_t __riscv_vwsll(vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll(vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
+vuint16m8_t __riscv_vwsll(vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
+vuint32mf2_t __riscv_vwsll(vuint16mf4_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vwsll(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vwsll(vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vwsll(vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vwsll(vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll(vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vwsll(vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll(vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vwsll(vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
+vuint64m1_t __riscv_vwsll(vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vwsll(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
+vuint64m2_t __riscv_vwsll(vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
+vuint64m4_t __riscv_vwsll(vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
+vuint64m8_t __riscv_vwsll(vuint32m4_t vs2, size_t rs1, size_t vl);
 // masked functions
-vuint16mf4_t __riscv_vwsll (vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll (vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll (vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll (vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll (vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll (vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll (vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll (vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll (vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll (vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll (vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll (vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll (vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll (vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll (vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll (vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll (vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll (vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t __riscv_vwsll (vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vwsll (vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl);
-vuint32m8_t __riscv_vwsll (vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vwsll (vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl);
-vuint64m1_t __riscv_vwsll (vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint64m1_t __riscv_vwsll (vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl);
-vuint64m2_t __riscv_vwsll (vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint64m2_t __riscv_vwsll (vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl);
-vuint64m4_t __riscv_vwsll (vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint64m4_t __riscv_vwsll (vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl);
-vuint64m8_t __riscv_vwsll (vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint64m8_t __riscv_vwsll (vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint16mf4_t __riscv_vwsll(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1,
+                           size_t vl);
+vuint16mf4_t __riscv_vwsll(vbool64_t vm, vuint8mf8_t vs2, size_t rs1,
+                           size_t vl);
+vuint16mf2_t __riscv_vwsll(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1,
+                           size_t vl);
+vuint16mf2_t __riscv_vwsll(vbool32_t vm, vuint8mf4_t vs2, size_t rs1,
+                           size_t vl);
+vuint16m1_t __riscv_vwsll(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1,
+                          size_t vl);
+vuint16m1_t __riscv_vwsll(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vwsll(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1,
+                          size_t vl);
+vuint16m2_t __riscv_vwsll(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vwsll(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1,
+                          size_t vl);
+vuint16m4_t __riscv_vwsll(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vwsll(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1,
+                          size_t vl);
+vuint16m8_t __riscv_vwsll(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vwsll(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1,
+                           size_t vl);
+vuint32mf2_t __riscv_vwsll(vbool64_t vm, vuint16mf4_t vs2, size_t rs1,
+                           size_t vl);
+vuint32m1_t __riscv_vwsll(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1,
+                          size_t vl);
+vuint32m1_t __riscv_vwsll(vbool32_t vm, vuint16mf2_t vs2, size_t rs1,
+                          size_t vl);
+vuint32m2_t __riscv_vwsll(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1,
+                          size_t vl);
+vuint32m2_t __riscv_vwsll(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vwsll(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1,
+                          size_t vl);
+vuint32m4_t __riscv_vwsll(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vwsll(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1,
+                          size_t vl);
+vuint32m8_t __riscv_vwsll(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vwsll(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1,
+                          size_t vl);
+vuint64m1_t __riscv_vwsll(vbool64_t vm, vuint32mf2_t vs2, size_t rs1,
+                          size_t vl);
+vuint64m2_t __riscv_vwsll(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1,
+                          size_t vl);
+vuint64m2_t __riscv_vwsll(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vwsll(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1,
+                          size_t vl);
+vuint64m4_t __riscv_vwsll(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1,
+                          size_t vl);
+vuint64m8_t __riscv_vwsll(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl);
 ----
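For context, a minimal usage sketch of the overloaded Zvbb intrinsics listed in the hunks above, assuming a toolchain that implements them (for example, compiled with `-march=rv64gcv_zvbb`); the function and variable names are illustrative only, not part of the patch:

[,c]
----
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// Rotate each 32-bit element right by 7, then count the set bits per
// element. __riscv_vror resolves its vector-scalar overload from the
// argument types; __riscv_vcpop is the element-wise population count.
void ror7_then_popcount(const uint32_t *src, uint32_t *dst, size_t n) {
  for (size_t vl; n > 0; n -= vl, src += vl, dst += vl) {
    vl = __riscv_vsetvl_e32m1(n);
    vuint32m1_t v = __riscv_vle32_v_u32m1(src, vl);
    v = __riscv_vror(v, 7, vl);
    v = __riscv_vcpop(v, vl);
    __riscv_vse32_v_u32m1(dst, v, vl);
  }
}
----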
diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
index 174233382..46e90836e 100644
--- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
+++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
@@ -6,37 +6,53 @@
 [,c]
 ----
-vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul (vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul (vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul (vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul (vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh (vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh (vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh (vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh (vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmul(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul(vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul(vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul(vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul(vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh(vuint64m1_t vs2, uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh(vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmulh(vuint64m2_t vs2, uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmulh(vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmulh(vuint64m4_t vs2, uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmulh(vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmulh(vuint64m8_t vs2, uint64_t rs1, size_t vl);
 // masked functions
-vuint64m1_t __riscv_vclmul (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh (vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh (vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh (vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh (vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh (vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh (vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh (vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh (vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmul(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1,
+                           size_t vl);
+vuint64m1_t __riscv_vclmul(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1,
+                           size_t vl);
+vuint64m2_t __riscv_vclmul(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1,
+                           size_t vl);
+vuint64m2_t __riscv_vclmul(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1,
+                           size_t vl);
+vuint64m4_t __riscv_vclmul(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1,
+                           size_t vl);
+vuint64m4_t __riscv_vclmul(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1,
+                           size_t vl);
+vuint64m8_t __riscv_vclmul(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1,
+                           size_t vl);
+vuint64m8_t __riscv_vclmul(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1,
+                           size_t vl);
+vuint64m1_t __riscv_vclmulh(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1,
+                            size_t vl);
+vuint64m1_t __riscv_vclmulh(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1,
+                            size_t vl);
+vuint64m2_t __riscv_vclmulh(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1,
+                            size_t vl);
+vuint64m2_t __riscv_vclmulh(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1,
+                            size_t vl);
+vuint64m4_t __riscv_vclmulh(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1,
+                            size_t vl);
+vuint64m4_t __riscv_vclmulh(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1,
+                            size_t vl);
+vuint64m8_t __riscv_vclmulh(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1,
+                            size_t vl);
+vuint64m8_t __riscv_vclmulh(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1,
+                            size_t vl);
 ----
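Again for context only, a hedged sketch of driving the overloaded `__riscv_vclmul`/`__riscv_vclmulh` pair from the Zvbc listing above, assuming a toolchain with Zvbc intrinsic support; each 64-bit element is carry-less multiplied by a fixed key and the 128-bit product split into low and high halves, as in GHASH- or CRC-style folding:

[,c]
----
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// vclmul yields the low 64 bits of each carry-less product,
// vclmulh the high 64 bits; both use the vector-scalar overload here.
void clmul_split(const uint64_t *a, uint64_t key, uint64_t *lo, uint64_t *hi,
                 size_t n) {
  for (size_t vl; n > 0; n -= vl, a += vl, lo += vl, hi += vl) {
    vl = __riscv_vsetvl_e64m1(n);
    vuint64m1_t va = __riscv_vle64_v_u64m1(a, vl);
    __riscv_vse64_v_u64m1(lo, __riscv_vclmul(va, key, vl), vl);
    __riscv_vse64_v_u64m1(hi, __riscv_vclmulh(va, key, vl), vl);
  }
}
----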
diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
index 3b38c6571..b355c3332 100644
--- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
+++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
@@ -6,14 +6,19 @@
 [,c]
 ----
-vuint32mf2_t __riscv_vghsh (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vghsh (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vghsh (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vghsh (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vghsh (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32mf2_t __riscv_vgmul (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vgmul (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vgmul (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vgmul (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vgmul (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vghsh(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1,
+                           size_t vl);
+vuint32m1_t __riscv_vghsh(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1,
+                          size_t vl);
+vuint32m2_t __riscv_vghsh(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1,
+                          size_t vl);
+vuint32m4_t __riscv_vghsh(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1,
+                          size_t vl);
+vuint32m8_t __riscv_vghsh(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1,
+                          size_t vl);
+vuint32mf2_t __riscv_vgmul(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vgmul(vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vgmul(vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vgmul(vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vgmul(vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 ----
diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc
index 407f673d9..11cdeb958 100644
--- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc
+++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc
@@ -6,44 +6,44 @@
 [,c]
 ----
-vuint32mf2_t __riscv_vaesef_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesef_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesef_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesef_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesef_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesef_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesef_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesef_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesef_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesem_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesem_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesem_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesem_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesem_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesem_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesem_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesem_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesem_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vaesef_vv(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32mf2_t __riscv_vaesef_vs(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesef_vs(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesef_vs(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesef_vs(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesef_vs(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesef_vv(vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesef_vs(vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesef_vs(vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesef_vs(vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesef_vs(vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesef_vv(vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesef_vs(vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesef_vs(vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesef_vs(vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesef_vv(vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesef_vs(vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesef_vs(vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesef_vv(vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vaesem_vv(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32mf2_t __riscv_vaesem_vs(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesem_vs(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesem_vs(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesem_vs(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesem_vs(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesem_vv(vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesem_vs(vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesem_vs(vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesem_vs(vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesem_vs(vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesem_vv(vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesem_vs(vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesem_vs(vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesem_vs(vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesem_vv(vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesem_vs(vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesem_vs(vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesem_vv(vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 ----
 [[overloaded-]]
@@ -51,44 +51,44 @@ vuint32m8_t __riscv_vaesem_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 [,c]
 ----
-vuint32mf2_t __riscv_vaesdf_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesdf_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdf_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdf_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdf_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdf_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdf_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdf_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdf_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesdm_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32mf2_t __riscv_vaesdm_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdm_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesdm_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdm_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesdm_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdm_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesdm_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesdm_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vaesdf_vv(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32mf2_t __riscv_vaesdf_vs(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesdf_vs(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesdf_vs(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesdf_vs(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesdf_vs(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesdf_vv(vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesdf_vs(vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesdf_vs(vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesdf_vs(vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesdf_vs(vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesdf_vv(vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesdf_vs(vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesdf_vs(vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesdf_vs(vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesdf_vv(vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesdf_vs(vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesdf_vs(vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesdf_vv(vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vaesdm_vv(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32mf2_t __riscv_vaesdm_vs(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesdm_vs(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesdm_vs(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesdm_vs(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesdm_vs(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesdm_vv(vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesdm_vs(vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesdm_vs(vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesdm_vs(vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesdm_vs(vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesdm_vv(vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesdm_vs(vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesdm_vs(vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesdm_vs(vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesdm_vv(vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesdm_vs(vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesdm_vs(vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesdm_vv(vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 ----
 [[overloaded-]]
@@ -96,16 +96,21 @@ vuint32m8_t __riscv_vaesdm_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
 [,c]
 ----
-vuint32mf2_t __riscv_vaeskf1 (vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vaeskf1 (vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vaeskf1 (vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vaeskf1 (vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vaeskf1 (vuint32m8_t vs2, size_t uimm, size_t vl);
-vuint32mf2_t __riscv_vaeskf2 (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl);
-vuint32m1_t __riscv_vaeskf2 (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl);
-vuint32m2_t __riscv_vaeskf2 (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl);
-vuint32m4_t __riscv_vaeskf2 (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl);
-vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl);
+vuint32mf2_t __riscv_vaeskf1(vuint32mf2_t vs2, size_t uimm, size_t vl);
+vuint32m1_t __riscv_vaeskf1(vuint32m1_t vs2, size_t uimm, size_t vl);
+vuint32m2_t __riscv_vaeskf1(vuint32m2_t vs2, size_t uimm, size_t vl);
+vuint32m4_t __riscv_vaeskf1(vuint32m4_t vs2, size_t uimm, size_t vl);
+vuint32m8_t __riscv_vaeskf1(vuint32m8_t vs2, size_t uimm, size_t vl);
+vuint32mf2_t __riscv_vaeskf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm,
+                             size_t vl);
+vuint32m1_t __riscv_vaeskf2(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm,
+                            size_t vl);
+vuint32m2_t __riscv_vaeskf2(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm,
+                            size_t vl);
+vuint32m4_t __riscv_vaeskf2(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm,
+                            size_t vl);
+vuint32m8_t __riscv_vaeskf2(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm,
+                            size_t vl);
 ----
 [[overloaded-]]
@@ -113,18 +113,18 @@ vuint32m8_t __riscv_vaeskf2 (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_
 [,c]
 ----
-vuint32mf2_t __riscv_vaesz (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vaesz (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vaesz (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vaesz (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vaesz (vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
+vuint32mf2_t __riscv_vaesz(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesz(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesz(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesz(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesz(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl);
+vuint32m1_t __riscv_vaesz(vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesz(vuint32m2_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesz(vuint32m4_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesz(vuint32m8_t vd, vuint32m1_t vs2, size_t vl);
+vuint32m2_t __riscv_vaesz(vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesz(vuint32m4_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesz(vuint32m8_t vd, vuint32m2_t vs2, size_t vl);
+vuint32m4_t __riscv_vaesz(vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
+vuint32m8_t __riscv_vaesz(vuint32m8_t vd, vuint32m4_t vs2, size_t vl);
 ----
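As one more contextual sketch (assuming Zvkned intrinsic support, with round keys already expanded by other means), the `_vs` forms above apply the round key held in element group 0 of `vs2` to every element group of `vd`; `vl` counts 32-bit elements, so one AES block is an element group of four:

[,c]
----
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

// Apply one AES middle round (vaesem) followed by the final round (vaesef)
// to a single 128-bit state. State and round keys are four 32-bit words.
void aes_last_two_rounds(uint32_t state[4], const uint32_t rk_mid[4],
                         const uint32_t rk_final[4]) {
  size_t vl = 4; /* one element group = one 128-bit block */
  vuint32m1_t vs = __riscv_vle32_v_u32m1(state, vl);
  vuint32m1_t km = __riscv_vle32_v_u32m1(rk_mid, vl);
  vuint32m1_t kf = __riscv_vle32_v_u32m1(rk_final, vl);
  vs = __riscv_vaesem_vs(vs, km, vl); /* middle round: includes MixColumns */
  vs = __riscv_vaesef_vs(vs, kf, vl); /* final round: no MixColumns */
  __riscv_vse32_v_u32m1(state, vs, vl);
}
----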
(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2ch (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2ch (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2ch (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2ch (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2ch (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2ch (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2ch (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2ch (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint32mf2_t __riscv_vsha2cl (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2cl (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2cl (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2cl (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2cl (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2cl (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2cl (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2cl (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2cl (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2ch(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsha2ch(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsha2ch(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsha2ch(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint64m1_t __riscv_vsha2ch(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vsha2ch(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vsha2ch(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vsha2ch(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint32mf2_t __riscv_vsha2cl(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsha2cl(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsha2cl(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsha2cl(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint64m1_t __riscv_vsha2cl(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vsha2cl(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vsha2cl(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vsha2cl(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); ---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc index 
f5ad8d8fa..4d67aeef2 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc @@ -6,11 +6,11 @@ [,c] ---- -vuint32mf2_t __riscv_vsm4k (vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm4k (vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm4k (vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm4k (vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm4k (vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vsm4k(vuint32mf2_t vs2, size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k(vuint32m1_t vs2, size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k(vuint32m2_t vs2, size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k(vuint32m4_t vs2, size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k(vuint32m8_t vs2, size_t uimm, size_t vl); ---- [[overloaded-]] @@ -18,23 +18,23 @@ vuint32m8_t __riscv_vsm4k (vuint32m8_t vs2, size_t uimm, size_t vl); [,c] ---- -vuint32mf2_t __riscv_vsm4r_vv (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vsm4r_vs (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vv (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vv (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vv (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vv (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vv(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t 
__riscv_vsm4r_vs(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ---- diff --git a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc index ddf0b441c..e576b0ec4 100644 --- a/auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc +++ b/auto-generated/vector-crypto/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc @@ -6,11 +6,11 @@ [,c] ---- -vuint32mf2_t __riscv_vsm3me (vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsm3me (vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsm3me (vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsm3me (vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsm3me (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsm3me(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsm3me(vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsm3me(vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsm3me(vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsm3me(vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); ---- [[overloaded-]] @@ -18,9 +18,14 @@ vuint32m8_t __riscv_vsm3me (vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); [,c] ---- -vuint32mf2_t __riscv_vsm3c (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm3c (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm3c (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm3c (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm3c (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vsm3c(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vsm3c(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t __riscv_vsm3c(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vsm3c(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vsm3c(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.adoc new file mode 100644 index 000000000..1aa6eba69 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs.adoc @@ -0,0 +1,3245 @@ + +=== Zvbb - Vector Bit-manipulation used in Cryptography + +[[policy-variant-]] +==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not + +[,c] +---- +vuint8mf8_t __riscv_vandn_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl); 
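+// Illustrative sketch, not part of the generated listing: with the `_tu`
+// (tail-undisturbed) policy, elements [0, vl) of the result hold
+// vs2[i] & ~vs1[i] and the tail elements [vl, VLMAX) are taken from vd,
+// e.g., hypothetically:
+//   vuint8m1_t r = __riscv_vandn_vv_u8m1_tu(vd, vs2, vs1, vl);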
+vuint8mf2_t __riscv_vandn_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t 
__riscv_vandn_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl); +vuint32mf2_t 
__riscv_vandn_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, + size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t 
__riscv_vandn_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, 
uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl); +vuint32m1_t 
__riscv_vandn_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl); +---- + +[[policy-variant-]] +==== Vector Basic Bit-manipulation - Reverse + +[,c] +---- +vuint8mf8_t __riscv_vbrev_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t 
__riscv_vbrev_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); 
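+// Illustrative sketch, not part of the generated listing: `vbrev` reverses
+// the bits of each element, `vbrev8` reverses the bits within each byte,
+// and `vrev8` reverses the byte order within each element, so a
+// hypothetical 32-bit endianness swap under the `_tu` policy is:
+//   vuint32m1_t swapped = __riscv_vrev8_v_u32m1_tu(vd, vs2, vl);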
+vuint32m8_t __riscv_vrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t 
__riscv_vbrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_tumu(vbool64_t 
vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + 
vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t 
vl); +vuint8m4_t __riscv_vbrev_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t 
vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +---- + +[[policy-variant-]] +==== Vector Basic Bit-manipulation - Count Bits + +[,c] +---- +vuint8mf8_t __riscv_vclz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); 
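+// Illustrative sketch, not part of the generated listing: `vclz` and `vctz`
+// count leading and trailing zero bits per element (returning SEW for a
+// zero element), e.g., hypothetically:
+//   vuint32m1_t lz = __riscv_vclz_v_u32m1_tu(vd, vs2, vl);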
+vuint16m8_t __riscv_vclz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t 
__riscv_vclz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t 
vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + 
vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); 
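+// Editorial usage sketch: the mask-undisturbed (`_mu`) variant computes
+// trailing-zero counts only where the corresponding bit of `vm` is set;
+// inactive elements keep their previous value from `vd`:
+//   vuint32m1_t res = __riscv_vctz_v_u32m1_mu(vm, vd, vs2, vl);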
+vuint16mf4_t __riscv_vctz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +---- + +[[policy-variant-]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t 
vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t 
__riscv_vcpop_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +---- + +[[policy-variant-]] +==== Vector Bit-manipulation used in Cryptography - Rotate + +[,c] +---- +vuint8mf8_t __riscv_vrol_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + 
vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tu(vuint64m4_t vd, 
vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t 
__riscv_vror_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + 
vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tum(vbool8_t vm, 
vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tum(vbool32_t 
vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); 
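+// Editorial usage sketch: the `_tumu` variant rotates the active elements of
+// `vs2` left by `rs1` (only the low log2(SEW) bits of `rs1` are used) and
+// preserves both inactive and tail elements from `vd`:
+//   vuint32m1_t res = __riscv_vrol_vx_u32m1_tumu(vm, vd, vs2, rs1, vl);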
+vuint16m8_t __riscv_vrol_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + 
size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t 
__riscv_vror_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t 
__riscv_vrol_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, 
+ vuint16mf2_t vs2, vuint16mf2_t vs1,
+ size_t vl);
+vuint16mf2_t __riscv_vror_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+ vuint16mf2_t vs2, size_t rs1, size_t vl);
+vuint16m1_t __riscv_vror_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd,
+ vuint16m1_t vs2, vuint16m1_t vs1,
+ size_t vl);
+vuint16m1_t __riscv_vror_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd,
+ vuint16m1_t vs2, size_t rs1, size_t vl);
+vuint16m2_t __riscv_vror_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd,
+ vuint16m2_t vs2, vuint16m2_t vs1,
+ size_t vl);
+vuint16m2_t __riscv_vror_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd,
+ vuint16m2_t vs2, size_t rs1, size_t vl);
+vuint16m4_t __riscv_vror_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd,
+ vuint16m4_t vs2, vuint16m4_t vs1,
+ size_t vl);
+vuint16m4_t __riscv_vror_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd,
+ vuint16m4_t vs2, size_t rs1, size_t vl);
+vuint16m8_t __riscv_vror_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd,
+ vuint16m8_t vs2, vuint16m8_t vs1,
+ size_t vl);
+vuint16m8_t __riscv_vror_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd,
+ vuint16m8_t vs2, size_t rs1, size_t vl);
+vuint32mf2_t __riscv_vror_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+ vuint32mf2_t vs2, vuint32mf2_t vs1,
+ size_t vl);
+vuint32mf2_t __riscv_vror_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+ vuint32mf2_t vs2, size_t rs1, size_t vl);
+vuint32m1_t __riscv_vror_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+ vuint32m1_t vs2, vuint32m1_t vs1,
+ size_t vl);
+vuint32m1_t __riscv_vror_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+ vuint32m1_t vs2, size_t rs1, size_t vl);
+vuint32m2_t __riscv_vror_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+ vuint32m2_t vs2, vuint32m2_t vs1,
+ size_t vl);
+vuint32m2_t __riscv_vror_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+ vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vror_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+ vuint32m4_t vs2, vuint32m4_t vs1,
+ size_t vl);
+vuint32m4_t __riscv_vror_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+ vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vror_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+ vuint32m8_t vs2, vuint32m8_t vs1,
+ size_t vl);
+vuint32m8_t __riscv_vror_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+ vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vror_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+ vuint64m1_t vs2, vuint64m1_t vs1,
+ size_t vl);
+vuint64m1_t __riscv_vror_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+ vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vror_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+ vuint64m2_t vs2, vuint64m2_t vs1,
+ size_t vl);
+vuint64m2_t __riscv_vror_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+ vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vror_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+ vuint64m4_t vs2, vuint64m4_t vs1,
+ size_t vl);
+vuint64m4_t __riscv_vror_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+ vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vror_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+ vuint64m8_t vs2, vuint64m8_t vs1,
+ size_t vl);
+vuint64m8_t __riscv_vror_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+ vuint64m8_t vs2, size_t rs1, size_t vl);
+----
+
+[[policy-variant-]]
+==== Vector Basic Bit-manipulation - Widening Shift
+
+[,c]
+----
+vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2,
+ vuint8mf8_t vs1, size_t vl);
+vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2,
+ size_t rs1, size_t vl);
+vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2,
+ vuint8mf4_t vs1, size_t vl);
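+// Editorial usage sketch: vwsll widens, so the destination EEW is twice the
+// source EEW; shifting 16-bit elements (`vuint16mf2_t`) left by `rs1` yields
+// a 32-bit result, with tail elements taken from `vd` under the `_tu` policy:
+//   vuint32m1_t res = __riscv_vwsll_vx_u32m1_tu(vd, vs2, rs1, vl);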
+vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t 
vs1, + size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); 
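+// Illustrative note, not part of the generated listing: under `_tumu`
+// (tail-undisturbed, mask-undisturbed) both tail elements and inactive
+// (masked-off) elements keep the corresponding values of vd; only active
+// elements receive the widened result vs2 << vs1. Hypothetical usage,
+// assuming `vm`, `vd`, and vuint8mf2_t operands `a8`, `b8` are in scope:
+//   vuint16m1_t r = __riscv_vwsll_vv_u16m1_tumu(vm, vd, a8, b8, vl);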
+vuint16m2_t __riscv_vwsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); 
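+// Illustrative note, not part of the generated listing: `_mu`
+// (mask-undisturbed with an agnostic tail) keeps inactive elements from vd
+// while tail elements may be overwritten. Hypothetical usage, assuming
+// `vm`, `vd`, a vuint8m2_t `src8m2`, and a shift amount `shamt`:
+//   vuint16m4_t r = __riscv_vwsll_vx_u16m4_mu(vm, vd, src8m2, shamt, vl);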
+vuint16m2_t __riscv_vwsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +---- + +=== Zvbc - Vector Carryless Multiplication + +[[policy-variant-]] +==== Vector Carryless Multiplication + +[,c] +---- +vuint64m1_t __riscv_vclmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tu(vuint64m8_t vd, 
vuint64m8_t vs2, + uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); 
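+// Illustrative sketch, not part of the generated listing: vclmul returns
+// the low 64 bits and vclmulh the high 64 bits of each 128-bit carryless
+// product, so a full widening carryless multiply pairs one call to each.
+// Hypothetical usage, assuming Zvbc is enabled and vuint64m1_t operands
+// `a`, `b` plus destinations `vd_lo`, `vd_hi` are in scope:
+//   vuint64m1_t lo = __riscv_vclmul_vv_u64m1_tu(vd_lo, a, b, vl);
+//   vuint64m1_t hi = __riscv_vclmulh_vv_u64m1_tu(vd_hi, a, b, vl);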
+vuint64m8_t __riscv_vclmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); +---- + +=== Zvkg - Vector GCM/GMAC + +[[policy-variant-]] +==== Vector GCM/GMAC + +[,c] +---- +vuint32mf2_t __riscv_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t 
__riscv_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vgmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vgmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +---- + +=== Zvkned - NIST Suite: Vector AES Block Cipher + +[[policy-variant-]] +==== Vector AES Encryption + +[,c] +---- +vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t 
__riscv_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +---- + +[[policy-variant-]] +==== Vector AES Decryption + +[,c] +---- +vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + 
size_t vl); +vuint32m2_t __riscv_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +---- + +[[policy-variant-]] +==== Vector AES-128 Forward KeySchedule generation + +[,c] +---- +vuint32mf2_t __riscv_vaeskf1_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t uimm, size_t vl); +---- + +[[policy-variant-]] +==== Vector AES round zero + +[,c] +---- +vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +---- + +=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash + +[[policy-variant-]] +==== Vector SHA-2 message schedule + +[,c] +---- +vuint32mf2_t __riscv_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t 
vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +---- + +[[policy-variant-]] +==== Vector SHA-2 two rounds of compression + +[,c] +---- +vuint32mf2_t __riscv_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +---- + +=== Zvksed - ShangMi Suite: SM4 Block Cipher + +[[policy-variant-]] +==== Vector SM4 KeyExpansion + +[,c] +---- +vuint32mf2_t __riscv_vsm4k_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t uimm, size_t vl); +---- + +[[policy-variant-]] +==== Vector SM4 Rounds + +[,c] +---- +vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t 
__riscv_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +---- + +=== Zvksh - ShangMi Suite: SM3 Secure Hash + +[[policy-variant-]] +==== Vector SM3 Message Expansion + +[,c] +---- +vuint32mf2_t __riscv_vsm3me_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsm3me_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsm3me_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsm3me_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsm3me_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +---- + +[[policy-variant-]] +==== Vector SM3 Compression + +[,c] +---- +vuint32mf2_t __riscv_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t uimm, size_t vl); +---- diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc index 4433b14fb..1f4e9675c 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -6,185 +6,455 @@ [,c] ---- -vuint8mf8_t __riscv_vandn_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t 
__riscv_vandn_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t 
vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, 
size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, 
vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tum(vbool32_t 
vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t 
rs1, + size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t 
__riscv_vandn_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, + size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_tumu(vbool8_t vm, 
vuint8m1_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_tumu(vbool64_t vm, 
vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t 
__riscv_vandn_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vandn_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vandn_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vandn_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl); +vuint8m4_t 
__riscv_vandn_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vandn_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vandn_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl); +vuint16m1_t __riscv_vandn_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vandn_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vandn_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vandn_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vandn_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vandn_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl); +vuint32m1_t __riscv_vandn_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vandn_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vandn_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vandn_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vandn_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vandn_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vandn_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + 
vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vandn_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vandn_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl); ---- [[policy-variant-]] @@ -192,273 +462,525 @@ vuint64m8_t __riscv_vandn_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t [,c] ---- -vuint8mf8_t __riscv_vbrev_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, 
size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t 
__riscv_vbrev_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); 
+vuint16m2_t __riscv_vrev8_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tum 
(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t 
__riscv_vrev8_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, 
size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t 
__riscv_vbrev_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, 
vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t 
__riscv_vbrev_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t 
__riscv_vbrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_v_u16m1_mu (vbool16_t vm, 
vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, 
vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); 
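+// ---------------------------------------------------------------------------
+// Illustrative sketch, not part of the generated prototype listing: one way
+// to use the mask-undisturbed (`_mu`) variants above, assuming these Zvbb
+// intrinsics are declared by <riscv_vector.h>. Elements where `mask` is 0
+// keep their value from `vd`; elements where `mask` is 1 are written with
+// the bit-reversed value of the corresponding element of `vs2`.
+//
+//   size_t vl = __riscv_vsetvl_e32m1(n);              // pick a vector length
+//   vuint32m1_t src = __riscv_vle32_v_u32m1(in, vl);  // load 32-bit elements
+//   vbool32_t mask = __riscv_vmsne_vx_u32m1_b32(src, 0, vl); // mask: src != 0
+//   // bit-reverse only the nonzero elements; the rest keep vd's (src's) value
+//   vuint32m1_t res = __riscv_vbrev_v_u32m1_mu(mask, src, src, vl);
+// ---------------------------------------------------------------------------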
+vuint32m4_t __riscv_vbrev_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); 
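+// ---------------------------------------------------------------------------
+// Illustrative sketch (an assumption, not taken from this listing): `vrev8`
+// reverses the order of the bytes within each element, i.e. a per-element
+// byte swap, so the `_mu` form below changes the endianness of the active
+// elements only and leaves inactive elements as they were in `vd`:
+//
+//   vuint32m1_t le = __riscv_vle32_v_u32m1(buf, vl);       // little-endian in
+//   vbool32_t m = __riscv_vmsne_vx_u32m1_b32(le, 0, vl);   // active: le != 0
+//   vuint32m1_t be = __riscv_vrev8_v_u32m1_mu(m, le, le, vl); // bswap if m set
+// ---------------------------------------------------------------------------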
+vuint16mf2_t __riscv_vrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); ---- [[policy-variant-]] @@ -466,185 +988,323 @@ vuint64m8_t __riscv_vrev8_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t v [,c] ---- -vuint8mf8_t __riscv_vclz_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_v_u8mf2_tu 
(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vclz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t 
__riscv_vctz_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vclz_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, 
vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vclz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t 
__riscv_vclz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vclz_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t 
vl); -vuint8m1_t __riscv_vclz_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); 
-vuint32m8_t __riscv_vctz_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vclz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, 
size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vclz_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); 
-vuint8mf4_t __riscv_vctz_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vclz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + 
vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vctz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vctz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); ---- [[policy-variant-]] @@ -652,97 +1312,181 @@ vuint64m8_t __riscv_vctz_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs [,c] ---- -vuint8mf8_t __riscv_vcpop_v_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_v_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_v_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_v_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_v_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop_v_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_v_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_v_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t 
vl); -vuint16mf2_t __riscv_vcpop_v_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_v_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_v_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_v_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_v_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_v_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_v_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop_v_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_v_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_v_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_v_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_v_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_v_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop_v_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vcpop_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); // masked functions -vuint8mf8_t __riscv_vcpop_v_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_v_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_v_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_v_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_v_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); 
-vuint8m4_t __riscv_vcpop_v_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_v_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_v_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_v_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_v_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_v_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_v_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_v_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_v_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_v_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop_v_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_v_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_v_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_v_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_v_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_v_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop_v_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vcpop_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, 
size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vcpop_v_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_v_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_v_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_v_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_v_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop_v_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_v_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_v_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_v_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_v_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_v_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_v_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_v_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_v_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_v_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop_v_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_v_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_v_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_v_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_v_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_v_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop_v_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vcpop_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t 
__riscv_vcpop_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vcpop_v_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_v_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_v_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_v_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_v_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop_v_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_v_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_v_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_v_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_v_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_v_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_v_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_v_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_v_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_v_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop_v_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_v_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_v_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_v_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_v_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_v_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vcpop_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t 
vl); +vuint8mf2_t __riscv_vcpop_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl); ---- [[policy-variant-]] @@ -750,361 +1494,833 @@ vuint64m8_t __riscv_vcpop_v_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t v [,c] ---- -vuint8mf8_t __riscv_vrol_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, 
size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, 
vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t 
vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t 
vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tu(vuint32mf2_t vd, 
vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); 
-vuint16mf2_t __riscv_vrol_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t 
__riscv_vror_vx_u8mf4_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, 
vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t 
rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + 
vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, 
+ vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t 
vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t 
__riscv_vror_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t 
vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + 
vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vrol_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_tumu(vbool16_t vm, 
vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vror_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vror_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vror_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vror_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vror_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vror_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vror_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t 
vl); -vuint8mf2_t __riscv_vrol_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, 
vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_vv_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_vx_u8mf8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_vv_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_vx_u8mf4_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_vv_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_vx_u8mf2_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_vv_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_vx_u8m1_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_vv_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_vx_u8m2_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_vv_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_vx_u8m4_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_vv_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_vx_u8m8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, 
vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vrol_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vrol_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vrol_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t 
__riscv_vrol_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vrol_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vrol_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vrol_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vrol_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vrol_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vrol_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vrol_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vrol_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vrol_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vrol_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vrol_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vrol_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vrol_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vrol_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); 
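+// Usage sketch (illustrative comment, not part of the auto-generated
+// listing): the _mu (mask-undisturbed) variants update only the elements
+// whose bit in vm is set; masked-off elements keep their old value from vd.
+// A masked left-rotate by a scalar amount might look like:
+//   vuint64m8_t r = __riscv_vrol_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
+//   // r[i] = vs2[i] rotated left by (rs1 % 64) where vm[i] == 1, else vd[i]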
+vuint64m8_t __riscv_vrol_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vror_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vror_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vror_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint16mf4_t __riscv_vror_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint16mf2_t __riscv_vror_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vror_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vror_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vror_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vror_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint32mf2_t __riscv_vror_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vror_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint32m2_t 
__riscv_vror_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+                                     vuint32m2_t vs2, vuint32m2_t vs1,
+                                     size_t vl);
+vuint32m2_t __riscv_vror_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+                                     vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint32m4_t __riscv_vror_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+                                     vuint32m4_t vs2, vuint32m4_t vs1,
+                                     size_t vl);
+vuint32m4_t __riscv_vror_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+                                     vuint32m4_t vs2, size_t rs1, size_t vl);
+vuint32m8_t __riscv_vror_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+                                     vuint32m8_t vs2, vuint32m8_t vs1,
+                                     size_t vl);
+vuint32m8_t __riscv_vror_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+                                     vuint32m8_t vs2, size_t rs1, size_t vl);
+vuint64m1_t __riscv_vror_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                     vuint64m1_t vs2, vuint64m1_t vs1,
+                                     size_t vl);
+vuint64m1_t __riscv_vror_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                     vuint64m1_t vs2, size_t rs1, size_t vl);
+vuint64m2_t __riscv_vror_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                     vuint64m2_t vs2, vuint64m2_t vs1,
+                                     size_t vl);
+vuint64m2_t __riscv_vror_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                     vuint64m2_t vs2, size_t rs1, size_t vl);
+vuint64m4_t __riscv_vror_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                     vuint64m4_t vs2, vuint64m4_t vs1,
+                                     size_t vl);
+vuint64m4_t __riscv_vror_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                     vuint64m4_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vror_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                     vuint64m8_t vs2, vuint64m8_t vs1,
+                                     size_t vl);
+vuint64m8_t __riscv_vror_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                     vuint64m8_t vs2, size_t rs1, size_t vl);
----

[[policy-variant-]]
@@ -1112,127 +2328,301 @@ vuint64m8_t __riscv_vror_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t v
[,c]
----
-vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu (vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl);
-vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu (vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu (vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl);
-vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu (vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vv_u16m1_tu (vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl);
-vuint16m1_t __riscv_vwsll_vx_u16m1_tu (vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vv_u16m2_tu (vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl);
-vuint16m2_t __riscv_vwsll_vx_u16m2_tu (vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vv_u16m4_tu (vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl);
-vuint16m4_t __riscv_vwsll_vx_u16m4_tu (vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vv_u16m8_tu (vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl);
-vuint16m8_t __riscv_vwsll_vx_u16m8_tu (vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu (vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl);
-vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu (vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vv_u32m1_tu (vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vwsll_vx_u32m1_tu (vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vv_u32m2_tu (vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vwsll_vx_u32m2_tu (vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl);
-vuint32m4_t
__riscv_vwsll_vv_u32m4_tu (vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tu (vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tu (vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tu (vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tu (vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tu (vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tu (vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tu (vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tu (vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tu (vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tu (vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tu (vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint64m1_t 
__riscv_vwsll_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, 
vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint64m2_t 
__riscv_vwsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, 
vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, 
size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vwsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_vv_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_vx_u16m1_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_vv_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_vx_u16m2_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_vv_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_vx_u16m4_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_vv_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_vx_u16m8_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_vv_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_vx_u32m1_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_vv_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_vx_u32m2_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_vv_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_vx_u32m4_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_vv_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_vx_u32m8_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t 
vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint16mf4_t __riscv_vwsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint16m1_t __riscv_vwsll_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint16m2_t __riscv_vwsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint16m4_t __riscv_vwsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint16m8_t __riscv_vwsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vwsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vwsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vwsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vwsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint64m1_t __riscv_vwsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vwsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vwsll_vx_u64m4_mu(vbool16_t vm, 
vuint64m4_t vd,
+                                      vuint32m2_t vs2, size_t rs1, size_t vl);
+vuint64m8_t __riscv_vwsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                      vuint32m4_t vs2, vuint32m4_t vs1,
+                                      size_t vl);
+vuint64m8_t __riscv_vwsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                      vuint32m4_t vs2, size_t rs1, size_t vl);
----
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
index 559ba54e5..110d9a175 100644
--- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
+++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc
@@ -6,71 +6,183 @@
[,c]
----
-vuint64m1_t __riscv_vclmul_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmul_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmul_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmul_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmul_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl);
-vuint64m1_t __riscv_vclmulh_vx_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl);
-vuint64m2_t __riscv_vclmulh_vx_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl);
-vuint64m4_t __riscv_vclmulh_vx_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl);
-vuint64m8_t __riscv_vclmulh_vx_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                       vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                       uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                       vuint64m2_t vs1, size_t vl);
+vuint64m2_t __riscv_vclmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                       uint64_t rs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                       vuint64m4_t vs1, size_t vl);
+vuint64m4_t __riscv_vclmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                       uint64_t rs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                       vuint64m8_t vs1, size_t vl);
+vuint64m8_t __riscv_vclmul_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                       uint64_t rs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                        vuint64m1_t vs1, size_t vl);
+vuint64m1_t __riscv_vclmulh_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                        uint64_t rs1, size_t vl);
+vuint64m2_t __riscv_vclmulh_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                        vuint64m2_t vs1,
size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tum(vbool64_t vm, 
vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t 
__riscv_vclmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vv_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_vx_u64m1_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vv_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_vx_u64m2_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vv_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_vx_u64m4_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vv_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_vx_u64m8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t 
__riscv_vclmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                       vuint64m4_t vs2, uint64_t rs1,
+                                       size_t vl);
+vuint64m8_t __riscv_vclmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                       vuint64m8_t vs2, vuint64m8_t vs1,
+                                       size_t vl);
+vuint64m8_t __riscv_vclmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                       vuint64m8_t vs2, uint64_t rs1,
+                                       size_t vl);
+vuint64m1_t __riscv_vclmulh_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                        vuint64m1_t vs2, vuint64m1_t vs1,
+                                        size_t vl);
+vuint64m1_t __riscv_vclmulh_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                        vuint64m1_t vs2, uint64_t rs1,
+                                        size_t vl);
+vuint64m2_t __riscv_vclmulh_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                        vuint64m2_t vs2, vuint64m2_t vs1,
+                                        size_t vl);
+vuint64m2_t __riscv_vclmulh_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                        vuint64m2_t vs2, uint64_t rs1,
+                                        size_t vl);
+vuint64m4_t __riscv_vclmulh_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                        vuint64m4_t vs2, vuint64m4_t vs1,
+                                        size_t vl);
+vuint64m4_t __riscv_vclmulh_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                        vuint64m4_t vs2, uint64_t rs1,
+                                        size_t vl);
+vuint64m8_t __riscv_vclmulh_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                        vuint64m8_t vs2, vuint64m8_t vs1,
+                                        size_t vl);
+vuint64m8_t __riscv_vclmulh_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                        vuint64m8_t vs2, uint64_t rs1,
+                                        size_t vl);
----
diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
index cf2c6a401..f17b3da41 100644
--- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
+++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc
@@ -6,14 +6,24 @@
[,c]
----
-vuint32mf2_t __riscv_vghsh_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl);
-vuint32m1_t __riscv_vghsh_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl);
-vuint32m2_t __riscv_vghsh_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl);
-vuint32m4_t __riscv_vghsh_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl);
-vuint32m8_t __riscv_vghsh_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl);
-vuint32mf2_t __riscv_vgmul_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl);
-vuint32m1_t __riscv_vgmul_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl);
-vuint32m2_t __riscv_vgmul_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl);
-vuint32m4_t __riscv_vgmul_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl);
-vuint32m8_t __riscv_vgmul_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl);
+vuint32mf2_t __riscv_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                        vuint32mf2_t vs1, size_t vl);
+vuint32m1_t __riscv_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                      vuint32m1_t vs1, size_t vl);
+vuint32m2_t __riscv_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                      vuint32m2_t vs1, size_t vl);
+vuint32m4_t __riscv_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                      vuint32m4_t vs1, size_t vl);
+vuint32m8_t __riscv_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                      vuint32m8_t vs1, size_t vl);
+vuint32mf2_t __riscv_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                        size_t vl);
+vuint32m1_t __riscv_vgmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                      size_t vl);
+vuint32m2_t __riscv_vgmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                      size_t vl);
+vuint32m4_t __riscv_vgmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                      size_t vl);
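+// Usage sketch (illustrative comment, not part of the auto-generated
+// listing): per the Zvkg specification, vghsh performs one GHASH step,
+// vd = (vd ^ vs1) * vs2 over GF(2^128), and vgmul computes vd = vd * vs2.
+// Each element group is 128 bits (four 32-bit elements), so vl should be
+// a multiple of 4. Folding one block X into a running hash Y with hash
+// subkey H (variable names hypothetical) might look like:
+//   Y = __riscv_vghsh_vv_u32m1_tu(Y, H, X, vl);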
+vuint32m8_t __riscv_vgmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc index 29d2463a1..cbed68346 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc @@ -6,44 +6,82 @@ [,c] ---- -vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); 
-vuint32m4_t __riscv_vaesem_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t 
__riscv_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); ---- [[policy-variant-]] @@ -51,44 +89,82 @@ vuint32m8_t __riscv_vaesem_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t [,c] ---- -vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2_tu 
(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); 
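// A hedged sketch, assuming Zvkned is available and `state`/`round_key` are
// caller-provided uint32_t[4] buffers: one middle (non-final) AES decryption
// round through the tail-undisturbed policy.
//
//   size_t vl = __riscv_vsetvl_e32m1(4);         // one 128-bit element group
//   vuint32m1_t st = __riscv_vle32_v_u32m1(state, vl);
//   vuint32m1_t rk = __riscv_vle32_v_u32m1(round_key, vl);
//   st = __riscv_vaesdm_vv_u32m1_tu(st, rk, vl); // state <- middle-round
//                                                // decrypt keyed by rk
//   __riscv_vse32_v_u32m1(state, st, vl);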
+vuint32m4_t __riscv_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); ---- [[policy-variant-]] @@ -96,16 +172,26 @@ vuint32m8_t __riscv_vaesdm_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t [,c] ---- -vuint32mf2_t __riscv_vaeskf1_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf1_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf1_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf1_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf1_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf1_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t uimm, size_t vl); +vuint32m1_t __riscv_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t uimm, size_t vl); +vuint32m2_t __riscv_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t uimm, size_t vl); +vuint32m4_t __riscv_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t uimm, size_t vl); +vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t uimm, size_t vl); ---- [[policy-variant-]] @@ -113,18 +199,32 @@ vuint32m8_t __riscv_vaeskf2_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t [,c] ---- -vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4_tu (vuint32m4_t 
vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc index 2aec4fd51..114525658 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc @@ -6,15 +6,24 @@ [,c] ---- -vuint32mf2_t __riscv_vsha2ms_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2ms_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2ms_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2ms_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2ms_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2ms_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2ms_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2ms_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2ms_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + 
vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); ---- [[policy-variant-]] @@ -22,22 +31,40 @@ vuint64m8_t __riscv_vsha2ms_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint6 [,c] ---- -vuint32mf2_t __riscv_vsha2ch_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2ch_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2ch_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2ch_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2ch_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2ch_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2ch_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2ch_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2ch_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint32mf2_t __riscv_vsha2cl_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2cl_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2cl_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2cl_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2cl_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2cl_vv_u64m1_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2cl_vv_u64m2_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2cl_vv_u64m4_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2cl_vv_u64m8_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t 
__riscv_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint64m1_t __riscv_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m2_t __riscv_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m4_t __riscv_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m8_t __riscv_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc index 95d0f470f..11c2d5b61 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc @@ -6,11 +6,16 @@ [,c] ---- -vuint32mf2_t __riscv_vsm4k_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm4k_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm4k_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm4k_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vsm4k_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm4k_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm4k_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm4k_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm4k_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t uimm, size_t vl); ---- [[policy-variant-]] @@ -18,23 +23,42 @@ vuint32m8_t __riscv_vsm4k_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t u [,c] ---- -vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t 
__riscv_vsm4r_vs_u32m1_u32m2_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m1_t __riscv_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m2_t __riscv_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m4_t __riscv_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vsm4r_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc index 589216717..bd548f60a 100644 --- a/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc +++ b/auto-generated/vector-crypto/policy_funcs/intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc @@ -6,11 +6,16 @@ [,c] ---- -vuint32mf2_t __riscv_vsm3me_vv_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsm3me_vv_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsm3me_vv_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t 
__riscv_vsm3me_vv_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsm3me_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsm3me_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsm3me_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m2_t __riscv_vsm3me_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m4_t __riscv_vsm3me_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m8_t __riscv_vsm3me_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); ---- [[policy-variant-]] @@ -18,9 +23,14 @@ vuint32m8_t __riscv_vsm3me_vv_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32 [,c] ---- -vuint32mf2_t __riscv_vsm3c_vi_u32mf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm3c_vi_u32m1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm3c_vi_u32m2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm3c_vi_u32m4_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm3c_vi_u32m8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t uimm, size_t vl); +vuint32m1_t __riscv_vsm3c_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t uimm, size_t vl); +vuint32m2_t __riscv_vsm3c_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t uimm, size_t vl); +vuint32m4_t __riscv_vsm3c_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t uimm, size_t vl); +vuint32m8_t __riscv_vsm3c_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t uimm, size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.adoc new file mode 100644 index 000000000..4909949a9 --- /dev/null +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs.adoc @@ -0,0 +1,2737 @@ + +=== Zvbb - Vector Bit-manipulation used in Cryptography + +[[policy-variant-overloaded]] +==== Vector Bit-manipulation used in Cryptography - Bitwise And-Not + +[,c] +---- +vuint8mf8_t __riscv_vandn_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vandn_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl); +vuint8mf4_t __riscv_vandn_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vandn_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl); +vuint8mf2_t __riscv_vandn_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vandn_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + size_t vl); +vuint8m1_t __riscv_vandn_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vandn_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl); +vuint8m2_t __riscv_vandn_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vandn_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl); +vuint8m4_t __riscv_vandn_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vandn_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl); +vuint8m8_t __riscv_vandn_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t 
__riscv_vandn_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl); +vuint16mf4_t __riscv_vandn_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, + size_t vl); +vuint16mf2_t __riscv_vandn_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, + size_t vl); +vuint16m1_t __riscv_vandn_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vandn_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, + size_t vl); +vuint16m2_t __riscv_vandn_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vandn_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, + size_t vl); +vuint16m4_t __riscv_vandn_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vandn_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, + size_t vl); +vuint16m8_t __riscv_vandn_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vandn_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, + size_t vl); +vuint32mf2_t __riscv_vandn_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, + size_t vl); +vuint32m1_t __riscv_vandn_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vandn_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, + size_t vl); +vuint32m2_t __riscv_vandn_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vandn_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, + size_t vl); +vuint32m4_t __riscv_vandn_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vandn_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, + size_t vl); +vuint32m8_t __riscv_vandn_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vandn_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, + size_t vl); +vuint64m1_t __riscv_vandn_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vandn_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vandn_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vandn_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vandn_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vandn_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vandn_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vandn_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl); +vuint8m1_t 
__riscv_vandn_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tum(vbool32_t vm, 
vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl); +vuint32m1_t 
__riscv_vandn_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vandn_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t 
__riscv_vandn_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector Basic Bit-manipulation - Reverse + +[,c] +---- +vuint8mf8_t __riscv_vbrev_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); 
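// The overloaded spellings resolve on operand types, so the same name covers
// every SEW/LMUL combination listed here. A hedged example, assuming `vd` and
// `vs2` are vuint32m1_t values and `vl` is already configured:
//
//   vuint32m1_t r = __riscv_vbrev_tu(vd, vs2, vl); // reverse the bits within
//                                                  // each element; tail
//                                                  // elements keep vd's values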
+vuint16mf2_t __riscv_vbrev_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); 
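// Similarly, __riscv_vrev8_tu swaps the bytes within each element (an
// endianness flip), e.g. with vuint32m1_t operands:
//
//   vuint32m1_t le = __riscv_vrev8_tu(vd, big_endian_words, vl);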
+vuint16m4_t __riscv_vrev8_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vrev8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vbrev8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev8_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev8_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev8_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev8_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev8_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev8_tum(vbool64_t vm, vuint16mf4_t 
vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev8_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev8_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev8_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev8_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev8_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev8_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev8_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev8_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev8_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev8_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev8_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vrev8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vrev8_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vrev8_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vrev8_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vrev8_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vrev8_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vrev8_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vrev8_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vrev8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vrev8_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vrev8_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vrev8_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vrev8_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vrev8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vrev8_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vrev8_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vrev8_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vrev8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t 
vl); +vuint8mf2_t __riscv_vbrev_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vbrev8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev8_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev8_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev8_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev8_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev8_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev8_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev8_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev8_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev8_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev8_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev8_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t 
__riscv_vbrev8_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev8_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev8_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vrev8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vrev8_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vrev8_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vrev8_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vrev8_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vrev8_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vrev8_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vrev8_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vrev8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vrev8_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vrev8_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vrev8_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vrev8_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vrev8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vrev8_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vrev8_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vrev8_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vrev8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vbrev_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev_mu(vbool2_t vm, vuint16m8_t vd, 
vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vbrev8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev8_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev8_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev8_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev8_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev8_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev8_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev8_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev8_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev8_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev8_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev8_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev8_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev8_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev8_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev8_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev8_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev8_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vrev8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vrev8_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vrev8_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vrev8_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_mu(vbool64_t vm, vuint16mf4_t vd, 
vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vrev8_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vrev8_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vrev8_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vrev8_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vrev8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vrev8_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vrev8_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vrev8_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vrev8_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vrev8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vrev8_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vrev8_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vrev8_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vrev8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector Basic Bit-manipulation - Count Bits + +[,c] +---- +vuint8mf8_t __riscv_vclz_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t 
__riscv_vctz_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vctz_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vclz_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vclz_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vclz_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vclz_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vclz_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vclz_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vclz_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vclz_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vclz_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vclz_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vclz_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vclz_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vclz_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vclz_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vclz_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vclz_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vclz_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vctz_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vctz_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vctz_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vctz_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + 
size_t vl); +vuint8m4_t __riscv_vctz_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vctz_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vctz_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vctz_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vctz_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vctz_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vctz_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vctz_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vctz_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vctz_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vctz_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vctz_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vctz_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vctz_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vctz_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vctz_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vclz_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vclz_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vclz_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vclz_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vclz_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vclz_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vclz_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vclz_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vclz_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vclz_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vclz_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vclz_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vclz_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vclz_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vclz_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vclz_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vclz_tumu(vbool8_t vm, vuint64m8_t vd, 
vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vctz_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vctz_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vctz_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vctz_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vctz_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vctz_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vctz_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vctz_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vctz_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vctz_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vctz_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vctz_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vctz_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vctz_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vctz_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vctz_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vctz_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vctz_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vctz_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vctz_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vclz_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vclz_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vclz_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vclz_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vclz_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vclz_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vclz_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vclz_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vclz_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vclz_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vclz_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vclz_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vclz_mu(vbool8_t vm, vuint32m4_t vd, 
vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vclz_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vclz_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vclz_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vclz_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vclz_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vctz_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vctz_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vctz_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vctz_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vctz_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vctz_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vctz_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vctz_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vctz_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vctz_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vctz_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vctz_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vctz_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vctz_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vctz_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vctz_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vctz_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vctz_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vctz_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vctz_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector Basic Bit-manipulation - Vector Population Count + +[,c] +---- +vuint8mf8_t __riscv_vcpop_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t 
__riscv_vcpop_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vcpop_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vcpop_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vcpop_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vcpop_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vcpop_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vcpop_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vcpop_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vcpop_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vcpop_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vcpop_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vcpop_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vcpop_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vcpop_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vcpop_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vcpop_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vcpop_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vcpop_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vcpop_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vcpop_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vcpop_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vcpop_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); 
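+// A hedged usage sketch, not part of the generated listing: assuming
+// <riscv_vector.h> is included, the Zvbb extension is enabled, and the
+// toolchain implements these overloaded policy intrinsics, the
+// tail-undisturbed, mask-undisturbed variant composes as follows
+// ("popcount_active" is an illustrative name, not part of this
+// specification):
+//
+// static inline vuint32m1_t popcount_active(vbool32_t vm, vuint32m1_t vd,
+//                                           vuint32m1_t vs2, size_t vl) {
+//   // Counts set bits per element of vs2; inactive elements and tail
+//   // elements retain their values from vd.
+//   return __riscv_vcpop_tumu(vm, vd, vs2, vl);
+// }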
+vuint16m1_t __riscv_vcpop_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vcpop_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vcpop_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vcpop_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vcpop_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vcpop_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vcpop_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vcpop_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vcpop_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vcpop_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vcpop_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vcpop_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vcpop_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vcpop_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vcpop_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vcpop_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vcpop_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vcpop_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vcpop_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vcpop_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vcpop_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vcpop_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vcpop_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vcpop_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vcpop_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vcpop_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vcpop_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vcpop_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vcpop_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vcpop_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vcpop_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector Bit-manipulation used in Cryptography - Rotate + +[,c] +---- +vuint8mf8_t __riscv_vrol_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vrol_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint8mf4_t __riscv_vrol_tu(vuint8mf4_t vd, 
vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vrol_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint8mf2_t __riscv_vrol_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vrol_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint8m1_t __riscv_vrol_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vrol_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t __riscv_vrol_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vrol_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vrol_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vrol_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vrol_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vrol_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t __riscv_vrol_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vrol_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vrol_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vrol_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vrol_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vrol_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vrol_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vrol_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vrol_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vrol_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vrol_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vrol_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vrol_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vrol_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vrol_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint32m4_t __riscv_vrol_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vrol_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vrol_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vrol_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vrol_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vrol_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vrol_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vrol_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl); +vuint64m4_t __riscv_vrol_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vrol_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, 
+ size_t vl); +vuint64m8_t __riscv_vrol_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vrol_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl); +vuint8mf8_t __riscv_vror_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vror_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint8mf4_t __riscv_vror_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vror_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint8mf2_t __riscv_vror_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vror_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint8m1_t __riscv_vror_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vror_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t __riscv_vror_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vror_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vror_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vror_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vror_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vror_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t __riscv_vror_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vror_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vror_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vror_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vror_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vror_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vror_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vror_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vror_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vror_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vror_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vror_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vror_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vror_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vror_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint32m4_t __riscv_vror_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vror_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vror_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vror_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vror_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t 
__riscv_vror_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vror_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vror_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl); +vuint64m4_t __riscv_vror_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vror_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl); +vuint64m8_t __riscv_vror_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vror_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t 
rs1, size_t vl); +vuint32m1_t __riscv_vrol_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + 
size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t 
__riscv_vrol_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t 
vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t 
__riscv_vror_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +// masked functions +vuint8mf8_t __riscv_vrol_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, 
size_t vl); +vuint16m2_t __riscv_vrol_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t 
__riscv_vror_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +---- + 
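+The declarations above are reference signatures only. The following is a
+minimal usage sketch of the overloaded policy variants, assuming a target
+with the "V" and Zvbb extensions enabled; the function `rotate_odd_lanes`
+and its buffer arguments are hypothetical and only illustrate how the `vd`
+operand and the `_tumu` policy interact.
+
+[,c]
+----
+#include <riscv_vector.h>
+#include <stddef.h>
+#include <stdint.h>
+
+// Rotate-right each element whose low bit is set; every other element (and
+// the tail) keeps its previous value in dst, courtesy of the _tumu policy.
+void rotate_odd_lanes(uint32_t *dst, const uint32_t *src, size_t n,
+                      size_t shamt) {
+  for (size_t vl; n > 0; n -= vl, src += vl, dst += vl) {
+    vl = __riscv_vsetvl_e32m1(n);
+    vuint32m1_t vd = __riscv_vle32_v_u32m1(dst, vl);  // old destination values
+    vuint32m1_t vs2 = __riscv_vle32_v_u32m1(src, vl); // rotate operands
+    // Build a mask selecting the elements with the low bit set.
+    vbool32_t vm = __riscv_vmsne(__riscv_vand(vs2, 1u, vl), 0, vl);
+    // Masked-off and tail elements stay undisturbed, i.e. keep vd's values.
+    vd = __riscv_vror_tumu(vm, vd, vs2, shamt, vl);
+    __riscv_vse32_v_u32m1(dst, vd, vl);
+  }
+}
+----
+
+The pass-through `vd` operand is what the tail and masked-off elements
+inherit under the `_tumu` policy, which is why the sketch reads the old
+destination values before operating on them.
+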
+[[policy-variant-overloaded]] +==== Vector Basic Bit-manipulation used - Widening Shift + +[,c] +---- +vuint16mf4_t __riscv_vwsll_tu(vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint16mf4_t __riscv_vwsll_tu(vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_tu(vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_tu(vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vwsll_tu(vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint16m1_t __riscv_vwsll_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vwsll_tu(vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint16m2_t __riscv_vwsll_tu(vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vwsll_tu(vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint16m4_t __riscv_vwsll_tu(vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vwsll_tu(vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint16m8_t __riscv_vwsll_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tu(vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vwsll_tu(vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vwsll_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vwsll_tu(vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vwsll_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl); +vuint32m4_t __riscv_vwsll_tu(vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vwsll_tu(vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vwsll_tu(vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vwsll_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vwsll_tu(vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint64m1_t __riscv_vwsll_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vwsll_tu(vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vwsll_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint64m4_t __riscv_vwsll_tu(vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vwsll_tu(vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint64m8_t __riscv_vwsll_tu(vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vwsll_tu(vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint16m2_t 
__riscv_vwsll_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint16m4_t 
__riscv_vwsll_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +// masked functions +vuint16mf4_t __riscv_vwsll_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t 
__riscv_vwsll_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +---- + +=== Zvbc - Vector Carryless Multiplication + +[[policy-variant-overloaded]] +==== Vector Carryless Multiplication + +[,c] +---- +vuint64m1_t __riscv_vclmul_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmul_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmul_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmul_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmul_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmul_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmul_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmul_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, + size_t vl); +// masked functions 
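+// Policy suffixes: _tum keeps the tail undisturbed, _tumu keeps both the
+// tail and masked-off elements undisturbed, _mu keeps only masked-off
+// elements undisturbed (tail agnostic).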
+vuint64m1_t __riscv_vclmul_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t __riscv_vclmul_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +// masked functions +vuint64m1_t 
__riscv_vclmul_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +---- + +=== Zvkg - Vector GCM/GMAC + +[[policy-variant-overloaded]] +==== Vector GCM/GMAC + +[,c] +---- +vuint32mf2_t __riscv_vghsh_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vghsh_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vghsh_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vghsh_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32mf2_t __riscv_vgmul_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vgmul_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +=== Zvkned - NIST Suite: Vector AES Block Cipher + +[[policy-variant-overloaded]] +==== Vector AES Encryption + +[,c] +---- +vuint32mf2_t __riscv_vaesef_vv_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vv_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t 
__riscv_vaesef_vs_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vv_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vv_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vv_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vv_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vv_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vv_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vv_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector AES Decryption + +[,c] +---- +vuint32mf2_t __riscv_vaesdf_vv_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vv_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vv_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu(vuint32m8_t vd, vuint32m2_t vs2, 
size_t vl); +vuint32m4_t __riscv_vaesdf_vv_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vv_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vv_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vv_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vv_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vv_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector AES-128 Forward KeySchedule generation + +[,c] +---- +vuint32mf2_t __riscv_vaeskf1_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vaeskf1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t __riscv_vaeskf1_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vaeskf1_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vaeskf1_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); +vuint32mf2_t __riscv_vaeskf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vaeskf2_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t __riscv_vaeskf2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vaeskf2_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vaeskf2_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector AES round zero + +[,c] +---- +vuint32mf2_t __riscv_vaesz_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t 
vl); +vuint32m8_t __riscv_vaesz_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +---- + +=== Zvknh - NIST Suite: Vector SHA-2 Secure Hash + +[[policy-variant-overloaded]] +==== Vector SHA-2 message schedule + +[,c] +---- +vuint32mf2_t __riscv_vsha2ms_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsha2ms_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsha2ms_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsha2ms_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint64m1_t __riscv_vsha2ms_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vsha2ms_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vsha2ms_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vsha2ms_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector SHA-2 two rounds of compression + +[,c] +---- +vuint32mf2_t __riscv_vsha2ch_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsha2ch_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsha2ch_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsha2ch_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint64m1_t __riscv_vsha2ch_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vsha2ch_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vsha2ch_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vsha2ch_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint32mf2_t __riscv_vsha2cl_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsha2cl_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsha2cl_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsha2cl_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint64m1_t __riscv_vsha2cl_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vsha2cl_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vsha2cl_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vsha2cl_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +---- + +=== Zvksed - ShangMi Suite: SM4 Block Cipher + +[[policy-variant-overloaded]] +==== Vector SM4 KeyExpansion + +[,c] +---- +vuint32mf2_t __riscv_vsm4k_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t 
__riscv_vsm4k_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t __riscv_vsm4k_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vsm4k_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vsm4k_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector SM4 Rounds + +[,c] +---- +vuint32mf2_t __riscv_vsm4r_vv_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +---- + +=== Zvksh - ShangMi Suite: SM3 Secure Hash + +[[policy-variant-overloaded]] +==== Vector SM3 Message Expansion + +[,c] +---- +vuint32mf2_t __riscv_vsm3me_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsm3me_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsm3me_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsm3me_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsm3me_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +---- + +[[policy-variant-overloaded]] +==== Vector SM3 Compression + +[,c] +---- +vuint32mf2_t __riscv_vsm3c_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vsm3c_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t __riscv_vsm3c_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vsm3c_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vsm3c_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); +---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc index a4d961b88..d10aee22d 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc +++ 
b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/00_zvbb_-_vector_bit-manipulation_used_in_cryptography.adoc @@ -6,185 +6,361 @@ [,c] ---- -vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_tu (vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_tu (vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_tu (vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_tu (vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_tu (vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_tu (vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_tu (vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_tu (vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_tu (vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_tu (vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_tu (vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_tu (vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_tu (vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_tu (vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_tu (vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_tu (vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_tu (vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_tu (vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t 
__riscv_vandn_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vandn_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl); +vuint8mf4_t __riscv_vandn_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vandn_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl); +vuint8mf2_t __riscv_vandn_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vandn_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + size_t vl); +vuint8m1_t __riscv_vandn_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vandn_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl); +vuint8m2_t __riscv_vandn_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vandn_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl); +vuint8m4_t __riscv_vandn_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vandn_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl); +vuint8m8_t __riscv_vandn_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vandn_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl); +vuint16mf4_t __riscv_vandn_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, + size_t vl); +vuint16mf2_t __riscv_vandn_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, + size_t vl); +vuint16m1_t __riscv_vandn_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vandn_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, + size_t vl); +vuint16m2_t __riscv_vandn_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vandn_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, + size_t vl); +vuint16m4_t __riscv_vandn_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vandn_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, + size_t vl); +vuint16m8_t __riscv_vandn_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vandn_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, + size_t vl); +vuint32mf2_t __riscv_vandn_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, + size_t vl); +vuint32m1_t __riscv_vandn_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vandn_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, + size_t vl); +vuint32m2_t __riscv_vandn_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vandn_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t 
rs1, + size_t vl); +vuint32m4_t __riscv_vandn_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vandn_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, + size_t vl); +vuint32m8_t __riscv_vandn_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vandn_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, + size_t vl); +vuint64m1_t __riscv_vandn_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vandn_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vandn_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vandn_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vandn_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vandn_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vandn_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vandn_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, + size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t 
vs1, size_t vl); -vuint16m4_t __riscv_vandn_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tum(vbool1_t vm, 
vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t 
__riscv_vandn_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vandn_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_tumu (vbool4_t vm, 
vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_tumu(vbool4_t vm, vuint16m4_t vd, 
vuint16m4_t vs2, + uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vandn_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vandn_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl); -vuint8mf4_t __riscv_vandn_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vandn_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl); -vuint8mf2_t __riscv_vandn_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vandn_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl); -vuint8m1_t __riscv_vandn_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vandn_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl); -vuint8m2_t __riscv_vandn_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vandn_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl); -vuint8m4_t __riscv_vandn_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vandn_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl); -vuint8m8_t __riscv_vandn_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t 
vs1, size_t vl); -vuint8m8_t __riscv_vandn_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl); -vuint16mf4_t __riscv_vandn_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vandn_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl); -vuint16mf2_t __riscv_vandn_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vandn_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl); -vuint16m1_t __riscv_vandn_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vandn_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl); -vuint16m2_t __riscv_vandn_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vandn_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl); -vuint16m4_t __riscv_vandn_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vandn_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl); -vuint16m8_t __riscv_vandn_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vandn_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl); -vuint32mf2_t __riscv_vandn_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vandn_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl); -vuint32m1_t __riscv_vandn_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vandn_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl); -vuint32m2_t __riscv_vandn_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vandn_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl); -vuint32m4_t __riscv_vandn_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vandn_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl); -vuint32m8_t __riscv_vandn_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vandn_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl); -vuint64m1_t __riscv_vandn_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vandn_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vandn_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vandn_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vandn_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vandn_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vandn_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vandn_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint8mf8_t __riscv_vandn_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vandn_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu(vbool32_t vm, 
vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vandn_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl); +vuint8mf2_t __riscv_vandn_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vandn_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl); +vuint8m1_t __riscv_vandn_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vandn_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl); +vuint8m2_t __riscv_vandn_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vandn_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl); +vuint8m4_t __riscv_vandn_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vandn_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl); +vuint8m8_t __riscv_vandn_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vandn_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vandn_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl); +vuint16mf2_t __riscv_vandn_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vandn_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl); +vuint16m1_t __riscv_vandn_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vandn_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl); +vuint16m2_t __riscv_vandn_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vandn_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl); +vuint16m4_t __riscv_vandn_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vandn_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl); +vuint16m8_t __riscv_vandn_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vandn_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vandn_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl); +vuint32m1_t __riscv_vandn_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vandn_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl); +vuint32m2_t __riscv_vandn_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vandn_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl); +vuint32m4_t __riscv_vandn_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vandn_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl); +vuint32m8_t __riscv_vandn_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vandn_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl); 
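// An illustrative usage sketch, not part of the listing: vandn computes
// vs2 & ~vs1 (or vs2 & ~rs1 for the scalar form). With the tail-undisturbed
// (_tu) variant, elements past vl keep their old value from vd. Assumes
// <riscv_vector.h> and the Zvbb extension.
#include <riscv_vector.h>

static inline vuint32m1_t clear_flag_bits(vuint32m1_t acc, vuint32m1_t v,
                                          uint32_t flags, size_t vl) {
  // First vl elements: v & ~flags; tail elements: unchanged copies of acc.
  return __riscv_vandn_tu(acc, v, flags, vl);
}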
+vuint64m1_t __riscv_vandn_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vandn_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vandn_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vandn_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vandn_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vandn_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vandn_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vandn_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); ---- [[policy-variant-overloaded]] @@ -192,273 +368,471 @@ vuint64m8_t __riscv_vandn_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint [,c] ---- -vuint8mf8_t __riscv_vbrev_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tu (vuint16m4_t 
vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t 
__riscv_vbrev_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vbrev8_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vbrev8_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vbrev8_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vbrev8_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vbrev8_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vbrev8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vbrev8_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vbrev8_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vbrev8_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vbrev8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vbrev8_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vbrev8_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vbrev8_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vbrev8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vbrev8_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vbrev8_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vbrev8_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vbrev8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vrev8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vrev8_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vrev8_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vrev8_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vrev8_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vrev8_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vrev8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vrev8_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vrev8_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vrev8_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vrev8_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vrev8_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vrev8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vrev8_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vrev8_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vrev8_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vrev8_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t 
__riscv_vrev8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vrev8_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vrev8_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vrev8_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vrev8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); 
-vuint32mf2_t __riscv_vbrev8_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t 
__riscv_vbrev_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vbrev8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev8_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev8_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev8_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev8_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev8_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev8_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev8_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev8_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev8_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev8_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev8_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev8_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev8_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev8_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev8_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev8_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev8_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vrev8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vrev8_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vrev8_tum(vbool16_t vm, vuint8mf2_t vd, 
vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vrev8_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vrev8_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vrev8_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vrev8_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vrev8_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vrev8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vrev8_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vrev8_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vrev8_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vrev8_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vrev8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vrev8_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vrev8_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vrev8_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vrev8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_tumu 
(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev8_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); 
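// An illustrative sketch of the _tumu policy with vrev8, which reverses the
// bytes inside each SEW-wide element (here, a vectorized 32-bit byteswap).
// Inactive and tail elements keep the value of vd. Assumes <riscv_vector.h>
// and the Zvbb extension.
#include <riscv_vector.h>

static inline vuint32m1_t bswap32_tumu(vbool32_t m, vuint32m1_t vd,
                                       vuint32m1_t v, size_t vl) {
  return __riscv_vrev8_tumu(m, vd, v, vl);
}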
-vuint32m1_t __riscv_vrev8_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vbrev8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev8_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev8_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev8_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev8_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev8_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev8_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vbrev8_tumu(vbool32_t vm, 
vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vbrev8_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev8_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev8_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev8_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vbrev8_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev8_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev8_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev8_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev8_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev8_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vrev8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vrev8_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vrev8_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vrev8_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vrev8_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vrev8_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vrev8_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vrev8_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vrev8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vrev8_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vrev8_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vrev8_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vrev8_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vrev8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vrev8_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vrev8_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vrev8_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vrev8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); // masked functions -vuint8mf8_t __riscv_vbrev_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev_mu (vbool16_t vm, 
vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vbrev_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vbrev8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vbrev8_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vbrev8_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vbrev8_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vbrev8_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vbrev8_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vbrev8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vbrev8_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vbrev8_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vbrev8_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vbrev8_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vbrev8_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vbrev8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vbrev8_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vbrev8_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vbrev8_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vbrev8_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vbrev8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vbrev8_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vbrev8_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); 
-vuint64m4_t __riscv_vbrev8_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vbrev8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vrev8_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vrev8_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vrev8_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vrev8_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vrev8_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vrev8_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vrev8_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vrev8_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vrev8_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vrev8_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vrev8_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vrev8_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vrev8_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vrev8_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vrev8_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vrev8_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vrev8_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vrev8_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vrev8_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vrev8_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vrev8_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vrev8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vbrev_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev_mu(vbool16_t vm, 
vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vbrev8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vbrev8_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vbrev8_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vbrev8_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vbrev8_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vbrev8_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vbrev8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vbrev8_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vbrev8_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vbrev8_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vbrev8_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vbrev8_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vbrev8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vbrev8_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vbrev8_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vbrev8_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vbrev8_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vbrev8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vbrev8_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vbrev8_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vbrev8_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vbrev8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vrev8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vrev8_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vrev8_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vrev8_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vrev8_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vrev8_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vrev8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vrev8_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vrev8_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vrev8_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vrev8_mu(vbool8_t vm, 
vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vrev8_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vrev8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vrev8_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vrev8_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vrev8_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vrev8_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vrev8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vrev8_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vrev8_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vrev8_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vrev8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); ---- [[policy-variant-overloaded]] @@ -466,185 +840,317 @@ vuint64m8_t __riscv_vrev8_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size [,c] ---- -vuint8mf8_t __riscv_vclz_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_tu (vuint16m1_t 
vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vclz_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vclz_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vclz_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vclz_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vclz_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vclz_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vclz_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vclz_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vclz_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vclz_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vclz_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vclz_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vclz_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vclz_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vclz_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vclz_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vclz_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vclz_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vclz_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vclz_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vclz_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vclz_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vctz_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vctz_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vctz_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vctz_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vctz_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vctz_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vctz_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vctz_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vctz_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vctz_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vctz_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vctz_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vctz_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t 
__riscv_vctz_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vctz_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vctz_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vctz_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vctz_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vctz_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vctz_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vctz_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vctz_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vclz_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t 
__riscv_vctz_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vclz_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vclz_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vclz_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vclz_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vclz_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vclz_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vclz_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vclz_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vclz_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vclz_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vclz_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vclz_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vclz_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vclz_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vclz_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vclz_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vclz_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vclz_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vctz_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vctz_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vctz_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vctz_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vctz_tum(vbool2_t vm, vuint8m4_t 
vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vctz_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vctz_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vctz_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vctz_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vctz_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vctz_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vctz_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vctz_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vctz_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vctz_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vctz_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vctz_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vctz_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vctz_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vctz_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); // masked functions -vuint8mf8_t __riscv_vclz_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_tumu (vbool64_t vm, 
vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vclz_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vclz_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vclz_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vclz_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vclz_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vclz_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vclz_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vclz_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vclz_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vclz_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vclz_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vclz_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vclz_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vclz_tumu(vbool4_t vm, vuint32m8_t 
vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vclz_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vclz_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vclz_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vclz_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vctz_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vctz_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vctz_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vctz_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vctz_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vctz_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vctz_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vctz_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vctz_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vctz_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vctz_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vctz_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vctz_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vctz_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vctz_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vctz_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vctz_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vctz_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vctz_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vctz_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); // masked functions -vuint8mf8_t __riscv_vclz_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vclz_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vclz_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vclz_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vclz_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vclz_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vclz_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vclz_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vclz_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vclz_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vclz_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vclz_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vclz_mu (vbool2_t vm, vuint16m8_t vd, 
vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vclz_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vclz_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vclz_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vclz_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vclz_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vclz_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vclz_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vclz_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vclz_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); -vuint8mf8_t __riscv_vctz_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vctz_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vctz_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vctz_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vctz_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vctz_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vctz_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vctz_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vctz_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vctz_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vctz_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vctz_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vctz_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vctz_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vctz_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vctz_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vctz_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vctz_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vctz_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vctz_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vctz_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vctz_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vclz_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vclz_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vclz_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vclz_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vclz_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vclz_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vclz_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vclz_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vclz_mu(vbool32_t vm, vuint16mf2_t vd, 
vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vclz_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vclz_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vclz_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vclz_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vclz_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vclz_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vclz_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vclz_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vclz_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vclz_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vclz_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vclz_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vclz_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); +vuint8mf8_t __riscv_vctz_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vctz_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vctz_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vctz_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vctz_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vctz_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vctz_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vctz_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vctz_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vctz_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vctz_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vctz_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vctz_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vctz_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vctz_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vctz_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vctz_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vctz_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vctz_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vctz_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vctz_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vctz_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); ---- [[policy-variant-overloaded]] @@ -652,97 +1158,163 @@ vuint64m8_t __riscv_vctz_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_ [,c] ---- -vuint8mf8_t __riscv_vcpop_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); 
-vuint8m1_t __riscv_vcpop_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vcpop_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); +vuint8mf4_t __riscv_vcpop_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); +vuint8mf2_t __riscv_vcpop_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); +vuint8m1_t __riscv_vcpop_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vl); +vuint8m2_t __riscv_vcpop_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vl); +vuint8m4_t __riscv_vcpop_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vl); +vuint8m8_t __riscv_vcpop_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl); +vuint16mf4_t __riscv_vcpop_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); +vuint16mf2_t __riscv_vcpop_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); +vuint16m1_t __riscv_vcpop_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vl); +vuint16m2_t __riscv_vcpop_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vl); +vuint16m4_t __riscv_vcpop_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vl); +vuint16m8_t __riscv_vcpop_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vcpop_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vcpop_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vcpop_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vcpop_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vcpop_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint64m1_t __riscv_vcpop_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vl); +vuint64m2_t __riscv_vcpop_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vl); +vuint64m4_t __riscv_vcpop_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vl); +vuint64m8_t __riscv_vcpop_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl); // masked functions -vuint8mf8_t __riscv_vcpop_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); 
-vuint8m4_t __riscv_vcpop_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vcpop_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vcpop_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vcpop_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vcpop_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vcpop_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vcpop_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vcpop_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vcpop_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vcpop_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vcpop_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vcpop_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vcpop_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vcpop_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vcpop_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vcpop_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vcpop_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vcpop_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vcpop_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); 
// masked functions -vuint8mf8_t __riscv_vcpop_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vcpop_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vcpop_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vcpop_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vcpop_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vcpop_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vcpop_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vcpop_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vcpop_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vcpop_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vcpop_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vcpop_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vcpop_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vcpop_tumu(vbool8_t vm, 
vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vcpop_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vcpop_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vcpop_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vcpop_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vcpop_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); // masked functions -vuint8mf8_t __riscv_vcpop_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl); -vuint8mf4_t __riscv_vcpop_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl); -vuint8mf2_t __riscv_vcpop_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl); -vuint8m1_t __riscv_vcpop_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl); -vuint8m2_t __riscv_vcpop_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl); -vuint8m4_t __riscv_vcpop_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl); -vuint8m8_t __riscv_vcpop_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl); -vuint16mf4_t __riscv_vcpop_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl); -vuint16mf2_t __riscv_vcpop_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl); -vuint16m1_t __riscv_vcpop_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl); -vuint16m2_t __riscv_vcpop_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl); -vuint16m4_t __riscv_vcpop_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl); -vuint16m8_t __riscv_vcpop_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vcpop_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vcpop_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vcpop_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vcpop_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vcpop_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint64m1_t __riscv_vcpop_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl); -vuint64m2_t __riscv_vcpop_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl); -vuint64m4_t __riscv_vcpop_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl); -vuint64m8_t __riscv_vcpop_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl); +vuint8mf8_t __riscv_vcpop_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl); +vuint8mf4_t __riscv_vcpop_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl); +vuint8mf2_t __riscv_vcpop_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl); +vuint8m1_t __riscv_vcpop_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl); +vuint8m2_t __riscv_vcpop_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl); +vuint8m4_t __riscv_vcpop_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl); +vuint8m8_t __riscv_vcpop_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl); +vuint16mf4_t __riscv_vcpop_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl); +vuint16mf2_t __riscv_vcpop_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl); +vuint16m1_t __riscv_vcpop_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl); +vuint16m2_t __riscv_vcpop_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl); +vuint16m4_t __riscv_vcpop_mu(vbool4_t vm, vuint16m4_t vd, 
vuint16m4_t vs2, + size_t vl); +vuint16m8_t __riscv_vcpop_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl); +vuint32mf2_t __riscv_vcpop_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl); +vuint32m1_t __riscv_vcpop_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl); +vuint32m2_t __riscv_vcpop_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl); +vuint32m4_t __riscv_vcpop_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl); +vuint32m8_t __riscv_vcpop_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl); +vuint64m1_t __riscv_vcpop_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl); +vuint64m2_t __riscv_vcpop_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl); +vuint64m4_t __riscv_vcpop_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl); +vuint64m8_t __riscv_vcpop_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl); ---- [[policy-variant-overloaded]] @@ -750,361 +1322,713 @@ vuint64m8_t __riscv_vcpop_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size [,c] ---- -vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_tu (vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_tu (vuint32mf2_t vd, 
vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_tu (vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_tu (vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_tu (vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_tu (vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_tu (vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tu (vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tu (vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tu (vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tu (vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tu (vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tu (vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_tu (vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tu (vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tu (vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tu (vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_tu (vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_tu (vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tu (vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tu (vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tu (vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tu (vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tu (vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tu (vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tu 
(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_tu (vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_tu (vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tu (vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tu (vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tu (vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tu (vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vrol_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint8mf4_t __riscv_vrol_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vrol_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint8mf2_t __riscv_vrol_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vrol_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint8m1_t __riscv_vrol_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vrol_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t __riscv_vrol_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vrol_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vrol_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vrol_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vrol_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vrol_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t __riscv_vrol_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vrol_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vrol_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, + 
size_t vl); +vuint16m1_t __riscv_vrol_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vrol_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vrol_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vrol_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vrol_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vrol_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vrol_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vrol_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vrol_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vrol_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vrol_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vrol_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint32m4_t __riscv_vrol_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vrol_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vrol_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vrol_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vrol_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vrol_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vrol_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vrol_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl); +vuint64m4_t __riscv_vrol_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vrol_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl); +vuint64m8_t __riscv_vrol_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vrol_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl); +vuint8mf8_t __riscv_vror_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint8mf8_t __riscv_vror_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint8mf4_t __riscv_vror_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint8mf4_t __riscv_vror_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint8mf2_t __riscv_vror_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint8mf2_t __riscv_vror_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint8m1_t __riscv_vror_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint8m1_t __riscv_vror_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint8m2_t __riscv_vror_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint8m2_t __riscv_vror_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint8m4_t __riscv_vror_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint8m4_t __riscv_vror_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint8m8_t __riscv_vror_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl); +vuint8m8_t __riscv_vror_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl); +vuint16mf4_t 
__riscv_vror_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vror_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vror_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint16m1_t __riscv_vror_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vror_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint16m2_t __riscv_vror_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vror_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint16m4_t __riscv_vror_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vror_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl); +vuint16m8_t __riscv_vror_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vror_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vror_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m1_t __riscv_vror_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vror_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m2_t __riscv_vror_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint32m4_t __riscv_vror_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m4_t __riscv_vror_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vror_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32m8_t __riscv_vror_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vror_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vror_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vror_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vror_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl); +vuint64m4_t __riscv_vror_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vror_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl); +vuint64m8_t __riscv_vror_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vror_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t 
rs1, size_t vl); -vuint8m2_t __riscv_vrol_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); 
-vuint64m8_t __riscv_vrol_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_tum (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_tum (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tum (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tum (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tum (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_tum (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tum (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_tum (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tum (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tum (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tum (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tum (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_tum (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tum (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tum (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tum (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t 
__riscv_vror_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tum (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tum (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t 
__riscv_vrol_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); 
+vuint8m8_t __riscv_vror_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_tumu (vbool64_t vm, 
vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vrol_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_tumu 
(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_tumu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_tumu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_tumu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_tumu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_tumu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_tumu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_tumu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_tumu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_tumu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_tumu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_tumu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_tumu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t 
__riscv_vror_tumu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_tumu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_tumu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_tumu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_tumu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_tumu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vrol_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t 
vl); +vuint16mf4_t __riscv_vrol_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vrol_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t 
__riscv_vror_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vror_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t 
vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); // masked functions -vuint8mf8_t __riscv_vrol_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vrol_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vrol_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vrol_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vrol_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vrol_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vrol_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vrol_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vrol_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vrol_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vrol_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vrol_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vrol_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vrol_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vrol_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vrol_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vrol_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vrol_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vrol_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vrol_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vrol_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vrol_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vrol_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vrol_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vrol_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vrol_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vrol_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vrol_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, 
size_t vl); -vuint32m1_t __riscv_vrol_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vrol_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vrol_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vrol_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vrol_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vrol_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vrol_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vrol_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vrol_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vrol_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vrol_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vrol_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vrol_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vrol_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vrol_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vrol_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); -vuint8mf8_t __riscv_vror_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint8mf8_t __riscv_vror_mu (vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint8mf4_t __riscv_vror_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint8mf4_t __riscv_vror_mu (vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint8mf2_t __riscv_vror_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint8mf2_t __riscv_vror_mu (vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint8m1_t __riscv_vror_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint8m1_t __riscv_vror_mu (vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint8m2_t __riscv_vror_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint8m2_t __riscv_vror_mu (vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint8m4_t __riscv_vror_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint8m4_t __riscv_vror_mu (vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint8m8_t __riscv_vror_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl); -vuint8m8_t __riscv_vror_mu (vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl); -vuint16mf4_t __riscv_vror_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint16mf4_t __riscv_vror_mu (vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vror_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint16mf2_t __riscv_vror_mu (vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vror_mu (vbool16_t vm, 
vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint16m1_t __riscv_vror_mu (vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vror_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint16m2_t __riscv_vror_mu (vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vror_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint16m4_t __riscv_vror_mu (vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vror_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl); -vuint16m8_t __riscv_vror_mu (vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vror_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32mf2_t __riscv_vror_mu (vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vror_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m1_t __riscv_vror_mu (vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vror_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m2_t __riscv_vror_mu (vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vror_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m4_t __riscv_vror_mu (vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vror_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32m8_t __riscv_vror_mu (vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vror_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vror_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vror_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vror_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vror_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vror_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vror_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vror_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl); +vuint8mf8_t __riscv_vrol_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vrol_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vrol_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vrol_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vrol_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vrol_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vrol_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vrol_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vrol_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t 
vs1, size_t vl); +vuint8m2_t __riscv_vrol_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vrol_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vrol_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vrol_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vrol_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vrol_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vrol_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vrol_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vrol_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vrol_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vrol_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vrol_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vrol_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vrol_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vrol_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vrol_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vrol_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vrol_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vrol_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vrol_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vrol_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vrol_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vrol_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vrol_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t __riscv_vrol_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vrol_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vrol_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vrol_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vrol_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vrol_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vrol_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vrol_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vrol_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vrol_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t 
vl); +vuint64m8_t __riscv_vrol_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); +vuint8mf8_t __riscv_vror_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint8mf8_t __riscv_vror_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint8mf4_t __riscv_vror_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint8mf4_t __riscv_vror_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint8mf2_t __riscv_vror_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint8mf2_t __riscv_vror_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint8m1_t __riscv_vror_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint8m1_t __riscv_vror_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint8m2_t __riscv_vror_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint8m2_t __riscv_vror_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint8m4_t __riscv_vror_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint8m4_t __riscv_vror_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint8m8_t __riscv_vror_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl); +vuint8m8_t __riscv_vror_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl); +vuint16mf4_t __riscv_vror_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint16mf4_t __riscv_vror_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vror_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint16mf2_t __riscv_vror_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vror_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint16m1_t __riscv_vror_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vror_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint16m2_t __riscv_vror_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vror_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint16m4_t __riscv_vror_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vror_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl); +vuint16m8_t __riscv_vror_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vror_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32mf2_t __riscv_vror_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vror_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint32m1_t __riscv_vror_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vror_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint32m2_t __riscv_vror_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vror_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint32m4_t 
__riscv_vror_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vror_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl); +vuint32m8_t __riscv_vror_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vror_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vror_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vror_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vror_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vror_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vror_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vror_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vror_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl); ---- [[policy-variant-overloaded]] @@ -1112,127 +2036,247 @@ vuint64m8_t __riscv_vror_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_ [,c] ---- -vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tu (vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tu (vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_tu (vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tu (vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_tu (vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tu (vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_tu (vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tu (vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tu (vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tu (vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tu (vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tu (vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tu (vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tu (vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tu (vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_tu (vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tu (vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tu (vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tu (vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tu (vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tu (vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tu (vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t 
vl); -vuint64m2_t __riscv_vwsll_tu (vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tu (vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tu (vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tu (vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tu (vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tu(vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl); +vuint16mf4_t __riscv_vwsll_tu(vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_tu(vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl); +vuint16mf2_t __riscv_vwsll_tu(vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, + size_t vl); +vuint16m1_t __riscv_vwsll_tu(vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl); +vuint16m1_t __riscv_vwsll_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl); +vuint16m2_t __riscv_vwsll_tu(vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl); +vuint16m2_t __riscv_vwsll_tu(vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl); +vuint16m4_t __riscv_vwsll_tu(vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl); +vuint16m4_t __riscv_vwsll_tu(vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl); +vuint16m8_t __riscv_vwsll_tu(vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl); +vuint16m8_t __riscv_vwsll_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl); +vuint32mf2_t __riscv_vwsll_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tu(vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, + size_t vl); +vuint32m1_t __riscv_vwsll_tu(vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl); +vuint32m1_t __riscv_vwsll_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, + size_t vl); +vuint32m2_t __riscv_vwsll_tu(vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vwsll_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl); +vuint32m4_t __riscv_vwsll_tu(vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vwsll_tu(vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl); +vuint32m8_t __riscv_vwsll_tu(vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vwsll_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl); +vuint64m1_t __riscv_vwsll_tu(vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl); +vuint64m1_t __riscv_vwsll_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, + size_t vl); +vuint64m2_t __riscv_vwsll_tu(vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vwsll_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl); +vuint64m4_t __riscv_vwsll_tu(vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vwsll_tu(vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl); +vuint64m8_t __riscv_vwsll_tu(vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vwsll_tu(vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tum (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t 
vl); -vuint16mf2_t __riscv_vwsll_tum (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tum (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tum (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tum (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tum (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tum (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tum (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tum (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tum (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tum (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tum (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tum (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tum (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tum (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum(vbool8_t vm, vuint16m2_t vd, 
vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_tumu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_tumu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_tumu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_tumu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_tumu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t 
rs1, size_t vl); -vuint16m8_t __riscv_vwsll_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_tumu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_tumu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_tumu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_tumu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_tumu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_tumu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_tumu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_tumu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_tumu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_tumu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); 
+vuint32mf2_t __riscv_vwsll_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); // masked functions -vuint16mf4_t __riscv_vwsll_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl); -vuint16mf4_t __riscv_vwsll_mu (vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl); -vuint16mf2_t __riscv_vwsll_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl); -vuint16mf2_t __riscv_vwsll_mu (vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl); -vuint16m1_t __riscv_vwsll_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl); -vuint16m1_t __riscv_vwsll_mu (vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl); -vuint16m2_t __riscv_vwsll_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl); -vuint16m2_t __riscv_vwsll_mu (vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl); -vuint16m4_t __riscv_vwsll_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl); -vuint16m4_t __riscv_vwsll_mu (vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl); -vuint16m8_t __riscv_vwsll_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl); -vuint16m8_t __riscv_vwsll_mu (vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl); -vuint32mf2_t __riscv_vwsll_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl); -vuint32mf2_t __riscv_vwsll_mu (vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl); -vuint32m1_t __riscv_vwsll_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vwsll_mu (vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl); -vuint32m2_t __riscv_vwsll_mu 
(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl); -vuint32m2_t __riscv_vwsll_mu (vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl); -vuint32m4_t __riscv_vwsll_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl); -vuint32m4_t __riscv_vwsll_mu (vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl); -vuint32m8_t __riscv_vwsll_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl); -vuint32m8_t __riscv_vwsll_mu (vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl); -vuint64m1_t __riscv_vwsll_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint64m1_t __riscv_vwsll_mu (vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl); -vuint64m2_t __riscv_vwsll_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint64m2_t __riscv_vwsll_mu (vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl); -vuint64m4_t __riscv_vwsll_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint64m4_t __riscv_vwsll_mu (vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl); -vuint64m8_t __riscv_vwsll_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint64m8_t __riscv_vwsll_mu (vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl); +vuint16mf4_t __riscv_vwsll_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl); +vuint16mf4_t __riscv_vwsll_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl); +vuint16mf2_t __riscv_vwsll_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl); +vuint16m1_t __riscv_vwsll_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl); +vuint16m2_t __riscv_vwsll_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl); +vuint16m4_t __riscv_vwsll_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl); +vuint16m8_t __riscv_vwsll_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl); +vuint32mf2_t __riscv_vwsll_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl); +vuint32mf2_t __riscv_vwsll_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vwsll_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl); +vuint32m2_t __riscv_vwsll_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl); +vuint32m4_t __riscv_vwsll_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl); +vuint32m8_t 
__riscv_vwsll_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl); +vuint32m8_t __riscv_vwsll_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint64m1_t __riscv_vwsll_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl); +vuint64m2_t __riscv_vwsll_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl); +vuint64m4_t __riscv_vwsll_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl); +vuint64m8_t __riscv_vwsll_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc index 98ab2a820..7e561c15e 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/01_zvbc_-_vector_carryless_multiplication.adoc @@ -6,71 +6,135 @@ [,c] ---- -vuint64m1_t __riscv_vclmul_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tu (vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tu (vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tu (vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tu (vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmul_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmul_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmul_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmul_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t 
vl); +vuint64m4_t __riscv_vclmul_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmul_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmul_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m1_t __riscv_vclmulh_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m2_t __riscv_vclmulh_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m4_t __riscv_vclmulh_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint64m8_t __riscv_vclmulh_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, + size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tum (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tum (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tum (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tum (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); 
+vuint64m8_t __riscv_vclmul_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_tumu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_tumu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_tumu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_tumu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_tumu(vbool8_t vm, 
vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); // masked functions -vuint64m1_t __riscv_vclmul_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmul_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmul_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmul_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmul_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmul_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmul_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmul_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); -vuint64m1_t __riscv_vclmulh_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m1_t __riscv_vclmulh_mu (vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl); -vuint64m2_t __riscv_vclmulh_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m2_t __riscv_vclmulh_mu (vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl); -vuint64m4_t __riscv_vclmulh_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m4_t __riscv_vclmulh_mu (vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl); -vuint64m8_t __riscv_vclmulh_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint64m8_t __riscv_vclmulh_mu (vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl); +vuint64m1_t __riscv_vclmul_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmul_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmul_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmul_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmul_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmul_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmul_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmul_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); +vuint64m1_t 
__riscv_vclmulh_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl); +vuint64m1_t __riscv_vclmulh_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl); +vuint64m2_t __riscv_vclmulh_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl); +vuint64m2_t __riscv_vclmulh_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl); +vuint64m4_t __riscv_vclmulh_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl); +vuint64m4_t __riscv_vclmulh_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl); +vuint64m8_t __riscv_vclmulh_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl); +vuint64m8_t __riscv_vclmulh_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc index 36e253baf..5073e6a96 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/02_zvkg_-_vector_gcm_gmac.adoc @@ -6,14 +6,19 @@ [,c] ---- -vuint32mf2_t __riscv_vghsh_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vghsh_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vghsh_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vghsh_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vghsh_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint32mf2_t __riscv_vgmul_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vgmul_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vgmul_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vgmul_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vgmul_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vghsh_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vghsh_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vghsh_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vghsh_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vghsh_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint32mf2_t __riscv_vgmul_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vgmul_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vgmul_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vgmul_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vgmul_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc index 46b66b36f..6adfda0e4 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc +++ 
b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/03_zvkned_-_nist_suite:_vector_aes_block_cipher.adoc @@ -6,44 +6,44 @@ [,c] ---- -vuint32mf2_t __riscv_vaesef_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesef_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesef_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesef_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesef_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesef_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesem_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesem_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesem_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesem_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesem_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vv_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesef_vs_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t 
__riscv_vaesef_vs_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vv_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesef_vs_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vv_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesef_vs_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vv_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesef_vs_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vs_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesef_vv_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vv_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesem_vs_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vv_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesem_vs_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vv_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesem_vs_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vv_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesem_vs_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vs_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesem_vv_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ---- [[policy-variant-overloaded]] @@ -51,44 +51,44 @@ vuint32m8_t __riscv_vaesem_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); [,c] ---- -vuint32mf2_t __riscv_vaesdf_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdf_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdf_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t 
vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdf_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdf_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdf_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vaesdm_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t __riscv_vaesdm_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesdm_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vaesdm_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesdm_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vv_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdf_vs_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vv_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdf_vs_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vv_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdf_vs_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t 
__riscv_vaesdf_vv_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdf_vs_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vs_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdf_vv_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vv_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesdm_vs_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vv_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vaesdm_vs_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vv_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesdm_vs_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vv_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vaesdm_vs_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vs_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesdm_vv_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ---- [[policy-variant-overloaded]] @@ -96,16 +96,26 @@ vuint32m8_t __riscv_vaesdm_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); [,c] ---- -vuint32mf2_t __riscv_vaeskf1_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf1_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf1_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf1_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf1_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); -vuint32mf2_t __riscv_vaeskf2_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vaeskf2_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vaeskf2_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vaeskf2_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vaeskf1_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vaeskf1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t __riscv_vaeskf1_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vaeskf1_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vaeskf1_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); +vuint32mf2_t __riscv_vaeskf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vaeskf2_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t 
__riscv_vaeskf2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vaeskf2_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vaeskf2_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); ---- [[policy-variant-overloaded]] @@ -113,18 +123,18 @@ vuint32m8_t __riscv_vaeskf2_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, si [,c] ---- -vuint32mf2_t __riscv_vaesz_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vaesz_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vaesz_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vaesz_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vaesz_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32mf2_t __riscv_vaesz_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vaesz_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vaesz_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vaesz_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vaesz_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc index 118223db5..4185db4b7 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/04_zvknh_-_nist_suite:_vector_sha-2_secure_hash.adoc @@ -6,15 +6,24 @@ [,c] ---- -vuint32mf2_t __riscv_vsha2ms_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2ms_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2ms_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2ms_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2ms_tu (vuint32m8_t vd, 
vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2ms_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2ms_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2ms_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2ms_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2ms_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ms_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsha2ms_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsha2ms_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsha2ms_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint64m1_t __riscv_vsha2ms_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vsha2ms_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vsha2ms_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vsha2ms_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); ---- [[policy-variant-overloaded]] @@ -22,22 +31,40 @@ vuint64m8_t __riscv_vsha2ms_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1 [,c] ---- -vuint32mf2_t __riscv_vsha2ch_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2ch_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2ch_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2ch_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2ch_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2ch_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2ch_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2ch_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2ch_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); -vuint32mf2_t __riscv_vsha2cl_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsha2cl_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsha2cl_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsha2cl_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsha2cl_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); -vuint64m1_t __riscv_vsha2cl_tu (vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl); -vuint64m2_t __riscv_vsha2cl_tu (vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl); -vuint64m4_t __riscv_vsha2cl_tu (vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl); -vuint64m8_t __riscv_vsha2cl_tu (vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsha2ch_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2ch_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsha2ch_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsha2ch_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); 
+vuint32m8_t __riscv_vsha2ch_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint64m1_t __riscv_vsha2ch_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vsha2ch_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vsha2ch_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vsha2ch_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); +vuint32mf2_t __riscv_vsha2cl_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl); +vuint32m1_t __riscv_vsha2cl_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsha2cl_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsha2cl_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsha2cl_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); +vuint64m1_t __riscv_vsha2cl_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl); +vuint64m2_t __riscv_vsha2cl_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl); +vuint64m4_t __riscv_vsha2cl_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl); +vuint64m8_t __riscv_vsha2cl_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc index 304925935..83031bc0c 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/05_zvksed_-_shangmi_suite:_sm4_block_cipher.adoc @@ -6,11 +6,16 @@ [,c] ---- -vuint32mf2_t __riscv_vsm4k_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm4k_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm4k_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm4k_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vsm4k_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vsm4k_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t __riscv_vsm4k_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vsm4k_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vsm4k_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); ---- [[policy-variant-overloaded]] @@ -18,23 +23,23 @@ vuint32m8_t __riscv_vsm4k_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size [,c] ---- -vuint32mf2_t __riscv_vsm4r_vv_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32mf2_t __riscv_vsm4r_vs_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); -vuint32m1_t __riscv_vsm4r_vv_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m1_t 
__riscv_vsm4r_vs_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m1_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m1_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m1_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vv_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m2_t __riscv_vsm4r_vs_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m2_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m2_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vv_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m4_t __riscv_vsm4r_vs_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vs_tu (vuint32m8_t vd, vuint32m4_t vs2, size_t vl); -vuint32m8_t __riscv_vsm4r_vv_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vv_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32mf2_t __riscv_vsm4r_vs_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vv_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m1_t __riscv_vsm4r_vs_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vv_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m2_t __riscv_vsm4r_vs_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vv_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m4_t __riscv_vsm4r_vs_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vs_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl); +vuint32m8_t __riscv_vsm4r_vv_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl); ---- diff --git a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc index b907f2879..fb575f001 100644 --- a/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc +++ b/auto-generated/vector-crypto/policy_funcs/overloaded_intrinsic_funcs/06_zvksh_-_shangmi_suite:_sm3_secure_hash.adoc @@ -6,11 +6,16 @@ [,c] ---- -vuint32mf2_t __riscv_vsm3me_tu (vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl); -vuint32m1_t __riscv_vsm3me_tu (vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl); -vuint32m2_t __riscv_vsm3me_tu (vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl); -vuint32m4_t __riscv_vsm3me_tu (vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl); -vuint32m8_t __riscv_vsm3me_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl); +vuint32mf2_t __riscv_vsm3me_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t 
vl); +vuint32m1_t __riscv_vsm3me_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl); +vuint32m2_t __riscv_vsm3me_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl); +vuint32m4_t __riscv_vsm3me_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl); +vuint32m8_t __riscv_vsm3me_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl); ---- [[policy-variant-overloaded]] @@ -18,9 +23,14 @@ vuint32m8_t __riscv_vsm3me_tu (vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, [,c] ---- -vuint32mf2_t __riscv_vsm3c_tu (vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, size_t vl); -vuint32m1_t __riscv_vsm3c_tu (vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, size_t vl); -vuint32m2_t __riscv_vsm3c_tu (vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, size_t vl); -vuint32m4_t __riscv_vsm3c_tu (vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, size_t vl); -vuint32m8_t __riscv_vsm3c_tu (vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, size_t vl); +vuint32mf2_t __riscv_vsm3c_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t uimm, + size_t vl); +vuint32m1_t __riscv_vsm3c_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t uimm, + size_t vl); +vuint32m2_t __riscv_vsm3c_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t uimm, + size_t vl); +vuint32m4_t __riscv_vsm3c_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t uimm, + size_t vl); +vuint32m8_t __riscv_vsm3c_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t uimm, + size_t vl); ---- From 9977f3bb0a122d9adb8ffaf8f152b7bd672002ff Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Thu, 13 Jun 2024 19:56:22 -0700 Subject: [PATCH 096/151] [Auto-gen] Update vector crypto tests under ../auto-generated. (make git-commit-autogen-vector-crypto-test) --- .../vector-crypto/api-testing/vaesdf.c | 47 +- .../vector-crypto/api-testing/vaesdm.c | 47 +- .../vector-crypto/api-testing/vaesef.c | 47 +- .../vector-crypto/api-testing/vaesem.c | 47 +- .../vector-crypto/api-testing/vaeskf1.c | 2 +- .../vector-crypto/api-testing/vaeskf2.c | 5 +- .../vector-crypto/api-testing/vaesz.c | 44 +- .../vector-crypto/api-testing/vandn.c | 143 +++-- .../vector-crypto/api-testing/vbrev.c | 2 +- .../vector-crypto/api-testing/vbrev8.c | 2 +- .../vector-crypto/api-testing/vclmul.c | 26 +- .../vector-crypto/api-testing/vclmulh.c | 26 +- .../vector-crypto/api-testing/vclz.c | 2 +- .../vector-crypto/api-testing/vcpop.c | 2 +- .../vector-crypto/api-testing/vctz.c | 2 +- .../vector-crypto/api-testing/vghsh.c | 17 +- .../vector-crypto/api-testing/vgmul.c | 5 +- .../vector-crypto/api-testing/vrev8.c | 2 +- .../vector-crypto/api-testing/vrol.c | 143 +++-- .../vector-crypto/api-testing/vror.c | 143 +++-- .../vector-crypto/api-testing/vsha2ch.c | 29 +- .../vector-crypto/api-testing/vsha2cl.c | 29 +- .../vector-crypto/api-testing/vsha2ms.c | 29 +- .../vector-crypto/api-testing/vsm3c.c | 5 +- .../vector-crypto/api-testing/vsm3me.c | 5 +- .../vector-crypto/api-testing/vsm4k.c | 2 +- .../vector-crypto/api-testing/vsm4r.c | 47 +- .../vector-crypto/api-testing/vwsll.c | 95 ++- .../overloaded-api-testing/vaesdf.c | 47 +- .../overloaded-api-testing/vaesdm.c | 47 +- .../overloaded-api-testing/vaesef.c | 47 +- .../overloaded-api-testing/vaesem.c | 47 +- .../overloaded-api-testing/vaeskf1.c | 2 +- .../overloaded-api-testing/vaeskf2.c | 5 +- .../overloaded-api-testing/vaesz.c | 44 +- .../overloaded-api-testing/vandn.c | 143 +++-- .../overloaded-api-testing/vbrev.c | 2 +- .../overloaded-api-testing/vbrev8.c | 2 +- .../overloaded-api-testing/vclmul.c | 26 +- .../overloaded-api-testing/vclmulh.c | 26 +- 
.../overloaded-api-testing/vclz.c | 2 +- .../overloaded-api-testing/vcpop.c | 2 +- .../overloaded-api-testing/vctz.c | 2 +- .../overloaded-api-testing/vghsh.c | 17 +- .../overloaded-api-testing/vgmul.c | 5 +- .../overloaded-api-testing/vrev8.c | 2 +- .../overloaded-api-testing/vrol.c | 143 +++-- .../overloaded-api-testing/vror.c | 143 +++-- .../overloaded-api-testing/vsha2ch.c | 29 +- .../overloaded-api-testing/vsha2cl.c | 29 +- .../overloaded-api-testing/vsha2ms.c | 29 +- .../overloaded-api-testing/vsm3c.c | 5 +- .../overloaded-api-testing/vsm3me.c | 5 +- .../overloaded-api-testing/vsm4k.c | 2 +- .../overloaded-api-testing/vsm4r.c | 47 +- .../overloaded-api-testing/vwsll.c | 95 ++- .../policy_funcs/api-testing/vaesdf.c | 59 +- .../policy_funcs/api-testing/vaesdm.c | 59 +- .../policy_funcs/api-testing/vaesef.c | 59 +- .../policy_funcs/api-testing/vaesem.c | 59 +- .../policy_funcs/api-testing/vaeskf1.c | 17 +- .../policy_funcs/api-testing/vaeskf2.c | 17 +- .../policy_funcs/api-testing/vaesz.c | 44 +- .../policy_funcs/api-testing/vandn.c | 587 ++++++++++++------ .../policy_funcs/api-testing/vbrev.c | 209 ++++--- .../policy_funcs/api-testing/vbrev8.c | 209 ++++--- .../policy_funcs/api-testing/vclmul.c | 114 +++- .../policy_funcs/api-testing/vclmulh.c | 118 +++- .../policy_funcs/api-testing/vclz.c | 209 ++++--- .../policy_funcs/api-testing/vcpop.c | 209 ++++--- .../policy_funcs/api-testing/vctz.c | 209 ++++--- .../policy_funcs/api-testing/vghsh.c | 17 +- .../policy_funcs/api-testing/vgmul.c | 5 +- .../policy_funcs/api-testing/vrev8.c | 209 ++++--- .../policy_funcs/api-testing/vrol.c | 563 +++++++++++------ .../policy_funcs/api-testing/vror.c | 563 +++++++++++------ .../policy_funcs/api-testing/vsha2ch.c | 29 +- .../policy_funcs/api-testing/vsha2cl.c | 29 +- .../policy_funcs/api-testing/vsha2ms.c | 29 +- .../policy_funcs/api-testing/vsm3c.c | 5 +- .../policy_funcs/api-testing/vsm3me.c | 17 +- .../policy_funcs/api-testing/vsm4k.c | 5 +- .../policy_funcs/api-testing/vsm4r.c | 47 +- .../policy_funcs/api-testing/vwsll.c | 399 ++++++++---- .../policy_funcs/llvm-api-tests/vaesdf.c | 57 +- .../policy_funcs/llvm-api-tests/vaesdm.c | 57 +- .../policy_funcs/llvm-api-tests/vaesef.c | 57 +- .../policy_funcs/llvm-api-tests/vaesem.c | 57 +- .../policy_funcs/llvm-api-tests/vaeskf1.c | 15 +- .../policy_funcs/llvm-api-tests/vaeskf2.c | 15 +- .../policy_funcs/llvm-api-tests/vaesz.c | 42 +- .../policy_funcs/llvm-api-tests/vandn.c | 585 +++++++++++------ .../policy_funcs/llvm-api-tests/vbrev.c | 207 ++++-- .../policy_funcs/llvm-api-tests/vbrev8.c | 207 ++++-- .../policy_funcs/llvm-api-tests/vclmul.c | 112 +++- .../policy_funcs/llvm-api-tests/vclmulh.c | 116 +++- .../policy_funcs/llvm-api-tests/vclz.c | 207 ++++-- .../policy_funcs/llvm-api-tests/vcpop.c | 207 ++++-- .../policy_funcs/llvm-api-tests/vctz.c | 207 ++++-- .../policy_funcs/llvm-api-tests/vghsh.c | 15 +- .../policy_funcs/llvm-api-tests/vgmul.c | 3 +- .../policy_funcs/llvm-api-tests/vrev8.c | 207 ++++-- .../policy_funcs/llvm-api-tests/vrol.c | 561 +++++++++++------ .../policy_funcs/llvm-api-tests/vror.c | 561 +++++++++++------ .../policy_funcs/llvm-api-tests/vsha2ch.c | 27 +- .../policy_funcs/llvm-api-tests/vsha2cl.c | 27 +- .../policy_funcs/llvm-api-tests/vsha2ms.c | 27 +- .../policy_funcs/llvm-api-tests/vsm3c.c | 3 +- .../policy_funcs/llvm-api-tests/vsm3me.c | 15 +- .../policy_funcs/llvm-api-tests/vsm4k.c | 3 +- .../policy_funcs/llvm-api-tests/vsm4r.c | 45 +- .../policy_funcs/llvm-api-tests/vwsll.c | 397 ++++++++---- 112 files changed, 6852 
insertions(+), 3272 deletions(-)

diff --git a/auto-generated/vector-crypto/api-testing/vaesdf.c b/auto-generated/vector-crypto/api-testing/vaesdf.c
index e5b912a42..e12a3719a 100644
--- a/auto-generated/vector-crypto/api-testing/vaesdf.c
+++ b/auto-generated/vector-crypto/api-testing/vaesdf.c
@@ -1,27 +1,33 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
-vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vv_u32mf2(vd, vs2, vl);
 }
-vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32mf2_u32mf2(vd, vs2, vl);
 }
-vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32mf2_u32m1(vd, vs2, vl);
 }
-vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32mf2_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32mf2_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32mf2_u32m8(vd, vs2, vl);
 }
@@ -29,19 +35,23 @@ vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
 return __riscv_vaesdf_vv_u32m1(vd, vs2, vl);
 }
-vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32m1_u32m1(vd, vs2, vl);
 }
-vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32m1_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32m1_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32m1_u32m8(vd, vs2, vl);
 }
@@ -49,15 +59,18 @@ vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
 return __riscv_vaesdf_vv_u32m2(vd, vs2, vl);
 }
-vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32m2_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32m2_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2,
+ size_t vl) {
return __riscv_vaesdf_vs_u32m2_u32m8(vd, vs2, vl);
 }
@@ -65,11 +78,13 @@ vuint32m4_t test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
 return __riscv_vaesdf_vv_u32m4(vd, vs2, vl);
 }
-vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32m4_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2,
+ size_t vl) {
 return __riscv_vaesdf_vs_u32m4_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/api-testing/vaesdm.c b/auto-generated/vector-crypto/api-testing/vaesdm.c
index 903beeddf..bac6d0a06 100644
--- a/auto-generated/vector-crypto/api-testing/vaesdm.c
+++ b/auto-generated/vector-crypto/api-testing/vaesdm.c
@@ -1,27 +1,33 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
-vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vv_u32mf2(vd, vs2, vl);
 }
-vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32mf2_u32mf2(vd, vs2, vl);
 }
-vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32mf2_u32m1(vd, vs2, vl);
 }
-vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32mf2_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32mf2_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32mf2_u32m8(vd, vs2, vl);
 }
@@ -29,19 +35,23 @@ vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
 return __riscv_vaesdm_vv_u32m1(vd, vs2, vl);
 }
-vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32m1_u32m1(vd, vs2, vl);
 }
-vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32m1_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m4_t test_vaesdm_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32m1_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m8_t test_vaesdm_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32m1_u32m8(vd, vs2, vl);
 }
@@ -49,15 +59,18 @@ vuint32m2_t test_vaesdm_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
 return __riscv_vaesdm_vv_u32m2(vd, vs2, vl);
 }
-vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vaesdm_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32m2_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32m2_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32m2_u32m8(vd, vs2, vl);
 }
@@ -65,11 +78,13 @@ vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
 return __riscv_vaesdm_vv_u32m4(vd, vs2, vl);
 }
-vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32m4_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2,
+ size_t vl) {
 return __riscv_vaesdm_vs_u32m4_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/api-testing/vaesef.c b/auto-generated/vector-crypto/api-testing/vaesef.c
index 375059d4d..72255ee9e 100644
--- a/auto-generated/vector-crypto/api-testing/vaesef.c
+++ b/auto-generated/vector-crypto/api-testing/vaesef.c
@@ -1,27 +1,33 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
-vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vv_u32mf2(vd, vs2, vl);
 }
-vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32mf2_u32mf2(vd, vs2, vl);
 }
-vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32mf2_u32m1(vd, vs2, vl);
 }
-vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32mf2_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32mf2_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32mf2_u32m8(vd, vs2, vl);
 }
@@ -29,19 +35,23 @@ vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
 return __riscv_vaesef_vv_u32m1(vd, vs2, vl);
 }
-vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32m1_u32m1(vd, vs2, vl);
 }
-vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return
__riscv_vaesef_vs_u32m1_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32m1_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32m1_u32m8(vd, vs2, vl);
 }
@@ -49,15 +59,18 @@ vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
 return __riscv_vaesef_vv_u32m2(vd, vs2, vl);
 }
-vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32m2_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32m2_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32m2_u32m8(vd, vs2, vl);
 }
@@ -65,11 +78,13 @@ vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
 return __riscv_vaesef_vv_u32m4(vd, vs2, vl);
 }
-vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32m4_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2,
+ size_t vl) {
 return __riscv_vaesef_vs_u32m4_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/api-testing/vaesem.c b/auto-generated/vector-crypto/api-testing/vaesem.c
index 76aa9d61b..ce7186cb4 100644
--- a/auto-generated/vector-crypto/api-testing/vaesem.c
+++ b/auto-generated/vector-crypto/api-testing/vaesem.c
@@ -1,27 +1,33 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
-vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vv_u32mf2(vd, vs2, vl);
 }
-vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32mf2_u32mf2(vd, vs2, vl);
 }
-vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32mf2_u32m1(vd, vs2, vl);
 }
-vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32mf2_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32mf2_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2,
+
size_t vl) {
 return __riscv_vaesem_vs_u32mf2_u32m8(vd, vs2, vl);
 }
@@ -29,19 +35,23 @@ vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
 return __riscv_vaesem_vv_u32m1(vd, vs2, vl);
 }
-vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32m1_u32m1(vd, vs2, vl);
 }
-vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32m1_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32m1_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32m1_u32m8(vd, vs2, vl);
 }
@@ -49,15 +59,18 @@ vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
 return __riscv_vaesem_vv_u32m2(vd, vs2, vl);
 }
-vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32m2_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32m2_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32m2_u32m8(vd, vs2, vl);
 }
@@ -65,11 +78,13 @@ vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
 return __riscv_vaesem_vv_u32m4(vd, vs2, vl);
 }
-vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32m4_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2,
+ size_t vl) {
 return __riscv_vaesem_vs_u32m4_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/api-testing/vaeskf1.c b/auto-generated/vector-crypto/api-testing/vaeskf1.c
index a6f2fbd00..8bb210aa3 100644
--- a/auto-generated/vector-crypto/api-testing/vaeskf1.c
+++ b/auto-generated/vector-crypto/api-testing/vaeskf1.c
@@ -1,5 +1,5 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
 vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) {
 return __riscv_vaeskf1_vi_u32mf2(vs2, 0, vl);
diff --git a/auto-generated/vector-crypto/api-testing/vaeskf2.c b/auto-generated/vector-crypto/api-testing/vaeskf2.c
index 060b9874f..5d26a5400 100644
--- a/auto-generated/vector-crypto/api-testing/vaeskf2.c
+++ b/auto-generated/vector-crypto/api-testing/vaeskf2.c
@@ -1,7 +1,8 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
-vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaeskf2_vi_u32mf2(vd, vs2, 0, vl);
 }
diff --git
a/auto-generated/vector-crypto/api-testing/vaesz.c b/auto-generated/vector-crypto/api-testing/vaesz.c
index f3c6760ce..d344914e8 100644
--- a/auto-generated/vector-crypto/api-testing/vaesz.c
+++ b/auto-generated/vector-crypto/api-testing/vaesz.c
@@ -1,58 +1,72 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
-vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32mf2_u32mf2(vd, vs2, vl);
 }
-vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32mf2_u32m1(vd, vs2, vl);
 }
-vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32mf2_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32mf2_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32mf2_u32m8(vd, vs2, vl);
 }
-vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32m1_u32m1(vd, vs2, vl);
 }
-vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32m1_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32m1_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32m1_u32m8(vd, vs2, vl);
 }
-vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32m2_u32m2(vd, vs2, vl);
 }
-vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32m2_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32m2_u32m8(vd, vs2, vl);
 }
-vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32m4_u32m4(vd, vs2, vl);
 }
-vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2,
+ size_t vl) {
 return __riscv_vaesz_vs_u32m4_u32m8(vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/api-testing/vandn.c
b/auto-generated/vector-crypto/api-testing/vandn.c
index 7400c8a58..0c7002f3a 100644
--- a/auto-generated/vector-crypto/api-testing/vandn.c
+++ b/auto-generated/vector-crypto/api-testing/vandn.c
@@ -1,5 +1,5 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
 vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
 return __riscv_vandn_vv_u8mf8(vs2, vs1, vl);
@@ -57,7 +57,8 @@ vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) {
 return __riscv_vandn_vx_u8m8(vs2, rs1, vl);
 }
-vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1,
+ size_t vl) {
 return __riscv_vandn_vv_u16mf4(vs2, vs1, vl);
 }
@@ -65,7 +66,8 @@ vuint16mf4_t test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
 return __riscv_vandn_vx_u16mf4(vs2, rs1, vl);
 }
-vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1,
+ size_t vl) {
 return __riscv_vandn_vv_u16mf2(vs2, vs1, vl);
 }
@@ -105,7 +107,8 @@ vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) {
 return __riscv_vandn_vx_u16m8(vs2, rs1, vl);
 }
-vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1,
+ size_t vl) {
 return __riscv_vandn_vv_u32mf2(vs2, vs1, vl);
 }
@@ -177,178 +180,222 @@ vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
 return __riscv_vandn_vx_u64m8(vs2, rs1, vl);
 }
-vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2,
+ vuint8mf8_t vs1, size_t vl) {
 return __riscv_vandn_vv_u8mf8_m(vm, vs2, vs1, vl);
 }
-vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1,
+ size_t vl) {
 return __riscv_vandn_vx_u8mf8_m(vm, vs2, rs1, vl);
 }
-vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2,
+ vuint8mf4_t vs1, size_t vl) {
 return __riscv_vandn_vv_u8mf4_m(vm, vs2, vs1, vl);
 }
-vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1,
+ size_t vl) {
 return __riscv_vandn_vx_u8mf4_m(vm, vs2, rs1, vl);
 }
-vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2,
+ vuint8mf2_t vs1, size_t vl) {
 return __riscv_vandn_vv_u8mf2_m(vm, vs2, vs1, vl);
 }
-vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1,
+ size_t vl) {
 return __riscv_vandn_vx_u8mf2_m(vm, vs2, rs1, vl);
 }
-vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1,
+ size_t vl) {
 return __riscv_vandn_vv_u8m1_m(vm, vs2, vs1, vl);
 }
-vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1,
+ size_t vl) {
 return __riscv_vandn_vx_u8m1_m(vm, vs2, rs1, vl);
 }
-vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t
vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u8m2_m(vm, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8m2_m(vm, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u8m4_m(vm, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8m4_m(vm, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u8m8_m(vm, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8m8_m(vm, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vandn_vv_u16mf4_m(vm, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vandn_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vandn_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return 
__riscv_vandn_vv_u16m4_m(vm, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16m8_m(vm, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vandn_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32mf2_m(vm, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vandn_vv_u32m1_m(vm, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vandn_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vandn_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vandn_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vandn_vx_u32m4_m(vm, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vandn_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vandn_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) 
{
+vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1,
+ size_t vl) {
 return __riscv_vandn_vx_u64m1_m(vm, vs2, rs1, vl);
 }
-vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2,
+ vuint64m2_t vs1, size_t vl) {
 return __riscv_vandn_vv_u64m2_m(vm, vs2, vs1, vl);
 }
-vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1,
+ size_t vl) {
 return __riscv_vandn_vx_u64m2_m(vm, vs2, rs1, vl);
 }
-vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2,
+ vuint64m4_t vs1, size_t vl) {
 return __riscv_vandn_vv_u64m4_m(vm, vs2, vs1, vl);
 }
-vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1,
+ size_t vl) {
 return __riscv_vandn_vx_u64m4_m(vm, vs2, rs1, vl);
 }
-vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1,
+ size_t vl) {
 return __riscv_vandn_vv_u64m8_m(vm, vs2, vs1, vl);
 }
-vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1,
+ size_t vl) {
 return __riscv_vandn_vx_u64m8_m(vm, vs2, rs1, vl);
 }
diff --git a/auto-generated/vector-crypto/api-testing/vbrev.c b/auto-generated/vector-crypto/api-testing/vbrev.c
index fd22f6114..c59fa4d04 100644
--- a/auto-generated/vector-crypto/api-testing/vbrev.c
+++ b/auto-generated/vector-crypto/api-testing/vbrev.c
@@ -1,5 +1,5 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
 vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
 return __riscv_vbrev_v_u8mf8(vs2, vl);
diff --git a/auto-generated/vector-crypto/api-testing/vbrev8.c b/auto-generated/vector-crypto/api-testing/vbrev8.c
index 6d29c2665..ba8c19cfc 100644
--- a/auto-generated/vector-crypto/api-testing/vbrev8.c
+++ b/auto-generated/vector-crypto/api-testing/vbrev8.c
@@ -1,5 +1,5 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
 vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
 return __riscv_vbrev8_v_u8mf8(vs2, vl);
diff --git a/auto-generated/vector-crypto/api-testing/vclmul.c b/auto-generated/vector-crypto/api-testing/vclmul.c
index 3fd21fa7f..b04735718 100644
--- a/auto-generated/vector-crypto/api-testing/vclmul.c
+++ b/auto-generated/vector-crypto/api-testing/vclmul.c
@@ -1,5 +1,5 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
 vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
 return __riscv_vclmul_vv_u64m1(vs2, vs1, vl);
@@ -33,34 +33,42 @@ vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) {
 return __riscv_vclmul_vx_u64m8(vs2, rs1, vl);
 }
-vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2,
+ vuint64m1_t vs1, size_t vl) {
 return __riscv_vclmul_vv_u64m1_m(vm, vs2, vs1, vl);
 }
-vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1,
+ size_t vl) {
 return __riscv_vclmul_vx_u64m1_m(vm, vs2, rs1, vl);
 }
-vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t vm, vuint64m2_t
vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vclmul_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmul_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vclmul_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmul_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vclmul_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmul_vx_u64m8_m(vm, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vclmulh.c b/auto-generated/vector-crypto/api-testing/vclmulh.c index a4c69311e..9a6abaea4 100644 --- a/auto-generated/vector-crypto/api-testing/vclmulh.c +++ b/auto-generated/vector-crypto/api-testing/vclmulh.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { return __riscv_vclmulh_vv_u64m1(vs2, vs1, vl); @@ -33,34 +33,42 @@ vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmulh_vx_u64m8(vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vclmulh_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vclmulh_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vclmulh_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t vm,
vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vclmulh_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m8_m(vm, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vclz.c b/auto-generated/vector-crypto/api-testing/vclz.c index 1fa92a927..fc05c6572 100644 --- a/auto-generated/vector-crypto/api-testing/vclz.c +++ b/auto-generated/vector-crypto/api-testing/vclz.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vclz_v_u8mf8(vs2, vl); diff --git a/auto-generated/vector-crypto/api-testing/vcpop.c b/auto-generated/vector-crypto/api-testing/vcpop.c index d3c52d8fd..b0c500152 100644 --- a/auto-generated/vector-crypto/api-testing/vcpop.c +++ b/auto-generated/vector-crypto/api-testing/vcpop.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint8mf8_t test_vcpop_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vcpop_v_u8mf8(vs2, vl); diff --git a/auto-generated/vector-crypto/api-testing/vctz.c b/auto-generated/vector-crypto/api-testing/vctz.c index eadb46e90..7635ae51e 100644 --- a/auto-generated/vector-crypto/api-testing/vctz.c +++ b/auto-generated/vector-crypto/api-testing/vctz.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vctz_v_u8mf8(vs2, vl); diff --git a/auto-generated/vector-crypto/api-testing/vghsh.c b/auto-generated/vector-crypto/api-testing/vghsh.c index accbf01e5..3642b6179 100644 --- a/auto-generated/vector-crypto/api-testing/vghsh.c +++ b/auto-generated/vector-crypto/api-testing/vghsh.c @@ -1,22 +1,27 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vghsh_vv_u32mf2(vd, vs2, vs1, vl); } -vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m1(vd, vs2, vs1, vl); } -vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m2(vd, vs2, vs1, vl); } -vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m4(vd, vs2, vs1, vl); } -vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m8(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vgmul.c b/auto-generated/vector-crypto/api-testing/vgmul.c index 4d9028a54..684fac34f 100644 --- a/auto-generated/vector-crypto/api-testing/vgmul.c +++ b/auto-generated/vector-crypto/api-testing/vgmul.c @@ -1,7 +1,8 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
size_t vl) { +vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vgmul_vv_u32mf2(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vrev8.c b/auto-generated/vector-crypto/api-testing/vrev8.c index c0b367a61..06acaaba5 100644 --- a/auto-generated/vector-crypto/api-testing/vrev8.c +++ b/auto-generated/vector-crypto/api-testing/vrev8.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf8(vs2, vl); diff --git a/auto-generated/vector-crypto/api-testing/vrol.c b/auto-generated/vector-crypto/api-testing/vrol.c index f4ee9ffbb..5fd4b2c37 100644 --- a/auto-generated/vector-crypto/api-testing/vrol.c +++ b/auto-generated/vector-crypto/api-testing/vrol.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf8(vs2, vs1, vl); @@ -57,7 +57,8 @@ vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m8(vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf4(vs2, vs1, vl); } @@ -65,7 +66,8 @@ vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf4(vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf2(vs2, vs1, vl); } @@ -105,7 +107,8 @@ vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m8(vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32mf2(vs2, vs1, vl); } @@ -177,178 +180,222 @@ vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m8(vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf8_m(vm, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8mf8_m(vm, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf4_m(vm, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8mf4_m(vm, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf2_m(vm, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) {
return __riscv_vrol_vx_u8mf2_m(vm, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m1_m(vm, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8m1_m(vm, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m2_m(vm, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8m2_m(vm, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m4_m(vm, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8m4_m(vm, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m8_m(vm, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8m8_m(vm, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrol_vv_u16mf4_m(vm, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrol_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return 
__riscv_vrol_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m4_m(vm, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m8_m(vm, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vrol_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32mf2_m(vm, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m1_m(vm, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32m4_m(vm, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t 
rs1, + size_t vl) { return __riscv_vrol_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m8_m(vm, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vror.c b/auto-generated/vector-crypto/api-testing/vror.c index 9c8f32431..56e23b2a6 100644 --- a/auto-generated/vector-crypto/api-testing/vror.c +++ b/auto-generated/vector-crypto/api-testing/vror.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vror_vv_u8mf8(vs2, vs1, vl); @@ -57,7 +57,8 @@ vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8m8(vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf4(vs2, vs1, vl); } @@ -65,7 +66,8 @@ vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf4(vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf2(vs2, vs1, vl); } @@ -105,7 +107,8 @@ vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16m8(vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u32mf2(vs2, vs1, vl); } @@ -177,178 +180,222 @@ vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) {
return __riscv_vror_vx_u64m8(vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf8_m(vm, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u8mf8_m(vm, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf4_m(vm, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u8mf4_m(vm, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf2_m(vm, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u8mf2_m(vm, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vror_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vror_vv_u8m1_m(vm, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vror_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u8m1_m(vm, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vror_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vror_vv_u8m2_m(vm, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vror_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u8m2_m(vm, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vror_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vror_vv_u8m4_m(vm, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vror_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u8m4_m(vm, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vror_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vror_vv_u8m8_m(vm, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vror_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u8m8_m(vm, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vror_vv_u16mf4_m(vm, vs2, 
vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vror_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vror_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vror_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vror_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vror_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vror_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vror_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vror_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vror_vv_u16m4_m(vm, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vror_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vror_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vror_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vror_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m8_m(vm, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vror_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32mf2_m(vm, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vror_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vror_vv_u32m1_m(vm, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vror_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + 
size_t vl) { return __riscv_vror_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vror_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vror_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vror_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vror_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vror_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vror_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32m4_m(vm, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vror_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vror_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vror_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vror_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vror_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vror_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vror_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vror_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vror_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vror_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vror_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vror_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vror_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vror_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vror_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m8_m(vm, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vsha2ch.c b/auto-generated/vector-crypto/api-testing/vsha2ch.c index 89c32480f..53fcaf393 100644 --- 
a/auto-generated/vector-crypto/api-testing/vsha2ch.c +++ b/auto-generated/vector-crypto/api-testing/vsha2ch.c @@ -1,38 +1,47 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32mf2(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m1(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m2(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m4(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m8(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m1(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m2(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m4(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m8(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vsha2cl.c b/auto-generated/vector-crypto/api-testing/vsha2cl.c index f213d6477..d8d72d0c0 100644 --- a/auto-generated/vector-crypto/api-testing/vsha2cl.c +++ b/auto-generated/vector-crypto/api-testing/vsha2cl.c @@ -1,38 +1,47 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32mf2(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m1(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m2(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd,
vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m4(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m8(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m1(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m2(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m4(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m8(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vsha2ms.c b/auto-generated/vector-crypto/api-testing/vsha2ms.c index 77ef0289a..bd1e17413 100644 --- a/auto-generated/vector-crypto/api-testing/vsha2ms.c +++ b/auto-generated/vector-crypto/api-testing/vsha2ms.c @@ -1,38 +1,47 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32mf2(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m1(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m2(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m4(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m8(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m1(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m2(vd, vs2, vs1, vl); } -vuint64m4_t
test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m4(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m8(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vsm3c.c b/auto-generated/vector-crypto/api-testing/vsm3c.c index 67d0f776f..4baa33693 100644 --- a/auto-generated/vector-crypto/api-testing/vsm3c.c +++ b/auto-generated/vector-crypto/api-testing/vsm3c.c @@ -1,7 +1,8 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm3c_vi_u32mf2(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vsm3me.c b/auto-generated/vector-crypto/api-testing/vsm3me.c index 5307ba8bb..790f7e5b0 100644 --- a/auto-generated/vector-crypto/api-testing/vsm3me.c +++ b/auto-generated/vector-crypto/api-testing/vsm3me.c @@ -1,7 +1,8 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsm3me_vv_u32mf2(vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vsm4k.c b/auto-generated/vector-crypto/api-testing/vsm4k.c index a33e29d8a..739baa15e 100644 --- a/auto-generated/vector-crypto/api-testing/vsm4k.c +++ b/auto-generated/vector-crypto/api-testing/vsm4k.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4k_vi_u32mf2(vs2, 0, vl); diff --git a/auto-generated/vector-crypto/api-testing/vsm4r.c b/auto-generated/vector-crypto/api-testing/vsm4r.c index b0c2fdfe1..069127b7b 100644 --- a/auto-generated/vector-crypto/api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/api-testing/vsm4r.c @@ -1,27 +1,33 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vv_u32mf2(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32mf2(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m1(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, +
size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m8(vd, vs2, vl); } @@ -29,19 +35,23 @@ vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m1(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m1(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m8(vd, vs2, vl); } @@ -49,15 +59,18 @@ vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m2(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m2(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m8(vd, vs2, vl); } @@ -65,11 +78,13 @@ vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m4(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m4_u32m4(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m4_u32m8(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/api-testing/vwsll.c b/auto-generated/vector-crypto/api-testing/vwsll.c index 5e6a1a884..acfadeff7 100644 --- a/auto-generated/vector-crypto/api-testing/vwsll.c +++ b/auto-generated/vector-crypto/api-testing/vwsll.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf4(vs2, vs1, vl); @@ -49,7 +49,8 @@ vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8(vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32mf2(vs2, vs1, vl); } @@ -121,122 +122,152 @@ vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8(vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf4_m(vm, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16mf4_m(vm, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf2_m(vm, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16mf2_m(vm, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m1_m(vm, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m1_m(vm, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m2_m(vm, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m2_m(vm, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m4_m(vm, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m4_m(vm, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m8_m(vm, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m8_m(vm, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32mf2_m(vm, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32mf2_m(vm, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m1_m(vm, vs2, vs1, vl); } 
-vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m1_m(vm, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m2_m(vm, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m2_m(vm, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m4_m(vm, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m4_m(vm, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m8_m(vm, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m8_m(vm, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m1_m(vm, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u64m1_m(vm, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m2_m(vm, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u64m2_m(vm, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m4_m(vm, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u64m4_m(vm, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m8_m(vm, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + 
size_t vl) { return __riscv_vwsll_vx_u64m8_m(vm, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c index a240f30cd..89ac4ddae 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdf.c @@ -1,27 +1,33 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -29,19 +35,23 @@ vuint32m1_t test_vaesdf_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -49,15 +59,18 @@ vuint32m2_t test_vaesdf_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } @@ -65,11 +78,13 @@ vuint32m4_t
test_vaesdf_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdf_vv(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c index 44e4a38fb..25d8104c5 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesdm.c @@ -1,27 +1,33 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -29,19 +35,23 @@ vuint32m1_t test_vaesdm_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m1_u32m2(vuint32m2_t vd,
} -vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } @@ -65,11 +78,13 @@ vuint32m4_t test_vaesdm_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesdm_vv(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c index 8a032c2f8..32325d1f4 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesef.c @@ -1,27 +1,33 @@ -#include #include +#include -vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -29,19 +35,23 @@ vuint32m1_t test_vaesef_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t 
test_vaesef_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -49,15 +59,18 @@ vuint32m2_t test_vaesef_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } @@ -65,11 +78,13 @@ vuint32m4_t test_vaesef_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesef_vv(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c index e6f666ea6..4a52fd90f 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesem.c @@ -1,27 +1,33 @@ -#include #include +#include -vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } @@ -29,19 +35,23 @@ vuint32m1_t test_vaesem_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, 
vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } @@ -49,15 +59,18 @@ vuint32m2_t test_vaesem_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } @@ -65,11 +78,13 @@ vuint32m4_t test_vaesem_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vaesem_vv(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c index 73358e70e..4bc62bb00 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf1.c @@ -1,5 +1,5 @@ -#include #include +#include vuint32mf2_t test_vaeskf1_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vaeskf1(vs2, 0, vl); diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c index a15310d57..eb6a9d751 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaeskf2.c @@ -1,7 +1,8 @@ -#include #include +#include -vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaeskf2(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c b/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c index 76a5d32fc..7f7c1d721 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vaesz.c @@ -1,58 +1,72 @@ -#include #include +#include -vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m1_t 
test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vandn.c b/auto-generated/vector-crypto/overloaded-api-testing/vandn.c index 61d7a594f..6a05a684e 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vandn.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vandn.c @@ -1,5 +1,5 @@ -#include #include +#include vuint8mf8_t test_vandn_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vandn(vs2, vs1, vl); @@ -57,7 +57,8 @@ vuint8m8_t test_vandn_vx_u8m8(vuint8m8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn(vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vandn(vs2, vs1, vl); } @@ -65,7 +66,8 @@ vuint16mf4_t 
test_vandn_vx_u16mf4(vuint16mf4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn(vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vandn(vs2, vs1, vl); } @@ -105,7 +107,8 @@ vuint16m8_t test_vandn_vx_u16m8(vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn(vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vandn(vs2, vs1, vl); } @@ -177,178 +180,222 @@ vuint64m8_t test_vandn_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn(vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vandn_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vandn_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vandn_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vandn_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vandn_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vandn_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vandn_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vandn_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vandn_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vandn_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vandn_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vandn_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint8m8_t 
test_vandn_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vandn_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vandn_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vandn_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vandn_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vandn_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vandn_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vandn_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vandn_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vandn_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vandn_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vandn_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vandn_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t 
vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vandn_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vandn_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vandn_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vandn_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vandn_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vandn_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vandn_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vandn_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vandn_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vandn_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vandn_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vandn_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vandn_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vandn_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vandn_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t 
test_vandn_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vandn(vm, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vandn_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vandn(vm, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vbrev.c b/auto-generated/vector-crypto/overloaded-api-testing/vbrev.c index 5a27daa73..624410303 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vbrev.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vbrev.c @@ -1,5 +1,5 @@ -#include #include +#include vuint8mf8_t test_vbrev_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev(vs2, vl); diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c b/auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c index 9d0d77b91..9d2e33a81 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vbrev8.c @@ -1,5 +1,5 @@ -#include #include +#include vuint8mf8_t test_vbrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev8(vs2, vl); diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vclmul.c b/auto-generated/vector-crypto/overloaded-api-testing/vclmul.c index cf48adf9c..8475b9416 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vclmul.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vclmul.c @@ -1,5 +1,5 @@ -#include #include +#include vuint64m1_t test_vclmul_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { return __riscv_vclmul(vs2, vs1, vl); @@ -33,34 +33,42 @@ vuint64m8_t test_vclmul_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul(vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vclmul_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vclmul_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmul(vm, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmul_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vclmul_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmul(vm, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmul_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmul_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmul(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmul_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vclmul(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t vm, vuint64m8_t 
vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vclmul_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmul(vm, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c b/auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c index 7000a93e5..a2f4a724a 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vclmulh.c @@ -1,5 +1,5 @@ -#include #include +#include vuint64m1_t test_vclmulh_vv_u64m1(vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { return __riscv_vclmulh(vs2, vs1, vl); @@ -33,34 +33,42 @@ vuint64m8_t test_vclmulh_vx_u64m8(vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmulh(vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vclmulh_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vclmulh_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh(vm, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmulh_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vclmulh_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh(vm, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmulh_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmulh_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh(vm, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmulh_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vclmulh(vm, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vclmulh_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh(vm, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vclz.c b/auto-generated/vector-crypto/overloaded-api-testing/vclz.c index d93faf0f3..9625fdaaf 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vclz.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vclz.c @@ -1,5 +1,5 @@ -#include #include +#include vuint8mf8_t test_vclz_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vclz(vs2, vl); diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vcpop.c b/auto-generated/vector-crypto/overloaded-api-testing/vcpop.c index cf5ec1edd..520ad7292 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vcpop.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vcpop.c @@ -1,5 +1,5 @@ -#include #include +#include vuint8mf8_t test_vcpop_v_u8mf8(vuint8mf8_t vs2, size_t vl) { return __riscv_vcpop(vs2, 
vl);
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vctz.c b/auto-generated/vector-crypto/overloaded-api-testing/vctz.c
index 51d6c57e9..ed4a213cc 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vctz.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vctz.c
@@ -1,5 +1,5 @@
-#include
 #include
+#include
 
 vuint8mf8_t test_vctz_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
   return __riscv_vctz(vs2, vl);
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vghsh.c b/auto-generated/vector-crypto/overloaded-api-testing/vghsh.c
index 055ce6727..59ef64466 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vghsh.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vghsh.c
@@ -1,22 +1,27 @@
-#include
 #include
+#include
 
-vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vghsh_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                  vuint32mf2_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vghsh_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2,
+                                vuint32m1_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vghsh_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2,
+                                vuint32m2_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vghsh_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2,
+                                vuint32m4_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vghsh_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2,
+                                vuint32m8_t vs1, size_t vl) {
   return __riscv_vghsh(vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vgmul.c b/auto-generated/vector-crypto/overloaded-api-testing/vgmul.c
index 4067ca01b..e78e620c3 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vgmul.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vgmul.c
@@ -1,7 +1,8 @@
-#include
 #include
+#include
 
-vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vgmul_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                  size_t vl) {
   return __riscv_vgmul(vd, vs2, vl);
 }
 
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vrev8.c b/auto-generated/vector-crypto/overloaded-api-testing/vrev8.c
index 3391569f2..1d76a3980 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vrev8.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vrev8.c
@@ -1,5 +1,5 @@
-#include
 #include
+#include
 
 vuint8mf8_t test_vrev8_v_u8mf8(vuint8mf8_t vs2, size_t vl) {
   return __riscv_vrev8(vs2, vl);
diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vrol.c b/auto-generated/vector-crypto/overloaded-api-testing/vrol.c
index a1900207c..31c6af020 100644
--- a/auto-generated/vector-crypto/overloaded-api-testing/vrol.c
+++ b/auto-generated/vector-crypto/overloaded-api-testing/vrol.c
@@ -1,5 +1,5 @@
-#include
 #include
+#include
 
 vuint8mf8_t test_vrol_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
   return __riscv_vrol(vs2, vs1, vl);
@@ -57,7 +57,8 @@ vuint8m8_t test_vrol_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vrol(vs2, rs1, vl);
 }
-vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrol(vs2, vs1, vl); } @@ -65,7 +66,8 @@ vuint16mf4_t test_vrol_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol(vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrol(vs2, vs1, vl); } @@ -105,7 +107,8 @@ vuint16m8_t test_vrol_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol(vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrol(vs2, vs1, vl); } @@ -177,178 +180,222 @@ vuint64m8_t test_vrol_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol(vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrol_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vrol_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrol_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vrol_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrol_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vrol_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrol_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vrol_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrol_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vrol_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrol_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, 
size_t vl) { +vuint8m4_t test_vrol_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrol_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vrol_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vrol_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vrol_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrol_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vrol_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrol_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vrol_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrol_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vrol_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrol_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vrol_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return 
__riscv_vrol(vm, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vrol_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrol_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vrol_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrol_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vrol_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrol_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vrol_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrol_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vrol_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrol_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vrol_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrol_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vrol_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrol_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vrol_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t 
test_vrol_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrol(vm, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vrol_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol(vm, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vror.c b/auto-generated/vector-crypto/overloaded-api-testing/vror.c index e87ad43c8..f4398c744 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vror.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vror.c @@ -1,5 +1,5 @@ -#include #include +#include vuint8mf8_t test_vror_vv_u8mf8(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vror(vs2, vs1, vl); @@ -57,7 +57,8 @@ vuint8m8_t test_vror_vx_u8m8(vuint8m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror(vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vror(vs2, vs1, vl); } @@ -65,7 +66,8 @@ vuint16mf4_t test_vror_vx_u16mf4(vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror(vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2(vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vror(vs2, vs1, vl); } @@ -105,7 +107,8 @@ vuint16m8_t test_vror_vx_u16m8(vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror(vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vror(vs2, vs1, vl); } @@ -177,178 +180,222 @@ vuint64m8_t test_vror_vx_u64m8(vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror(vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vror_vv_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vror_vx_u8mf8_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vror_vv_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vror_vx_u8mf4_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vror_vv_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vror_vx_u8mf2_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vror_vv_u8m1_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint8m1_t 
test_vror_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vror_vx_u8m1_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vror_vv_u8m2_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vror_vx_u8m2_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vror_vv_u8m4_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vror_vx_u8m4_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vror_vv_u8m8_m(vbool1_t vm, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vror_vx_u8m8_m(vbool1_t vm, vuint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vror_vx_u16mf4_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vror_vx_u16mf2_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vror_vv_u16m1_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vror_vx_u16m1_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vror_vv_u16m2_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vror_vx_u16m2_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vror_vv_u16m4_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { 
return __riscv_vror(vm, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vror_vx_u16m4_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vror_vv_u16m8_m(vbool2_t vm, vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vror_vx_u16m8_m(vbool2_t vm, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vror_vx_u32mf2_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vror_vv_u32m1_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vror_vx_u32m1_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vror_vv_u32m2_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vror_vx_u32m2_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vror_vv_u32m4_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vror_vx_u32m4_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vror_vv_u32m8_m(vbool4_t vm, vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vror_vx_u32m8_m(vbool4_t vm, vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vror_vv_u64m1_m(vbool64_t vm, vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vror_vx_u64m1_m(vbool64_t vm, vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { 
+vuint64m2_t test_vror_vv_u64m2_m(vbool32_t vm, vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vror_vx_u64m2_m(vbool32_t vm, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vror_vv_u64m4_m(vbool16_t vm, vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vror_vx_u64m4_m(vbool16_t vm, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vror_vv_u64m8_m(vbool8_t vm, vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vror(vm, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vror_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror(vm, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c index d04129849..492101429 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ch.c @@ -1,38 +1,47 @@ -#include #include +#include -vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ch_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ch(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2ch_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ch(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ch_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ch(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ch_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ch(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ch_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ch(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ch_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ch(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ch_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ch(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ch_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ch(vd, 
vs2, vs1, vl); } -vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ch_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ch(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c b/auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c index 4de7b49aa..8a9124df0 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsha2cl.c @@ -1,38 +1,47 @@ -#include #include +#include -vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2cl_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2cl(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2cl_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2cl(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2cl_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2cl(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2cl_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2cl(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2cl_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2cl(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2cl_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2cl(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2cl_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2cl(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2cl_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2cl(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2cl_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2cl(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c index 70a696804..f4532bec8 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsha2ms.c @@ -1,38 +1,47 @@ -#include #include +#include -vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ms_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ms(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t 
test_vsha2ms_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ms(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ms_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ms(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ms_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ms(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ms_vv_u32m8(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ms(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ms_vv_u64m1(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ms(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ms_vv_u64m2(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ms(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ms_vv_u64m4(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ms(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ms_vv_u64m8(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ms(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c index 728566e46..d7b5971af 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm3c.c @@ -1,7 +1,8 @@ -#include #include +#include -vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm3c(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c index 299159174..f0dfdd5cb 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm3me.c @@ -1,7 +1,8 @@ -#include #include +#include -vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsm3me_vv_u32mf2(vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsm3me(vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c index 882694054..2f64557c9 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm4k.c @@ -1,5 +1,5 @@ -#include #include +#include vuint32mf2_t test_vsm4k_vi_u32mf2(vuint32mf2_t vs2, size_t vl) { return __riscv_vsm4k(vs2, 0, vl); diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c b/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c index cb106c8a5..56a4e08d9 100644 
--- a/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vsm4r.c @@ -1,27 +1,33 @@ -#include #include +#include -vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vv_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -29,19 +35,23 @@ vuint32m1_t test_vsm4r_vv_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m1_u32m4(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -49,15 +59,18 @@ vuint32m2_t test_vsm4r_vv_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } @@ -65,11 +78,13 @@ vuint32m4_t test_vsm4r_vv_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return 
__riscv_vsm4r_vs(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c b/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c index 8696b7d1d..3ccf51ea9 100644 --- a/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c +++ b/auto-generated/vector-crypto/overloaded-api-testing/vwsll.c @@ -1,5 +1,5 @@ -#include #include +#include vuint16mf4_t test_vwsll_vv_u16mf4(vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll(vs2, vs1, vl); @@ -49,7 +49,8 @@ vuint16m8_t test_vwsll_vx_u16m8(vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll(vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2(vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsll(vs2, vs1, vl); } @@ -121,122 +122,152 @@ vuint64m8_t test_vwsll_vx_u64m8(vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll(vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_m(vbool64_t vm, vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_m(vbool32_t vm, vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_m(vbool16_t vm, vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_m(vbool8_t vm, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_m(vbool8_t vm, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_m(vbool4_t vm, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_m(vbool4_t vm, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t vm, vuint8m4_t vs2, 
vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_m(vbool2_t vm, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_m(vbool2_t vm, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_m(vbool64_t vm, vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_m(vbool32_t vm, vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t vm, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_m(vbool16_t vm, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_m(vbool16_t vm, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_m(vbool8_t vm, vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_m(vbool8_t vm, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_m(vbool4_t vm, vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_m(vbool4_t vm, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_m(vbool64_t vm, vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t vm, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_m(vbool32_t vm, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t vm, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_m(vbool32_t vm, 
vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t vm, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_m(vbool16_t vm, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_m(vbool16_t vm, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_m(vbool8_t vm, vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwsll(vm, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_m(vbool8_t vm, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll(vm, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c index 43eef93e8..8f744ae0e 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdf.c @@ -1,78 +1,97 @@ -#include #include +#include -vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, 
vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m4_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c index 3c1d89651..04edc8d6c 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesdm.c @@ -1,78 +1,97 @@ -#include #include +#include -vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32m2_tu(vd, 
vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m4_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl); } diff --git 
a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c index 1b82fcd8c..c9545d7be 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesef.c @@ -1,78 +1,97 @@ -#include #include +#include -vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, 
vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m4_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c index 1db0f1bda..d3395b8f4 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesem.c @@ -1,78 +1,97 @@ -#include #include +#include -vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, 
vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m4_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c index 4bbd0fb10..2836c4176 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf1.c @@ -1,22 +1,27 @@ -#include #include +#include -vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaeskf1_vi_u32mf2_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaeskf1_vi_u32m1_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaeskf1_vi_u32m2_tu(vd, vs2, 0, vl); } -vuint32m4_t 
test_vaeskf1_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaeskf1_vi_u32m4_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaeskf1_vi_u32m8_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c index 30150c660..d631d1095 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaeskf2.c @@ -1,22 +1,27 @@ -#include #include +#include -vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaeskf2_vi_u32mf2_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaeskf2_vi_u32m1_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaeskf2_vi_u32m2_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaeskf2_vi_u32m4_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaeskf2_vi_u32m8_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c index 25486191d..d54e3f162 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vaesz.c @@ -1,58 +1,72 @@ -#include #include +#include -vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t 
vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m4_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c index 786635b20..96b7d173c 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vandn.c @@ -1,706 +1,939 @@ -#include #include +#include -vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vandn_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vandn_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + 
vuint8mf2_t vs1, size_t vl) { return __riscv_vandn_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vandn_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vandn_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vandn_vv_u16m1_tu(vd, vs2, vs1, vl); } 
-vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vandn_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vandn_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vandn_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vandn_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vandn_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vandn_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vandn_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t 
test_vandn_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vandn_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vandn_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vandn_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vandn_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vandn_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { 
return __riscv_vandn_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vandn_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vandn_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vandn_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vandn_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t 
test_vandn_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, 
uint32_t rs1, + size_t vl) { return __riscv_vandn_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t 
test_vandn_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vandn_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t 
test_vandn_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vandn_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vandn_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vandn_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { 
return __riscv_vandn_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vandn_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } 
-vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t 
rs1, size_t vl) { +vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vandn_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vandn_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vandn_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vandn_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, 
vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vandn_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vandn_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vandn_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + 
vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vandn_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vandn_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vandn_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t 
test_vandn_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vandn_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c index 5a16e6adf..05f10027f 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint8mf8_t test_vbrev_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf8_tu(vd, vs2, vl); @@ -29,11 +29,13 @@ vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { return __riscv_vbrev_v_u8m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vbrev_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vbrev_v_u16mf2_tu(vd, vs2, vl); } @@ -53,7 +55,8 @@ vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { return __riscv_vbrev_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vbrev_v_u32mf2_tu(vd, vs2, vl); } @@ -89,266 +92,332 @@ vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { return __riscv_vbrev_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t
vs2, size_t vl) { +vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vbrev_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vbrev_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vbrev_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { 
return __riscv_vbrev_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vbrev_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vbrev_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vbrev_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t 
vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vbrev_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vbrev_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vbrev_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vbrev_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vbrev_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vbrev_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vbrev_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vbrev_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vbrev_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vbrev_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vbrev_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vbrev_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vbrev_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t 
test_vbrev_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vbrev_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vbrev_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t 
test_vbrev_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c index 6186201fc..fcd9aacb4 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vbrev8.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint8mf8_t test_vbrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf8_tu(vd, vs2, vl); @@ -29,11 +29,13 @@ vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u8m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vbrev8_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vbrev8_v_u16mf2_tu(vd, vs2, vl); } @@ -53,7 +55,8 @@ vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vbrev8_v_u32mf2_tu(vd, vs2, vl); } @@ -89,266 +92,332 @@ vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf4_tum(vm, vd, vs2, vl); }
-vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vbrev8_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t 
vs2, size_t vl) { +vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vbrev8_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t 
+vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                       vuint16mf2_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u16mf2_tumu(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                     vuint16m1_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u16m1_tumu(vm, vd, vs2, vl);
 }

-vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                     vuint16m2_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u16m2_tumu(vm, vd, vs2, vl);
 }

-vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                     vuint16m4_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u16m4_tumu(vm, vd, vs2, vl);
 }

-vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                     vuint16m8_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u16m8_tumu(vm, vd, vs2, vl);
 }

-vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                       vuint32mf2_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u32mf2_tumu(vm, vd, vs2, vl);
 }

-vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                     vuint32m1_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u32m1_tumu(vm, vd, vs2, vl);
 }

-vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                     vuint32m2_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u32m2_tumu(vm, vd, vs2, vl);
 }

-vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+                                     vuint32m4_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u32m4_tumu(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+                                     vuint32m8_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u32m8_tumu(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                     vuint64m1_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u64m1_tumu(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                     vuint64m2_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u64m2_tumu(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                     vuint64m4_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u64m4_tumu(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                     vuint64m8_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u64m8_tumu(vm, vd, vs2, vl);
 }

-vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u8mf8_mu(vm, vd, vs2, vl);
 }

-vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u8mf4_mu(vm, vd, vs2, vl);
 }

-vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u8mf2_mu(vm, vd, vs2, vl);
 }

-vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 size_t vl) {
   return __riscv_vbrev8_v_u8m1_mu(vm, vd, vs2, vl);
 }

-vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 size_t vl) {
   return __riscv_vbrev8_v_u8m2_mu(vm, vd, vs2, vl);
 }

-vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 size_t vl) {
   return __riscv_vbrev8_v_u8m4_mu(vm, vd, vs2, vl);
 }

-vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 size_t vl) {
   return __riscv_vbrev8_v_u8m8_mu(vm, vd, vs2, vl);
 }

-vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u16mf4_mu(vm, vd, vs2, vl);
 }

-vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u16mf2_mu(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u16m1_mu(vm, vd, vs2, vl);
 }

-vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   size_t vl) {
   return __riscv_vbrev8_v_u16m2_mu(vm, vd, vs2, vl);
 }

-vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   size_t vl) {
   return __riscv_vbrev8_v_u16m4_mu(vm, vd, vs2, vl);
 }

-vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   size_t vl) {
   return __riscv_vbrev8_v_u16m8_mu(vm, vd, vs2, vl);
 }

-vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u32mf2_mu(vm, vd, vs2, vl);
 }

-vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u32m1_mu(vm, vd, vs2, vl);
 }

-vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u32m2_mu(vm, vd, vs2, vl);
 }

-vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   size_t vl) {
   return __riscv_vbrev8_v_u32m4_mu(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   size_t vl) {
   return __riscv_vbrev8_v_u32m8_mu(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u64m1_mu(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u64m2_mu(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, size_t vl) {
   return __riscv_vbrev8_v_u64m4_mu(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   size_t vl) {
   return __riscv_vbrev8_v_u64m8_mu(vm, vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c
index 22f2b9b4b..3366b4eae 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmul.c
@@ -1,130 +1,178 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>

-vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                    vuint64m1_t vs1, size_t vl) {
   return __riscv_vclmul_vv_u64m1_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                    uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m1_tu(vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                    vuint64m2_t vs1, size_t vl) {
   return __riscv_vclmul_vv_u64m2_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                    uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m2_tu(vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                    vuint64m4_t vs1, size_t vl) {
   return __riscv_vclmul_vv_u64m4_tu(vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                    uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m4_tu(vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                    vuint64m8_t vs1, size_t vl) {
   return __riscv_vclmul_vv_u64m8_tu(vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                    uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m8_tu(vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                     vuint64m1_t vs2, vuint64m1_t vs1,
+                                     size_t vl) {
   return __riscv_vclmul_vv_u64m1_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                     vuint64m1_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                     vuint64m2_t vs2, vuint64m2_t vs1,
+                                     size_t vl) {
   return __riscv_vclmul_vv_u64m2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                     vuint64m2_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                     vuint64m4_t vs2, vuint64m4_t vs1,
+                                     size_t vl) {
   return __riscv_vclmul_vv_u64m4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                     vuint64m4_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+                                     vuint64m8_t vs2, vuint64m8_t vs1,
+                                     size_t vl) {
   return __riscv_vclmul_vv_u64m8_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+                                     vuint64m8_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                      vuint64m1_t vs2, vuint64m1_t vs1,
+                                      size_t vl) {
   return __riscv_vclmul_vv_u64m1_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                      vuint64m1_t vs2, uint64_t rs1,
+                                      size_t vl) {
   return __riscv_vclmul_vx_u64m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                      vuint64m2_t vs2, vuint64m2_t vs1,
+                                      size_t vl) {
   return __riscv_vclmul_vv_u64m2_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                      vuint64m2_t vs2, uint64_t rs1,
+                                      size_t vl) {
   return __riscv_vclmul_vx_u64m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                      vuint64m4_t vs2, vuint64m4_t vs1,
+                                      size_t vl) {
   return __riscv_vclmul_vv_u64m4_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                      vuint64m4_t vs2, uint64_t rs1,
+                                      size_t vl) {
   return __riscv_vclmul_vx_u64m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                      vuint64m8_t vs2, vuint64m8_t vs1,
+                                      size_t vl) {
   return __riscv_vclmul_vv_u64m8_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                      vuint64m8_t vs2, uint64_t rs1,
+                                      size_t vl) {
   return __riscv_vclmul_vx_u64m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, vuint64m1_t vs1,
+                                    size_t vl) {
   return __riscv_vclmul_vv_u64m1_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m1_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, vuint64m2_t vs1,
+                                    size_t vl) {
   return __riscv_vclmul_vv_u64m2_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, vuint64m4_t vs1,
+                                    size_t vl) {
   return __riscv_vclmul_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, vuint64m8_t vs1,
+                                    size_t vl) {
   return __riscv_vclmul_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmul_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c
index a43662a06..f3ae63f11 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vclmulh.c
@@ -1,130 +1,182 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>

-vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                     vuint64m1_t vs1, size_t vl) {
   return __riscv_vclmulh_vv_u64m1_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                     uint64_t rs1, size_t vl) {
   return __riscv_vclmulh_vx_u64m1_tu(vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                     vuint64m2_t vs1, size_t vl) {
   return __riscv_vclmulh_vv_u64m2_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                     uint64_t rs1, size_t vl) {
   return __riscv_vclmulh_vx_u64m2_tu(vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                     vuint64m4_t vs1, size_t vl) {
   return __riscv_vclmulh_vv_u64m4_tu(vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                     uint64_t rs1, size_t vl) {
   return __riscv_vclmulh_vx_u64m4_tu(vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                     vuint64m8_t vs1, size_t vl) {
   return __riscv_vclmulh_vv_u64m8_tu(vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                     uint64_t rs1, size_t vl) {
   return __riscv_vclmulh_vx_u64m8_tu(vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                      vuint64m1_t vs2, vuint64m1_t vs1,
+                                      size_t vl) {
   return __riscv_vclmulh_vv_u64m1_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                      vuint64m1_t vs2, uint64_t rs1,
+                                      size_t vl) {
   return __riscv_vclmulh_vx_u64m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                      vuint64m2_t vs2, vuint64m2_t vs1,
+                                      size_t vl) {
   return __riscv_vclmulh_vv_u64m2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                      vuint64m2_t vs2, uint64_t rs1,
+                                      size_t vl) {
   return __riscv_vclmulh_vx_u64m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                      vuint64m4_t vs2, vuint64m4_t vs1,
+                                      size_t vl) {
   return __riscv_vclmulh_vv_u64m4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                      vuint64m4_t vs2, uint64_t rs1,
+                                      size_t vl) {
   return __riscv_vclmulh_vx_u64m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+                                      vuint64m8_t vs2, vuint64m8_t vs1,
+                                      size_t vl) {
   return __riscv_vclmulh_vv_u64m8_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+                                      vuint64m8_t vs2, uint64_t rs1,
+                                      size_t vl) {
   return __riscv_vclmulh_vx_u64m8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                       vuint64m1_t vs2, vuint64m1_t vs1,
+                                       size_t vl) {
   return __riscv_vclmulh_vv_u64m1_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                       vuint64m1_t vs2, uint64_t rs1,
+                                       size_t vl) {
   return __riscv_vclmulh_vx_u64m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                       vuint64m2_t vs2, vuint64m2_t vs1,
+                                       size_t vl) {
   return __riscv_vclmulh_vv_u64m2_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                       vuint64m2_t vs2, uint64_t rs1,
+                                       size_t vl) {
   return __riscv_vclmulh_vx_u64m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                       vuint64m4_t vs2, vuint64m4_t vs1,
+                                       size_t vl) {
   return __riscv_vclmulh_vv_u64m4_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                       vuint64m4_t vs2, uint64_t rs1,
+                                       size_t vl) {
   return __riscv_vclmulh_vx_u64m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                       vuint64m8_t vs2, vuint64m8_t vs1,
+                                       size_t vl) {
   return __riscv_vclmulh_vv_u64m8_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                       vuint64m8_t vs2, uint64_t rs1,
+                                       size_t vl) {
   return __riscv_vclmulh_vx_u64m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                     vuint64m1_t vs2, vuint64m1_t vs1,
+                                     size_t vl) {
   return __riscv_vclmulh_vv_u64m1_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                     vuint64m1_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmulh_vx_u64m1_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                     vuint64m2_t vs2, vuint64m2_t vs1,
+                                     size_t vl) {
   return __riscv_vclmulh_vv_u64m2_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                     vuint64m2_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmulh_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                     vuint64m4_t vs2, vuint64m4_t vs1,
+                                     size_t vl) {
   return __riscv_vclmulh_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                     vuint64m4_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmulh_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                     vuint64m8_t vs2, vuint64m8_t vs1,
+                                     size_t vl) {
   return __riscv_vclmulh_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                     vuint64m8_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vclmulh_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vclz.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vclz.c
index 6e3e1120f..8d3150e97 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vclz.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vclz.c
@@ -1,5 +1,5 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>

 vuint8mf8_t test_vclz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
   return __riscv_vclz_v_u8mf8_tu(vd, vs2, vl);
@@ -29,11 +29,13 @@ vuint8m8_t test_vclz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
   return __riscv_vclz_v_u8m8_tu(vd, vs2, vl);
 }

-vuint16mf4_t test_vclz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vclz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                   size_t vl) {
   return __riscv_vclz_v_u16mf4_tu(vd, vs2, vl);
 }

-vuint16mf2_t test_vclz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vclz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                   size_t vl) {
   return __riscv_vclz_v_u16mf2_tu(vd, vs2, vl);
 }

@@ -53,7 +55,8 @@ vuint16m8_t test_vclz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
   return __riscv_vclz_v_u16m8_tu(vd, vs2, vl);
 }

-vuint32mf2_t test_vclz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vclz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                   size_t vl) {
   return __riscv_vclz_v_u32mf2_tu(vd, vs2, vl);
 }

@@ -89,266 +92,332 @@ vuint64m8_t test_vclz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
   return __riscv_vclz_v_u64m8_tu(vd, vs2, vl);
 }

-vuint8mf8_t test_vclz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vclz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u8mf8_tum(vm, vd, vs2, vl);
 }

-vuint8mf4_t test_vclz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vclz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u8mf4_tum(vm, vd, vs2, vl);
 }

-vuint8mf2_t test_vclz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vclz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u8mf2_tum(vm, vd, vs2, vl);
 }

-vuint8m1_t test_vclz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vclz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                size_t vl) {
   return __riscv_vclz_v_u8m1_tum(vm, vd, vs2, vl);
 }

-vuint8m2_t test_vclz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vclz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                size_t vl) {
   return __riscv_vclz_v_u8m2_tum(vm, vd, vs2, vl);
 }

-vuint8m4_t test_vclz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vclz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                size_t vl) {
   return __riscv_vclz_v_u8m4_tum(vm, vd, vs2, vl);
 }

-vuint8m8_t test_vclz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vclz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                size_t vl) {
   return __riscv_vclz_v_u8m8_tum(vm, vd, vs2, vl);
 }

-vuint16mf4_t test_vclz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vclz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                    vuint16mf4_t vs2, size_t vl) {
   return __riscv_vclz_v_u16mf4_tum(vm, vd, vs2, vl);
 }

-vuint16mf2_t test_vclz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vclz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                    vuint16mf2_t vs2, size_t vl) {
   return __riscv_vclz_v_u16mf2_tum(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vclz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vclz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u16m1_tum(vm, vd, vs2, vl);
 }

-vuint16m2_t test_vclz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vclz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u16m2_tum(vm, vd, vs2, vl);
 }

-vuint16m4_t test_vclz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vclz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u16m4_tum(vm, vd, vs2, vl);
 }

-vuint16m8_t test_vclz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vclz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u16m8_tum(vm, vd, vs2, vl);
 }

-vuint32mf2_t test_vclz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vclz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                    vuint32mf2_t vs2, size_t vl) {
   return __riscv_vclz_v_u32mf2_tum(vm, vd, vs2, vl);
 }

-vuint32m1_t test_vclz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vclz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u32m1_tum(vm, vd, vs2, vl);
 }

-vuint32m2_t test_vclz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vclz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u32m2_tum(vm, vd, vs2, vl);
 }

-vuint32m4_t test_vclz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vclz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u32m4_tum(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vclz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vclz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u32m8_tum(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vclz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vclz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u64m1_tum(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vclz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vclz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u64m2_tum(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vclz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vclz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u64m4_tum(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vclz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vclz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  size_t vl) {
   return __riscv_vclz_v_u64m8_tum(vm, vd, vs2, vl);
 }

-vuint8mf8_t test_vclz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vclz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, size_t vl) {
   return __riscv_vclz_v_u8mf8_tumu(vm, vd, vs2, vl);
 }

-vuint8mf4_t test_vclz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vclz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, size_t vl) {
   return __riscv_vclz_v_u8mf4_tumu(vm, vd, vs2, vl);
 }

-vuint8mf2_t test_vclz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vclz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, size_t vl) {
   return __riscv_vclz_v_u8mf2_tumu(vm, vd, vs2, vl);
 }

-vuint8m1_t test_vclz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vclz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u8m1_tumu(vm, vd, vs2, vl);
 }

-vuint8m2_t test_vclz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vclz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u8m2_tumu(vm, vd, vs2, vl);
 }

-vuint8m4_t test_vclz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vclz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u8m4_tumu(vm, vd, vs2, vl);
 }

-vuint8m8_t test_vclz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vclz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u8m8_tumu(vm, vd, vs2, vl);
 }

-vuint16mf4_t test_vclz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vclz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, size_t vl) {
   return __riscv_vclz_v_u16mf4_tumu(vm, vd, vs2, vl);
 }

-vuint16mf2_t test_vclz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vclz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, size_t vl) {
   return __riscv_vclz_v_u16mf2_tumu(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vclz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vclz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, size_t vl) {
   return __riscv_vclz_v_u16m1_tumu(vm, vd, vs2, vl);
 }

-vuint16m2_t test_vclz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vclz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   size_t vl) {
   return __riscv_vclz_v_u16m2_tumu(vm, vd, vs2, vl);
 }

-vuint16m4_t test_vclz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vclz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   size_t vl) {
   return __riscv_vclz_v_u16m4_tumu(vm, vd, vs2, vl);
 }

-vuint16m8_t test_vclz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vclz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   size_t vl) {
   return __riscv_vclz_v_u16m8_tumu(vm, vd, vs2, vl);
 }

-vuint32mf2_t test_vclz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vclz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, size_t vl) {
   return __riscv_vclz_v_u32mf2_tumu(vm, vd, vs2, vl);
 }

-vuint32m1_t test_vclz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vclz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, size_t vl) {
   return __riscv_vclz_v_u32m1_tumu(vm, vd, vs2, vl);
 }

-vuint32m2_t test_vclz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vclz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, size_t vl) {
   return __riscv_vclz_v_u32m2_tumu(vm, vd, vs2, vl);
 }

-vuint32m4_t test_vclz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vclz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   size_t vl) {
   return __riscv_vclz_v_u32m4_tumu(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vclz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vclz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   size_t vl) {
   return __riscv_vclz_v_u32m8_tumu(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vclz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vclz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, size_t vl) {
   return __riscv_vclz_v_u64m1_tumu(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vclz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vclz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, size_t vl) {
   return __riscv_vclz_v_u64m2_tumu(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vclz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vclz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, size_t vl) {
   return __riscv_vclz_v_u64m4_tumu(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vclz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vclz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   size_t vl) {
   return __riscv_vclz_v_u64m8_tumu(vm, vd, vs2, vl);
 }

-vuint8mf8_t test_vclz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vclz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u8mf8_mu(vm, vd, vs2, vl);
 }

-vuint8mf4_t test_vclz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vclz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u8mf4_mu(vm, vd, vs2, vl);
 }

-vuint8mf2_t test_vclz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vclz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u8mf2_mu(vm, vd, vs2, vl);
 }

-vuint8m1_t test_vclz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vclz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                               size_t vl) {
   return __riscv_vclz_v_u8m1_mu(vm, vd, vs2, vl);
 }

-vuint8m2_t test_vclz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vclz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                               size_t vl) {
   return __riscv_vclz_v_u8m2_mu(vm, vd, vs2, vl);
 }

-vuint8m4_t test_vclz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vclz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                               size_t vl) {
   return __riscv_vclz_v_u8m4_mu(vm, vd, vs2, vl);
 }

-vuint8m8_t test_vclz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vclz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                               size_t vl) {
   return __riscv_vclz_v_u8m8_mu(vm, vd, vs2, vl);
 }

-vuint16mf4_t test_vclz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vclz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                   vuint16mf4_t vs2, size_t vl) {
   return __riscv_vclz_v_u16mf4_mu(vm, vd, vs2, vl);
 }

-vuint16mf2_t test_vclz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vclz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                   vuint16mf2_t vs2, size_t vl) {
   return __riscv_vclz_v_u16mf2_mu(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vclz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vclz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u16m1_mu(vm, vd, vs2, vl);
 }

-vuint16m2_t test_vclz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vclz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u16m2_mu(vm, vd, vs2, vl);
 }

-vuint16m4_t test_vclz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vclz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u16m4_mu(vm, vd, vs2, vl);
 }

-vuint16m8_t test_vclz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vclz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u16m8_mu(vm, vd, vs2, vl);
 }

-vuint32mf2_t test_vclz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vclz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                   vuint32mf2_t vs2, size_t vl) {
   return __riscv_vclz_v_u32mf2_mu(vm, vd, vs2, vl);
 }

-vuint32m1_t test_vclz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vclz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u32m1_mu(vm, vd, vs2, vl);
 }

-vuint32m2_t test_vclz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vclz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u32m2_mu(vm, vd, vs2, vl);
 }

-vuint32m4_t test_vclz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vclz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u32m4_mu(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vclz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vclz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u32m8_mu(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vclz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vclz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u64m1_mu(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vclz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vclz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u64m2_mu(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vclz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vclz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u64m4_mu(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vclz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vclz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                 size_t vl) {
   return __riscv_vclz_v_u64m8_mu(vm, vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vcpop.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vcpop.c
index 7dbb9b78c..167388be7 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vcpop.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vcpop.c
@@ -1,5 +1,5 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>

 vuint8mf8_t test_vcpop_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
   return __riscv_vcpop_v_u8mf8_tu(vd, vs2, vl);
@@ -29,11 +29,13 @@ vuint8m8_t test_vcpop_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
   return __riscv_vcpop_v_u8m8_tu(vd, vs2, vl);
 }

-vuint16mf4_t test_vcpop_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vcpop_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    size_t vl) {
   return __riscv_vcpop_v_u16mf4_tu(vd, vs2, vl);
 }

-vuint16mf2_t test_vcpop_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vcpop_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    size_t vl) {
   return __riscv_vcpop_v_u16mf2_tu(vd, vs2, vl);
 }

@@ -53,7 +55,8 @@ vuint16m8_t test_vcpop_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16m8_tu(vd, vs2, vl);
 }

-vuint32mf2_t test_vcpop_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vcpop_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    size_t vl) {
   return __riscv_vcpop_v_u32mf2_tu(vd, vs2, vl);
 }

@@ -89,266 +92,332 @@ vuint64m8_t test_vcpop_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
   return __riscv_vcpop_v_u64m8_tu(vd, vs2, vl);
 }

-vuint8mf8_t test_vcpop_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vcpop_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, size_t vl) {
   return __riscv_vcpop_v_u8mf8_tum(vm, vd, vs2, vl);
 }

-vuint8mf4_t test_vcpop_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vcpop_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, size_t vl) {
   return __riscv_vcpop_v_u8mf4_tum(vm, vd, vs2, vl);
 }

-vuint8mf2_t test_vcpop_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vcpop_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u8mf2_tum(vm, vd, vs2, vl);
 }

-vuint8m1_t test_vcpop_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vcpop_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 size_t vl) {
   return __riscv_vcpop_v_u8m1_tum(vm, vd, vs2, vl);
 }

-vuint8m2_t test_vcpop_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vcpop_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 size_t vl) {
   return __riscv_vcpop_v_u8m2_tum(vm, vd, vs2, vl);
 }

-vuint8m4_t test_vcpop_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vcpop_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 size_t vl) {
   return __riscv_vcpop_v_u8m4_tum(vm, vd, vs2, vl);
 }

-vuint8m8_t test_vcpop_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vcpop_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 size_t vl) {
   return __riscv_vcpop_v_u8m8_tum(vm, vd, vs2, vl);
 }

-vuint16mf4_t test_vcpop_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vcpop_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16mf4_tum(vm, vd, vs2, vl);
 }

-vuint16mf2_t test_vcpop_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vcpop_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16mf2_tum(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vcpop_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vcpop_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16m1_tum(vm, vd, vs2, vl);
 }

-vuint16m2_t test_vcpop_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vcpop_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   size_t vl) {
   return __riscv_vcpop_v_u16m2_tum(vm, vd, vs2, vl);
 }

-vuint16m4_t test_vcpop_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vcpop_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   size_t vl) {
   return __riscv_vcpop_v_u16m4_tum(vm, vd, vs2, vl);
 }

-vuint16m8_t test_vcpop_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vcpop_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   size_t vl) {
   return __riscv_vcpop_v_u16m8_tum(vm, vd, vs2, vl);
 }

-vuint32mf2_t test_vcpop_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vcpop_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u32mf2_tum(vm, vd, vs2, vl);
 }

-vuint32m1_t test_vcpop_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vcpop_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, size_t vl) {
   return __riscv_vcpop_v_u32m1_tum(vm, vd, vs2, vl);
 }

-vuint32m2_t test_vcpop_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vcpop_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u32m2_tum(vm, vd, vs2, vl);
 }

-vuint32m4_t test_vcpop_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vcpop_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   size_t vl) {
   return __riscv_vcpop_v_u32m4_tum(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vcpop_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vcpop_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   size_t vl) {
   return __riscv_vcpop_v_u32m8_tum(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vcpop_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vcpop_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, size_t vl) {
   return __riscv_vcpop_v_u64m1_tum(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vcpop_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vcpop_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u64m2_tum(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vcpop_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vcpop_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, size_t vl) {
   return __riscv_vcpop_v_u64m4_tum(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vcpop_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vcpop_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   size_t vl) {
   return __riscv_vcpop_v_u64m8_tum(vm, vd, vs2, vl);
 }

-vuint8mf8_t test_vcpop_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vcpop_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                    vuint8mf8_t vs2, size_t vl) {
   return __riscv_vcpop_v_u8mf8_tumu(vm, vd, vs2, vl);
 }

-vuint8mf4_t test_vcpop_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vcpop_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                    vuint8mf4_t vs2, size_t vl) {
   return __riscv_vcpop_v_u8mf4_tumu(vm, vd, vs2, vl);
 }

-vuint8mf2_t test_vcpop_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vcpop_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                    vuint8mf2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u8mf2_tumu(vm, vd, vs2, vl);
 }

-vuint8m1_t test_vcpop_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vcpop_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  size_t vl) {
   return __riscv_vcpop_v_u8m1_tumu(vm, vd, vs2, vl);
 }

-vuint8m2_t test_vcpop_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vcpop_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  size_t vl) {
   return __riscv_vcpop_v_u8m2_tumu(vm, vd, vs2, vl);
 }

-vuint8m4_t test_vcpop_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vcpop_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  size_t vl) {
   return __riscv_vcpop_v_u8m4_tumu(vm, vd, vs2, vl);
 }

-vuint8m8_t test_vcpop_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vcpop_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  size_t vl) {
   return __riscv_vcpop_v_u8m8_tumu(vm, vd, vs2, vl);
 }

-vuint16mf4_t test_vcpop_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vcpop_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16mf4_tumu(vm, vd, vs2, vl);
 }

-vuint16mf2_t test_vcpop_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vcpop_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16mf2_tumu(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vcpop_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vcpop_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16m1_tumu(vm, vd, vs2, vl);
 }

-vuint16m2_t test_vcpop_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vcpop_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16m2_tumu(vm, vd, vs2, vl);
 }

-vuint16m4_t test_vcpop_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vcpop_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16m4_tumu(vm, vd, vs2, vl);
 }

-vuint16m8_t test_vcpop_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vcpop_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16m8_tumu(vm, vd, vs2, vl);
 }

-vuint32mf2_t test_vcpop_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vcpop_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u32mf2_tumu(vm, vd, vs2, vl);
 }

-vuint32m1_t test_vcpop_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vcpop_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, size_t vl) {
   return __riscv_vcpop_v_u32m1_tumu(vm, vd, vs2, vl);
 }

-vuint32m2_t test_vcpop_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vcpop_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                    vuint32m2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u32m2_tumu(vm, vd, vs2, vl);
 }

-vuint32m4_t test_vcpop_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vcpop_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+                                    vuint32m4_t vs2, size_t vl) {
   return __riscv_vcpop_v_u32m4_tumu(vm, vd, vs2, vl);
 }

-vuint32m8_t test_vcpop_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vcpop_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, size_t vl) {
   return __riscv_vcpop_v_u32m8_tumu(vm, vd, vs2, vl);
 }

-vuint64m1_t test_vcpop_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vcpop_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, size_t vl) {
   return __riscv_vcpop_v_u64m1_tumu(vm, vd, vs2, vl);
 }

-vuint64m2_t test_vcpop_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vcpop_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u64m2_tumu(vm, vd, vs2, vl);
 }

-vuint64m4_t test_vcpop_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vcpop_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, size_t vl) {
   return __riscv_vcpop_v_u64m4_tumu(vm, vd, vs2, vl);
 }

-vuint64m8_t test_vcpop_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vcpop_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, size_t vl) {
   return __riscv_vcpop_v_u64m8_tumu(vm, vd, vs2, vl);
 }

-vuint8mf8_t test_vcpop_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vcpop_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  size_t vl) {
   return __riscv_vcpop_v_u8mf8_mu(vm, vd, vs2, vl);
 }

-vuint8mf4_t test_vcpop_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vcpop_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  size_t vl) {
   return __riscv_vcpop_v_u8mf4_mu(vm, vd, vs2, vl);
 }

-vuint8mf2_t test_vcpop_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vcpop_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  size_t vl) {
   return __riscv_vcpop_v_u8mf2_mu(vm, vd, vs2, vl);
 }

-vuint8m1_t test_vcpop_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vcpop_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                size_t vl) {
   return __riscv_vcpop_v_u8m1_mu(vm, vd, vs2, vl);
 }

-vuint8m2_t test_vcpop_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vcpop_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                size_t vl) {
   return __riscv_vcpop_v_u8m2_mu(vm, vd, vs2, vl);
 }

-vuint8m4_t test_vcpop_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vcpop_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                size_t vl) {
   return __riscv_vcpop_v_u8m4_mu(vm, vd, vs2, vl);
 }

-vuint8m8_t test_vcpop_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vcpop_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                size_t vl) {
   return __riscv_vcpop_v_u8m8_mu(vm, vd, vs2, vl);
 }

-vuint16mf4_t test_vcpop_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vcpop_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                    vuint16mf4_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16mf4_mu(vm, vd, vs2, vl);
 }

-vuint16mf2_t test_vcpop_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vcpop_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                    vuint16mf2_t vs2, size_t vl) {
   return __riscv_vcpop_v_u16mf2_mu(vm, vd, vs2, vl);
 }

-vuint16m1_t test_vcpop_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vcpop_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                  size_t vl) {
   return __riscv_vcpop_v_u16m1_mu(vm, vd, vs2, vl);
 }

-vuint16m2_t test_vcpop_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vcpop_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                  size_t vl) {
__riscv_vcpop_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vcpop_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vcpop_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vcpop_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vcpop_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vcpop_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vcpop_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vcpop_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vcpop_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vcpop_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vcpop_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vcpop_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vcpop_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vcpop_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vcpop_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vcpop_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vcpop_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vcpop_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vcpop_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vcpop_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vcpop_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vcpop_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vcpop_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vcpop_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vcpop_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vcpop_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vcpop_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vcpop_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vcpop_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vcpop_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vcpop_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vcpop_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vcpop_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vcpop_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vctz.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vctz.c index b191067e8..ca565cdf0 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vctz.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vctz.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint8mf8_t test_vctz_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { return __riscv_vctz_v_u8mf8_tu(vd, vs2, vl); @@ -29,11 +29,13 @@ vuint8m8_t test_vctz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { return __riscv_vctz_v_u8m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vctz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vctz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vctz_v_u16mf4_tu(vd, vs2,
vl); } -vuint16mf2_t test_vctz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vctz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vctz_v_u16mf2_tu(vd, vs2, vl); } @@ -53,7 +55,8 @@ vuint16m8_t test_vctz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { return __riscv_vctz_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vctz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vctz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vctz_v_u32mf2_tu(vd, vs2, vl); } @@ -89,266 +92,332 @@ vuint64m8_t test_vctz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { return __riscv_vctz_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vctz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vctz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vctz_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vctz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vctz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vctz_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vctz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vctz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vctz_v_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vctz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vctz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vctz_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vctz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vctz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vctz_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vctz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vctz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vctz_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vctz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vctz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vctz_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vctz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vctz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vctz_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vctz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vctz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vctz_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vctz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vctz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vctz_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vctz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vctz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vctz_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vctz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vctz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, 
vuint16m4_t vs2, + size_t vl) { return __riscv_vctz_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vctz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vctz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vctz_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vctz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vctz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vctz_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vctz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vctz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vctz_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vctz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vctz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vctz_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vctz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vctz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vctz_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vctz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vctz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vctz_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vctz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vctz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vctz_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vctz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vctz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vctz_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vctz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vctz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vctz_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vctz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vctz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vctz_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vctz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vctz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vctz_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vctz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vctz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vctz_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vctz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vctz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vctz_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vctz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vctz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vctz_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vctz_v_u8m2_tumu(vbool4_t vm, 
vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vctz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vctz_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vctz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vctz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vctz_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vctz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vctz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vctz_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vctz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vctz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vctz_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vctz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vctz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vctz_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vctz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vctz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vctz_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vctz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vctz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vctz_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vctz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vctz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vctz_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vctz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vctz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vctz_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vctz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vctz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vctz_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vctz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vctz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vctz_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vctz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vctz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vctz_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vctz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vctz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vctz_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vctz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vctz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vctz_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vctz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vctz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t 
vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vctz_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vctz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vctz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vctz_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vctz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vctz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vctz_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vctz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vctz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vctz_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vctz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vctz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vctz_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vctz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vctz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vctz_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vctz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vctz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vctz_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vctz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vctz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vctz_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vctz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vctz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vctz_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vctz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vctz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vctz_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vctz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vctz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vctz_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vctz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vctz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vctz_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vctz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vctz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vctz_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vctz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vctz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vctz_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vctz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vctz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vctz_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vctz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { 
+vuint16m4_t test_vctz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vctz_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vctz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vctz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vctz_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vctz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vctz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vctz_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vctz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vctz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vctz_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vctz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vctz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vctz_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vctz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vctz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vctz_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vctz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vctz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vctz_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vctz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vctz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vctz_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vctz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vctz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vctz_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vctz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vctz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vctz_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vctz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vctz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vctz_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c index 731050d9c..a93cc8fe8 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vghsh.c @@ -1,22 +1,27 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vghsh_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t
test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vghsh_vv_u32m8_tu(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c index ed035adf4..5f176ce1d 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vgmul.c @@ -1,7 +1,8 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vgmul_vv_u32mf2_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c index ef1976f3e..71dbc1d32 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vrev8.c @@ -1,5 +1,5 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> vuint8mf8_t test_vrev8_v_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf8_tu(vd, vs2, vl); @@ -29,11 +29,13 @@ vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { return __riscv_vrev8_v_u8m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vrev8_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vrev8_v_u16mf2_tu(vd, vs2, vl); } @@ -53,7 +55,8 @@ vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { return __riscv_vrev8_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vrev8_v_u32mf2_tu(vd, vs2, vl); } @@ -89,266 +92,332 @@ vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { return __riscv_vrev8_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf2_tum(vm, vd, vs2, vl); }
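NOTE: The vrev8.c hunks above, like the vghsh.c and vgmul.c hunks before them, only reflow auto-generated prototypes. As an illustrative sketch that is not part of the patch, the non-policy `vrev8` variant can byte-swap a buffer of 32-bit words; the function name `bswap32_buf` and the strip-mining loop are assumptions, and the code requires a toolchain with the Zvbb extension enabled.

----
#include <riscv_vector.h>
#include <stdint.h>

// Byte-swap every 32-bit word in src into dst (e.g. big-endian to
// little-endian). A minimal sketch, assuming Zvbb is available.
void bswap32_buf(uint32_t *dst, const uint32_t *src, size_t n) {
  while (n > 0) {
    size_t vl = __riscv_vsetvl_e32m4(n);            // elements this pass
    vuint32m4_t v = __riscv_vle32_v_u32m4(src, vl); // unit-stride load
    v = __riscv_vrev8_v_u32m4(v, vl); // reverse bytes within each element
    __riscv_vse32_v_u32m4(dst, v, vl);              // unit-stride store
    src += vl;
    dst += vl;
    n -= vl;
  }
}
----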
-vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vrev8_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vrev8_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vrev8_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t 
test_vrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vrev8_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vrev8_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vrev8_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return 
__riscv_vrev8_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vrev8_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vrev8_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vrev8_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vrev8_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vrev8_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vrev8_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vrev8_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vrev8_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vrev8_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vrev8_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vrev8_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vrev8_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vrev8_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t 
test_vrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vrev8_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vrev8_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, 
+ size_t vl) { return __riscv_vrev8_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vrev8_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c index d630a488c..d48fa7214 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vrol.c @@ -1,706 +1,915 @@ -#include <stdint.h> #include <riscv_vector.h> +#include <stdint.h> -vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl) { return
__riscv_vrol_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrol_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrol_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrol_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrol_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, 
size_t vl) { +vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrol_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrol_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vrol_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vrol_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vrol_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vrol_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vrol_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + 
vuint64m1_t vs1, size_t vl) { return __riscv_vrol_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vrol_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vrol_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vrol_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, 
vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrol_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vrol_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrol_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrol_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } 
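NOTE: The `_tu`, `_tum`, `_tumu`, and `_mu` suffixes exercised throughout these tests select the tail and mask policies, with `vd` supplying the undisturbed destination values: `_tu` is tail-undisturbed (unmasked), `_tum` is masked and tail-undisturbed, `_tumu` additionally keeps inactive elements undisturbed, and `_mu` keeps inactive elements undisturbed with an agnostic tail. A minimal sketch that is not part of the patch (the wrapper name `rotl32_masked` is an assumption) of the masked rotate covered here, assuming Zvbb:

----
#include <riscv_vector.h>

// Rotate active elements of vs2 left by rs1 bits. With the _tumu policy,
// inactive elements (where vm is 0) and tail elements keep their values
// from vd.
vuint32m1_t rotl32_masked(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
                          size_t rs1, size_t vl) {
  return __riscv_vrol_vx_u32m1_tumu(vm, vd, vs2, rs1, vl);
}
----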
-vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrol_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrol_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrol_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t vm, 
vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vrol_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vrol_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vrol_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, 
vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrol_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vrol_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrol_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrol_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } 
-vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { 
+vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return 
__riscv_vrol_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t 
test_vrol_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrol_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vrol_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrol_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrol_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrol_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) 
{ +vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrol_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrol_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrol_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vrol_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vrol_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t 
test_vrol_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  vuint32m4_t vs1, size_t vl) {
   return __riscv_vrol_vv_u32m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vrol_vx_u32m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  vuint32m8_t vs1, size_t vl) {
   return __riscv_vrol_vv_u32m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vrol_vx_u32m8_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  vuint64m1_t vs1, size_t vl) {
   return __riscv_vrol_vv_u64m1_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vrol_vx_u64m1_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  vuint64m2_t vs1, size_t vl) {
   return __riscv_vrol_vv_u64m2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vrol_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  vuint64m4_t vs1, size_t vl) {
   return __riscv_vrol_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vrol_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  vuint64m8_t vs1, size_t vl) {
   return __riscv_vrol_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vrol_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vror.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vror.c
index f62f3eb6e..68acedb20 100644
--- a/auto-generated/vector-crypto/policy_funcs/api-testing/vror.c
+++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vror.c
@@ -1,706 +1,915 @@
-#include <stdint.h>
 #include <riscv_vector.h>
+#include <stdint.h>
 
-vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  vuint8mf8_t vs1, size_t vl) {
   return __riscv_vror_vv_u8mf8_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1,
+                                  size_t vl) {
   return __riscv_vror_vx_u8mf8_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  vuint8mf4_t vs1, size_t vl) {
   return __riscv_vror_vv_u8mf4_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1,
+                                  size_t vl) {
   return __riscv_vror_vx_u8mf4_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  vuint8mf2_t vs1, size_t vl) {
   return __riscv_vror_vv_u8mf2_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1,
+                                  size_t vl) {
   return __riscv_vror_vx_u8mf2_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1,
+                                size_t vl) {
   return __riscv_vror_vv_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1,
+                                size_t vl) {
   return __riscv_vror_vx_u8m1_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1,
+                                size_t vl) {
   return __riscv_vror_vv_u8m2_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1,
+                                size_t vl) {
   return __riscv_vror_vx_u8m2_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1,
+                                size_t vl) {
   return __riscv_vror_vv_u8m4_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1,
+                                size_t vl) {
   return __riscv_vror_vx_u8m4_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1,
+                                size_t vl) {
   return __riscv_vror_vv_u8m8_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) {
+vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1,
+                                size_t vl) {
   return __riscv_vror_vx_u8m8_tu(vd, vs2, rs1, vl);
 }
 
-vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vror_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vror_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vror_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vror_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vror_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vror_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vror_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t 
test_vror_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vror_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vror_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vror_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vror_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vror_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vror_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vror_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return 
__riscv_vror_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vror_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vror_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vror_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return 
__riscv_vror_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vror_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vror_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vror_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vror_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { 
+vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vror_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vror_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vror_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vror_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vror_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vror_vv_u64m1_tum(vm, vd, vs2, vs1, 
vl); } -vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vror_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vror_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vror_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t 
test_vror_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vror_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vror_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vror_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vror_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t 
test_vror_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vror_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vror_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vror_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vror_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vror_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t 
test_vror_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vror_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vror_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vror_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vror_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vror_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vror_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vror_vv_u64m8_tumu(vm, 
vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vror_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vror_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vror_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vror_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vror_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vror_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m4_mu(vm, vd, vs2, rs1, 
vl); } -vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vror_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vror_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vror_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vror_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + 
vuint16m8_t vs1, size_t vl) { return __riscv_vror_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vror_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vror_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vror_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vror_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vror_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t 
test_vror_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vror_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vror_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vror_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c index 1d9b85bc0..e9c0316d1 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ch.c @@ -1,38 +1,47 @@ -#include #include +#include -vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, 
vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m8_tu(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c index 468a4d938..05ab17663 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2cl.c @@ -1,38 +1,47 @@ -#include #include +#include -vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t 
test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m8_tu(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c index 9ee82d425..df4ef75fc 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsha2ms.c @@ -1,38 +1,47 @@ -#include #include +#include -vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m8_tu(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c index f420557dc..b8642b667 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3c.c @@ -1,7 +1,8 @@ -#include #include +#include -vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm3c_vi_u32mf2_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c 
b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c index 9b635b0d8..9b9615adb 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm3me.c @@ -1,22 +1,27 @@ -#include #include +#include -vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32m8_tu(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c index 270812106..ae36c36e4 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4k.c @@ -1,7 +1,8 @@ -#include #include +#include -vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4k_vi_u32mf2_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c index 4c95663f3..9dcdb8818 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vsm4r.c @@ -1,27 +1,33 @@ -#include #include +#include -vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { 
+vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m8_tu(vd, vs2, vl); } @@ -29,19 +35,23 @@ vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m8_tu(vd, vs2, vl); } @@ -49,15 +59,18 @@ vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m8_tu(vd, vs2, vl); } @@ -65,11 +78,13 @@ vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m4_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c b/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c index 56b35568a..b93b19f8d 100644 --- a/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c +++ b/auto-generated/vector-crypto/policy_funcs/api-testing/vwsll.c @@ -1,482 +1,639 @@ -#include #include +#include -vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf4_tu(vd, vs2, vs1, vl); } 
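/*
 * A minimal usage sketch for the _tu ("tail undisturbed") variants being
 * exercised in these tests, assuming a toolchain targeting V plus Zvbb.
 * The function and buffer names here (widen_shift_tu, src, amt, dst, n)
 * are illustrative only and are not part of the generated test suite.
 * With _tu, destination elements at indices >= vl keep the value of the
 * vd operand instead of being tail-agnostic, which is why vd is reloaded
 * from dst before each call.
 */
#include <riscv_vector.h>
#include <stddef.h>
#include <stdint.h>

void widen_shift_tu(const uint8_t *src, const uint8_t *amt, uint16_t *dst,
                    size_t n) {
  for (size_t done = 0; done < n;) {
    /* Stripmine: take as many elements as one e8mf8 vector holds. */
    size_t vl = __riscv_vsetvl_e8mf8(n - done);
    vuint8mf8_t vs2 = __riscv_vle8_v_u8mf8(src + done, vl);   /* values */
    vuint8mf8_t vs1 = __riscv_vle8_v_u8mf8(amt + done, vl);   /* shift amounts */
    vuint16mf4_t vd = __riscv_vle16_v_u16mf4(dst + done, vl); /* merge source */
    /* Widening shift left: EEW=8 sources, EEW=16 destination. */
    vd = __riscv_vwsll_vv_u16mf4_tu(vd, vs2, vs1, vl);
    __riscv_vse16_v_u16mf4(dst + done, vd, vl);
    done += vl;
  }
}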
-vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t 
test_vwsll_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, 
size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return 
__riscv_vwsll_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t vm, 
vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t 
test_vwsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t 
vl) { return __riscv_vwsll_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t 
test_vwsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + 
vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, 
vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c index 990433721..a7b8c5908 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c @@ -12,78 +12,97 @@ #include <riscv_vector.h> -vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdf_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdf_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t
test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdf_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdf_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdf_vs_u32m4_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdf_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesdf_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c index 80a243721..fa584ff88 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c @@ -12,78 +12,97 @@ #include <riscv_vector.h> -vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesdm_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return
__riscv_vaesdm_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesdm_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesdm_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesdm_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesdm_vs_u32m4_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesdm_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesdm_vv_u32m8_tu(vd, vs2, vl); } diff --git 
a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c index 224ac4953..5c86c8a5f 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c @@ -12,78 +12,97 @@ #include <riscv_vector.h> -vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesef_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesef_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesef_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m2_u32m4_tu(vuint32m4_t vd, 
vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesef_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesef_vs_u32m4_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesef_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesef_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c index fa0a10105..2b3953414 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c @@ -12,78 +12,97 @@ #include <riscv_vector.h> -vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesem_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaesem_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, 
size_t vl) { +vuint32m2_t test_vaesem_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesem_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesem_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesem_vs_u32m4_u32m8_tu(vd, vs2, vl); } -vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaesem_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaesem_vv_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c index cc4667e80..dd6db77aa 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c @@ -12,22 +12,27 @@ #include <riscv_vector.h> -vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaeskf1_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaeskf1_vi_u32mf2_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaeskf1_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaeskf1_vi_u32m1_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaeskf1_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaeskf1_vi_u32m2_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t 
vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaeskf1_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaeskf1_vi_u32m4_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaeskf1_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaeskf1_vi_u32m8_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c index 7f05b473c..3dcac9ffe 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c @@ -12,22 +12,27 @@ #include <riscv_vector.h> -vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaeskf2_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaeskf2_vi_u32mf2_tu(vd, vs2, 0, vl); } -vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vaeskf2_vi_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaeskf2_vi_u32m1_tu(vd, vs2, 0, vl); } -vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaeskf2_vi_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaeskf2_vi_u32m2_tu(vd, vs2, 0, vl); } -vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaeskf2_vi_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaeskf2_vi_u32m4_tu(vd, vs2, 0, vl); } -vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vaeskf2_vi_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vaeskf2_vi_u32m8_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c index f50cae600..2c4925eb3 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c @@ -12,58 +12,72 @@ #include <riscv_vector.h> -vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vaesz_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vaesz_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32mf2_u32m8_tu(vd, vs2, vl); } -vuint32m1_t test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t 
test_vaesz_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m1_u32m8_tu(vd, vs2, vl); } -vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vaesz_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m2_u32m8_tu(vd, vs2, vl); } -vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vaesz_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vaesz_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vaesz_vs_u32m4_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c index 8e79acfdd..2de24fc21 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c @@ -12,706 +12,939 @@ #include <riscv_vector.h> -vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vandn_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vandn_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vandn_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vandn_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vandn_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vandn_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vandn_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return 
__riscv_vandn_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vandn_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vandn_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vandn_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vandn_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vandn_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vandn_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vandn_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vandn_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vandn_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vandn_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vandn_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vandn_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vandn_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vandn_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vandn_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vandn_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t 
test_vandn_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vandn_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vandn_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vandn_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vandn_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vandn_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vandn_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vandn_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vandn_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vandn_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vandn_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vandn_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vandn_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vandn_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vandn_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vandn_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vandn_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vandn_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vandn_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vandn_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vandn_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t 
test_vandn_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vandn_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vandn_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vandn_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vandn_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vandn_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vandn_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vandn_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vandn_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vandn_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vandn_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vandn_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vandn_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vandn_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vandn_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vandn_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vandn_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vandn_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vandn_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vandn_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { 
return __riscv_vandn_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vandn_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vandn_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vandn_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vandn_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vandn_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vandn_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vandn_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vandn_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vandn_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vandn_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vandn_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vandn_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vandn_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vandn_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vandn_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t 
test_vandn_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vandn_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vandn_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vandn_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vandn_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vandn_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vandn_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vandn_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vandn_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vandn_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vandn_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, 
uint32_t rs1, + size_t vl) { return __riscv_vandn_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vandn_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vandn_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vandn_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vandn_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vandn_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vandn_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vandn_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vandn_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vandn_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vandn_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vandn_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vandn_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t 
test_vandn_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vandn_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vandn_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vandn_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vandn_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vandn_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vandn_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vandn_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vandn_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vandn_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vandn_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vandn_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vandn_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vandn_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t 
test_vandn_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vandn_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vandn_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vandn_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vandn_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vandn_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vandn_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vandn_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vandn_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vandn_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vandn_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vandn_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vandn_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vandn_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { 
   return __riscv_vandn_vv_u16m2_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+vuint16m2_t test_vandn_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                     vuint16m2_t vs2, uint16_t rs1, size_t vl) {
   return __riscv_vandn_vx_u16m2_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint16m4_t test_vandn_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                     vuint16m4_t vs2, vuint16m4_t vs1,
+                                     size_t vl) {
   return __riscv_vandn_vv_u16m4_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+vuint16m4_t test_vandn_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                     vuint16m4_t vs2, uint16_t rs1, size_t vl) {
   return __riscv_vandn_vx_u16m4_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+vuint16m8_t test_vandn_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                     vuint16m8_t vs2, vuint16m8_t vs1,
+                                     size_t vl) {
   return __riscv_vandn_vv_u16m8_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+vuint16m8_t test_vandn_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                     vuint16m8_t vs2, uint16_t rs1, size_t vl) {
   return __riscv_vandn_vx_u16m8_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vandn_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                       vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                       size_t vl) {
   return __riscv_vandn_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+vuint32mf2_t test_vandn_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                       vuint32mf2_t vs2, uint32_t rs1,
+                                       size_t vl) {
   return __riscv_vandn_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vandn_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                     vuint32m1_t vs2, vuint32m1_t vs1,
+                                     size_t vl) {
   return __riscv_vandn_vv_u32m1_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+vuint32m1_t test_vandn_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                     vuint32m1_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vandn_vx_u32m1_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vandn_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                     vuint32m2_t vs2, vuint32m2_t vs1,
+                                     size_t vl) {
   return __riscv_vandn_vv_u32m2_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+vuint32m2_t test_vandn_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                     vuint32m2_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vandn_vx_u32m2_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vandn_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+                                     vuint32m4_t vs2, vuint32m4_t vs1,
+                                     size_t vl) {
   return __riscv_vandn_vv_u32m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vandn_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vandn_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vandn_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vandn_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vandn_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vandn_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vandn_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vandn_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vandn_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vandn_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vandn_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vandn_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t 
rs1, size_t vl) { +vuint8mf8_t test_vandn_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vandn_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vandn_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vandn_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vandn_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vandn_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vandn_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vandn_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vandn_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vandn_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vandn_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vandn_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vandn_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vandn_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vandn_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vandn_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vandn_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vandn_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, 
vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vandn_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vandn_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vandn_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vandn_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vandn_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vandn_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vandn_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vandn_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vandn_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vandn_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vandn_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vandn_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vandn_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vandn_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vandn_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vandn_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vandn_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vandn_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + 
vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vandn_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vandn_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vandn_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vandn_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vandn_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vandn_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vandn_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vandn_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vandn_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vandn_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vandn_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vandn_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vandn_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vandn_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vandn_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vandn_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vandn_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t 
test_vandn_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vandn_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vandn_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vandn_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vandn_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vandn_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vandn_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vandn_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vandn_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c index 1faa2260e..764297558 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c @@ -40,11 +40,13 @@ vuint8m8_t test_vbrev_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { return __riscv_vbrev_v_u8m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vbrev_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vbrev_v_u16mf2_tu(vd, vs2, vl); } @@ -64,7 +66,8 @@ vuint16m8_t test_vbrev_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { return __riscv_vbrev_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vbrev_v_u32mf2_tu(vd, vs2, vl); } @@ -100,266 +103,332 @@ vuint64m8_t test_vbrev_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { return __riscv_vbrev_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vbrev_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vbrev_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vbrev_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf2_tum(vm, vd, 
vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vbrev_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vbrev_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vbrev_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vbrev_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vbrev_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vbrev_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vbrev_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vbrev_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vbrev_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vbrev_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vbrev_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vbrev_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vbrev_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vbrev_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t 
test_vbrev_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vbrev_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vbrev_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vbrev_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vbrev_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vbrev_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vbrev_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vbrev_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vbrev_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vbrev_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vbrev_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vbrev_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vbrev_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vbrev_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vbrev_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vbrev_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return 
__riscv_vbrev_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vbrev_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vbrev_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vbrev_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vbrev_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vbrev_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vbrev_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vbrev_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vbrev_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vbrev_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vbrev_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vbrev_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vbrev_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vbrev_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vbrev_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vbrev_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vbrev_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vbrev_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vbrev_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vbrev_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vbrev_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vbrev_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vbrev_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vbrev_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vbrev_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vbrev_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vbrev_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t 
test_vbrev_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vbrev_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vbrev_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vbrev_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vbrev_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vbrev_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vbrev_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vbrev_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vbrev_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vbrev_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vbrev_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vbrev_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vbrev_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vbrev_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vbrev_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, 
+ size_t vl) { return __riscv_vbrev_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vbrev_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vbrev_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vbrev_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vbrev_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vbrev_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vbrev_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vbrev_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vbrev_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vbrev_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c index 737992ff9..abbd91ffa 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c @@ -40,11 +40,13 @@ vuint8m8_t test_vbrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u8m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vbrev8_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vbrev8_v_u16mf2_tu(vd, vs2, vl); } @@ -64,7 +66,8 @@ vuint16m8_t test_vbrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vbrev8_v_u32mf2_tu(vd, vs2, vl); } @@ -100,266 +103,332 @@ vuint64m8_t test_vbrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vbrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vbrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vbrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t 
vs2, size_t vl) { +vuint8m1_t test_vbrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vbrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vbrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vbrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vbrev8_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vbrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vbrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vbrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vbrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vbrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vbrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vbrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vbrev8_v_u32m8_tum(vbool4_t vm, 
vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vbrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vbrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vbrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vbrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vbrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vbrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vbrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vbrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vbrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vbrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vbrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vbrev8_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vbrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { 
return __riscv_vbrev8_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vbrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vbrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vbrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vbrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vbrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vbrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vbrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vbrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vbrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vbrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vbrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vbrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vbrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return 
__riscv_vbrev8_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vbrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vbrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vbrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vbrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vbrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vbrev8_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vbrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vbrev8_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vbrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vbrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vbrev8_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vbrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vbrev8_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vbrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vbrev8_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vbrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vbrev8_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vbrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vbrev8_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vbrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vbrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vbrev8_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, 
vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vbrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vbrev8_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vbrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vbrev8_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vbrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vbrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vbrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vbrev8_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vbrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vbrev8_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c index c776dacad..a7793a209 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c @@ -12,130 +12,178 @@ #include -vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vclmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vclmul_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vclmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vclmul_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vclmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vclmul_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vclmul_vv_u64m8_tu(vd, vs2, vs1, vl); } 
-vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vclmul_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vclmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vclmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vclmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vclmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vclmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vclmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmul_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, 
size_t vl) { +vuint64m2_t test_vclmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmul_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmul_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vclmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmul_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vclmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vclmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vclmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vclmul_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vclmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + 
vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmul_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c index 94df486ca..3962a5f4a 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c @@ -12,130 +12,182 @@ #include -vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vclmulh_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vclmulh_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vclmulh_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vclmulh_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmulh_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vclmulh_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vclmulh_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vclmulh_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmulh_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vclmulh_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmulh_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vclmulh_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmulh_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vclmulh_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vclmulh_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vclmulh_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vclmulh_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vclmulh_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmulh_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t 
test_vclmulh_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmulh_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmulh_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmulh_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vclmulh_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vclmulh_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vclmulh_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmulh_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vclmulh_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmulh_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmulh_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmulh_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t 
test_vclmulh_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vclmulh_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vclmulh_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vclmulh_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmulh_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vclmulh_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vclmulh_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmulh_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vclmulh_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vclmulh_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmulh_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vclmulh_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vclmulh_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vclmulh_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vclmulh_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c index d9c132cd7..44a5e7fce 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c @@ -40,11 +40,13 @@ vuint8m8_t test_vclz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { return __riscv_vclz_v_u8m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vclz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vclz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vclz_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vclz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vclz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vclz_v_u16mf2_tu(vd, vs2, vl); } @@ -64,7 +66,8 @@ vuint16m8_t test_vclz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { return __riscv_vclz_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t 
test_vclz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vclz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vclz_v_u32mf2_tu(vd, vs2, vl); } @@ -100,266 +103,332 @@ vuint64m8_t test_vclz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { return __riscv_vclz_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vclz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vclz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vclz_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vclz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vclz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vclz_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vclz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vclz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vclz_v_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vclz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vclz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vclz_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vclz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vclz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vclz_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vclz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vclz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vclz_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vclz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vclz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vclz_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vclz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vclz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vclz_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vclz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vclz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vclz_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vclz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vclz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vclz_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vclz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vclz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vclz_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vclz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vclz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vclz_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vclz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vclz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vclz_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vclz_v_u32mf2_tum(vbool64_t vm, 
vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vclz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vclz_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vclz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vclz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vclz_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vclz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vclz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vclz_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vclz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vclz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vclz_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vclz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vclz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vclz_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vclz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vclz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vclz_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vclz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vclz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vclz_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vclz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vclz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vclz_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vclz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vclz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vclz_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vclz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vclz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vclz_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vclz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vclz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vclz_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vclz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vclz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vclz_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vclz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vclz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vclz_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vclz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vclz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vclz_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vclz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vclz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return 
__riscv_vclz_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vclz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vclz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vclz_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vclz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vclz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vclz_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vclz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vclz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vclz_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vclz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vclz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vclz_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vclz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vclz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vclz_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vclz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vclz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vclz_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vclz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vclz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vclz_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vclz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vclz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vclz_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vclz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vclz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vclz_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vclz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vclz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vclz_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vclz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vclz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vclz_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vclz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vclz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vclz_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vclz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vclz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vclz_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vclz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vclz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vclz_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vclz_v_u64m4_tumu(vbool16_t 
vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vclz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vclz_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vclz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vclz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vclz_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vclz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vclz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vclz_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vclz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vclz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vclz_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vclz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vclz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vclz_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vclz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vclz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vclz_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vclz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vclz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vclz_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vclz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vclz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vclz_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vclz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vclz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vclz_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vclz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vclz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vclz_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vclz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vclz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vclz_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vclz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vclz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vclz_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vclz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vclz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vclz_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vclz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vclz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vclz_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vclz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vclz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vclz_v_u16m8_mu(vm, vd, vs2, vl); } 
-vuint32mf2_t test_vclz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vclz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vclz_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vclz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vclz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vclz_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vclz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vclz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vclz_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vclz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vclz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vclz_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vclz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vclz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vclz_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vclz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vclz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vclz_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vclz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vclz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vclz_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vclz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vclz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vclz_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vclz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vclz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vclz_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c index 2f89711dc..757deb078 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c @@ -33,11 +33,13 @@ vuint8m8_t test_vcpop_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { return __riscv_vcpop_v_u8m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vcpop_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vcpop_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vcpop_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vcpop_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vcpop_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vcpop_v_u16mf2_tu(vd, vs2, vl); } @@ -57,7 +59,8 @@ vuint16m8_t test_vcpop_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { return __riscv_vcpop_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vcpop_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vcpop_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vcpop_v_u32mf2_tu(vd, vs2, vl); } @@ -93,266 +96,332 @@ vuint64m8_t test_vcpop_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { return 
__riscv_vcpop_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vcpop_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vcpop_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vcpop_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vcpop_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vcpop_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vcpop_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vcpop_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vcpop_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vcpop_v_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vcpop_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vcpop_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vcpop_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vcpop_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vcpop_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vcpop_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vcpop_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vcpop_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vcpop_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vcpop_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vcpop_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vcpop_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vcpop_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vcpop_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vcpop_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vcpop_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vcpop_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vcpop_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vcpop_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vcpop_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vcpop_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vcpop_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vcpop_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vcpop_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vcpop_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vcpop_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vcpop_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vcpop_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vcpop_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vcpop_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vcpop_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vcpop_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vcpop_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vcpop_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t 
vs2, size_t vl) { +vuint32m1_t test_vcpop_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vcpop_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vcpop_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vcpop_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vcpop_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vcpop_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vcpop_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vcpop_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vcpop_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vcpop_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vcpop_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vcpop_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vcpop_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vcpop_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vcpop_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vcpop_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vcpop_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vcpop_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vcpop_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vcpop_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vcpop_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vcpop_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vcpop_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vcpop_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vcpop_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vcpop_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vcpop_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vcpop_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vcpop_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vcpop_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vcpop_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vcpop_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vcpop_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vcpop_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vcpop_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vcpop_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vcpop_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vcpop_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vcpop_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vcpop_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vcpop_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vcpop_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vcpop_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return 
__riscv_vcpop_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vcpop_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vcpop_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vcpop_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vcpop_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vcpop_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vcpop_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vcpop_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vcpop_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vcpop_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vcpop_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vcpop_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vcpop_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vcpop_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vcpop_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vcpop_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vcpop_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vcpop_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vcpop_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vcpop_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vcpop_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vcpop_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vcpop_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vcpop_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vcpop_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vcpop_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vcpop_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vcpop_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vcpop_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vcpop_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vcpop_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vcpop_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vcpop_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vcpop_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vcpop_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vcpop_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vcpop_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vcpop_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vcpop_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vcpop_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vcpop_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vcpop_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vcpop_v_u64m4_tumu(vm, vd, 
vs2, vl); } -vuint64m8_t test_vcpop_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vcpop_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vcpop_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vcpop_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vcpop_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vcpop_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vcpop_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vcpop_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vcpop_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vcpop_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vcpop_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vcpop_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vcpop_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vcpop_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vcpop_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vcpop_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vcpop_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vcpop_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vcpop_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vcpop_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vcpop_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vcpop_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vcpop_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vcpop_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vcpop_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vcpop_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vcpop_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vcpop_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vcpop_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vcpop_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vcpop_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vcpop_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vcpop_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vcpop_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vcpop_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vcpop_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vcpop_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vcpop_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vcpop_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vcpop_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vcpop_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vcpop_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vcpop_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vcpop_v_u32mf2_mu(vbool64_t vm, 
vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vcpop_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vcpop_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vcpop_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vcpop_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vcpop_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vcpop_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vcpop_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vcpop_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vcpop_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vcpop_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vcpop_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vcpop_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vcpop_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vcpop_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vcpop_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vcpop_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vcpop_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vcpop_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vcpop_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vcpop_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vcpop_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vcpop_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vcpop_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vcpop_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vcpop_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c index 54d7ee887..3c13ebff5 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c @@ -40,11 +40,13 @@ vuint8m8_t test_vctz_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { return __riscv_vctz_v_u8m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vctz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vctz_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vctz_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vctz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vctz_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vctz_v_u16mf2_tu(vd, vs2, vl); } @@ -64,7 +66,8 @@ vuint16m8_t test_vctz_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { return __riscv_vctz_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vctz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vctz_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vctz_v_u32mf2_tu(vd, vs2, vl); } @@ -100,266 +103,332 @@ vuint64m8_t test_vctz_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { return __riscv_vctz_v_u64m8_tu(vd, vs2, vl); } -vuint8mf8_t test_vctz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { 
+vuint8mf8_t test_vctz_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vctz_v_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vctz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vctz_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vctz_v_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vctz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vctz_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vctz_v_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vctz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vctz_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vctz_v_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vctz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vctz_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vctz_v_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vctz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vctz_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vctz_v_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_vctz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vctz_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vctz_v_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vctz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vctz_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vctz_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vctz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vctz_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vctz_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vctz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vctz_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vctz_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vctz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vctz_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vctz_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vctz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vctz_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vctz_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vctz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vctz_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vctz_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vctz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vctz_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vctz_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vctz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vctz_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vctz_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t 
test_vctz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vctz_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vctz_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vctz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vctz_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vctz_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vctz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vctz_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vctz_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vctz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vctz_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vctz_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vctz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vctz_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vctz_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vctz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vctz_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vctz_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vctz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vctz_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vctz_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vctz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vctz_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vctz_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vctz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vctz_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vctz_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vctz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vctz_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vctz_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vctz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vctz_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vctz_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vctz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vctz_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vctz_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vctz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vctz_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vctz_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vctz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vctz_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vctz_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vctz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vctz_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + 
vuint16mf4_t vs2, size_t vl) { return __riscv_vctz_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vctz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vctz_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vctz_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vctz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vctz_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vctz_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vctz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vctz_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vctz_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vctz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vctz_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vctz_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vctz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vctz_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vctz_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vctz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vctz_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vctz_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vctz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vctz_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vctz_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vctz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vctz_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vctz_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vctz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vctz_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vctz_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vctz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vctz_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vctz_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vctz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vctz_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vctz_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vctz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vctz_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vctz_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vctz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vctz_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vctz_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vctz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vctz_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vctz_v_u64m8_tumu(vm, vd, vs2, vl); } 
-vuint8mf8_t test_vctz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vctz_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u8mf8_mu(vm, vd, vs2, vl);
 }
 
-vuint8mf4_t test_vctz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vctz_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u8mf4_mu(vm, vd, vs2, vl);
 }
 
-vuint8mf2_t test_vctz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vctz_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u8mf2_mu(vm, vd, vs2, vl);
 }
 
-vuint8m1_t test_vctz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vctz_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                               size_t vl) {
   return __riscv_vctz_v_u8m1_mu(vm, vd, vs2, vl);
 }
 
-vuint8m2_t test_vctz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vctz_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                               size_t vl) {
   return __riscv_vctz_v_u8m2_mu(vm, vd, vs2, vl);
 }
 
-vuint8m4_t test_vctz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vctz_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                               size_t vl) {
   return __riscv_vctz_v_u8m4_mu(vm, vd, vs2, vl);
 }
 
-vuint8m8_t test_vctz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vctz_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                               size_t vl) {
   return __riscv_vctz_v_u8m8_mu(vm, vd, vs2, vl);
 }
 
-vuint16mf4_t test_vctz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vctz_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                   vuint16mf4_t vs2, size_t vl) {
   return __riscv_vctz_v_u16mf4_mu(vm, vd, vs2, vl);
 }
 
-vuint16mf2_t test_vctz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vctz_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                   vuint16mf2_t vs2, size_t vl) {
   return __riscv_vctz_v_u16mf2_mu(vm, vd, vs2, vl);
 }
 
-vuint16m1_t test_vctz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vctz_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u16m1_mu(vm, vd, vs2, vl);
 }
 
-vuint16m2_t test_vctz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vctz_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u16m2_mu(vm, vd, vs2, vl);
 }
 
-vuint16m4_t test_vctz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vctz_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u16m4_mu(vm, vd, vs2, vl);
 }
 
-vuint16m8_t test_vctz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vctz_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u16m8_mu(vm, vd, vs2, vl);
 }
 
-vuint32mf2_t test_vctz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vctz_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                   vuint32mf2_t vs2, size_t vl) {
   return __riscv_vctz_v_u32mf2_mu(vm, vd, vs2, vl);
 }
 
-vuint32m1_t test_vctz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vctz_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u32m1_mu(vm, vd, vs2, vl);
 }
 
-vuint32m2_t test_vctz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vctz_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u32m2_mu(vm, vd, vs2, vl);
 }
 
-vuint32m4_t test_vctz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vctz_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u32m4_mu(vm, vd, vs2, vl);
 }
 
-vuint32m8_t test_vctz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vctz_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u32m8_mu(vm, vd, vs2, vl);
 }
 
-vuint64m1_t test_vctz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vctz_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u64m1_mu(vm, vd, vs2, vl);
 }
 
-vuint64m2_t test_vctz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vctz_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u64m2_mu(vm, vd, vs2, vl);
 }
 
-vuint64m4_t test_vctz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vctz_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u64m4_mu(vm, vd, vs2, vl);
 }
 
-vuint64m8_t test_vctz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vctz_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                 size_t vl) {
   return __riscv_vctz_v_u64m8_mu(vm, vd, vs2, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c
index e3f7395a9..7c773896d 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c
@@ -12,22 +12,27 @@
 
 #include <riscv_vector.h>
 
-vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vghsh_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                     vuint32mf2_t vs1, size_t vl) {
   return __riscv_vghsh_vv_u32mf2_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vghsh_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                   vuint32m1_t vs1, size_t vl) {
   return __riscv_vghsh_vv_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vghsh_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                   vuint32m2_t vs1, size_t vl) {
   return __riscv_vghsh_vv_u32m2_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vghsh_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                   vuint32m4_t vs1, size_t vl) {
   return __riscv_vghsh_vv_u32m4_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vghsh_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                   vuint32m8_t vs1, size_t vl) {
   return __riscv_vghsh_vv_u32m8_tu(vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c
index e4920e5d1..35f8f63da 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c
@@ -12,7 +12,8 @@
 
 #include <riscv_vector.h>
 
-vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vgmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                     size_t vl) {
   return __riscv_vgmul_vv_u32mf2_tu(vd, vs2, vl);
 }
 
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c
index 61471ea81..45db2ce1c 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c
@@ -40,11 +40,13 @@ vuint8m8_t test_vrev8_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
   return __riscv_vrev8_v_u8m8_tu(vd, vs2, vl);
 }
 
-vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vrev8_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    size_t vl) {
   return __riscv_vrev8_v_u16mf4_tu(vd, vs2, vl);
 }
 
-vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vrev8_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    size_t vl) {
   return __riscv_vrev8_v_u16mf2_tu(vd, vs2, vl);
 }
 
@@ -64,7 +66,8 @@ vuint16m8_t test_vrev8_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vl) {
   return __riscv_vrev8_v_u16m8_tu(vd, vs2, vl);
 }
 
-vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vrev8_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    size_t vl) {
   return __riscv_vrev8_v_u32mf2_tu(vd, vs2, vl);
 }
 
@@ -100,266 +103,332 @@ vuint64m8_t test_vrev8_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vl) {
   return __riscv_vrev8_v_u64m8_tu(vd, vs2, vl);
 }
 
-vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vrev8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, size_t vl) {
   return __riscv_vrev8_v_u8mf8_tum(vm, vd, vs2, vl);
 }
 
-vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vrev8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, size_t vl) {
   return __riscv_vrev8_v_u8mf4_tum(vm, vd, vs2, vl);
 }
 
-vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vrev8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, size_t vl) {
   return __riscv_vrev8_v_u8mf2_tum(vm, vd, vs2, vl);
 }
 
-vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vrev8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 size_t vl) {
   return __riscv_vrev8_v_u8m1_tum(vm, vd, vs2, vl);
 }
 
-vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vrev8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 size_t vl) {
   return __riscv_vrev8_v_u8m2_tum(vm, vd, vs2, vl);
 }
 
-vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vrev8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 size_t vl) {
   return __riscv_vrev8_v_u8m4_tum(vm, vd, vs2, vl);
 }
 
-vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vrev8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 size_t vl) {
   return __riscv_vrev8_v_u8m8_tum(vm, vd, vs2, vl);
 }
 
-vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t vm,
vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vrev8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vrev8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vrev8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vrev8_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vrev8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vrev8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vrev8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vrev8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vrev8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vrev8_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vrev8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vrev8_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vrev8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vrev8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vrev8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vrev8_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vrev8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vrev8_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vrev8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vrev8_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t 
test_vrev8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vrev8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vrev8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vrev8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vrev8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vrev8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vrev8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vrev8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vrev8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vrev8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vrev8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vrev8_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vrev8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vrev8_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vrev8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vrev8_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vrev8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vrev8_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vrev8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t 
vl) { return __riscv_vrev8_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vrev8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vrev8_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vrev8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vrev8_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vrev8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vrev8_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vrev8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vrev8_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vrev8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vrev8_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vrev8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vrev8_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vrev8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vrev8_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vrev8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vrev8_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vrev8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vrev8_v_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vrev8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vrev8_v_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vrev8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vrev8_v_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vrev8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vrev8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vrev8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, 
vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vrev8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vrev8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vrev8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vrev8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vrev8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vrev8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vrev8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vrev8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vrev8_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vrev8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vrev8_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vrev8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vrev8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vrev8_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vrev8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vrev8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vrev8_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vrev8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vrev8_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vrev8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t vl) { 
return __riscv_vrev8_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vrev8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vrev8_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c index 0dacd5b3e..f87baacb1 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c @@ -12,706 +12,915 @@ #include -vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrol_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vrol_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrol_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vrol_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrol_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vrol_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrol_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vrol_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrol_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vrol_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrol_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vrol_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t 
test_vrol_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrol_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vrol_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrol_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vrol_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrol_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vrol_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrol_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrol_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vrol_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrol_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrol_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vrol_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrol_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrol_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vrol_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrol_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrol_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vrol_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { 
+vuint32mf2_t test_vrol_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vrol_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vrol_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrol_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vrol_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vrol_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrol_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vrol_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vrol_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrol_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vrol_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vrol_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrol_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vrol_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vrol_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrol_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vrol_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vrol_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrol_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vrol_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vrol_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrol_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t 
vl) { return __riscv_vrol_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vrol_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vrol_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vrol_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vrol_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vrol_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrol_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vrol_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrol_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vrol_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrol_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vrol_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrol_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrol_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vrol_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrol_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vrol_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vrol_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m2_tum(vm, vd, vs2, rs1, 
vl); } -vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrol_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrol_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vrol_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrol_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrol_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vrol_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vrol_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vrol_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrol_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vrol_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrol_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrol_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vrol_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrol_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, 
vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrol_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vrol_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrol_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrol_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vrol_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vrol_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrol_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vrol_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrol_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vrol_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrol_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vrol_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vrol_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrol_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vrol_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t vm, vuint32m8_t 
vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vrol_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrol_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vrol_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrol_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vrol_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrol_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vrol_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vrol_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vrol_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vrol_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrol_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vrol_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrol_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vrol_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return 
__riscv_vrol_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrol_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vrol_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrol_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrol_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vrol_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrol_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vrol_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vrol_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrol_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrol_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vrol_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrol_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrol_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vrol_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vrol_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t 
test_vrol_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vrol_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrol_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vrol_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrol_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vrol_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrol_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vrol_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrol_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vrol_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vrol_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrol_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return 
__riscv_vrol_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vrol_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrol_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vrol_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrol_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vrol_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrol_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vrol_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrol_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vrol_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrol_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vrol_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrol_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t 
rs1, size_t vl) { +vuint64m4_t test_vrol_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vrol_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrol_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vrol_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrol_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vrol_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrol_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vrol_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrol_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vrol_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vrol_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrol_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrol_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vrol_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrol_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vrol_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vrol_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, 
vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrol_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrol_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vrol_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrol_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrol_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vrol_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrol_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vrol_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrol_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vrol_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrol_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrol_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vrol_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrol_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrol_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vrol_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrol_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrol_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } 
-vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vrol_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrol_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrol_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vrol_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrol_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrol_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vrol_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vrol_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrol_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vrol_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vrol_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrol_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vrol_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vrol_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrol_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vrol_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vrol_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vrol_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrol_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vrol_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vrol_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t 
vl) {
   return __riscv_vrol_vx_u32m8_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vrol_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  vuint64m1_t vs1, size_t vl) {
   return __riscv_vrol_vv_u64m1_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vrol_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vrol_vx_u64m1_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vrol_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  vuint64m2_t vs1, size_t vl) {
   return __riscv_vrol_vv_u64m2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vrol_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vrol_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vrol_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  vuint64m4_t vs1, size_t vl) {
   return __riscv_vrol_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vrol_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vrol_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vrol_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  vuint64m8_t vs1, size_t vl) {
   return __riscv_vrol_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vrol_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vrol_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
 
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c
index c28fb02ee..5fd654a54 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c
@@ -12,706 +12,915 @@
 #include <riscv_vector.h>
 
-vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vror_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  vuint8mf8_t vs1, size_t vl) {
   return __riscv_vror_vv_u8mf8_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+vuint8mf8_t test_vror_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1,
+                                  size_t vl) {
   return __riscv_vror_vx_u8mf8_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vror_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  vuint8mf4_t vs1, size_t vl) {
   return __riscv_vror_vv_u8mf4_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+vuint8mf4_t test_vror_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1,
+                                  size_t vl) {
   return __riscv_vror_vx_u8mf4_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vror_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  vuint8mf2_t vs1, size_t vl) {
   return __riscv_vror_vv_u8mf2_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+vuint8mf2_t test_vror_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1,
+                                  size_t vl) {
   return __riscv_vror_vx_u8mf2_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vror_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1,
+                                size_t vl) {
   return __riscv_vror_vv_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+vuint8m1_t test_vror_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1,
+                                size_t vl) {
   return __riscv_vror_vx_u8m1_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint8m2_t test_vror_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1,
+                                size_t vl) {
   return __riscv_vror_vv_u8m2_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+vuint8m2_t test_vror_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1,
+                                size_t vl) {
   return __riscv_vror_vx_u8m2_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint8m4_t test_vror_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1,
+                                size_t vl) {
   return __riscv_vror_vv_u8m4_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+vuint8m4_t test_vror_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1,
+                                size_t vl) {
   return __riscv_vror_vx_u8m4_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vuint8m8_t test_vror_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1,
+                                size_t vl) {
   return __riscv_vror_vv_u8m8_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) {
+vuint8m8_t test_vror_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1,
+                                size_t vl) {
   return __riscv_vror_vx_u8m8_tu(vd, vs2, rs1, vl);
 }
 
-vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint16mf4_t test_vror_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    vuint16mf4_t vs1, size_t vl) {
   return __riscv_vror_vv_u16mf4_tu(vd, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vror_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    size_t rs1, size_t vl) {
   return __riscv_vror_vx_u16mf4_tu(vd, vs2, rs1, vl);
 }
 
-vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint16mf2_t test_vror_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    vuint16mf2_t vs1, size_t vl) {
   return __riscv_vror_vv_u16mf2_tu(vd, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vror_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    size_t rs1, size_t vl) {
   return __riscv_vror_vx_u16mf2_tu(vd, vs2, rs1, vl);
 }
 
-vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1,
size_t vl) { +vuint16m1_t test_vror_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vror_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vror_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vror_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vror_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vror_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vror_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vror_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vror_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vror_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vror_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vror_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vror_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vror_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vror_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vror_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vror_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vror_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vror_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vror_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vror_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + 
vuint32m4_t vs1, size_t vl) { return __riscv_vror_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vror_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vror_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vror_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vror_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vror_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vror_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vror_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vror_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vror_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vror_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vror_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vror_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vror_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vror_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vror_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vror_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vror_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vror_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vror_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vror_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, 
vuint8mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vror_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vror_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vror_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vror_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vror_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vror_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vror_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vror_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vror_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vror_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vror_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vror_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vror_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vror_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vror_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vror_vx_u16mf4_tum(vbool64_t vm, 
vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vror_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vror_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vror_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vror_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vror_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vror_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vror_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vror_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vror_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vror_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vror_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vror_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vror_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vror_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t 
test_vror_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vror_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vror_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vror_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vror_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vror_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vror_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vror_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vror_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vror_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vror_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vror_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vror_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vror_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vror_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vror_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vror_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vror_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vror_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vror_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t 
vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vror_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vror_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vror_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vror_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vror_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vror_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vror_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vror_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vror_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vror_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vror_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vror_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vror_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vror_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vror_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vror_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, 
size_t vl) { +vuint8m2_t test_vror_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vror_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vror_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vror_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vror_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vror_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vror_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vror_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vror_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vror_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vror_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vror_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vror_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vror_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vror_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return 
__riscv_vror_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vror_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vror_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vror_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vror_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vror_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vror_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vror_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vror_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vror_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vror_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vror_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vror_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vror_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vror_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vror_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vror_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, 
vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vror_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vror_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vror_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vror_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vror_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vror_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vror_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vror_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vror_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vror_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vror_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vror_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vror_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vror_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vror_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vror_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vror_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vror_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vror_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t 
vl) { return __riscv_vror_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vror_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vror_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vror_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vror_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vror_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vror_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vror_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vror_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vror_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vror_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vror_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vror_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vror_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vror_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vror_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vror_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vror_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vror_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return 
__riscv_vror_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vror_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vror_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vror_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vror_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vror_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vror_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vror_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vror_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vror_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vror_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vror_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vror_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vror_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vror_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vror_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vror_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vror_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vror_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vror_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t 
test_vror_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vror_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vror_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vror_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vror_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vror_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vror_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vror_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vror_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vror_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vror_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vror_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vror_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vror_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vror_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vror_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vror_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vror_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vror_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vror_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, 
vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vror_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vror_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vror_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vror_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vror_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c index 97c413c75..25b773014 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c @@ -12,38 +12,47 @@ #include -vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ch_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2ch_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ch_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ch_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ch_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ch_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ch_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ch_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ch_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ch_vv_u64m8_tu(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c 
b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c index 8f43c4416..e12c2dacd 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c @@ -12,38 +12,47 @@ #include -vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2cl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2cl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2cl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2cl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2cl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2cl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2cl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2cl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2cl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2cl_vv_u64m8_tu(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c index bb48799a5..f438925f4 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c @@ -12,38 +12,47 @@ #include -vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsha2ms_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsha2ms_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t 
test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsha2ms_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsha2ms_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsha2ms_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsha2ms_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsha2ms_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsha2ms_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsha2ms_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsha2ms_vv_u64m8_tu(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c index ccf8caa8b..8e783a3d3 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c @@ -12,7 +12,8 @@ #include -vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm3c_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm3c_vi_u32mf2_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c index 3ebf605aa..c651f5ee9 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c @@ -12,22 +12,27 @@ #include -vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsm3me_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsm3me_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsm3me_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t 
test_vsm3me_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsm3me_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsm3me_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsm3me_vv_u32m8_tu(vd, vs2, vs1, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c index 8f353c311..bc4ce8981 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c @@ -12,7 +12,8 @@ #include -vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4k_vi_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4k_vi_u32mf2_tu(vd, vs2, 0, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c index 06f9b3ffc..7f9a4a749 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c @@ -12,27 +12,33 @@ #include -vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vv_u32mf2_tu(vd, vs2, vl); } -vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vsm4r_vs_u32mf2_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32mf2_u32m2_tu(vuint32m2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32mf2_u32m4_tu(vuint32m4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32mf2_u32m8_tu(vuint32m8_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32mf2_u32m8_tu(vd, vs2, vl); } @@ -40,19 +46,23 @@ vuint32m1_t test_vsm4r_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m1_tu(vd, vs2, vl); } -vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vsm4r_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m1_u32m2_tu(vuint32m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, size_t vl) { 
+vuint32m4_t test_vsm4r_vs_u32m1_u32m4_tu(vuint32m4_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m1_u32m8_tu(vuint32m8_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m1_u32m8_tu(vd, vs2, vl); } @@ -60,15 +70,18 @@ vuint32m2_t test_vsm4r_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m2_tu(vd, vs2, vl); } -vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vsm4r_vs_u32m2_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m2_u32m4_tu(vuint32m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m2_u32m8_tu(vuint32m8_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m2_u32m8_tu(vd, vs2, vl); } @@ -76,11 +89,13 @@ vuint32m4_t test_vsm4r_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { return __riscv_vsm4r_vv_u32m4_tu(vd, vs2, vl); } -vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vsm4r_vs_u32m4_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m4_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint32m8_t test_vsm4r_vs_u32m4_u32m8_tu(vuint32m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vsm4r_vs_u32m4_u32m8_tu(vd, vs2, vl); } diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c index 63da91ed1..07784ec85 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c @@ -12,482 +12,639 @@ #include -vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, 
size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tu(vuint32m4_t 
vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, 
vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, 
size_t vl) { return __riscv_vwsll_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t 
vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_tumu(vbool2_t vm, 
vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vwsll_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return 
__riscv_vwsll_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vwsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vwsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t vm, 
vuint16m1_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vwsll_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vwsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vwsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vwsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vwsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vwsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vwsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return 
__riscv_vwsll_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vwsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vwsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vwsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vwsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwsll_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vwsll_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwsll_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vwsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vwsll_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } From 06a64adb4ad5899a61668e88aa9face4dbd331a1 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Thu, 13 Jun 2024 23:33:55 -0700 
Subject: [PATCH 097/151] makefile: add clang-format for all build targets

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/Makefile | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/rvv-intrinsic-generator/Makefile b/rvv-intrinsic-generator/Makefile
index a8701114f..1532a5fa2 100644
--- a/rvv-intrinsic-generator/Makefile
+++ b/rvv-intrinsic-generator/Makefile
@@ -176,31 +176,43 @@ gen-gnu-test: gnu-overloaded-test gnu-non-overloaded-test
 non-overloaded-doc:
 	$(call gen_doc,$(DIR),intrinsic_funcs.adoc,$@,$(EXTRA_FLAG))
 	$(call gen_doc,$(POLICY_DIR),intrinsic_funcs.adoc,$@,--has-policy $(EXTRA_FLAG))
+	$(call clang_format_adoc, --file, $(DIR)/intrinsic_funcs.adoc)
+	$(call clang_format_adoc, --file, $(POLICY_DIR)/intrinsic_funcs.adoc)
 
 # Generate grouped documents for non-overloaded intrinsics
 non-overloaded-docs:
 	$(call gen_docs,$(DIR),intrinsic_funcs,$@,$(EXTRA_FLAG))
 	$(call gen_docs,$(POLICY_DIR),intrinsic_funcs,$@,--has-policy $(EXTRA_FLAG))
+	$(call clang_format_adoc, --folder, $(DIR)/intrinsic_funcs)
+	$(call clang_format_adoc, --folder, $(POLICY_DIR)/intrinsic_funcs)
 
 # Generate all-in-one document for overloaded intrinsics
 overloaded-doc:
 	$(call gen_doc,$(DIR),overloaded_intrinsic_funcs.adoc,$@,$(EXTRA_FLAG))
 	$(call gen_doc,$(POLICY_DIR),overloaded_intrinsic_funcs.adoc,$@,--has-policy $(EXTRA_FLAG))
+	$(call clang_format_adoc, --file, $(DIR)/overloaded_intrinsic_funcs.adoc)
+	$(call clang_format_adoc, --file, $(POLICY_DIR)/overloaded_intrinsic_funcs.adoc)
 
 # Generate grouped documents for overloaded intrinsics
 overloaded-docs:
 	$(call gen_docs,$(DIR),overloaded_intrinsic_funcs,$@,$(EXTRA_FLAG))
 	$(call gen_docs,$(POLICY_DIR),overloaded_intrinsic_funcs,$@,--has-policy $(EXTRA_FLAG))
+	$(call clang_format_adoc, --folder, $(DIR)/overloaded_intrinsic_funcs)
+	$(call clang_format_adoc, --folder, $(POLICY_DIR)/overloaded_intrinsic_funcs)
 
 # Generate non-overloaded intrinsic testing C source files
 non-overloaded-test:
 	$(call gen_tests,$(DIR)/api-testing,non-overloaded-test,$(EXTRA_FLAG))
 	$(call gen_tests,$(POLICY_DIR)/api-testing,non-overloaded-test,--has-policy $(EXTRA_FLAG))
+	clang-format -i $(DIR)/api-testing/*
+	clang-format -i $(POLICY_DIR)/api-testing/*
 
 # Generate overloaded intrinsic testing C source files
 overloaded-test:
 	$(call gen_tests,$(DIR)/overloaded-api-testing,overloaded-test,$(EXTRA_FLAG))
 	$(call gen_tests,$(POLICY_DIR)/overloaded-api-testing,overloaded-test,--has-policy $(EXTRA_FLAG))
+	clang-format -i $(DIR)/overloaded-api-testing/*
+	clang-format -i $(POLICY_DIR)/overloaded-api-testing/*
 
 # Generate non-overloaded intrinsic testing C source files
 llvm-non-overloaded-test:
@@ -347,15 +359,19 @@ vector-crypto-llvm-overloaded-test:
 # Generate the adaptor header for v0.10
 non-policy-compatible-header:
 	$(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,non-policy.h,non-overloaded-compatible-header,$(EXTRA_FLAG))
+	clang-format -i $(DIR)/rvv-v0p10-compatible-headers/non-policy.h
+
 policy-compatible-header:
 	$(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,policy.h,non-overloaded-compatible-header,--has-policy $(EXTRA_FLAG))
+	clang-format -i $(DIR)/rvv-v0p10-compatible-headers/policy.h
 
 non-policy-overloaded-compatible-header:
 	$(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,overloaded-non-policy.h,overloaded-compatible-header,$(EXTRA_FLAG))
+	clang-format -i $(DIR)/rvv-v0p10-compatible-headers/overloaded-non-policy.h
 
 policy-overloaded-compatible-header:
 	$(call gen_doc,$(DIR)/rvv-v0p10-compatible-headers,overloaded-policy.h,overloaded-compatible-header,--has-policy $(EXTRA_FLAG))
-
+	clang-format -i $(DIR)/rvv-v0p10-compatible-headers/overloaded-policy.h
 
 ###############################################################################
 # Auto-generated Document / Test Targets

From ecf49acdfdb952d108fc239f9d91cfd49ddc8edf Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Wed, 19 Jun 2024 13:31:45 +0800
Subject: [PATCH 098/151] Grouper: implement dummy inst_group_[pro|epi]logue

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index e88a1c8d7..165953bd2 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -688,6 +688,12 @@ def start_group(self, group_name):
     if group_name not in self.groups:
       self.groups[group_name] = []
 
+  def inst_group_prologue(self):
+    return ""
+
+  def inst_group_epilogue(self):
+    return ""
+
   def func(self, inst_info, name, return_type, **kwargs):
     func_name = Generator.func_name(name)

From 90e9650258862b695b4bef786cdf779f29718a74 Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Wed, 19 Jun 2024 13:32:03 +0800
Subject: [PATCH 099/151] Grouper: implement dummy write

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
index 165953bd2..e3ac88487 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
@@ -694,6 +694,9 @@ def inst_group_prologue(self):
   def inst_group_epilogue(self):
     return ""
 
+  def write(self, text):
+    pass
+
   def func(self, inst_info, name, return_type, **kwargs):
     func_name = Generator.func_name(name)

From 0968390e437ca475fbe2c1a2181bc21a9982fec3 Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Tue, 16 Jul 2024 14:32:57 +0800
Subject: [PATCH 100/151] makefile: let replace_float compatible with macOS builtin BSD sed (#346)

---
 rvv-intrinsic-generator/Makefile | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/rvv-intrinsic-generator/Makefile b/rvv-intrinsic-generator/Makefile
index 1532a5fa2..211095f18 100644
--- a/rvv-intrinsic-generator/Makefile
+++ b/rvv-intrinsic-generator/Makefile
@@ -31,10 +31,17 @@ __check_defined = \
       $(error Undefined $1$(if $2, ($2))))
 
 # Replace softfloat float-point types with LLVM compatible floating-point types
+# macOS uses BSD sed
+ifeq ($(shell uname), Darwin)
+  SED_CMD = sed -i ''
+else
+  SED_CMD = sed -i
+endif
+
 replace_float = \
-	sed -i 's/float16_t/_Float16/g' $(1)/*; \
-	sed -i 's/float32_t/float/g' $(1)/*; \
-	sed -i 's/float64_t/double/g' $(1)/*
+	$(SED_CMD) 's/float16_t/_Float16/g' $(1)/*; \
+	$(SED_CMD) 's/float32_t/float/g' $(1)/*; \
+	$(SED_CMD) 's/float64_t/double/g' $(1)/*
 
 ###############################################################################
 # Variables

From f487f82fe5cca1c12d437b561a4db9dd02d3508c Mon Sep 17 00:00:00 2001
From: Brandon Wu
Date: Fri, 2 Aug 2024 06:05:37 -0700
Subject: [PATCH 101/151] Correct the Makefile in rvv-intrinsic-generator

---
 rvv-intrinsic-generator/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/rvv-intrinsic-generator/Makefile b/rvv-intrinsic-generator/Makefile
index 211095f18..45f165870 100644
--- a/rvv-intrinsic-generator/Makefile
+++ b/rvv-intrinsic-generator/Makefile
@@ -228,7 +228,7 @@ llvm-non-overloaded-test:
 	$(call replace_float, $(DIR)/llvm-api-tests)
 	$(call replace_float, $(POLICY_DIR)/llvm-api-tests)
 	clang-format -i $(DIR)/llvm-api-tests/*
-	clang-format -i $(POLICY_DIR)/overloaded-api-testing/*
+	clang-format -i $(POLICY_DIR)/llvm-api-tests/*
 
 # Generate overloaded intrinsic testing C source files
 llvm-overloaded-test:
@@ -296,7 +296,7 @@ bf16-llvm-non-overloaded-test:
 	$(call replace_float, $(BF16_DIR)/llvm-api-tests)
 	$(call replace_float, $(BF16_POLICY_DIR)/llvm-api-tests)
 	clang-format -i $(BF16_DIR)/llvm-api-tests/*
-	clang-format -i $(BF16_POLICY_DIR)/overloaded-api-testing/*
+	clang-format -i $(BF16_POLICY_DIR)/llvm-api-tests/*
 
 # Generate overloaded intrinsic testing C source files
 bf16-llvm-overloaded-test:

From 93912ffa6196a1b20b0b9f906cbcaaba600e5383 Mon Sep 17 00:00:00 2001
From: Brandon Wu
Date: Fri, 2 Aug 2024 06:07:48 -0700
Subject: [PATCH 102/151] [Auto-gen] Update tests under ../auto-generated.
 (make git-commit-autogen-test)

---
 .../policy_funcs/llvm-api-tests/vaadd.c | 537 ++++--
 .../policy_funcs/llvm-api-tests/vaaddu.c | 603 ++++--
 .../policy_funcs/llvm-api-tests/vadc.c | 267 ++-
 .../policy_funcs/llvm-api-tests/vadd.c | 1104 +++++++---
 .../policy_funcs/llvm-api-tests/vand.c | 1104 +++++++---
 .../policy_funcs/llvm-api-tests/vasub.c | 537 ++++--
 .../policy_funcs/llvm-api-tests/vasubu.c | 603 ++++--
 .../policy_funcs/llvm-api-tests/vcompress.c | 177 +-
 .../policy_funcs/llvm-api-tests/vdiv.c | 537 ++++--
 .../policy_funcs/llvm-api-tests/vdivu.c | 585 ++++--
 .../policy_funcs/llvm-api-tests/vfabs.c | 180 +-
 .../policy_funcs/llvm-api-tests/vfadd.c | 867 ++++++---
 .../policy_funcs/llvm-api-tests/vfclass.c | 180 +-
 .../policy_funcs/llvm-api-tests/vfcvt.c | 1440 ++++++++++----
 .../policy_funcs/llvm-api-tests/vfcvt_rtz.c | 360 ++--
 .../policy_funcs/llvm-api-tests/vfdiv.c | 867 ++++++---
 .../policy_funcs/llvm-api-tests/vfmacc.c | 892 ++++++---
 .../policy_funcs/llvm-api-tests/vfmadd.c | 892 ++++++---
 .../policy_funcs/llvm-api-tests/vfmax.c | 421 +++--
 .../policy_funcs/llvm-api-tests/vfmerge.c | 47 +-
 .../policy_funcs/llvm-api-tests/vfmin.c | 421 +++--
 .../policy_funcs/llvm-api-tests/vfmsac.c | 892 ++++++---
 .../policy_funcs/llvm-api-tests/vfmsub.c | 892 ++++++---
 .../policy_funcs/llvm-api-tests/vfmul.c | 867 ++++++---
 .../policy_funcs/llvm-api-tests/vfmv.c | 12 +-
 .../policy_funcs/llvm-api-tests/vfncvt.c | 1359 +++++++-----
 .../policy_funcs/llvm-api-tests/vfncvt_rod.c | 108 +-
 .../policy_funcs/llvm-api-tests/vfncvt_rtz.c | 360 ++--
 .../policy_funcs/llvm-api-tests/vfneg.c | 180 +-
 .../policy_funcs/llvm-api-tests/vfnmacc.c | 993 +++++++--
 .../policy_funcs/llvm-api-tests/vfnmadd.c | 993 +++++++--
 .../policy_funcs/llvm-api-tests/vfnmsac.c | 993 +++++++--
 .../policy_funcs/llvm-api-tests/vfnmsub.c | 993 +++++++--
 .../policy_funcs/llvm-api-tests/vfrdiv.c | 439 +++--
 .../policy_funcs/llvm-api-tests/vfrec7.c | 360 ++--
 .../policy_funcs/llvm-api-tests/vfredmax.c | 108 +-
 .../policy_funcs/llvm-api-tests/vfredmin.c | 108 +-
 .../policy_funcs/llvm-api-tests/vfredosum.c | 324 +++-
 .../policy_funcs/llvm-api-tests/vfredusum.c | 324 +++-
 .../policy_funcs/llvm-api-tests/vfrsqrt7.c | 180 +-
 .../policy_funcs/llvm-api-tests/vfrsub.c | 439 +++--
 .../policy_funcs/llvm-api-tests/vfsgnj.c | 430 +++--
 .../policy_funcs/llvm-api-tests/vfsgnjn.c | 438 +++--
.../policy_funcs/llvm-api-tests/vfsgnjx.c | 438 +++-- .../llvm-api-tests/vfslide1down.c | 228 ++- .../policy_funcs/llvm-api-tests/vfslide1up.c | 225 ++- .../policy_funcs/llvm-api-tests/vfsqrt.c | 360 ++-- .../policy_funcs/llvm-api-tests/vfsub.c | 867 ++++++--- .../policy_funcs/llvm-api-tests/vfwadd.c | 1069 ++++++++--- .../policy_funcs/llvm-api-tests/vfwcvt.c | 900 ++++++--- .../policy_funcs/llvm-api-tests/vfwcvt_rtz.c | 216 ++- .../policy_funcs/llvm-api-tests/vfwmacc.c | 591 ++++-- .../policy_funcs/llvm-api-tests/vfwmsac.c | 591 ++++-- .../policy_funcs/llvm-api-tests/vfwmul.c | 535 ++++-- .../policy_funcs/llvm-api-tests/vfwnmacc.c | 649 +++++-- .../policy_funcs/llvm-api-tests/vfwnmsac.c | 649 +++++-- .../policy_funcs/llvm-api-tests/vfwredosum.c | 278 ++- .../policy_funcs/llvm-api-tests/vfwredusum.c | 278 ++- .../policy_funcs/llvm-api-tests/vfwsub.c | 1069 ++++++++--- .../policy_funcs/llvm-api-tests/viota.c | 198 +- .../policy_funcs/llvm-api-tests/vle16.c | 204 +- .../policy_funcs/llvm-api-tests/vle16ff.c | 270 ++- .../policy_funcs/llvm-api-tests/vle32.c | 168 +- .../policy_funcs/llvm-api-tests/vle32ff.c | 225 ++- .../policy_funcs/llvm-api-tests/vle64.c | 132 +- .../policy_funcs/llvm-api-tests/vle64ff.c | 180 +- .../policy_funcs/llvm-api-tests/vle8.c | 135 +- .../policy_funcs/llvm-api-tests/vle8ff.c | 194 +- .../policy_funcs/llvm-api-tests/vloxei16.c | 849 ++++++--- .../policy_funcs/llvm-api-tests/vloxei32.c | 776 +++++--- .../policy_funcs/llvm-api-tests/vloxei64.c | 658 +++++-- .../policy_funcs/llvm-api-tests/vloxei8.c | 873 ++++++--- .../llvm-api-tests/vloxseg2ei16.c | 762 ++++++-- .../llvm-api-tests/vloxseg2ei32.c | 732 ++++++-- .../llvm-api-tests/vloxseg2ei64.c | 655 +++++-- .../policy_funcs/llvm-api-tests/vloxseg2ei8.c | 756 ++++++-- .../llvm-api-tests/vloxseg3ei16.c | 591 ++++-- .../llvm-api-tests/vloxseg3ei32.c | 591 ++++-- .../llvm-api-tests/vloxseg3ei64.c | 561 ++++-- .../policy_funcs/llvm-api-tests/vloxseg3ei8.c | 585 ++++-- .../llvm-api-tests/vloxseg4ei16.c | 591 ++++-- .../llvm-api-tests/vloxseg4ei32.c | 591 ++++-- .../llvm-api-tests/vloxseg4ei64.c | 561 ++++-- .../policy_funcs/llvm-api-tests/vloxseg4ei8.c | 585 ++++-- .../llvm-api-tests/vloxseg5ei16.c | 420 +++-- .../llvm-api-tests/vloxseg5ei32.c | 420 +++-- .../llvm-api-tests/vloxseg5ei64.c | 420 +++-- .../policy_funcs/llvm-api-tests/vloxseg5ei8.c | 414 +++-- .../llvm-api-tests/vloxseg6ei16.c | 420 +++-- .../llvm-api-tests/vloxseg6ei32.c | 420 +++-- .../llvm-api-tests/vloxseg6ei64.c | 420 +++-- .../policy_funcs/llvm-api-tests/vloxseg6ei8.c | 414 +++-- .../llvm-api-tests/vloxseg7ei16.c | 420 +++-- .../llvm-api-tests/vloxseg7ei32.c | 420 +++-- .../llvm-api-tests/vloxseg7ei64.c | 420 +++-- .../policy_funcs/llvm-api-tests/vloxseg7ei8.c | 414 +++-- .../llvm-api-tests/vloxseg8ei16.c | 420 +++-- .../llvm-api-tests/vloxseg8ei32.c | 420 +++-- .../llvm-api-tests/vloxseg8ei64.c | 420 +++-- .../policy_funcs/llvm-api-tests/vloxseg8ei8.c | 414 +++-- .../policy_funcs/llvm-api-tests/vlse16.c | 270 ++- .../policy_funcs/llvm-api-tests/vlse32.c | 225 ++- .../policy_funcs/llvm-api-tests/vlse64.c | 180 +- .../policy_funcs/llvm-api-tests/vlse8.c | 184 +- .../policy_funcs/llvm-api-tests/vlseg2e16.c | 180 +- .../policy_funcs/llvm-api-tests/vlseg2e16ff.c | 241 ++- .../policy_funcs/llvm-api-tests/vlseg2e32.c | 144 +- .../policy_funcs/llvm-api-tests/vlseg2e32ff.c | 191 +- .../policy_funcs/llvm-api-tests/vlseg2e64.c | 108 +- .../policy_funcs/llvm-api-tests/vlseg2e64ff.c | 141 +- .../policy_funcs/llvm-api-tests/vlseg2e8.c | 144 +- 
.../policy_funcs/llvm-api-tests/vlseg2e8ff.c | 180 +- .../policy_funcs/llvm-api-tests/vlseg3e16.c | 144 +- .../policy_funcs/llvm-api-tests/vlseg3e16ff.c | 194 +- .../policy_funcs/llvm-api-tests/vlseg3e32.c | 108 +- .../policy_funcs/llvm-api-tests/vlseg3e32ff.c | 144 +- .../policy_funcs/llvm-api-tests/vlseg3e64.c | 72 +- .../policy_funcs/llvm-api-tests/vlseg3e64ff.c | 94 +- .../policy_funcs/llvm-api-tests/vlseg3e8.c | 120 +- .../policy_funcs/llvm-api-tests/vlseg3e8ff.c | 150 +- .../policy_funcs/llvm-api-tests/vlseg4e16.c | 144 +- .../policy_funcs/llvm-api-tests/vlseg4e16ff.c | 194 +- .../policy_funcs/llvm-api-tests/vlseg4e32.c | 108 +- .../policy_funcs/llvm-api-tests/vlseg4e32ff.c | 144 +- .../policy_funcs/llvm-api-tests/vlseg4e64.c | 72 +- .../policy_funcs/llvm-api-tests/vlseg4e64ff.c | 94 +- .../policy_funcs/llvm-api-tests/vlseg4e8.c | 120 +- .../policy_funcs/llvm-api-tests/vlseg4e8ff.c | 150 +- .../policy_funcs/llvm-api-tests/vlseg5e16.c | 108 +- .../policy_funcs/llvm-api-tests/vlseg5e16ff.c | 147 +- .../policy_funcs/llvm-api-tests/vlseg5e32.c | 72 +- .../policy_funcs/llvm-api-tests/vlseg5e32ff.c | 97 +- .../policy_funcs/llvm-api-tests/vlseg5e64.c | 36 +- .../policy_funcs/llvm-api-tests/vlseg5e64ff.c | 47 +- .../policy_funcs/llvm-api-tests/vlseg5e8.c | 96 +- .../policy_funcs/llvm-api-tests/vlseg5e8ff.c | 120 +- .../policy_funcs/llvm-api-tests/vlseg6e16.c | 108 +- .../policy_funcs/llvm-api-tests/vlseg6e16ff.c | 147 +- .../policy_funcs/llvm-api-tests/vlseg6e32.c | 72 +- .../policy_funcs/llvm-api-tests/vlseg6e32ff.c | 97 +- .../policy_funcs/llvm-api-tests/vlseg6e64.c | 36 +- .../policy_funcs/llvm-api-tests/vlseg6e64ff.c | 47 +- .../policy_funcs/llvm-api-tests/vlseg6e8.c | 96 +- .../policy_funcs/llvm-api-tests/vlseg6e8ff.c | 120 +- .../policy_funcs/llvm-api-tests/vlseg7e16.c | 108 +- .../policy_funcs/llvm-api-tests/vlseg7e16ff.c | 147 +- .../policy_funcs/llvm-api-tests/vlseg7e32.c | 72 +- .../policy_funcs/llvm-api-tests/vlseg7e32ff.c | 97 +- .../policy_funcs/llvm-api-tests/vlseg7e64.c | 36 +- .../policy_funcs/llvm-api-tests/vlseg7e64ff.c | 47 +- .../policy_funcs/llvm-api-tests/vlseg7e8.c | 96 +- .../policy_funcs/llvm-api-tests/vlseg7e8ff.c | 120 +- .../policy_funcs/llvm-api-tests/vlseg8e16.c | 108 +- .../policy_funcs/llvm-api-tests/vlseg8e16ff.c | 147 +- .../policy_funcs/llvm-api-tests/vlseg8e32.c | 72 +- .../policy_funcs/llvm-api-tests/vlseg8e32ff.c | 97 +- .../policy_funcs/llvm-api-tests/vlseg8e64.c | 36 +- .../policy_funcs/llvm-api-tests/vlseg8e64ff.c | 47 +- .../policy_funcs/llvm-api-tests/vlseg8e8.c | 96 +- .../policy_funcs/llvm-api-tests/vlseg8e8ff.c | 120 +- .../policy_funcs/llvm-api-tests/vlsseg2e16.c | 239 ++- .../policy_funcs/llvm-api-tests/vlsseg2e32.c | 187 +- .../policy_funcs/llvm-api-tests/vlsseg2e64.c | 141 +- .../policy_funcs/llvm-api-tests/vlsseg2e8.c | 180 +- .../policy_funcs/llvm-api-tests/vlsseg3e16.c | 192 +- .../policy_funcs/llvm-api-tests/vlsseg3e32.c | 141 +- .../policy_funcs/llvm-api-tests/vlsseg3e64.c | 94 +- .../policy_funcs/llvm-api-tests/vlsseg3e8.c | 150 +- .../policy_funcs/llvm-api-tests/vlsseg4e16.c | 192 +- .../policy_funcs/llvm-api-tests/vlsseg4e32.c | 141 +- .../policy_funcs/llvm-api-tests/vlsseg4e64.c | 94 +- .../policy_funcs/llvm-api-tests/vlsseg4e8.c | 150 +- .../policy_funcs/llvm-api-tests/vlsseg5e16.c | 145 +- .../policy_funcs/llvm-api-tests/vlsseg5e32.c | 95 +- .../policy_funcs/llvm-api-tests/vlsseg5e64.c | 47 +- .../policy_funcs/llvm-api-tests/vlsseg5e8.c | 120 +- .../policy_funcs/llvm-api-tests/vlsseg6e16.c | 145 +- 
.../policy_funcs/llvm-api-tests/vlsseg6e32.c | 95 +- .../policy_funcs/llvm-api-tests/vlsseg6e64.c | 47 +- .../policy_funcs/llvm-api-tests/vlsseg6e8.c | 120 +- .../policy_funcs/llvm-api-tests/vlsseg7e16.c | 145 +- .../policy_funcs/llvm-api-tests/vlsseg7e32.c | 95 +- .../policy_funcs/llvm-api-tests/vlsseg7e64.c | 47 +- .../policy_funcs/llvm-api-tests/vlsseg7e8.c | 120 +- .../policy_funcs/llvm-api-tests/vlsseg8e16.c | 145 +- .../policy_funcs/llvm-api-tests/vlsseg8e32.c | 95 +- .../policy_funcs/llvm-api-tests/vlsseg8e64.c | 47 +- .../policy_funcs/llvm-api-tests/vlsseg8e8.c | 120 +- .../policy_funcs/llvm-api-tests/vluxei16.c | 849 ++++++--- .../policy_funcs/llvm-api-tests/vluxei32.c | 776 +++++--- .../policy_funcs/llvm-api-tests/vluxei64.c | 658 +++++-- .../policy_funcs/llvm-api-tests/vluxei8.c | 873 ++++++--- .../llvm-api-tests/vluxseg2ei16.c | 762 ++++++-- .../llvm-api-tests/vluxseg2ei32.c | 732 ++++++-- .../llvm-api-tests/vluxseg2ei64.c | 655 +++++-- .../policy_funcs/llvm-api-tests/vluxseg2ei8.c | 756 ++++++-- .../llvm-api-tests/vluxseg3ei16.c | 591 ++++-- .../llvm-api-tests/vluxseg3ei32.c | 591 ++++-- .../llvm-api-tests/vluxseg3ei64.c | 561 ++++-- .../policy_funcs/llvm-api-tests/vluxseg3ei8.c | 585 ++++-- .../llvm-api-tests/vluxseg4ei16.c | 591 ++++-- .../llvm-api-tests/vluxseg4ei32.c | 591 ++++-- .../llvm-api-tests/vluxseg4ei64.c | 561 ++++-- .../policy_funcs/llvm-api-tests/vluxseg4ei8.c | 585 ++++-- .../llvm-api-tests/vluxseg5ei16.c | 420 +++-- .../llvm-api-tests/vluxseg5ei32.c | 420 +++-- .../llvm-api-tests/vluxseg5ei64.c | 420 +++-- .../policy_funcs/llvm-api-tests/vluxseg5ei8.c | 414 +++-- .../llvm-api-tests/vluxseg6ei16.c | 420 +++-- .../llvm-api-tests/vluxseg6ei32.c | 420 +++-- .../llvm-api-tests/vluxseg6ei64.c | 420 +++-- .../policy_funcs/llvm-api-tests/vluxseg6ei8.c | 414 +++-- .../llvm-api-tests/vluxseg7ei16.c | 420 +++-- .../llvm-api-tests/vluxseg7ei32.c | 420 +++-- .../llvm-api-tests/vluxseg7ei64.c | 420 +++-- .../policy_funcs/llvm-api-tests/vluxseg7ei8.c | 414 +++-- .../llvm-api-tests/vluxseg8ei16.c | 420 +++-- .../llvm-api-tests/vluxseg8ei32.c | 420 +++-- .../llvm-api-tests/vluxseg8ei64.c | 420 +++-- .../policy_funcs/llvm-api-tests/vluxseg8ei8.c | 414 +++-- .../policy_funcs/llvm-api-tests/vmacc.c | 1122 +++++++---- .../policy_funcs/llvm-api-tests/vmadd.c | 1122 +++++++---- .../policy_funcs/llvm-api-tests/vmax.c | 537 ++++-- .../policy_funcs/llvm-api-tests/vmaxu.c | 585 ++++-- .../policy_funcs/llvm-api-tests/vmerge.c | 330 ++-- .../policy_funcs/llvm-api-tests/vmfeq.c | 102 +- .../policy_funcs/llvm-api-tests/vmfge.c | 102 +- .../policy_funcs/llvm-api-tests/vmfgt.c | 102 +- .../policy_funcs/llvm-api-tests/vmfle.c | 102 +- .../policy_funcs/llvm-api-tests/vmflt.c | 102 +- .../policy_funcs/llvm-api-tests/vmfne.c | 102 +- .../policy_funcs/llvm-api-tests/vmin.c | 537 ++++-- .../policy_funcs/llvm-api-tests/vminu.c | 585 ++++-- .../policy_funcs/llvm-api-tests/vmsbf.c | 9 +- .../policy_funcs/llvm-api-tests/vmseq.c | 282 ++- .../policy_funcs/llvm-api-tests/vmsge.c | 135 +- .../policy_funcs/llvm-api-tests/vmsgeu.c | 153 +- .../policy_funcs/llvm-api-tests/vmsgt.c | 135 +- .../policy_funcs/llvm-api-tests/vmsgtu.c | 153 +- .../policy_funcs/llvm-api-tests/vmsif.c | 9 +- .../policy_funcs/llvm-api-tests/vmsle.c | 135 +- .../policy_funcs/llvm-api-tests/vmsleu.c | 153 +- .../policy_funcs/llvm-api-tests/vmslt.c | 135 +- .../policy_funcs/llvm-api-tests/vmsltu.c | 153 +- .../policy_funcs/llvm-api-tests/vmsne.c | 282 ++- .../policy_funcs/llvm-api-tests/vmsof.c | 9 +- 
.../policy_funcs/llvm-api-tests/vmul.c | 1104 +++++++---- .../policy_funcs/llvm-api-tests/vmulh.c | 537 ++++-- .../policy_funcs/llvm-api-tests/vmulhsu.c | 570 ++++-- .../policy_funcs/llvm-api-tests/vmulhu.c | 603 ++++-- .../policy_funcs/llvm-api-tests/vmv.c | 54 +- .../policy_funcs/llvm-api-tests/vnclip.c | 377 ++-- .../policy_funcs/llvm-api-tests/vnclipu.c | 408 ++-- .../policy_funcs/llvm-api-tests/vncvt.c | 321 ++-- .../policy_funcs/llvm-api-tests/vneg.c | 198 +- .../policy_funcs/llvm-api-tests/vnmsac.c | 1146 ++++++++---- .../policy_funcs/llvm-api-tests/vnmsub.c | 1146 ++++++++---- .../policy_funcs/llvm-api-tests/vnot.c | 405 ++-- .../policy_funcs/llvm-api-tests/vnsra.c | 371 ++-- .../policy_funcs/llvm-api-tests/vnsrl.c | 394 ++-- .../policy_funcs/llvm-api-tests/vor.c | 1083 +++++++---- .../policy_funcs/llvm-api-tests/vredand.c | 308 +++- .../policy_funcs/llvm-api-tests/vredmax.c | 154 +- .../policy_funcs/llvm-api-tests/vredmaxu.c | 154 +- .../policy_funcs/llvm-api-tests/vredmin.c | 154 +- .../policy_funcs/llvm-api-tests/vredminu.c | 154 +- .../policy_funcs/llvm-api-tests/vredor.c | 304 ++- .../policy_funcs/llvm-api-tests/vredsum.c | 308 +++- .../policy_funcs/llvm-api-tests/vredxor.c | 308 +++- .../policy_funcs/llvm-api-tests/vrem.c | 537 ++++-- .../policy_funcs/llvm-api-tests/vremu.c | 585 ++++-- .../policy_funcs/llvm-api-tests/vrgather.c | 1642 ++++++++++++----- .../llvm-api-tests/vrgatherei16.c | 855 ++++++--- .../policy_funcs/llvm-api-tests/vrsub.c | 537 ++++-- .../policy_funcs/llvm-api-tests/vsadd.c | 537 ++++-- .../policy_funcs/llvm-api-tests/vsaddu.c | 603 ++++-- .../policy_funcs/llvm-api-tests/vsbc.c | 267 ++- .../policy_funcs/llvm-api-tests/vsetvl.c | 88 +- .../policy_funcs/llvm-api-tests/vsetvlmax.c | 88 +- .../policy_funcs/llvm-api-tests/vsext_vf2.c | 144 +- .../policy_funcs/llvm-api-tests/vsext_vf4.c | 84 +- .../policy_funcs/llvm-api-tests/vsext_vf8.c | 36 +- .../policy_funcs/llvm-api-tests/vslide1down.c | 641 +++++-- .../policy_funcs/llvm-api-tests/vslide1up.c | 603 ++++-- .../policy_funcs/llvm-api-tests/vslidedown.c | 831 ++++++--- .../policy_funcs/llvm-api-tests/vslideup.c | 783 +++++--- .../policy_funcs/llvm-api-tests/vsll.c | 1098 +++++++---- .../policy_funcs/llvm-api-tests/vsmul.c | 537 ++++-- .../policy_funcs/llvm-api-tests/vsra.c | 537 ++++-- .../policy_funcs/llvm-api-tests/vsrl.c | 561 ++++-- .../policy_funcs/llvm-api-tests/vssra.c | 537 ++++-- .../policy_funcs/llvm-api-tests/vssrl.c | 579 ++++-- .../policy_funcs/llvm-api-tests/vssub.c | 537 ++++-- .../policy_funcs/llvm-api-tests/vssubu.c | 603 ++++-- .../policy_funcs/llvm-api-tests/vsub.c | 1104 +++++++---- .../policy_funcs/llvm-api-tests/vwadd.c | 738 +++++--- .../policy_funcs/llvm-api-tests/vwaddu.c | 841 ++++++--- .../policy_funcs/llvm-api-tests/vwcvt.c | 150 +- .../policy_funcs/llvm-api-tests/vwcvtu.c | 180 +- .../policy_funcs/llvm-api-tests/vwmacc.c | 378 ++-- .../policy_funcs/llvm-api-tests/vwmaccsu.c | 403 ++-- .../policy_funcs/llvm-api-tests/vwmaccu.c | 428 +++-- .../policy_funcs/llvm-api-tests/vwmaccus.c | 192 +- .../policy_funcs/llvm-api-tests/vwmul.c | 369 ++-- .../policy_funcs/llvm-api-tests/vwmulsu.c | 393 ++-- .../policy_funcs/llvm-api-tests/vwmulu.c | 419 +++-- .../policy_funcs/llvm-api-tests/vwredsum.c | 126 +- .../policy_funcs/llvm-api-tests/vwredsumu.c | 126 +- .../policy_funcs/llvm-api-tests/vwsub.c | 738 +++++--- .../policy_funcs/llvm-api-tests/vwsubu.c | 841 ++++++--- .../policy_funcs/llvm-api-tests/vxor.c | 1104 +++++++---- .../policy_funcs/llvm-api-tests/vzext_vf2.c | 171 +- 
 .../policy_funcs/llvm-api-tests/vzext_vf4.c | 102 +-
 .../policy_funcs/llvm-api-tests/vzext_vf8.c | 45 +-
 314 files changed, 86357 insertions(+), 34624 deletions(-)

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vaadd.c b/auto-generated/policy_funcs/llvm-api-tests/vaadd.c
index b6d6dcc2e..3029151d9 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vaadd.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vaadd.c
@@ -5,706 +5,891 @@
 #include <riscv_vector.h>
 
-vint8mf8_t test_vaadd_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vaadd_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1,
+                                  size_t vl) {
   return __riscv_vaadd_vv_i8mf8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8mf8_t test_vaadd_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint8mf8_t test_vaadd_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1,
+                                  size_t vl) {
   return __riscv_vaadd_vx_i8mf8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8mf4_t test_vaadd_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vaadd_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1,
+                                  size_t vl) {
   return __riscv_vaadd_vv_i8mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8mf4_t test_vaadd_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) {
+vint8mf4_t test_vaadd_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1,
+                                  size_t vl) {
   return __riscv_vaadd_vx_i8mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8mf2_t test_vaadd_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vaadd_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1,
+                                  size_t vl) {
   return __riscv_vaadd_vv_i8mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8mf2_t test_vaadd_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) {
+vint8mf2_t test_vaadd_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1,
+                                  size_t vl) {
   return __riscv_vaadd_vx_i8mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8m1_t test_vaadd_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vaadd_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1,
+                                size_t vl) {
   return __riscv_vaadd_vv_i8m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8m1_t test_vaadd_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) {
+vint8m1_t test_vaadd_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1,
+                                size_t vl) {
   return __riscv_vaadd_vx_i8m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8m2_t test_vaadd_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) {
+vint8m2_t test_vaadd_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1,
+                                size_t vl) {
   return __riscv_vaadd_vv_i8m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8m2_t test_vaadd_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) {
+vint8m2_t test_vaadd_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1,
+                                size_t vl) {
   return __riscv_vaadd_vx_i8m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8m4_t test_vaadd_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) {
+vint8m4_t test_vaadd_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1,
+                                size_t vl) {
   return __riscv_vaadd_vv_i8m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8m4_t test_vaadd_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) {
+vint8m4_t test_vaadd_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1,
+                                size_t vl) {
   return __riscv_vaadd_vx_i8m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint8m8_t
test_vaadd_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vaadd_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vaadd_vv_i8m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vaadd_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vaadd_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vaadd_vx_i8m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vaadd_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vaadd_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vaadd_vv_i16mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vaadd_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vaadd_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vaadd_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vaadd_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vaadd_vv_i16mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vaadd_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vaadd_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vaadd_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vaadd_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vaadd_vv_i16m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vaadd_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vaadd_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vaadd_vx_i16m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vaadd_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vaadd_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vaadd_vv_i16m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vaadd_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vaadd_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vaadd_vx_i16m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vaadd_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vaadd_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vaadd_vv_i16m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vaadd_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vaadd_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vaadd_vx_i16m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vaadd_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vaadd_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vaadd_vv_i16m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vaadd_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vaadd_vx_i16m8_tu(vint16m8_t vd, 
vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vaadd_vx_i16m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vaadd_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vaadd_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vaadd_vv_i32mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vaadd_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vaadd_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vaadd_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vaadd_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vaadd_vv_i32m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vaadd_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vaadd_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vaadd_vx_i32m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vaadd_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vaadd_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vaadd_vv_i32m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vaadd_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vaadd_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vaadd_vx_i32m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vaadd_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vaadd_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vaadd_vv_i32m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vaadd_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vaadd_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vaadd_vx_i32m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vaadd_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vaadd_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vaadd_vv_i32m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vaadd_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vaadd_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vaadd_vx_i32m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vaadd_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vaadd_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vaadd_vv_i64m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vaadd_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vaadd_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vaadd_vx_i64m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vaadd_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vaadd_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vaadd_vv_i64m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t 
test_vaadd_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vaadd_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vaadd_vx_i64m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vaadd_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vaadd_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vaadd_vv_i64m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vaadd_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vaadd_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vaadd_vx_i64m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vaadd_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vaadd_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vaadd_vv_i64m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vaadd_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vaadd_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vaadd_vx_i64m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vaadd_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vaadd_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vaadd_vv_i8mf8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vaadd_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vaadd_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8mf8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vaadd_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vaadd_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vaadd_vv_i8mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vaadd_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vaadd_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vaadd_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vaadd_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vaadd_vv_i8mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vaadd_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vaadd_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vaadd_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vaadd_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vaadd_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vaadd_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { 
return __riscv_vaadd_vx_i8m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vaadd_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vaadd_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vaadd_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vaadd_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vaadd_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vaadd_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vaadd_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vaadd_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vaadd_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vaadd_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vaadd_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vaadd_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vaadd_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vaadd_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vaadd_vv_i16mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vaadd_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vaadd_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vaadd_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vaadd_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vaadd_vv_i16mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vaadd_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vaadd_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vaadd_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vaadd_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vaadd_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vaadd_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16m1_tum(vm, 
vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vaadd_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vaadd_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vaadd_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vaadd_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vaadd_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vaadd_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vaadd_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vaadd_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vaadd_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vaadd_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vaadd_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vaadd_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vaadd_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vaadd_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vaadd_vv_i32mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vaadd_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vaadd_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vaadd_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vaadd_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vaadd_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vaadd_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vaadd_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vaadd_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vaadd_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vaadd_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return 
__riscv_vaadd_vx_i32m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vaadd_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vaadd_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vaadd_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vaadd_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vaadd_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vaadd_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vaadd_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vaadd_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vaadd_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vaadd_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vaadd_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vaadd_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vaadd_vx_i64m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vaadd_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vaadd_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vaadd_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vaadd_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vaadd_vx_i64m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vaadd_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vaadd_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vaadd_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vaadd_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vaadd_vx_i64m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vaadd_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vaadd_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vaadd_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vaadd_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return 
__riscv_vaadd_vx_i64m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vaadd_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vaadd_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vaadd_vv_i8mf8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vaadd_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vaadd_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8mf8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vaadd_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vaadd_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vaadd_vv_i8mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vaadd_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vaadd_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vaadd_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vaadd_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vaadd_vv_i8mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vaadd_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vaadd_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vaadd_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vaadd_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vaadd_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vaadd_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vaadd_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vaadd_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vaadd_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vaadd_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vaadd_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vaadd_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vaadd_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vaadd_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8m4_tumu(vm, vd, vs2, rs1, 
__RISCV_VXRM_RNU, vl); } -vint8m8_t test_vaadd_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vaadd_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vaadd_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vaadd_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vaadd_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vaadd_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vaadd_vv_i16mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vaadd_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vaadd_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vaadd_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vaadd_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vaadd_vv_i16mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vaadd_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vaadd_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vaadd_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vaadd_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vaadd_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vaadd_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vaadd_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vaadd_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vaadd_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vaadd_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vaadd_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vaadd_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vaadd_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vaadd_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return 
__riscv_vaadd_vx_i16m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vaadd_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vaadd_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vaadd_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vaadd_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vaadd_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vaadd_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vaadd_vv_i32mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vaadd_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vaadd_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vaadd_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vaadd_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vaadd_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vaadd_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vaadd_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vaadd_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vaadd_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vaadd_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vaadd_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vaadd_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vaadd_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vaadd_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vaadd_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vaadd_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vaadd_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vaadd_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, 
vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vaadd_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vaadd_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vaadd_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vaadd_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vaadd_vx_i64m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vaadd_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vaadd_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vaadd_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vaadd_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vaadd_vx_i64m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vaadd_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vaadd_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vaadd_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vaadd_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vaadd_vx_i64m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vaadd_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vaadd_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vaadd_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vaadd_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vaadd_vx_i64m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vaadd_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vaadd_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vaadd_vv_i8mf8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vaadd_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vaadd_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8mf8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vaadd_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vaadd_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vaadd_vv_i8mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vaadd_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vaadd_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t 
vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vaadd_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vaadd_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vaadd_vv_i8mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vaadd_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vaadd_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vaadd_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vaadd_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vaadd_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vaadd_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vaadd_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vaadd_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vaadd_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vaadd_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vaadd_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vaadd_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vaadd_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vaadd_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vaadd_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vaadd_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i8m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vaadd_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vaadd_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vaadd_vx_i8m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vaadd_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vaadd_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vaadd_vv_i16mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vaadd_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vaadd_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } 
-vint16mf2_t test_vaadd_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vaadd_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vaadd_vv_i16mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vaadd_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vaadd_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vaadd_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vaadd_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vaadd_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vaadd_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vaadd_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vaadd_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vaadd_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vaadd_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vaadd_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vaadd_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vaadd_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vaadd_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vaadd_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vaadd_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i16m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vaadd_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vaadd_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vaadd_vx_i16m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vaadd_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vaadd_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vaadd_vv_i32mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vaadd_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vaadd_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } 
-vint32m1_t test_vaadd_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vaadd_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vaadd_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vaadd_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vaadd_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vaadd_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vaadd_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vaadd_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vaadd_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vaadd_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vaadd_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vaadd_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vaadd_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vaadd_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i32m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vaadd_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vaadd_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vaadd_vx_i32m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vaadd_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vaadd_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vaadd_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vaadd_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vaadd_vx_i64m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vaadd_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vaadd_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vaadd_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vaadd_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vaadd_vx_i64m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vaadd_vv_i64m4_mu(vbool16_t 
vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vaadd_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vaadd_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vaadd_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vaadd_vx_i64m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vaadd_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vaadd_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vaadd_vv_i64m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vaadd_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vaadd_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vaadd_vx_i64m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vaaddu.c b/auto-generated/policy_funcs/llvm-api-tests/vaaddu.c index 60f5d6841..b663cefdd 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vaaddu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vaaddu.c @@ -5,706 +5,957 @@ #include <riscv_vector.h> -vuint8mf8_t test_vaaddu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vaaddu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8mf8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vaaddu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vaaddu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vaaddu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vaaddu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vaaddu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vaaddu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vaaddu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vaaddu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vaaddu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vaaddu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vaaddu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vaaddu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vaaddu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vaaddu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u8m1_tu(vd, vs2, rs1,
__RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vaaddu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vaaddu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vaaddu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vaaddu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u8m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vaaddu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vaaddu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vaaddu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vaaddu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u8m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vaaddu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vaaddu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vaaddu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vaaddu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u8m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vaaddu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vaaddu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vaaddu_vv_u16mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vaaddu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vaaddu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vaaddu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vaaddu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vaaddu_vv_u16mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vaaddu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vaaddu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vaaddu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vaaddu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vaaddu_vv_u16m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vaaddu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vaaddu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vaaddu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vaaddu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vaaddu_vv_u16m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t 
test_vaaddu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vaaddu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vaaddu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vaaddu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vaaddu_vv_u16m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vaaddu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vaaddu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vaaddu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vaaddu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vaaddu_vv_u16m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vaaddu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vaaddu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vaaddu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vaaddu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vaaddu_vv_u32mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vaaddu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vaaddu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vaaddu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vaaddu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vaaddu_vv_u32m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vaaddu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vaaddu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vaaddu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vaaddu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vaaddu_vv_u32m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vaaddu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vaaddu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vaaddu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vaaddu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vaaddu_vv_u32m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vaaddu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vaaddu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t 
test_vaaddu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vaaddu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vaaddu_vv_u32m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vaaddu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vaaddu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vaaddu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vaaddu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vaaddu_vv_u64m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vaaddu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vaaddu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vaaddu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vaaddu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vaaddu_vv_u64m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vaaddu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vaaddu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vaaddu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vaaddu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vaaddu_vv_u64m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vaaddu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vaaddu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vaaddu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vaaddu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vaaddu_vv_u64m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vaaddu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vaaddu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vaaddu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vaaddu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8mf8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vaaddu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vaaddu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vaaddu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vaaddu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { 
return __riscv_vaaddu_vv_u8mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vaaddu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vaaddu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vaaddu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vaaddu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vaaddu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vaaddu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vaaddu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vaaddu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vaaddu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vaaddu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vaaddu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vaaddu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vaaddu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vaaddu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vaaddu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vaaddu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vaaddu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vaaddu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vaaddu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vaaddu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vaaddu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vaaddu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vaaddu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vaaddu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t 
vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vaaddu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vaaddu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u16mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vaaddu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vaaddu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vaaddu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vaaddu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u16mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vaaddu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vaaddu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vaaddu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vaaddu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vaaddu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vaaddu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vaaddu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vaaddu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vaaddu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vaaddu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vaaddu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vaaddu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vaaddu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vaaddu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vaaddu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vaaddu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t 
test_vaaddu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vaaddu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vaaddu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vaaddu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u32mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vaaddu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vaaddu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vaaddu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vaaddu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vaaddu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vaaddu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vaaddu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vaaddu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vaaddu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vaaddu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vaaddu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vaaddu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vaaddu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vaaddu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vaaddu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vaaddu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vaaddu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vaaddu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vaaddu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vaaddu_vx_u64m1_tum(vbool64_t 
vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vaaddu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vaaddu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vaaddu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vaaddu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vaaddu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vaaddu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vaaddu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vaaddu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vaaddu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vaaddu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vaaddu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vaaddu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vaaddu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vaaddu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8mf8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vaaddu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vaaddu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vaaddu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vaaddu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vaaddu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vaaddu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vaaddu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vaaddu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } 
-vuint8mf2_t test_vaaddu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vaaddu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vaaddu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vaaddu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vaaddu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vaaddu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vaaddu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vaaddu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vaaddu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vaaddu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vaaddu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vaaddu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vaaddu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vaaddu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vaaddu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vaaddu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vaaddu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vaaddu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vaaddu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vaaddu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vaaddu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vaaddu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u16mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vaaddu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vaaddu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t 
vs1, + size_t vl) { return __riscv_vaaddu_vv_u16mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vaaddu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vaaddu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u16mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vaaddu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vaaddu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vaaddu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vaaddu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u16m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vaaddu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vaaddu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vaaddu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vaaddu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u16m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vaaddu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vaaddu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vaaddu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vaaddu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u16m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vaaddu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vaaddu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vaaddu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vaaddu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u16m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vaaddu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vaaddu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vaaddu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vaaddu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u32mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } 
-vuint32m1_t test_vaaddu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vaaddu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vaaddu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vaaddu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u32m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vaaddu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vaaddu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vaaddu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vaaddu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u32m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vaaddu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vaaddu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vaaddu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vaaddu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u32m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vaaddu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vaaddu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vaaddu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vaaddu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u32m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vaaddu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vaaddu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vaaddu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vaaddu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u64m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vaaddu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vaaddu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vaaddu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t 
test_vaaddu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u64m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vaaddu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vaaddu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vaaddu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vaaddu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u64m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vaaddu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vaaddu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vaaddu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vaaddu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u64m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vaaddu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vaaddu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8mf8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vaaddu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vaaddu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vaaddu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vaaddu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vaaddu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vaaddu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vaaddu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vaaddu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u8mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vaaddu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vaaddu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vaaddu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vaaddu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } 
-vuint8m1_t test_vaaddu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vaaddu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vaaddu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vaaddu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vaaddu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vaaddu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vaaddu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vaaddu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vaaddu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vaaddu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vaaddu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vaaddu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vaaddu_vv_u8m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vaaddu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vaaddu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vaaddu_vx_u8m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vaaddu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vaaddu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vaaddu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vaaddu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u16mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vaaddu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vaaddu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vaaddu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vaaddu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u16mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vaaddu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vaaddu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return 
__riscv_vaaddu_vv_u16m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vaaddu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vaaddu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vaaddu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vaaddu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vaaddu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vaaddu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vaaddu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vaaddu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vaaddu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vaaddu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vaaddu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vaaddu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u16m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vaaddu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vaaddu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vaaddu_vx_u16m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vaaddu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vaaddu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vaaddu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vaaddu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vaaddu_vx_u32mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vaaddu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vaaddu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vaaddu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vaaddu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vaaddu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { 
+vuint32m2_t test_vaaddu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vaaddu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vaaddu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vaaddu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vaaddu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vaaddu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vaaddu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vaaddu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vaaddu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u32m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vaaddu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vaaddu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vaaddu_vx_u32m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vaaddu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vaaddu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vaaddu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vaaddu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vaaddu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vaaddu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vaaddu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vaaddu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vaaddu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vaaddu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vaaddu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vaaddu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t 
test_vaaddu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vaaddu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vaaddu_vv_u64m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vaaddu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vaaddu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vaaddu_vx_u64m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vadc.c b/auto-generated/policy_funcs/llvm-api-tests/vadc.c index 1d02699f4..64f455393 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vadc.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vadc.c @@ -5,354 +5,445 @@ #include <riscv_vector.h> -vint8mf8_t test_vadc_vvm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, vbool64_t v0, size_t vl) { +vint8mf8_t test_vadc_vvm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, + vbool64_t v0, size_t vl) { return __riscv_vadc_vvm_i8mf8_tu(vd, vs2, vs1, v0, vl); } -vint8mf8_t test_vadc_vxm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, vbool64_t v0, size_t vl) { +vint8mf8_t test_vadc_vxm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + vbool64_t v0, size_t vl) { return __riscv_vadc_vxm_i8mf8_tu(vd, vs2, rs1, v0, vl); } -vint8mf4_t test_vadc_vvm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, vbool32_t v0, size_t vl) { +vint8mf4_t test_vadc_vvm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, + vbool32_t v0, size_t vl) { return __riscv_vadc_vvm_i8mf4_tu(vd, vs2, vs1, v0, vl); } -vint8mf4_t test_vadc_vxm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, vbool32_t v0, size_t vl) { +vint8mf4_t test_vadc_vxm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + vbool32_t v0, size_t vl) { return __riscv_vadc_vxm_i8mf4_tu(vd, vs2, rs1, v0, vl); } -vint8mf2_t test_vadc_vvm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, vbool16_t v0, size_t vl) { +vint8mf2_t test_vadc_vvm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + vbool16_t v0, size_t vl) { return __riscv_vadc_vvm_i8mf2_tu(vd, vs2, vs1, v0, vl); } -vint8mf2_t test_vadc_vxm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, vbool16_t v0, size_t vl) { +vint8mf2_t test_vadc_vxm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vadc_vxm_i8mf2_tu(vd, vs2, rs1, v0, vl); } -vint8m1_t test_vadc_vvm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, vbool8_t v0, size_t vl) { +vint8m1_t test_vadc_vvm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + vbool8_t v0, size_t vl) { return __riscv_vadc_vvm_i8m1_tu(vd, vs2, vs1, v0, vl); } -vint8m1_t test_vadc_vxm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, vbool8_t v0, size_t vl) { +vint8m1_t test_vadc_vxm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vadc_vxm_i8m1_tu(vd, vs2, rs1, v0, vl); } -vint8m2_t test_vadc_vvm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, vbool4_t v0, size_t vl) { +vint8m2_t test_vadc_vvm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + vbool4_t v0, size_t vl) { return __riscv_vadc_vvm_i8m2_tu(vd, vs2, vs1, v0, vl); } -vint8m2_t test_vadc_vxm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, vbool4_t v0, size_t vl) { +vint8m2_t test_vadc_vxm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + vbool4_t v0, size_t vl) { return __riscv_vadc_vxm_i8m2_tu(vd,
vs2, rs1, v0, vl); } -vint8m4_t test_vadc_vvm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, vbool2_t v0, size_t vl) { +vint8m4_t test_vadc_vvm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + vbool2_t v0, size_t vl) { return __riscv_vadc_vvm_i8m4_tu(vd, vs2, vs1, v0, vl); } -vint8m4_t test_vadc_vxm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, vbool2_t v0, size_t vl) { +vint8m4_t test_vadc_vxm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + vbool2_t v0, size_t vl) { return __riscv_vadc_vxm_i8m4_tu(vd, vs2, rs1, v0, vl); } -vint8m8_t test_vadc_vvm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, vbool1_t v0, size_t vl) { +vint8m8_t test_vadc_vvm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + vbool1_t v0, size_t vl) { return __riscv_vadc_vvm_i8m8_tu(vd, vs2, vs1, v0, vl); } -vint8m8_t test_vadc_vxm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, vbool1_t v0, size_t vl) { +vint8m8_t test_vadc_vxm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + vbool1_t v0, size_t vl) { return __riscv_vadc_vxm_i8m8_tu(vd, vs2, rs1, v0, vl); } -vint16mf4_t test_vadc_vvm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, vbool64_t v0, size_t vl) { +vint16mf4_t test_vadc_vvm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, vbool64_t v0, size_t vl) { return __riscv_vadc_vvm_i16mf4_tu(vd, vs2, vs1, v0, vl); } -vint16mf4_t test_vadc_vxm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, vbool64_t v0, size_t vl) { +vint16mf4_t test_vadc_vxm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, vbool64_t v0, size_t vl) { return __riscv_vadc_vxm_i16mf4_tu(vd, vs2, rs1, v0, vl); } -vint16mf2_t test_vadc_vvm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, vbool32_t v0, size_t vl) { +vint16mf2_t test_vadc_vvm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, vbool32_t v0, size_t vl) { return __riscv_vadc_vvm_i16mf2_tu(vd, vs2, vs1, v0, vl); } -vint16mf2_t test_vadc_vxm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, vbool32_t v0, size_t vl) { +vint16mf2_t test_vadc_vxm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, vbool32_t v0, size_t vl) { return __riscv_vadc_vxm_i16mf2_tu(vd, vs2, rs1, v0, vl); } -vint16m1_t test_vadc_vvm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, vbool16_t v0, size_t vl) { +vint16m1_t test_vadc_vvm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + vbool16_t v0, size_t vl) { return __riscv_vadc_vvm_i16m1_tu(vd, vs2, vs1, v0, vl); } -vint16m1_t test_vadc_vxm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, vbool16_t v0, size_t vl) { +vint16m1_t test_vadc_vxm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vadc_vxm_i16m1_tu(vd, vs2, rs1, v0, vl); } -vint16m2_t test_vadc_vvm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, vbool8_t v0, size_t vl) { +vint16m2_t test_vadc_vvm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + vbool8_t v0, size_t vl) { return __riscv_vadc_vvm_i16m2_tu(vd, vs2, vs1, v0, vl); } -vint16m2_t test_vadc_vxm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, vbool8_t v0, size_t vl) { +vint16m2_t test_vadc_vxm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vadc_vxm_i16m2_tu(vd, vs2, rs1, v0, vl); } -vint16m4_t test_vadc_vvm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, vbool4_t v0, size_t vl) { +vint16m4_t test_vadc_vvm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + vbool4_t v0, size_t vl) { return 
__riscv_vadc_vvm_i16m4_tu(vd, vs2, vs1, v0, vl); } -vint16m4_t test_vadc_vxm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, vbool4_t v0, size_t vl) { +vint16m4_t test_vadc_vxm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + vbool4_t v0, size_t vl) { return __riscv_vadc_vxm_i16m4_tu(vd, vs2, rs1, v0, vl); } -vint16m8_t test_vadc_vvm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, vbool2_t v0, size_t vl) { +vint16m8_t test_vadc_vvm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + vbool2_t v0, size_t vl) { return __riscv_vadc_vvm_i16m8_tu(vd, vs2, vs1, v0, vl); } -vint16m8_t test_vadc_vxm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, vbool2_t v0, size_t vl) { +vint16m8_t test_vadc_vxm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + vbool2_t v0, size_t vl) { return __riscv_vadc_vxm_i16m8_tu(vd, vs2, rs1, v0, vl); } -vint32mf2_t test_vadc_vvm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, vbool64_t v0, size_t vl) { +vint32mf2_t test_vadc_vvm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, vbool64_t v0, size_t vl) { return __riscv_vadc_vvm_i32mf2_tu(vd, vs2, vs1, v0, vl); } -vint32mf2_t test_vadc_vxm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, vbool64_t v0, size_t vl) { +vint32mf2_t test_vadc_vxm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, vbool64_t v0, size_t vl) { return __riscv_vadc_vxm_i32mf2_tu(vd, vs2, rs1, v0, vl); } -vint32m1_t test_vadc_vvm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, vbool32_t v0, size_t vl) { +vint32m1_t test_vadc_vvm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + vbool32_t v0, size_t vl) { return __riscv_vadc_vvm_i32m1_tu(vd, vs2, vs1, v0, vl); } -vint32m1_t test_vadc_vxm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, vbool32_t v0, size_t vl) { +vint32m1_t test_vadc_vxm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + vbool32_t v0, size_t vl) { return __riscv_vadc_vxm_i32m1_tu(vd, vs2, rs1, v0, vl); } -vint32m2_t test_vadc_vvm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, vbool16_t v0, size_t vl) { +vint32m2_t test_vadc_vvm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + vbool16_t v0, size_t vl) { return __riscv_vadc_vvm_i32m2_tu(vd, vs2, vs1, v0, vl); } -vint32m2_t test_vadc_vxm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, vbool16_t v0, size_t vl) { +vint32m2_t test_vadc_vxm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vadc_vxm_i32m2_tu(vd, vs2, rs1, v0, vl); } -vint32m4_t test_vadc_vvm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, vbool8_t v0, size_t vl) { +vint32m4_t test_vadc_vvm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + vbool8_t v0, size_t vl) { return __riscv_vadc_vvm_i32m4_tu(vd, vs2, vs1, v0, vl); } -vint32m4_t test_vadc_vxm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, vbool8_t v0, size_t vl) { +vint32m4_t test_vadc_vxm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vadc_vxm_i32m4_tu(vd, vs2, rs1, v0, vl); } -vint32m8_t test_vadc_vvm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, vbool4_t v0, size_t vl) { +vint32m8_t test_vadc_vvm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + vbool4_t v0, size_t vl) { return __riscv_vadc_vvm_i32m8_tu(vd, vs2, vs1, v0, vl); } -vint32m8_t test_vadc_vxm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, vbool4_t v0, size_t vl) { +vint32m8_t test_vadc_vxm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + vbool4_t 
v0, size_t vl) { return __riscv_vadc_vxm_i32m8_tu(vd, vs2, rs1, v0, vl); } -vint64m1_t test_vadc_vvm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, vbool64_t v0, size_t vl) { +vint64m1_t test_vadc_vvm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + vbool64_t v0, size_t vl) { return __riscv_vadc_vvm_i64m1_tu(vd, vs2, vs1, v0, vl); } -vint64m1_t test_vadc_vxm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, vbool64_t v0, size_t vl) { +vint64m1_t test_vadc_vxm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + vbool64_t v0, size_t vl) { return __riscv_vadc_vxm_i64m1_tu(vd, vs2, rs1, v0, vl); } -vint64m2_t test_vadc_vvm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, vbool32_t v0, size_t vl) { +vint64m2_t test_vadc_vvm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + vbool32_t v0, size_t vl) { return __riscv_vadc_vvm_i64m2_tu(vd, vs2, vs1, v0, vl); } -vint64m2_t test_vadc_vxm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, vbool32_t v0, size_t vl) { +vint64m2_t test_vadc_vxm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + vbool32_t v0, size_t vl) { return __riscv_vadc_vxm_i64m2_tu(vd, vs2, rs1, v0, vl); } -vint64m4_t test_vadc_vvm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, vbool16_t v0, size_t vl) { +vint64m4_t test_vadc_vvm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + vbool16_t v0, size_t vl) { return __riscv_vadc_vvm_i64m4_tu(vd, vs2, vs1, v0, vl); } -vint64m4_t test_vadc_vxm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, vbool16_t v0, size_t vl) { +vint64m4_t test_vadc_vxm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vadc_vxm_i64m4_tu(vd, vs2, rs1, v0, vl); } -vint64m8_t test_vadc_vvm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, vbool8_t v0, size_t vl) { +vint64m8_t test_vadc_vvm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + vbool8_t v0, size_t vl) { return __riscv_vadc_vvm_i64m8_tu(vd, vs2, vs1, v0, vl); } -vint64m8_t test_vadc_vxm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, vbool8_t v0, size_t vl) { +vint64m8_t test_vadc_vxm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vadc_vxm_i64m8_tu(vd, vs2, rs1, v0, vl); } -vuint8mf8_t test_vadc_vvm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, vbool64_t v0, size_t vl) { +vuint8mf8_t test_vadc_vvm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, vbool64_t v0, size_t vl) { return __riscv_vadc_vvm_u8mf8_tu(vd, vs2, vs1, v0, vl); } -vuint8mf8_t test_vadc_vxm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, vbool64_t v0, size_t vl) { +vuint8mf8_t test_vadc_vxm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + vbool64_t v0, size_t vl) { return __riscv_vadc_vxm_u8mf8_tu(vd, vs2, rs1, v0, vl); } -vuint8mf4_t test_vadc_vvm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, vbool32_t v0, size_t vl) { +vuint8mf4_t test_vadc_vvm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, vbool32_t v0, size_t vl) { return __riscv_vadc_vvm_u8mf4_tu(vd, vs2, vs1, v0, vl); } -vuint8mf4_t test_vadc_vxm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, vbool32_t v0, size_t vl) { +vuint8mf4_t test_vadc_vxm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + vbool32_t v0, size_t vl) { return __riscv_vadc_vxm_u8mf4_tu(vd, vs2, rs1, v0, vl); } -vuint8mf2_t test_vadc_vvm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, vbool16_t v0, size_t vl) { +vuint8mf2_t 
test_vadc_vvm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, vbool16_t v0, size_t vl) { return __riscv_vadc_vvm_u8mf2_tu(vd, vs2, vs1, v0, vl); } -vuint8mf2_t test_vadc_vxm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, vbool16_t v0, size_t vl) { +vuint8mf2_t test_vadc_vxm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vadc_vxm_u8mf2_tu(vd, vs2, rs1, v0, vl); } -vuint8m1_t test_vadc_vvm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, vbool8_t v0, size_t vl) { +vuint8m1_t test_vadc_vvm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + vbool8_t v0, size_t vl) { return __riscv_vadc_vvm_u8m1_tu(vd, vs2, vs1, v0, vl); } -vuint8m1_t test_vadc_vxm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, vbool8_t v0, size_t vl) { +vuint8m1_t test_vadc_vxm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vadc_vxm_u8m1_tu(vd, vs2, rs1, v0, vl); } -vuint8m2_t test_vadc_vvm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, vbool4_t v0, size_t vl) { +vuint8m2_t test_vadc_vvm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + vbool4_t v0, size_t vl) { return __riscv_vadc_vvm_u8m2_tu(vd, vs2, vs1, v0, vl); } -vuint8m2_t test_vadc_vxm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, vbool4_t v0, size_t vl) { +vuint8m2_t test_vadc_vxm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + vbool4_t v0, size_t vl) { return __riscv_vadc_vxm_u8m2_tu(vd, vs2, rs1, v0, vl); } -vuint8m4_t test_vadc_vvm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, vbool2_t v0, size_t vl) { +vuint8m4_t test_vadc_vvm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + vbool2_t v0, size_t vl) { return __riscv_vadc_vvm_u8m4_tu(vd, vs2, vs1, v0, vl); } -vuint8m4_t test_vadc_vxm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, vbool2_t v0, size_t vl) { +vuint8m4_t test_vadc_vxm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + vbool2_t v0, size_t vl) { return __riscv_vadc_vxm_u8m4_tu(vd, vs2, rs1, v0, vl); } -vuint8m8_t test_vadc_vvm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, vbool1_t v0, size_t vl) { +vuint8m8_t test_vadc_vvm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + vbool1_t v0, size_t vl) { return __riscv_vadc_vvm_u8m8_tu(vd, vs2, vs1, v0, vl); } -vuint8m8_t test_vadc_vxm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, vbool1_t v0, size_t vl) { +vuint8m8_t test_vadc_vxm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + vbool1_t v0, size_t vl) { return __riscv_vadc_vxm_u8m8_tu(vd, vs2, rs1, v0, vl); } -vuint16mf4_t test_vadc_vvm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, vbool64_t v0, size_t vl) { +vuint16mf4_t test_vadc_vvm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, vbool64_t v0, + size_t vl) { return __riscv_vadc_vvm_u16mf4_tu(vd, vs2, vs1, v0, vl); } -vuint16mf4_t test_vadc_vxm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, vbool64_t v0, size_t vl) { +vuint16mf4_t test_vadc_vxm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, vbool64_t v0, size_t vl) { return __riscv_vadc_vxm_u16mf4_tu(vd, vs2, rs1, v0, vl); } -vuint16mf2_t test_vadc_vvm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, vbool32_t v0, size_t vl) { +vuint16mf2_t test_vadc_vvm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, vbool32_t v0, + size_t vl) { return __riscv_vadc_vvm_u16mf2_tu(vd, vs2, vs1, v0, vl); } -vuint16mf2_t test_vadc_vxm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t 
vs2, uint16_t rs1, vbool32_t v0, size_t vl) { +vuint16mf2_t test_vadc_vxm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, vbool32_t v0, size_t vl) { return __riscv_vadc_vxm_u16mf2_tu(vd, vs2, rs1, v0, vl); } -vuint16m1_t test_vadc_vvm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, vbool16_t v0, size_t vl) { +vuint16m1_t test_vadc_vvm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, vbool16_t v0, size_t vl) { return __riscv_vadc_vvm_u16m1_tu(vd, vs2, vs1, v0, vl); } -vuint16m1_t test_vadc_vxm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, vbool16_t v0, size_t vl) { +vuint16m1_t test_vadc_vxm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, vbool16_t v0, size_t vl) { return __riscv_vadc_vxm_u16m1_tu(vd, vs2, rs1, v0, vl); } -vuint16m2_t test_vadc_vvm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, vbool8_t v0, size_t vl) { +vuint16m2_t test_vadc_vvm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, vbool8_t v0, size_t vl) { return __riscv_vadc_vvm_u16m2_tu(vd, vs2, vs1, v0, vl); } -vuint16m2_t test_vadc_vxm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, vbool8_t v0, size_t vl) { +vuint16m2_t test_vadc_vxm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, vbool8_t v0, size_t vl) { return __riscv_vadc_vxm_u16m2_tu(vd, vs2, rs1, v0, vl); } -vuint16m4_t test_vadc_vvm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, vbool4_t v0, size_t vl) { +vuint16m4_t test_vadc_vvm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, vbool4_t v0, size_t vl) { return __riscv_vadc_vvm_u16m4_tu(vd, vs2, vs1, v0, vl); } -vuint16m4_t test_vadc_vxm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, vbool4_t v0, size_t vl) { +vuint16m4_t test_vadc_vxm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, vbool4_t v0, size_t vl) { return __riscv_vadc_vxm_u16m4_tu(vd, vs2, rs1, v0, vl); } -vuint16m8_t test_vadc_vvm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, vbool2_t v0, size_t vl) { +vuint16m8_t test_vadc_vvm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, vbool2_t v0, size_t vl) { return __riscv_vadc_vvm_u16m8_tu(vd, vs2, vs1, v0, vl); } -vuint16m8_t test_vadc_vxm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, vbool2_t v0, size_t vl) { +vuint16m8_t test_vadc_vxm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, vbool2_t v0, size_t vl) { return __riscv_vadc_vxm_u16m8_tu(vd, vs2, rs1, v0, vl); } -vuint32mf2_t test_vadc_vvm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, vbool64_t v0, size_t vl) { +vuint32mf2_t test_vadc_vvm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, vbool64_t v0, + size_t vl) { return __riscv_vadc_vvm_u32mf2_tu(vd, vs2, vs1, v0, vl); } -vuint32mf2_t test_vadc_vxm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, vbool64_t v0, size_t vl) { +vuint32mf2_t test_vadc_vxm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, vbool64_t v0, size_t vl) { return __riscv_vadc_vxm_u32mf2_tu(vd, vs2, rs1, v0, vl); } -vuint32m1_t test_vadc_vvm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, vbool32_t v0, size_t vl) { +vuint32m1_t test_vadc_vvm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, vbool32_t v0, size_t vl) { return __riscv_vadc_vvm_u32m1_tu(vd, vs2, vs1, v0, vl); } -vuint32m1_t test_vadc_vxm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, vbool32_t v0, size_t vl) { +vuint32m1_t test_vadc_vxm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, vbool32_t 
v0, size_t vl) { return __riscv_vadc_vxm_u32m1_tu(vd, vs2, rs1, v0, vl); } -vuint32m2_t test_vadc_vvm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, vbool16_t v0, size_t vl) { +vuint32m2_t test_vadc_vvm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, vbool16_t v0, size_t vl) { return __riscv_vadc_vvm_u32m2_tu(vd, vs2, vs1, v0, vl); } -vuint32m2_t test_vadc_vxm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, vbool16_t v0, size_t vl) { +vuint32m2_t test_vadc_vxm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, vbool16_t v0, size_t vl) { return __riscv_vadc_vxm_u32m2_tu(vd, vs2, rs1, v0, vl); } -vuint32m4_t test_vadc_vvm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, vbool8_t v0, size_t vl) { +vuint32m4_t test_vadc_vvm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, vbool8_t v0, size_t vl) { return __riscv_vadc_vvm_u32m4_tu(vd, vs2, vs1, v0, vl); } -vuint32m4_t test_vadc_vxm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, vbool8_t v0, size_t vl) { +vuint32m4_t test_vadc_vxm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, vbool8_t v0, size_t vl) { return __riscv_vadc_vxm_u32m4_tu(vd, vs2, rs1, v0, vl); } -vuint32m8_t test_vadc_vvm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, vbool4_t v0, size_t vl) { +vuint32m8_t test_vadc_vvm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, vbool4_t v0, size_t vl) { return __riscv_vadc_vvm_u32m8_tu(vd, vs2, vs1, v0, vl); } -vuint32m8_t test_vadc_vxm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, vbool4_t v0, size_t vl) { +vuint32m8_t test_vadc_vxm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, vbool4_t v0, size_t vl) { return __riscv_vadc_vxm_u32m8_tu(vd, vs2, rs1, v0, vl); } -vuint64m1_t test_vadc_vvm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, vbool64_t v0, size_t vl) { +vuint64m1_t test_vadc_vvm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, vbool64_t v0, size_t vl) { return __riscv_vadc_vvm_u64m1_tu(vd, vs2, vs1, v0, vl); } -vuint64m1_t test_vadc_vxm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, vbool64_t v0, size_t vl) { +vuint64m1_t test_vadc_vxm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, vbool64_t v0, size_t vl) { return __riscv_vadc_vxm_u64m1_tu(vd, vs2, rs1, v0, vl); } -vuint64m2_t test_vadc_vvm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, vbool32_t v0, size_t vl) { +vuint64m2_t test_vadc_vvm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, vbool32_t v0, size_t vl) { return __riscv_vadc_vvm_u64m2_tu(vd, vs2, vs1, v0, vl); } -vuint64m2_t test_vadc_vxm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, vbool32_t v0, size_t vl) { +vuint64m2_t test_vadc_vxm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, vbool32_t v0, size_t vl) { return __riscv_vadc_vxm_u64m2_tu(vd, vs2, rs1, v0, vl); } -vuint64m4_t test_vadc_vvm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, vbool16_t v0, size_t vl) { +vuint64m4_t test_vadc_vvm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, vbool16_t v0, size_t vl) { return __riscv_vadc_vvm_u64m4_tu(vd, vs2, vs1, v0, vl); } -vuint64m4_t test_vadc_vxm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, vbool16_t v0, size_t vl) { +vuint64m4_t test_vadc_vxm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, vbool16_t v0, size_t vl) { return __riscv_vadc_vxm_u64m4_tu(vd, vs2, rs1, v0, vl); } -vuint64m8_t test_vadc_vvm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, 
vbool8_t v0, size_t vl) { +vuint64m8_t test_vadc_vvm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, vbool8_t v0, size_t vl) { return __riscv_vadc_vvm_u64m8_tu(vd, vs2, vs1, v0, vl); } -vuint64m8_t test_vadc_vxm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, vbool8_t v0, size_t vl) { +vuint64m8_t test_vadc_vxm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, vbool8_t v0, size_t vl) { return __riscv_vadc_vxm_u64m8_tu(vd, vs2, rs1, v0, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vadd.c b/auto-generated/policy_funcs/llvm-api-tests/vadd.c index 24042676a..69a25c0c7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vadd.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vadd.c @@ -5,1410 +5,1810 @@ #include <riscv_vector.h> -vint8mf8_t test_vadd_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vadd_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vadd_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vadd_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vadd_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vadd_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vadd_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vadd_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vadd_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vadd_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vadd_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vadd_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vadd_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vadd_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vadd_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vadd_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vadd_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vadd_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vadd_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vadd_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vadd_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vadd_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vadd_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vadd_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vadd_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vadd_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vadd_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vadd_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vadd_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vadd_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vadd_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vadd_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vadd_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vadd_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return
__riscv_vadd_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vadd_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vadd_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vadd_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vadd_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vadd_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vadd_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vadd_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vadd_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vadd_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vadd_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vadd_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vadd_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vadd_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vadd_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vadd_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vadd_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vadd_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vadd_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vadd_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vadd_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vadd_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vadd_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vadd_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vadd_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vadd_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vadd_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vadd_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vadd_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vadd_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vadd_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vadd_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vadd_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vadd_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vadd_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vadd_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vadd_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vadd_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vadd_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vadd_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vadd_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vadd_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vadd_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vadd_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t 
test_vadd_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vadd_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vadd_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vadd_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vadd_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vadd_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vadd_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vadd_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vadd_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vadd_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vadd_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vadd_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vadd_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vadd_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vadd_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vadd_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vadd_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vadd_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vadd_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vadd_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vadd_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vadd_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vadd_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vadd_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vadd_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vadd_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vadd_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vadd_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vadd_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vadd_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vadd_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vadd_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vadd_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vadd_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vadd_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vadd_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vadd_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vadd_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vadd_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vadd_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vadd_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vadd_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vadd_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vadd_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t 
test_vadd_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vadd_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vadd_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vadd_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vadd_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vadd_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vadd_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vadd_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vadd_vx_i64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vadd_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vadd_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vadd_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vadd_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vadd_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vadd_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vadd_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vadd_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vadd_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vadd_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vadd_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vadd_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vadd_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vadd_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vadd_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vadd_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vadd_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vadd_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vadd_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vadd_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vadd_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vadd_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vadd_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vadd_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vadd_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vadd_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vadd_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vadd_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vadd_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vadd_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vadd_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vadd_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vadd_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vadd_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vadd_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) 
{ return __riscv_vadd_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vadd_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vadd_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vadd_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vadd_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vadd_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vadd_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vadd_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vadd_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vadd_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vadd_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vadd_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vadd_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vadd_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vadd_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vadd_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vadd_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vadd_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vadd_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vadd_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vadd_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vadd_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vadd_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vadd_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vadd_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vadd_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vadd_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vadd_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vadd_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vadd_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vadd_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vadd_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vadd_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vadd_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vadd_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vadd_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vadd_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vadd_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vadd_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vadd_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vadd_vx_u16m8_tu(vd, vs2, rs1, vl); } 
-vuint32mf2_t test_vadd_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vadd_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vadd_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vadd_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vadd_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vadd_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vadd_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vadd_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vadd_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vadd_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vadd_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vadd_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vadd_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vadd_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vadd_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vadd_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vadd_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vadd_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vadd_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vadd_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vadd_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vadd_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vadd_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vadd_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vadd_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vadd_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vadd_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vadd_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vadd_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vadd_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vadd_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vadd_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vadd_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vadd_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vadd_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vadd_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vadd_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vadd_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vadd_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vadd_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vadd_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vadd_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t 
vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vadd_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vadd_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vadd_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vadd_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vadd_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vadd_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vadd_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vadd_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vadd_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vadd_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vadd_vx_u64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vadd_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vadd_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vadd_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vadd_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vadd_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vadd_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vadd_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vadd_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vadd_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vadd_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vadd_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vadd_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vadd_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vadd_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vadd_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vadd_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vadd_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vadd_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vadd_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vadd_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vadd_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vadd_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vadd_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vadd_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vadd_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, 
size_t vl) { return __riscv_vadd_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vadd_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vadd_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vadd_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vadd_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vadd_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vadd_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vadd_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vadd_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vadd_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vadd_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vadd_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vadd_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vadd_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vadd_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vadd_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vadd_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vadd_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vadd_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vadd_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vadd_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vadd_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vadd_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vadd_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vadd_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vadd_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vadd_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vadd_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vadd_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vadd_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vadd_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vadd_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + 
vint16m4_t vs1, size_t vl) { return __riscv_vadd_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vadd_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vadd_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vadd_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vadd_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vadd_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vadd_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vadd_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vadd_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vadd_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vadd_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vadd_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vadd_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vadd_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vadd_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vadd_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vadd_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vadd_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vadd_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vadd_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vadd_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vadd_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vadd_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vadd_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vadd_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vadd_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vadd_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vadd_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vadd_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vadd_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vadd_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vadd_vx_i32m8_tum(vbool4_t vm, 
vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vadd_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vadd_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vadd_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vadd_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vadd_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vadd_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vadd_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vadd_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vadd_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vadd_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vadd_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vadd_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vadd_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vadd_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vadd_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vadd_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vadd_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vadd_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vadd_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vadd_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vadd_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vadd_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vadd_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vadd_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vadd_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vadd_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vadd_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vadd_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vadd_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vadd_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vadd_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vadd_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vadd_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vadd_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vadd_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t 
vl) { +vuint8mf2_t test_vadd_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vadd_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vadd_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vadd_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vadd_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vadd_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vadd_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vadd_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vadd_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vadd_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vadd_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vadd_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vadd_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vadd_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vadd_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vadd_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vadd_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vadd_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vadd_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vadd_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vadd_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vadd_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vadd_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vadd_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vadd_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vadd_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vadd_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vadd_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vadd_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vadd_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vadd_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t 
test_vadd_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vadd_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vadd_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vadd_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vadd_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vadd_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vadd_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vadd_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vadd_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vadd_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vadd_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vadd_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vadd_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vadd_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vadd_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vadd_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vadd_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vadd_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vadd_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vadd_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vadd_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vadd_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vadd_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vadd_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vadd_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vadd_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vadd_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vadd_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vadd_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vadd_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vadd_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vadd_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t 
test_vadd_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vadd_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vadd_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vadd_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vadd_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vadd_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vadd_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vadd_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vadd_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vadd_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vadd_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vadd_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vadd_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vadd_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vadd_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vadd_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vadd_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vadd_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vadd_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vadd_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vadd_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vadd_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vadd_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vadd_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vadd_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vadd_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vadd_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vadd_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vadd_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vadd_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vadd_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } 
-vuint64m8_t test_vadd_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vadd_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vadd_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vadd_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vadd_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vadd_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vadd_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vadd_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vadd_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vadd_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vadd_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vadd_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vadd_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vadd_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vadd_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vadd_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vadd_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vadd_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vadd_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vadd_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vadd_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vadd_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vadd_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vadd_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vadd_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vadd_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vadd_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vadd_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vadd_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vadd_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vadd_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vadd_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vadd_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } 
-vint8m4_t test_vadd_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vadd_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vadd_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vadd_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vadd_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vadd_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vadd_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vadd_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vadd_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vadd_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vadd_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vadd_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vadd_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vadd_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vadd_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vadd_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vadd_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vadd_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vadd_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vadd_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vadd_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vadd_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vadd_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vadd_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vadd_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vadd_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vadd_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vadd_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vadd_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vadd_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vadd_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { 
return __riscv_vadd_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vadd_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vadd_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vadd_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vadd_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vadd_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vadd_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vadd_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vadd_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vadd_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vadd_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vadd_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vadd_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vadd_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vadd_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vadd_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vadd_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vadd_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vadd_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vadd_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vadd_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vadd_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vadd_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vadd_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vadd_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vadd_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vadd_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vadd_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vadd_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vadd_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vadd_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t 
test_vadd_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vadd_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vadd_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vadd_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vadd_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vadd_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vadd_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vadd_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vadd_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vadd_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vadd_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vadd_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vadd_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vadd_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vadd_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vadd_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vadd_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vadd_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vadd_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vadd_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vadd_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vadd_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vadd_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vadd_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vadd_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vadd_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vadd_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vadd_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vadd_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vadd_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vadd_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vadd_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vadd_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vadd_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vadd_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t 
test_vadd_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vadd_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vadd_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vadd_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vadd_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vadd_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vadd_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vadd_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vadd_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vadd_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vadd_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vadd_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vadd_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vadd_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vadd_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vadd_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vadd_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vadd_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vadd_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vadd_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vadd_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vadd_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vadd_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vadd_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vadd_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vadd_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vadd_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vadd_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vadd_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vadd_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vadd_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vadd_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, 
uint16_t rs1, + size_t vl) { return __riscv_vadd_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vadd_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vadd_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vadd_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vadd_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vadd_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vadd_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vadd_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vadd_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vadd_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vadd_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vadd_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vadd_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vadd_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vadd_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vadd_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vadd_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vadd_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vadd_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vadd_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vadd_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vadd_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vadd_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vadd_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vadd_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vadd_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vadd_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vadd_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vadd_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vadd_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vadd_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } 
-vuint32m2_t test_vadd_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vadd_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vadd_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vadd_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vadd_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vadd_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vadd_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vadd_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vadd_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vadd_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vadd_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vadd_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vadd_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vadd_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vadd_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vadd_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vadd_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vadd_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vadd_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vadd_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vadd_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vadd_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vadd_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vadd_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vadd_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vadd_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vadd_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vadd_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vadd_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vadd_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vadd_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { 
+vuint64m8_t test_vadd_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vadd_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vadd_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vadd_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vadd_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vadd_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vadd_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vadd_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vadd_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vadd_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vadd_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vadd_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vadd_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vadd_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vadd_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vadd_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vadd_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vadd_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vadd_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vadd_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vadd_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vadd_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vadd_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vadd_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vadd_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vadd_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vadd_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vadd_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vadd_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vadd_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vadd_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vadd_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vadd_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vadd_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, 
+ int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vadd_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vadd_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vadd_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vadd_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vadd_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vadd_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vadd_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vadd_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vadd_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vadd_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vadd_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vadd_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vadd_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vadd_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vadd_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vadd_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vadd_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vadd_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vadd_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vadd_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vadd_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vadd_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vadd_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vadd_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vadd_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vadd_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vadd_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vadd_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vadd_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vadd_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vadd_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vadd_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t 
vs1, size_t vl) { return __riscv_vadd_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vadd_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vadd_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vadd_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vadd_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vadd_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vadd_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vadd_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vadd_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vadd_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vadd_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vadd_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vadd_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vadd_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vadd_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vadd_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vadd_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vadd_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vadd_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vadd_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vadd_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vadd_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vadd_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vadd_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vadd_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vadd_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vadd_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vadd_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vadd_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vadd_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vadd_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vadd_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vadd_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vadd_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, 
size_t vl) { return __riscv_vadd_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vadd_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vadd_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vadd_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vadd_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vadd_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vadd_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vadd_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vadd_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vadd_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vadd_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vadd_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vadd_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vadd_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vadd_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vadd_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vadd_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vadd_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vadd_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vadd_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vadd_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vadd_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vadd_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vadd_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vadd_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vadd_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vadd_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vadd_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vadd_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vadd_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vadd_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vadd_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vadd_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vadd_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vadd_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vadd_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, 
+ vuint8m1_t vs1, size_t vl) { return __riscv_vadd_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vadd_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vadd_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vadd_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vadd_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vadd_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vadd_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vadd_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vadd_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vadd_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vadd_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vadd_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vadd_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vadd_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vadd_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vadd_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vadd_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vadd_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vadd_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vadd_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vadd_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vadd_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vadd_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vadd_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vadd_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vadd_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vadd_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vadd_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vadd_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vadd_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vadd_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vadd_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vadd_vx_u16m1_mu(vbool16_t 
vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vadd_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vadd_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vadd_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vadd_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vadd_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vadd_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vadd_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vadd_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vadd_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vadd_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vadd_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vadd_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vadd_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vadd_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vadd_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vadd_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vadd_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vadd_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vadd_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vadd_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vadd_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vadd_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vadd_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vadd_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vadd_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vadd_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vadd_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vadd_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vadd_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vadd_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vadd_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vadd_vv_u32m4_mu(vbool8_t vm, vuint32m4_t 
vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vadd_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vadd_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vadd_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vadd_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vadd_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vadd_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vadd_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vadd_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vadd_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vadd_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vadd_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vadd_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vadd_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vadd_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vadd_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vadd_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vadd_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vadd_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vadd_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vadd_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vadd_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vadd_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vadd_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vadd_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vadd_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vadd_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vadd_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vadd_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vadd_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vadd_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vadd_vx_u64m8_mu(vm, vd, vs2, rs1, vl); }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vand.c b/auto-generated/policy_funcs/llvm-api-tests/vand.c
index 97cfcb672..ba442c905 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vand.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vand.c
@@ -5,1410 +5,1810 @@
 #include <riscv_vector.h>
-vint8mf8_t test_vand_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vand_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vand_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vand_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vand_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vand_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vand_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vand_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vand_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vand_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vand_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vand_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vand_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vand_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vand_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vand_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vand_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vand_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vand_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vand_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vand_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vand_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vand_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vand_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vand_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vand_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vand_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vand_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vand_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vand_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vand_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vand_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vand_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vand_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vand_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vand_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vand_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vand_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vand_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vand_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vand_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vand_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vand_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vand_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vand_vv_i16mf4_tu(vd, vs2, vs1, vl); }
-vint16mf4_t test_vand_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vand_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vand_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vand_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vand_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vand_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vand_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vand_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vand_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vand_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vand_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vand_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vand_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vand_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vand_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vand_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vand_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vand_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vand_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vand_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vand_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vand_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vand_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vand_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vand_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vand_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vand_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vand_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vand_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vand_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vand_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vand_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vand_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vand_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vand_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vand_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vand_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vand_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vand_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vand_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vand_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vand_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vand_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vand_vx_i32m1_tu(vint32m1_t vd, 
vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vand_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vand_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vand_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vand_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vand_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vand_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vand_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vand_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vand_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vand_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vand_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vand_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vand_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vand_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vand_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vand_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vand_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vand_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vand_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vand_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vand_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vand_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vand_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vand_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vand_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vand_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vand_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vand_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vand_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vand_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vand_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vand_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vand_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vand_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vand_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vand_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vand_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vand_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vand_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vand_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vand_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vand_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vand_vx_i64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vand_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { 
+vuint8mf8_t test_vand_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vand_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vand_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vand_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vand_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vand_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vand_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vand_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vand_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vand_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vand_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vand_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vand_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vand_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vand_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vand_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vand_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vand_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vand_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vand_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vand_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vand_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vand_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vand_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vand_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vand_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vand_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vand_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vand_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vand_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vand_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vand_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vand_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vand_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vand_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vand_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vand_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vand_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vand_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vand_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vand_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vand_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vand_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vand_vv_u16mf4_tu(vd, vs2, vs1, vl); } 
-vuint16mf4_t test_vand_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vand_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vand_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vand_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vand_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vand_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vand_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vand_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vand_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vand_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vand_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vand_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vand_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vand_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vand_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vand_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vand_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vand_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vand_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vand_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vand_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vand_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vand_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vand_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vand_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vand_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vand_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vand_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vand_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vand_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vand_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vand_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vand_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vand_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vand_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vand_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vand_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vand_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vand_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t 
test_vand_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vand_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vand_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vand_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vand_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vand_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vand_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vand_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vand_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vand_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vand_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vand_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vand_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vand_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vand_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vand_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vand_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vand_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vand_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vand_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vand_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vand_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vand_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vand_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vand_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vand_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vand_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vand_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vand_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vand_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vand_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vand_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vand_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vand_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vand_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vand_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vand_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vand_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vand_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vand_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vand_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vand_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vand_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { 
+vuint64m8_t test_vand_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vand_vx_u64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vand_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vand_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vand_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vand_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vand_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vand_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vand_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vand_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vand_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vand_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vand_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vand_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vand_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vand_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vand_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vand_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vand_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vand_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vand_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vand_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vand_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vand_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vand_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vand_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vand_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vand_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vand_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vand_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vand_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vand_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vand_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vand_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + 
vint8m8_t vs1, size_t vl) { return __riscv_vand_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vand_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vand_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vand_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vand_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vand_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vand_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vand_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vand_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vand_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vand_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vand_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vand_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vand_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vand_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vand_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vand_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vand_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vand_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vand_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vand_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vand_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vand_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vand_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vand_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vand_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vand_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vand_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vand_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vand_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vand_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vand_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vand_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vand_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vand_vx_i16m8_tum(vbool2_t 
vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vand_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vand_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vand_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vand_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vand_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vand_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vand_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vand_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vand_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vand_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vand_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vand_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vand_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vand_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vand_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vand_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vand_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vand_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vand_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vand_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vand_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vand_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vand_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vand_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vand_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vand_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vand_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vand_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vand_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vand_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vand_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vand_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { 
+vint64m2_t test_vand_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vand_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vand_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vand_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vand_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vand_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vand_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vand_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vand_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vand_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vand_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vand_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vand_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vand_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vand_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vand_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vand_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vand_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vand_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vand_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vand_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vand_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vand_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vand_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vand_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vand_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vand_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vand_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vand_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vand_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vand_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vand_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vand_vx_u8m1_tum(vbool8_t vm, 
vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vand_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vand_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vand_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vand_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vand_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vand_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vand_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vand_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vand_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vand_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vand_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vand_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vand_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vand_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vand_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vand_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vand_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vand_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vand_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vand_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vand_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vand_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vand_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vand_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vand_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vand_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vand_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vand_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vand_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vand_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vand_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vand_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vand_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return 
__riscv_vand_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vand_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vand_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vand_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vand_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vand_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vand_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vand_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vand_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vand_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vand_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vand_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vand_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vand_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vand_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vand_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vand_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vand_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vand_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vand_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vand_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vand_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vand_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vand_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vand_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vand_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vand_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vand_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vand_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vand_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vand_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vand_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vand_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, 
vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vand_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vand_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vand_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vand_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vand_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vand_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vand_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vand_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vand_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vand_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vand_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vand_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vand_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vand_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vand_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vand_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vand_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vand_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vand_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vand_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vand_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vand_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vand_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vand_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vand_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vand_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vand_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vand_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vand_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vand_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vand_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return 
__riscv_vand_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vand_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vand_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vand_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vand_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vand_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vand_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vand_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vand_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vand_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vand_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vand_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vand_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vand_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vand_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vand_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vand_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vand_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vand_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vand_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vand_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vand_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vand_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vand_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vand_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vand_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vand_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vand_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vand_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vand_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vand_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vand_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vand_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } 
-vint16mf4_t test_vand_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vand_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vand_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vand_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vand_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vand_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vand_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vand_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vand_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vand_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vand_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vand_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vand_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vand_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vand_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vand_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vand_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vand_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vand_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vand_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vand_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vand_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vand_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vand_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vand_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vand_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vand_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vand_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vand_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vand_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vand_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vand_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vand_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vand_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + 
vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vand_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vand_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vand_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vand_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vand_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vand_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vand_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vand_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vand_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vand_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vand_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vand_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vand_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vand_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vand_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vand_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vand_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vand_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vand_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vand_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vand_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vand_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vand_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vand_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vand_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vand_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vand_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vand_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vand_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vand_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vand_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vand_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vand_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { 
+vint64m2_t test_vand_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vand_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vand_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vand_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vand_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vand_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vand_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vand_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vand_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vand_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vand_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vand_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vand_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vand_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vand_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vand_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vand_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vand_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vand_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vand_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vand_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vand_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vand_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vand_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vand_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vand_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vand_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vand_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vand_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vand_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vand_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t 
test_vand_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vand_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vand_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vand_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vand_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vand_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vand_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vand_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vand_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vand_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vand_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vand_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vand_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vand_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vand_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vand_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vand_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vand_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vand_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vand_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vand_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vand_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vand_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vand_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vand_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vand_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vand_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vand_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vand_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vand_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vand_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vand_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vand_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vand_vv_u16m2_tumu(vbool8_t vm, 
vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vand_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vand_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vand_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vand_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vand_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vand_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vand_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vand_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vand_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vand_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vand_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vand_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vand_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vand_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vand_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vand_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vand_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vand_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vand_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vand_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vand_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vand_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vand_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vand_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vand_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vand_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vand_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vand_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vand_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vand_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vand_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return 
__riscv_vand_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vand_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vand_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vand_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vand_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vand_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vand_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vand_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vand_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vand_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vand_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vand_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vand_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vand_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vand_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vand_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vand_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vand_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vand_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vand_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vand_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vand_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vand_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vand_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vand_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vand_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vand_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vand_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vand_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vand_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vand_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vand_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, 
int8_t rs1, size_t vl) { +vint8mf8_t test_vand_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vand_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vand_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vand_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vand_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vand_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vand_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vand_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vand_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vand_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vand_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vand_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vand_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vand_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vand_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vand_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vand_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vand_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vand_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vand_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vand_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vand_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vand_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vand_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vand_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vand_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vand_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vand_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vand_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vand_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vand_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vand_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vand_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vand_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t 
vs1, + size_t vl) { return __riscv_vand_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vand_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vand_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vand_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vand_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vand_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vand_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vand_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vand_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vand_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vand_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vand_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vand_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vand_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vand_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vand_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vand_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vand_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vand_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vand_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vand_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vand_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vand_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vand_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vand_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vand_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vand_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vand_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vand_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vand_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vand_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vand_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vand_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vand_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vand_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vand_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + 
vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vand_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vand_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vand_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vand_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vand_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vand_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vand_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vand_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vand_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vand_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vand_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vand_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vand_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vand_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vand_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vand_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vand_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vand_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vand_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vand_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vand_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vand_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vand_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vand_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vand_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vand_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vand_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vand_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vand_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vand_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vand_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vand_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vand_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vand_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + 
vint64m4_t vs1, size_t vl) { return __riscv_vand_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vand_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vand_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vand_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vand_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vand_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vand_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vand_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vand_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vand_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vand_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vand_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vand_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vand_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vand_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vand_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vand_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vand_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vand_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vand_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vand_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vand_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vand_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vand_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vand_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vand_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vand_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vand_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vand_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vand_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vand_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vand_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vand_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vand_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + 
uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vand_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vand_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vand_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vand_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vand_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vand_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vand_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vand_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vand_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vand_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vand_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vand_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vand_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vand_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vand_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vand_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vand_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vand_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vand_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vand_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vand_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vand_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vand_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vand_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vand_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vand_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vand_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vand_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vand_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vand_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vand_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vand_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t 
test_vand_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vand_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vand_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vand_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vand_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vand_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vand_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vand_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vand_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vand_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vand_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vand_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vand_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vand_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vand_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vand_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vand_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vand_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vand_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vand_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vand_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vand_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vand_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vand_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vand_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vand_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vand_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vand_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vand_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vand_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vand_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vand_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vand_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t 
test_vand_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vand_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vand_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vand_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vand_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vand_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vand_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vand_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vand_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vand_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vand_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vand_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vand_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vand_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vand_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vand_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vand_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vand_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vand_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vand_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vand_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vand_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vand_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vand_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vasub.c b/auto-generated/policy_funcs/llvm-api-tests/vasub.c index 6c53199b5..d2f1cc0b9 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vasub.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vasub.c @@ -5,706 +5,891 @@ #include <riscv_vector.h> -vint8mf8_t test_vasub_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vasub_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vasub_vv_i8mf8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vasub_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vasub_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vasub_vx_i8mf8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vasub_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vasub_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vasub_vv_i8mf4_tu(vd, vs2, vs1,
__RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vasub_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vasub_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vasub_vx_i8mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vasub_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vasub_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vasub_vv_i8mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vasub_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vasub_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vasub_vx_i8mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vasub_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vasub_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vasub_vv_i8m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vasub_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vasub_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vasub_vx_i8m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vasub_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vasub_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vasub_vv_i8m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vasub_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vasub_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vasub_vx_i8m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vasub_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vasub_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vasub_vv_i8m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vasub_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vasub_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vasub_vx_i8m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vasub_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vasub_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vasub_vv_i8m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vasub_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vasub_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vasub_vx_i8m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vasub_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vasub_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vasub_vv_i16mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vasub_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vasub_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vasub_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vasub_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return 
__riscv_vasub_vv_i16mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vasub_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vasub_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vasub_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vasub_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vasub_vv_i16m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vasub_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vasub_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vasub_vx_i16m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vasub_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vasub_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vasub_vv_i16m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vasub_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vasub_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vasub_vx_i16m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vasub_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vasub_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vasub_vv_i16m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vasub_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vasub_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vasub_vx_i16m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vasub_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vasub_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vasub_vv_i16m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vasub_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vasub_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vasub_vx_i16m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vasub_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vasub_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vasub_vv_i32mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vasub_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vasub_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vasub_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vasub_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vasub_vv_i32m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vasub_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vasub_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vasub_vx_i32m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vasub_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, 
vint32m2_t vs1, size_t vl) { +vint32m2_t test_vasub_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vasub_vv_i32m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vasub_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vasub_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vasub_vx_i32m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vasub_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vasub_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vasub_vv_i32m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vasub_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vasub_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vasub_vx_i32m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vasub_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vasub_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vasub_vv_i32m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vasub_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vasub_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vasub_vx_i32m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vasub_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vasub_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vasub_vv_i64m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vasub_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vasub_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vasub_vx_i64m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vasub_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vasub_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vasub_vv_i64m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vasub_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vasub_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vasub_vx_i64m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vasub_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vasub_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vasub_vv_i64m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vasub_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vasub_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vasub_vx_i64m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vasub_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vasub_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vasub_vv_i64m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vasub_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vasub_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vasub_vx_i64m8_tu(vd, 
vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vasub_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vasub_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vasub_vv_i8mf8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vasub_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vasub_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8mf8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vasub_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vasub_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vasub_vv_i8mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vasub_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vasub_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vasub_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vasub_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vasub_vv_i8mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vasub_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vasub_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vasub_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vasub_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vasub_vv_i8m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vasub_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vasub_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vasub_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vasub_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vasub_vv_i8m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vasub_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vasub_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vasub_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vasub_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vasub_vv_i8m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vasub_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vasub_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vasub_vv_i8m8_tum(vbool1_t vm, 
vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vasub_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vasub_vv_i8m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vasub_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vasub_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vasub_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vasub_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vasub_vv_i16mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vasub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vasub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vasub_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vasub_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vasub_vv_i16mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vasub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vasub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vasub_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vasub_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vasub_vv_i16m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vasub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vasub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vasub_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vasub_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vasub_vv_i16m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vasub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vasub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vasub_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vasub_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vasub_vv_i16m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vasub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vasub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t 
test_vasub_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vasub_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vasub_vv_i16m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vasub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vasub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vasub_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vasub_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vasub_vv_i32mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vasub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vasub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vasub_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vasub_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vasub_vv_i32m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vasub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vasub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vasub_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vasub_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vasub_vv_i32m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vasub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vasub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vasub_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vasub_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vasub_vv_i32m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vasub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vasub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vasub_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vasub_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vasub_vv_i32m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vasub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vasub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } 
-vint64m1_t test_vasub_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vasub_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vasub_vv_i64m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vasub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vasub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vasub_vx_i64m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vasub_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vasub_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vasub_vv_i64m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vasub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vasub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vasub_vx_i64m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vasub_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vasub_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vasub_vv_i64m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vasub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vasub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vasub_vx_i64m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vasub_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vasub_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vasub_vv_i64m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vasub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vasub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vasub_vx_i64m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vasub_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vasub_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vasub_vv_i8mf8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vasub_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vasub_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8mf8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vasub_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vasub_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vasub_vv_i8mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vasub_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vasub_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); 
} -vint8mf2_t test_vasub_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vasub_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vasub_vv_i8mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vasub_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vasub_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vasub_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vasub_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vasub_vv_i8m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vasub_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vasub_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vasub_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vasub_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vasub_vv_i8m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vasub_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vasub_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vasub_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vasub_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vasub_vv_i8m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vasub_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vasub_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vasub_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vasub_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vasub_vv_i8m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vasub_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vasub_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vasub_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vasub_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vasub_vv_i16mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vasub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vasub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vasub_vv_i16mf2_tumu(vbool32_t 
vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vasub_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vasub_vv_i16mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vasub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vasub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vasub_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vasub_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vasub_vv_i16m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vasub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vasub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vasub_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vasub_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vasub_vv_i16m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vasub_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vasub_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vasub_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vasub_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vasub_vv_i16m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vasub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vasub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vasub_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vasub_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vasub_vv_i16m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vasub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vasub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vasub_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vasub_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vasub_vv_i32mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vasub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vasub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32mf2_tumu(vm, vd, vs2, rs1, 
__RISCV_VXRM_RNU, vl); } -vint32m1_t test_vasub_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vasub_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vasub_vv_i32m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vasub_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vasub_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vasub_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vasub_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vasub_vv_i32m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vasub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vasub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vasub_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vasub_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vasub_vv_i32m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vasub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vasub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vasub_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vasub_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vasub_vv_i32m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vasub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vasub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vasub_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vasub_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vasub_vv_i64m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vasub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vasub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vasub_vx_i64m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vasub_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vasub_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vasub_vv_i64m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vasub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vasub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return 
__riscv_vasub_vx_i64m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vasub_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vasub_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vasub_vv_i64m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vasub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vasub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vasub_vx_i64m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vasub_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vasub_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vasub_vv_i64m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vasub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vasub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vasub_vx_i64m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vasub_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vasub_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vasub_vv_i8mf8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vasub_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vasub_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8mf8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vasub_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vasub_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vasub_vv_i8mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vasub_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vasub_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vasub_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vasub_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vasub_vv_i8mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vasub_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vasub_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vasub_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vasub_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vasub_vv_i8m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vasub_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vasub_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m1_mu(vm, vd, vs2, rs1, 
__RISCV_VXRM_RNU, vl); } -vint8m2_t test_vasub_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vasub_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vasub_vv_i8m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vasub_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vasub_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vasub_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vasub_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vasub_vv_i8m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vasub_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vasub_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vasub_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vasub_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vasub_vv_i8m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vasub_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vasub_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vasub_vx_i8m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vasub_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vasub_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vasub_vv_i16mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vasub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vasub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vasub_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vasub_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vasub_vv_i16mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vasub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vasub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vasub_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vasub_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vasub_vv_i16m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vasub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vasub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vasub_vv_i16m2_mu(vbool8_t vm, 
vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vasub_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vasub_vv_i16m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vasub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vasub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vasub_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vasub_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vasub_vv_i16m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vasub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vasub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vasub_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vasub_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vasub_vv_i16m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vasub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vasub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vasub_vx_i16m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vasub_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vasub_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vasub_vv_i32mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vasub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vasub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vasub_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vasub_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vasub_vv_i32m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vasub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vasub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vasub_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vasub_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vasub_vv_i32m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vasub_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vasub_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vasub_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t 
vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vasub_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vasub_vv_i32m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vasub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vasub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vasub_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vasub_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vasub_vv_i32m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vasub_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vasub_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vasub_vx_i32m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vasub_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vasub_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vasub_vv_i64m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vasub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vasub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vasub_vx_i64m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vasub_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vasub_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vasub_vv_i64m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vasub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vasub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vasub_vx_i64m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vasub_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vasub_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vasub_vv_i64m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vasub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vasub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vasub_vx_i64m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vasub_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vasub_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vasub_vv_i64m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vasub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vasub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vasub_vx_i64m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vasubu.c b/auto-generated/policy_funcs/llvm-api-tests/vasubu.c index 
6b0a6b6a7..0e49abf02 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vasubu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vasubu.c @@ -5,706 +5,957 @@ #include <riscv_vector.h> -vuint8mf8_t test_vasubu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vasubu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vasubu_vv_u8mf8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vasubu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vasubu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vasubu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vasubu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vasubu_vv_u8mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vasubu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vasubu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vasubu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vasubu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vasubu_vv_u8mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vasubu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vasubu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vasubu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vasubu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vasubu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vasubu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vasubu_vx_u8m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vasubu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vasubu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vasubu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vasubu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vasubu_vx_u8m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vasubu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vasubu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vasubu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vasubu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vasubu_vx_u8m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vasubu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vasubu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl)
{ return __riscv_vasubu_vv_u8m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vasubu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vasubu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vasubu_vx_u8m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vasubu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vasubu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vasubu_vv_u16mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vasubu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vasubu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vasubu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vasubu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vasubu_vv_u16mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vasubu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vasubu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vasubu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vasubu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vasubu_vv_u16m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vasubu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vasubu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vasubu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vasubu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vasubu_vv_u16m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vasubu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vasubu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vasubu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vasubu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vasubu_vv_u16m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vasubu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vasubu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vasubu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vasubu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vasubu_vv_u16m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vasubu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vasubu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, 
size_t vl) { return __riscv_vasubu_vx_u16m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vasubu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vasubu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vasubu_vv_u32mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vasubu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vasubu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vasubu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vasubu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vasubu_vv_u32m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vasubu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vasubu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vasubu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vasubu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vasubu_vv_u32m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vasubu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vasubu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vasubu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vasubu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vasubu_vv_u32m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vasubu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vasubu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vasubu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vasubu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vasubu_vv_u32m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vasubu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vasubu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vasubu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vasubu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vasubu_vv_u64m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vasubu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vasubu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vasubu_vx_u64m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vasubu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vasubu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t 
vs1, size_t vl) { return __riscv_vasubu_vv_u64m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vasubu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vasubu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vasubu_vx_u64m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vasubu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vasubu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vasubu_vv_u64m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vasubu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vasubu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vasubu_vx_u64m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vasubu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vasubu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vasubu_vv_u64m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vasubu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vasubu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vasubu_vx_u64m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vasubu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vasubu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8mf8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vasubu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vasubu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vasubu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vasubu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vasubu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vasubu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vasubu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vasubu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vasubu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vasubu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vasubu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vasubu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m1_tum(vm, vd, 
vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vasubu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vasubu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vasubu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vasubu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vasubu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vasubu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vasubu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vasubu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vasubu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vasubu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vasubu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vasubu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vasubu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vasubu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vasubu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vasubu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vasubu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vasubu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vasubu_vx_u16mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vasubu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vasubu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vasubu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vasubu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vasubu_vx_u16mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vasubu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vasubu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + 
vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vasubu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vasubu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vasubu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vasubu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vasubu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vasubu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vasubu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vasubu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vasubu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vasubu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vasubu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vasubu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vasubu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vasubu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vasubu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vasubu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vasubu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vasubu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vasubu_vx_u32mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vasubu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vasubu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vasubu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vasubu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t 
test_vasubu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vasubu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vasubu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vasubu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vasubu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vasubu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vasubu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vasubu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vasubu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vasubu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vasubu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vasubu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vasubu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vasubu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vasubu_vv_u64m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vasubu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vasubu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vasubu_vx_u64m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vasubu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vasubu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u64m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vasubu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vasubu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vasubu_vx_u64m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vasubu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vasubu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u64m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vasubu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vasubu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + 
vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vasubu_vx_u64m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vasubu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vasubu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vasubu_vv_u64m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vasubu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vasubu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vasubu_vx_u64m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vasubu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vasubu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8mf8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vasubu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vasubu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vasubu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vasubu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vasubu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vasubu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vasubu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vasubu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vasubu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vasubu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vasubu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vasubu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vasubu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vasubu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vasubu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vasubu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vasubu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, 
vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vasubu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vasubu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vasubu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vasubu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vasubu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vasubu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vasubu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vasubu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vasubu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vasubu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vasubu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vasubu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vasubu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vasubu_vx_u16mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vasubu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vasubu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vasubu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vasubu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vasubu_vx_u16mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vasubu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vasubu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vasubu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vasubu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vasubu_vx_u16m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vasubu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vasubu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + 
size_t vl) { return __riscv_vasubu_vv_u16m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vasubu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vasubu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vasubu_vx_u16m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vasubu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vasubu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vasubu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vasubu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vasubu_vx_u16m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vasubu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vasubu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vasubu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vasubu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vasubu_vx_u16m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vasubu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vasubu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vasubu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vasubu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vasubu_vx_u32mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vasubu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vasubu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vasubu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vasubu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vasubu_vx_u32m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vasubu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vasubu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vasubu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vasubu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vasubu_vx_u32m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t 
test_vasubu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vasubu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vasubu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vasubu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vasubu_vx_u32m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vasubu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vasubu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vasubu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vasubu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vasubu_vx_u32m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vasubu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vasubu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vasubu_vv_u64m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vasubu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vasubu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vasubu_vx_u64m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vasubu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vasubu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u64m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vasubu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vasubu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vasubu_vx_u64m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vasubu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vasubu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u64m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vasubu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vasubu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vasubu_vx_u64m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vasubu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vasubu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vasubu_vv_u64m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vasubu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t 
test_vasubu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vasubu_vx_u64m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vasubu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vasubu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8mf8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vasubu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vasubu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vasubu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vasubu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vasubu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vasubu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vasubu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vasubu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u8mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vasubu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vasubu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vasubu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vasubu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vasubu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vasubu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vasubu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vasubu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vasubu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vasubu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vasubu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vasubu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vasubu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t 
rs1, size_t vl) { +vuint8m4_t test_vasubu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vasubu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vasubu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vasubu_vv_u8m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vasubu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vasubu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vasubu_vx_u8m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vasubu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vasubu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vasubu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vasubu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vasubu_vx_u16mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vasubu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vasubu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vasubu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vasubu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vasubu_vx_u16mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vasubu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vasubu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vasubu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vasubu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vasubu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vasubu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vasubu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vasubu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vasubu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vasubu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16m4_mu(vm, vd, vs2, vs1, 
__RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vasubu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vasubu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vasubu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vasubu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vasubu_vv_u16m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vasubu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vasubu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vasubu_vx_u16m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vasubu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vasubu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vasubu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vasubu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vasubu_vx_u32mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vasubu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vasubu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vasubu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vasubu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vasubu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vasubu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vasubu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vasubu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vasubu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vasubu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vasubu_vv_u32m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vasubu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vasubu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vasubu_vx_u32m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vasubu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vasubu_vv_u32m8_mu(vbool4_t vm, 
vuint32m8_t vd,
+                                    vuint32m8_t vs2, vuint32m8_t vs1,
+                                    size_t vl) {
   return __riscv_vasubu_vv_u32m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint32m8_t test_vasubu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+vuint32m8_t test_vasubu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vasubu_vx_u32m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint64m1_t test_vasubu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vasubu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, vuint64m1_t vs1,
+                                    size_t vl) {
   return __riscv_vasubu_vv_u64m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint64m1_t test_vasubu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vasubu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vasubu_vx_u64m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint64m2_t test_vasubu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vasubu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, vuint64m2_t vs1,
+                                    size_t vl) {
   return __riscv_vasubu_vv_u64m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint64m2_t test_vasubu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vasubu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vasubu_vx_u64m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint64m4_t test_vasubu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vasubu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, vuint64m4_t vs1,
+                                    size_t vl) {
   return __riscv_vasubu_vv_u64m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint64m4_t test_vasubu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vasubu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vasubu_vx_u64m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint64m8_t test_vasubu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vasubu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, vuint64m8_t vs1,
+                                    size_t vl) {
   return __riscv_vasubu_vv_u64m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint64m8_t test_vasubu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vasubu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vasubu_vx_u64m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vcompress.c b/auto-generated/policy_funcs/llvm-api-tests/vcompress.c
index e14737c19..a7086168a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vcompress.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vcompress.c
@@ -6,238 +6,297 @@
 #include <riscv_vector.h>
 
-vfloat16mf4_t test_vcompress_vm_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vbool64_t vs1, size_t vl) {
+vfloat16mf4_t test_vcompress_vm_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2,
+                                          vbool64_t vs1, size_t vl) {
   return __riscv_vcompress_vm_f16mf4_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16mf2_t test_vcompress_vm_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2,
vbool32_t vs1, size_t vl) { +vfloat16mf2_t test_vcompress_vm_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vbool32_t vs1, size_t vl) { return __riscv_vcompress_vm_f16mf2_tu(vd, vs2, vs1, vl); } -vfloat16m1_t test_vcompress_vm_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vbool16_t vs1, size_t vl) { +vfloat16m1_t test_vcompress_vm_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vbool16_t vs1, size_t vl) { return __riscv_vcompress_vm_f16m1_tu(vd, vs2, vs1, vl); } -vfloat16m2_t test_vcompress_vm_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vbool8_t vs1, size_t vl) { +vfloat16m2_t test_vcompress_vm_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vbool8_t vs1, size_t vl) { return __riscv_vcompress_vm_f16m2_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vcompress_vm_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vbool4_t vs1, size_t vl) { +vfloat16m4_t test_vcompress_vm_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vbool4_t vs1, size_t vl) { return __riscv_vcompress_vm_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vcompress_vm_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vbool2_t vs1, size_t vl) { +vfloat16m8_t test_vcompress_vm_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vbool2_t vs1, size_t vl) { return __riscv_vcompress_vm_f16m8_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vcompress_vm_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vbool64_t vs1, size_t vl) { +vfloat32mf2_t test_vcompress_vm_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vbool64_t vs1, size_t vl) { return __riscv_vcompress_vm_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vcompress_vm_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vbool32_t vs1, size_t vl) { +vfloat32m1_t test_vcompress_vm_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vbool32_t vs1, size_t vl) { return __riscv_vcompress_vm_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vcompress_vm_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vbool16_t vs1, size_t vl) { +vfloat32m2_t test_vcompress_vm_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vbool16_t vs1, size_t vl) { return __riscv_vcompress_vm_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vcompress_vm_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vbool8_t vs1, size_t vl) { +vfloat32m4_t test_vcompress_vm_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vbool8_t vs1, size_t vl) { return __riscv_vcompress_vm_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vcompress_vm_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vbool4_t vs1, size_t vl) { +vfloat32m8_t test_vcompress_vm_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vbool4_t vs1, size_t vl) { return __riscv_vcompress_vm_f32m8_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vcompress_vm_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vbool64_t vs1, size_t vl) { +vfloat64m1_t test_vcompress_vm_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vbool64_t vs1, size_t vl) { return __riscv_vcompress_vm_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vcompress_vm_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vbool32_t vs1, size_t vl) { +vfloat64m2_t test_vcompress_vm_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vbool32_t vs1, size_t vl) { return __riscv_vcompress_vm_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vcompress_vm_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vbool16_t vs1, size_t vl) { +vfloat64m4_t test_vcompress_vm_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vbool16_t vs1, size_t vl) { return __riscv_vcompress_vm_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vcompress_vm_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vbool8_t vs1, size_t vl) { +vfloat64m8_t 
test_vcompress_vm_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vbool8_t vs1, size_t vl) { return __riscv_vcompress_vm_f64m8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vcompress_vm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vbool64_t vs1, size_t vl) { +vint8mf8_t test_vcompress_vm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, + vbool64_t vs1, size_t vl) { return __riscv_vcompress_vm_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vcompress_vm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vbool32_t vs1, size_t vl) { +vint8mf4_t test_vcompress_vm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, + vbool32_t vs1, size_t vl) { return __riscv_vcompress_vm_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vcompress_vm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vbool16_t vs1, size_t vl) { +vint8mf2_t test_vcompress_vm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, + vbool16_t vs1, size_t vl) { return __riscv_vcompress_vm_i8mf2_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vcompress_vm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vbool8_t vs1, size_t vl) { +vint8m1_t test_vcompress_vm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vbool8_t vs1, + size_t vl) { return __riscv_vcompress_vm_i8m1_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vcompress_vm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vbool4_t vs1, size_t vl) { +vint8m2_t test_vcompress_vm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vbool4_t vs1, + size_t vl) { return __riscv_vcompress_vm_i8m2_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vcompress_vm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vbool2_t vs1, size_t vl) { +vint8m4_t test_vcompress_vm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vbool2_t vs1, + size_t vl) { return __riscv_vcompress_vm_i8m4_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vcompress_vm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vbool1_t vs1, size_t vl) { +vint8m8_t test_vcompress_vm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vbool1_t vs1, + size_t vl) { return __riscv_vcompress_vm_i8m8_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vcompress_vm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vbool64_t vs1, size_t vl) { +vint16mf4_t test_vcompress_vm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vbool64_t vs1, size_t vl) { return __riscv_vcompress_vm_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vcompress_vm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vbool32_t vs1, size_t vl) { +vint16mf2_t test_vcompress_vm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vbool32_t vs1, size_t vl) { return __riscv_vcompress_vm_i16mf2_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vcompress_vm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vbool16_t vs1, size_t vl) { +vint16m1_t test_vcompress_vm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, + vbool16_t vs1, size_t vl) { return __riscv_vcompress_vm_i16m1_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vcompress_vm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vbool8_t vs1, size_t vl) { +vint16m2_t test_vcompress_vm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, + vbool8_t vs1, size_t vl) { return __riscv_vcompress_vm_i16m2_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vcompress_vm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vbool4_t vs1, size_t vl) { +vint16m4_t test_vcompress_vm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, + vbool4_t vs1, size_t vl) { return __riscv_vcompress_vm_i16m4_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vcompress_vm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vbool2_t vs1, size_t vl) { +vint16m8_t test_vcompress_vm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, + vbool2_t vs1, size_t vl) { return __riscv_vcompress_vm_i16m8_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vcompress_vm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vbool64_t vs1, size_t vl) { 
+vint32mf2_t test_vcompress_vm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vbool64_t vs1, size_t vl) { return __riscv_vcompress_vm_i32mf2_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vcompress_vm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vbool32_t vs1, size_t vl) { +vint32m1_t test_vcompress_vm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, + vbool32_t vs1, size_t vl) { return __riscv_vcompress_vm_i32m1_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vcompress_vm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vbool16_t vs1, size_t vl) { +vint32m2_t test_vcompress_vm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, + vbool16_t vs1, size_t vl) { return __riscv_vcompress_vm_i32m2_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vcompress_vm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vbool8_t vs1, size_t vl) { +vint32m4_t test_vcompress_vm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, + vbool8_t vs1, size_t vl) { return __riscv_vcompress_vm_i32m4_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vcompress_vm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vbool4_t vs1, size_t vl) { +vint32m8_t test_vcompress_vm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, + vbool4_t vs1, size_t vl) { return __riscv_vcompress_vm_i32m8_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vcompress_vm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vbool64_t vs1, size_t vl) { +vint64m1_t test_vcompress_vm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, + vbool64_t vs1, size_t vl) { return __riscv_vcompress_vm_i64m1_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vcompress_vm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vbool32_t vs1, size_t vl) { +vint64m2_t test_vcompress_vm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, + vbool32_t vs1, size_t vl) { return __riscv_vcompress_vm_i64m2_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vcompress_vm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vbool16_t vs1, size_t vl) { +vint64m4_t test_vcompress_vm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, + vbool16_t vs1, size_t vl) { return __riscv_vcompress_vm_i64m4_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vcompress_vm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vbool8_t vs1, size_t vl) { +vint64m8_t test_vcompress_vm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, + vbool8_t vs1, size_t vl) { return __riscv_vcompress_vm_i64m8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vcompress_vm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vbool64_t vs1, size_t vl) { +vuint8mf8_t test_vcompress_vm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vbool64_t vs1, size_t vl) { return __riscv_vcompress_vm_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vcompress_vm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vbool32_t vs1, size_t vl) { +vuint8mf4_t test_vcompress_vm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vbool32_t vs1, size_t vl) { return __riscv_vcompress_vm_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vcompress_vm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vbool16_t vs1, size_t vl) { +vuint8mf2_t test_vcompress_vm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vbool16_t vs1, size_t vl) { return __riscv_vcompress_vm_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vcompress_vm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vbool8_t vs1, size_t vl) { +vuint8m1_t test_vcompress_vm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + vbool8_t vs1, size_t vl) { return __riscv_vcompress_vm_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vcompress_vm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vbool4_t vs1, size_t vl) { +vuint8m2_t test_vcompress_vm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, + vbool4_t vs1, size_t vl) { return __riscv_vcompress_vm_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vcompress_vm_u8m4_tu(vuint8m4_t vd, 
vuint8m4_t vs2, vbool2_t vs1, size_t vl) { +vuint8m4_t test_vcompress_vm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, + vbool2_t vs1, size_t vl) { return __riscv_vcompress_vm_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vcompress_vm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vbool1_t vs1, size_t vl) { +vuint8m8_t test_vcompress_vm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, + vbool1_t vs1, size_t vl) { return __riscv_vcompress_vm_u8m8_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vcompress_vm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vbool64_t vs1, size_t vl) { +vuint16mf4_t test_vcompress_vm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vbool64_t vs1, size_t vl) { return __riscv_vcompress_vm_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vcompress_vm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vbool32_t vs1, size_t vl) { +vuint16mf2_t test_vcompress_vm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vbool32_t vs1, size_t vl) { return __riscv_vcompress_vm_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vcompress_vm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vbool16_t vs1, size_t vl) { +vuint16m1_t test_vcompress_vm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vbool16_t vs1, size_t vl) { return __riscv_vcompress_vm_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vcompress_vm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vbool8_t vs1, size_t vl) { +vuint16m2_t test_vcompress_vm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vbool8_t vs1, size_t vl) { return __riscv_vcompress_vm_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vcompress_vm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vbool4_t vs1, size_t vl) { +vuint16m4_t test_vcompress_vm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vbool4_t vs1, size_t vl) { return __riscv_vcompress_vm_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vcompress_vm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vbool2_t vs1, size_t vl) { +vuint16m8_t test_vcompress_vm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vbool2_t vs1, size_t vl) { return __riscv_vcompress_vm_u16m8_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vcompress_vm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vbool64_t vs1, size_t vl) { +vuint32mf2_t test_vcompress_vm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vbool64_t vs1, size_t vl) { return __riscv_vcompress_vm_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vcompress_vm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vbool32_t vs1, size_t vl) { +vuint32m1_t test_vcompress_vm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vbool32_t vs1, size_t vl) { return __riscv_vcompress_vm_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vcompress_vm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vbool16_t vs1, size_t vl) { +vuint32m2_t test_vcompress_vm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vbool16_t vs1, size_t vl) { return __riscv_vcompress_vm_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vcompress_vm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vbool8_t vs1, size_t vl) { +vuint32m4_t test_vcompress_vm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vbool8_t vs1, size_t vl) { return __riscv_vcompress_vm_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vcompress_vm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vbool4_t vs1, size_t vl) { +vuint32m8_t test_vcompress_vm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vbool4_t vs1, size_t vl) { return __riscv_vcompress_vm_u32m8_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vcompress_vm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vbool64_t vs1, size_t vl) { +vuint64m1_t test_vcompress_vm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vbool64_t vs1, size_t vl) { 
   return __riscv_vcompress_vm_u64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vcompress_vm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vbool32_t vs1, size_t vl) {
+vuint64m2_t test_vcompress_vm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                       vbool32_t vs1, size_t vl) {
   return __riscv_vcompress_vm_u64m2_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vcompress_vm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vbool16_t vs1, size_t vl) {
+vuint64m4_t test_vcompress_vm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                       vbool16_t vs1, size_t vl) {
   return __riscv_vcompress_vm_u64m4_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vcompress_vm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vbool8_t vs1, size_t vl) {
+vuint64m8_t test_vcompress_vm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                       vbool8_t vs1, size_t vl) {
   return __riscv_vcompress_vm_u64m8_tu(vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vdiv.c b/auto-generated/policy_funcs/llvm-api-tests/vdiv.c
index ee545c0dc..ba85efa6e 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vdiv.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vdiv.c
@@ -5,706 +5,891 @@
 #include <riscv_vector.h>
 
-vint8mf8_t test_vdiv_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vdiv_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1,
+                                 size_t vl) {
   return __riscv_vdiv_vv_i8mf8_tu(vd, vs2, vs1, vl);
 }
 
-vint8mf8_t test_vdiv_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint8mf8_t test_vdiv_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1,
+                                 size_t vl) {
   return __riscv_vdiv_vx_i8mf8_tu(vd, vs2, rs1, vl);
 }
 
-vint8mf4_t test_vdiv_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vdiv_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1,
+                                 size_t vl) {
   return __riscv_vdiv_vv_i8mf4_tu(vd, vs2, vs1, vl);
 }
 
-vint8mf4_t test_vdiv_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) {
+vint8mf4_t test_vdiv_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1,
+                                 size_t vl) {
   return __riscv_vdiv_vx_i8mf4_tu(vd, vs2, rs1, vl);
 }
 
-vint8mf2_t test_vdiv_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vdiv_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1,
+                                 size_t vl) {
   return __riscv_vdiv_vv_i8mf2_tu(vd, vs2, vs1, vl);
 }
 
-vint8mf2_t test_vdiv_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) {
+vint8mf2_t test_vdiv_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1,
+                                 size_t vl) {
   return __riscv_vdiv_vx_i8mf2_tu(vd, vs2, rs1, vl);
 }
 
-vint8m1_t test_vdiv_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vdiv_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1,
+                               size_t vl) {
   return __riscv_vdiv_vv_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vdiv_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) {
+vint8m1_t test_vdiv_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1,
+                               size_t vl) {
   return __riscv_vdiv_vx_i8m1_tu(vd, vs2, rs1, vl);
 }
 
-vint8m2_t test_vdiv_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) {
+vint8m2_t test_vdiv_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1,
+                               size_t vl) {
   return __riscv_vdiv_vv_i8m2_tu(vd, vs2, vs1, vl);
 }
 
-vint8m2_t test_vdiv_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) {
+vint8m2_t test_vdiv_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1,
+                               size_t vl) {
   return __riscv_vdiv_vx_i8m2_tu(vd, vs2, rs1, vl);
 }
 
-vint8m4_t test_vdiv_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t
vs1, size_t vl) { +vint8m4_t test_vdiv_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vdiv_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vdiv_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vdiv_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vdiv_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vdiv_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vdiv_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vdiv_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vdiv_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vdiv_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vdiv_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vdiv_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vdiv_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vdiv_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vdiv_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vdiv_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vdiv_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vdiv_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vdiv_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vdiv_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vdiv_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vdiv_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vdiv_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vdiv_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vdiv_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vdiv_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vdiv_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vdiv_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vdiv_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vdiv_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vdiv_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vdiv_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vdiv_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vdiv_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vdiv_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vdiv_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vdiv_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vdiv_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vdiv_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vdiv_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vdiv_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vdiv_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vdiv_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vdiv_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t 
test_vdiv_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vdiv_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vdiv_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vdiv_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vdiv_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vdiv_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vdiv_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vdiv_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vdiv_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vdiv_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vdiv_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vdiv_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vdiv_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vdiv_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vdiv_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vdiv_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vdiv_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vdiv_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vdiv_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vdiv_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vdiv_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vdiv_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vdiv_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vdiv_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vdiv_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vdiv_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vdiv_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vdiv_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vdiv_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vdiv_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vdiv_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vdiv_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vdiv_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vdiv_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vdiv_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vdiv_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vdiv_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vdiv_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vdiv_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vdiv_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vdiv_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vdiv_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vdiv_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vdiv_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { 
return __riscv_vdiv_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vdiv_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vdiv_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vdiv_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vdiv_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vdiv_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vdiv_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vdiv_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vdiv_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vdiv_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vdiv_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vdiv_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vdiv_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vdiv_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vdiv_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vdiv_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vdiv_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vdiv_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vdiv_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vdiv_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vdiv_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vdiv_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vdiv_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vdiv_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vdiv_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vdiv_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vdiv_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vdiv_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vdiv_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vdiv_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vdiv_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vdiv_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vdiv_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vdiv_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vdiv_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { 
+vint8m2_t test_vdiv_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vdiv_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vdiv_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vdiv_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vdiv_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vdiv_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vdiv_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vdiv_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vdiv_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vdiv_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vdiv_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vdiv_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vdiv_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vdiv_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vdiv_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vdiv_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vdiv_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vdiv_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vdiv_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vdiv_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vdiv_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vdiv_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vdiv_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vdiv_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vdiv_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vdiv_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vdiv_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vdiv_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { 
+vint16m4_t test_vdiv_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vdiv_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vdiv_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vdiv_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vdiv_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vdiv_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vdiv_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vdiv_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vdiv_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vdiv_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vdiv_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vdiv_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vdiv_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vdiv_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vdiv_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vdiv_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vdiv_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vdiv_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vdiv_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vdiv_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vdiv_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vdiv_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vdiv_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vdiv_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vdiv_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vdiv_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vdiv_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t 
vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vdiv_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vdiv_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vdiv_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vdiv_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vdiv_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vdiv_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vdiv_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vdiv_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vdiv_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vdiv_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vdiv_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vdiv_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vdiv_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vdiv_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vdiv_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vdiv_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vdiv_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vdiv_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vdiv_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vdiv_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vdiv_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vdiv_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vdiv_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vdiv_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vdiv_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vdiv_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vdiv_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vdiv_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vdiv_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vdiv_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vdiv_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vdiv_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vdiv_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vdiv_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vdiv_vv_i8mf2_tumu(vbool16_t vm, 
vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vdiv_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vdiv_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vdiv_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vdiv_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vdiv_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vdiv_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vdiv_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vdiv_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vdiv_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vdiv_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vdiv_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vdiv_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vdiv_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vdiv_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vdiv_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vdiv_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vdiv_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vdiv_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vdiv_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vdiv_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vdiv_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vdiv_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vdiv_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vdiv_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vdiv_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vdiv_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vdiv_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vdiv_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vdiv_vx_i16mf2_tumu(vbool32_t vm, 
vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vdiv_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vdiv_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vdiv_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vdiv_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vdiv_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vdiv_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vdiv_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vdiv_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vdiv_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vdiv_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vdiv_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vdiv_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vdiv_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vdiv_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vdiv_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vdiv_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vdiv_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vdiv_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vdiv_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vdiv_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vdiv_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vdiv_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vdiv_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vdiv_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vdiv_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vdiv_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return 
__riscv_vdiv_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vdiv_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vdiv_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vdiv_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vdiv_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vdiv_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vdiv_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vdiv_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vdiv_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vdiv_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vdiv_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vdiv_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vdiv_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vdiv_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vdiv_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vdiv_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vdiv_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vdiv_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vdiv_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vdiv_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vdiv_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vdiv_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vdiv_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vdiv_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vdiv_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vdiv_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vdiv_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vdiv_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vdiv_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vdiv_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vdiv_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vdiv_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, 
vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vdiv_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vdiv_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vdiv_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vdiv_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vdiv_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vdiv_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vdiv_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vdiv_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vdiv_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vdiv_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vdiv_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vdiv_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vdiv_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vdiv_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vdiv_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vdiv_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vdiv_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vdiv_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vdiv_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vdiv_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vdiv_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vdiv_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vdiv_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vdiv_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vdiv_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vdiv_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vdiv_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vdiv_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vdiv_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vdiv_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vdiv_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m4_mu(vm, vd, vs2, rs1, 
vl); } -vint8m8_t test_vdiv_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vdiv_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vdiv_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vdiv_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vdiv_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vdiv_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vdiv_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vdiv_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vdiv_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vdiv_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vdiv_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vdiv_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vdiv_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vdiv_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vdiv_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vdiv_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vdiv_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vdiv_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vdiv_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vdiv_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vdiv_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vdiv_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vdiv_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vdiv_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vdiv_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vdiv_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vdiv_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vdiv_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vdiv_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vdiv_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vdiv_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } 
-vint16m8_t test_vdiv_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vdiv_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vdiv_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vdiv_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vdiv_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vdiv_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vdiv_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vdiv_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vdiv_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vdiv_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vdiv_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vdiv_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vdiv_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vdiv_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vdiv_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vdiv_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vdiv_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vdiv_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vdiv_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vdiv_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vdiv_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vdiv_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vdiv_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vdiv_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vdiv_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vdiv_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vdiv_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vdiv_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vdiv_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vdiv_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vdiv_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vdiv_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } 
-vint64m2_t test_vdiv_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) {
+vint64m2_t test_vdiv_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2,
+                                 vint64m2_t vs1, size_t vl) {
   return __riscv_vdiv_vv_i64m2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vint64m2_t test_vdiv_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) {
+vint64m2_t test_vdiv_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2,
+                                 int64_t rs1, size_t vl) {
   return __riscv_vdiv_vx_i64m2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vint64m4_t test_vdiv_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) {
+vint64m4_t test_vdiv_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+                                 vint64m4_t vs1, size_t vl) {
   return __riscv_vdiv_vv_i64m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vint64m4_t test_vdiv_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) {
+vint64m4_t test_vdiv_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+                                 int64_t rs1, size_t vl) {
   return __riscv_vdiv_vx_i64m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vint64m8_t test_vdiv_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) {
+vint64m8_t test_vdiv_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+                                 vint64m8_t vs1, size_t vl) {
   return __riscv_vdiv_vv_i64m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vint64m8_t test_vdiv_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) {
+vint64m8_t test_vdiv_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+                                 int64_t rs1, size_t vl) {
   return __riscv_vdiv_vx_i64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vdivu.c b/auto-generated/policy_funcs/llvm-api-tests/vdivu.c
index 72320a598..70b222306 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vdivu.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vdivu.c
@@ -5,706 +5,939 @@
 #include <riscv_vector.h>
 
-vuint8mf8_t test_vdivu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vdivu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+                                   vuint8mf8_t vs1, size_t vl) {
   return __riscv_vdivu_vv_u8mf8_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf8_t test_vdivu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf8_t test_vdivu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1,
+                                   size_t vl) {
   return __riscv_vdivu_vx_u8mf8_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf4_t test_vdivu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vdivu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+                                   vuint8mf4_t vs1, size_t vl) {
   return __riscv_vdivu_vv_u8mf4_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf4_t test_vdivu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf4_t test_vdivu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1,
+                                   size_t vl) {
   return __riscv_vdivu_vx_u8mf4_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf2_t test_vdivu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vdivu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+                                   vuint8mf2_t vs1, size_t vl) {
   return __riscv_vdivu_vv_u8mf2_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf2_t test_vdivu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf2_t test_vdivu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1,
+                                   size_t vl) {
   return __riscv_vdivu_vx_u8mf2_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m1_t test_vdivu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t
test_vdivu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vdivu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vdivu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vdivu_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vdivu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vdivu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vdivu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vdivu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vdivu_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vdivu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vdivu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vdivu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vdivu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vdivu_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vdivu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vdivu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vdivu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vdivu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vdivu_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vdivu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vdivu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vdivu_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vdivu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vdivu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vdivu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vdivu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vdivu_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vdivu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vdivu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vdivu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vdivu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vdivu_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vdivu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vdivu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vdivu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vdivu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t 
vl) { return __riscv_vdivu_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vdivu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vdivu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vdivu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vdivu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vdivu_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vdivu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vdivu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vdivu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vdivu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vdivu_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vdivu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vdivu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vdivu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vdivu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vdivu_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vdivu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vdivu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vdivu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vdivu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vdivu_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vdivu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vdivu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vdivu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vdivu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vdivu_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vdivu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vdivu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vdivu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vdivu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vdivu_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vdivu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vdivu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vdivu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vdivu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return 
__riscv_vdivu_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vdivu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vdivu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vdivu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vdivu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vdivu_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vdivu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vdivu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vdivu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vdivu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vdivu_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vdivu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vdivu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vdivu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vdivu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vdivu_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vdivu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vdivu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vdivu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vdivu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vdivu_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vdivu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vdivu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vdivu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vdivu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vdivu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vdivu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vdivu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vdivu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vdivu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vdivu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vdivu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t 
vl) { +vuint8mf2_t test_vdivu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vdivu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vdivu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vdivu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vdivu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vdivu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vdivu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vdivu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vdivu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vdivu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vdivu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vdivu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vdivu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vdivu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vdivu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vdivu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vdivu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vdivu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vdivu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vdivu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vdivu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vdivu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vdivu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vdivu_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vdivu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vdivu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } 
-vuint16mf2_t test_vdivu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vdivu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vdivu_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vdivu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vdivu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vdivu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vdivu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vdivu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vdivu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vdivu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vdivu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vdivu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vdivu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vdivu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vdivu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vdivu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vdivu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vdivu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vdivu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vdivu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vdivu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vdivu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vdivu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vdivu_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vdivu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vdivu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vdivu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, 
uint32_t rs1, size_t vl) { +vuint32m1_t test_vdivu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vdivu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vdivu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vdivu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vdivu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vdivu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vdivu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vdivu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vdivu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vdivu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vdivu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vdivu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vdivu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vdivu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vdivu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vdivu_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vdivu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vdivu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vdivu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vdivu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vdivu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vdivu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vdivu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vdivu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vdivu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vdivu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, 
uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vdivu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vdivu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vdivu_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vdivu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vdivu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vdivu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vdivu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vdivu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vdivu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vdivu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vdivu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vdivu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vdivu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vdivu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vdivu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vdivu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vdivu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vdivu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vdivu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vdivu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vdivu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vdivu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vdivu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vdivu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vdivu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vdivu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t 
vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vdivu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vdivu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vdivu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vdivu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vdivu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vdivu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vdivu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vdivu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vdivu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vdivu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vdivu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vdivu_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vdivu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vdivu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vdivu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vdivu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vdivu_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vdivu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vdivu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vdivu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vdivu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vdivu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vdivu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vdivu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vdivu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vdivu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t 
test_vdivu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vdivu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vdivu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vdivu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vdivu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vdivu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vdivu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vdivu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vdivu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vdivu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vdivu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vdivu_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vdivu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vdivu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vdivu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vdivu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vdivu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vdivu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vdivu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vdivu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vdivu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vdivu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vdivu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vdivu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vdivu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vdivu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + 
vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vdivu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vdivu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vdivu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vdivu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vdivu_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vdivu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vdivu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vdivu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vdivu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vdivu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vdivu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vdivu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vdivu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vdivu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vdivu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vdivu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vdivu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vdivu_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vdivu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vdivu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vdivu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vdivu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vdivu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vdivu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vdivu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vdivu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8mf4_mu(vm, vd, vs2, 
vs1, vl); } -vuint8mf4_t test_vdivu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vdivu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vdivu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vdivu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vdivu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vdivu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vdivu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vdivu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vdivu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vdivu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vdivu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vdivu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vdivu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vdivu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vdivu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vdivu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vdivu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vdivu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vdivu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vdivu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vdivu_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vdivu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vdivu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vdivu_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vdivu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vdivu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vdivu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vdivu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return 
__riscv_vdivu_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vdivu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vdivu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vdivu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vdivu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vdivu_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vdivu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vdivu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vdivu_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vdivu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vdivu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vdivu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vdivu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vdivu_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vdivu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vdivu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vdivu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vdivu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vdivu_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vdivu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vdivu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vdivu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vdivu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vdivu_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vdivu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vdivu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vdivu_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vdivu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vdivu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vdivu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vdivu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vdivu_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vdivu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, 
vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vdivu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vdivu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vdivu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vdivu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vdivu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vdivu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vdivu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vdivu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vdivu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vdivu_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vdivu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vdivu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vdivu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vdivu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vdivu_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vdivu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vdivu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vdivu_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vdivu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vdivu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vdivu_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vdivu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vdivu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vdivu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vdivu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vdivu_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vdivu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vdivu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vdivu_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vdivu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vdivu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t 
vl) {
  return __riscv_vdivu_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
}

-vuint64m4_t test_vdivu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vdivu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+ vuint64m4_t vs2, uint64_t rs1, size_t vl) {
  return __riscv_vdivu_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
}

-vuint64m8_t test_vdivu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vdivu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+ vuint64m8_t vs1, size_t vl) {
  return __riscv_vdivu_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
}

-vuint64m8_t test_vdivu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vdivu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+ uint64_t rs1, size_t vl) {
  return __riscv_vdivu_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
}

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfabs.c b/auto-generated/policy_funcs/llvm-api-tests/vfabs.c
index 1d7c68a47..dcadfa2ba 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfabs.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfabs.c
@@ -6,242 +6,302 @@
#include <riscv_vector.h>

-vfloat16mf4_t test_vfabs_v_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfabs_v_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2,
+ size_t vl) {
  return __riscv_vfabs_v_f16mf4_tu(vd, vs2, vl);
}

-vfloat16mf2_t test_vfabs_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfabs_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2,
+ size_t vl) {
  return __riscv_vfabs_v_f16mf2_tu(vd, vs2, vl);
}

-vfloat16m1_t test_vfabs_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfabs_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+ size_t vl) {
  return __riscv_vfabs_v_f16m1_tu(vd, vs2, vl);
}

-vfloat16m2_t test_vfabs_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfabs_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2,
+ size_t vl) {
  return __riscv_vfabs_v_f16m2_tu(vd, vs2, vl);
}

-vfloat16m4_t test_vfabs_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfabs_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2,
+ size_t vl) {
  return __riscv_vfabs_v_f16m4_tu(vd, vs2, vl);
}

-vfloat16m8_t test_vfabs_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfabs_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2,
+ size_t vl) {
  return __riscv_vfabs_v_f16m8_tu(vd, vs2, vl);
}

-vfloat32mf2_t test_vfabs_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfabs_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2,
+ size_t vl) {
  return __riscv_vfabs_v_f32mf2_tu(vd, vs2, vl);
}

-vfloat32m1_t test_vfabs_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfabs_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2,
+ size_t vl) {
  return __riscv_vfabs_v_f32m1_tu(vd, vs2, vl);
}

-vfloat32m2_t test_vfabs_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfabs_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2,
+ size_t vl) {
  return __riscv_vfabs_v_f32m2_tu(vd, vs2, vl);
}

-vfloat32m4_t test_vfabs_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfabs_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2,
+ size_t vl) {
  return __riscv_vfabs_v_f32m4_tu(vd, vs2, vl);
}

-vfloat32m8_t test_vfabs_v_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfabs_v_f32m8_tu(vfloat32m8_t vd,
vfloat32m8_t vs2, + size_t vl) { return __riscv_vfabs_v_f32m8_tu(vd, vs2, vl); } -vfloat64m1_t test_vfabs_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfabs_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfabs_v_f64m1_tu(vd, vs2, vl); } -vfloat64m2_t test_vfabs_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfabs_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfabs_v_f64m2_tu(vd, vs2, vl); } -vfloat64m4_t test_vfabs_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfabs_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfabs_v_f64m4_tu(vd, vs2, vl); } -vfloat64m8_t test_vfabs_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfabs_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfabs_v_f64m8_tu(vd, vs2, vl); } -vfloat16mf4_t test_vfabs_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfabs_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfabs_v_f16mf4_tum(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfabs_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfabs_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfabs_v_f16mf2_tum(vm, vd, vs2, vl); } -vfloat16m1_t test_vfabs_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfabs_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfabs_v_f16m1_tum(vm, vd, vs2, vl); } -vfloat16m2_t test_vfabs_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfabs_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfabs_v_f16m2_tum(vm, vd, vs2, vl); } -vfloat16m4_t test_vfabs_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfabs_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfabs_v_f16m4_tum(vm, vd, vs2, vl); } -vfloat16m8_t test_vfabs_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfabs_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfabs_v_f16m8_tum(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfabs_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfabs_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfabs_v_f32mf2_tum(vm, vd, vs2, vl); } -vfloat32m1_t test_vfabs_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfabs_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfabs_v_f32m1_tum(vm, vd, vs2, vl); } -vfloat32m2_t test_vfabs_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfabs_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfabs_v_f32m2_tum(vm, vd, vs2, vl); } -vfloat32m4_t test_vfabs_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfabs_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfabs_v_f32m4_tum(vm, vd, vs2, vl); } -vfloat32m8_t test_vfabs_v_f32m8_tum(vbool4_t vm, 
vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfabs_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfabs_v_f32m8_tum(vm, vd, vs2, vl); } -vfloat64m1_t test_vfabs_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfabs_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfabs_v_f64m1_tum(vm, vd, vs2, vl); } -vfloat64m2_t test_vfabs_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfabs_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfabs_v_f64m2_tum(vm, vd, vs2, vl); } -vfloat64m4_t test_vfabs_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfabs_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfabs_v_f64m4_tum(vm, vd, vs2, vl); } -vfloat64m8_t test_vfabs_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfabs_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfabs_v_f64m8_tum(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfabs_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfabs_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfabs_v_f16mf4_tumu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfabs_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfabs_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfabs_v_f16mf2_tumu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfabs_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfabs_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfabs_v_f16m1_tumu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfabs_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfabs_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfabs_v_f16m2_tumu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfabs_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfabs_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfabs_v_f16m4_tumu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfabs_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfabs_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfabs_v_f16m8_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfabs_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfabs_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfabs_v_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfabs_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfabs_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfabs_v_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfabs_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfabs_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfabs_v_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t 
test_vfabs_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfabs_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfabs_v_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfabs_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfabs_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfabs_v_f32m8_tumu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfabs_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfabs_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfabs_v_f64m1_tumu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfabs_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfabs_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfabs_v_f64m2_tumu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfabs_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfabs_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfabs_v_f64m4_tumu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfabs_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfabs_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfabs_v_f64m8_tumu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfabs_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfabs_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfabs_v_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfabs_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfabs_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfabs_v_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfabs_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfabs_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfabs_v_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfabs_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfabs_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfabs_v_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfabs_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfabs_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfabs_v_f16m4_mu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfabs_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfabs_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfabs_v_f16m8_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfabs_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfabs_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfabs_v_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfabs_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfabs_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfabs_v_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t 
test_vfabs_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfabs_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd,
+ vfloat32m2_t vs2, size_t vl) {
  return __riscv_vfabs_v_f32m2_mu(vm, vd, vs2, vl);
}

-vfloat32m4_t test_vfabs_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfabs_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd,
+ vfloat32m4_t vs2, size_t vl) {
  return __riscv_vfabs_v_f32m4_mu(vm, vd, vs2, vl);
}

-vfloat32m8_t test_vfabs_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfabs_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+ vfloat32m8_t vs2, size_t vl) {
  return __riscv_vfabs_v_f32m8_mu(vm, vd, vs2, vl);
}

-vfloat64m1_t test_vfabs_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfabs_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+ vfloat64m1_t vs2, size_t vl) {
  return __riscv_vfabs_v_f64m1_mu(vm, vd, vs2, vl);
}

-vfloat64m2_t test_vfabs_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfabs_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+ vfloat64m2_t vs2, size_t vl) {
  return __riscv_vfabs_v_f64m2_mu(vm, vd, vs2, vl);
}

-vfloat64m4_t test_vfabs_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfabs_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+ vfloat64m4_t vs2, size_t vl) {
  return __riscv_vfabs_v_f64m4_mu(vm, vd, vs2, vl);
}

-vfloat64m8_t test_vfabs_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfabs_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+ vfloat64m8_t vs2, size_t vl) {
  return __riscv_vfabs_v_f64m8_mu(vm, vd, vs2, vl);
}

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfadd.c b/auto-generated/policy_funcs/llvm-api-tests/vfadd.c
index b3f192232..9109a9bae 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfadd.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfadd.c
@@ -6,962 +6,1349 @@
#include <riscv_vector.h>

-vfloat16mf4_t test_vfadd_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat16mf4_t test_vfadd_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2,
+ vfloat16mf4_t vs1, size_t vl) {
  return __riscv_vfadd_vv_f16mf4_tu(vd, vs2, vs1, vl);
}

-vfloat16mf4_t test_vfadd_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat16mf4_t test_vfadd_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2,
+ _Float16 rs1, size_t vl) {
  return __riscv_vfadd_vf_f16mf4_tu(vd, vs2, rs1, vl);
}

-vfloat16mf2_t test_vfadd_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat16mf2_t test_vfadd_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2,
+ vfloat16mf2_t vs1, size_t vl) {
  return __riscv_vfadd_vv_f16mf2_tu(vd, vs2, vs1, vl);
}

-vfloat16mf2_t test_vfadd_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat16mf2_t test_vfadd_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2,
+ _Float16 rs1, size_t vl) {
  return __riscv_vfadd_vf_f16mf2_tu(vd, vs2, rs1, vl);
}

-vfloat16m1_t test_vfadd_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfadd_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+ vfloat16m1_t vs1, size_t vl) {
  return __riscv_vfadd_vv_f16m1_tu(vd, vs2, vs1, vl);
}

-vfloat16m1_t test_vfadd_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m1_t test_vfadd_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+ _Float16 rs1, size_t
vl) {
  return __riscv_vfadd_vf_f16m1_tu(vd, vs2, rs1, vl);
}

-vfloat16m2_t test_vfadd_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat16m2_t test_vfadd_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2,
+ vfloat16m2_t vs1, size_t vl) {
  return __riscv_vfadd_vv_f16m2_tu(vd, vs2, vs1, vl);
}

-vfloat16m2_t test_vfadd_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m2_t test_vfadd_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2,
+ _Float16 rs1, size_t vl) {
  return __riscv_vfadd_vf_f16m2_tu(vd, vs2, rs1, vl);
}

-vfloat16m4_t test_vfadd_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat16m4_t test_vfadd_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2,
+ vfloat16m4_t vs1, size_t vl) {
  return __riscv_vfadd_vv_f16m4_tu(vd, vs2, vs1, vl);
}

-vfloat16m4_t test_vfadd_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m4_t test_vfadd_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2,
+ _Float16 rs1, size_t vl) {
  return __riscv_vfadd_vf_f16m4_tu(vd, vs2, rs1, vl);
}

-vfloat16m8_t test_vfadd_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) {
+vfloat16m8_t test_vfadd_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2,
+ vfloat16m8_t vs1, size_t vl) {
  return __riscv_vfadd_vv_f16m8_tu(vd, vs2, vs1, vl);
}

-vfloat16m8_t test_vfadd_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m8_t test_vfadd_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2,
+ _Float16 rs1, size_t vl) {
  return __riscv_vfadd_vf_f16m8_tu(vd, vs2, rs1, vl);
}

-vfloat32mf2_t test_vfadd_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat32mf2_t test_vfadd_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2,
+ vfloat32mf2_t vs1, size_t vl) {
  return __riscv_vfadd_vv_f32mf2_tu(vd, vs2, vs1, vl);
}

-vfloat32mf2_t test_vfadd_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat32mf2_t test_vfadd_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2,
+ float rs1, size_t vl) {
  return __riscv_vfadd_vf_f32mf2_tu(vd, vs2, rs1, vl);
}

-vfloat32m1_t test_vfadd_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfadd_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2,
+ vfloat32m1_t vs1, size_t vl) {
  return __riscv_vfadd_vv_f32m1_tu(vd, vs2, vs1, vl);
}

-vfloat32m1_t test_vfadd_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat32m1_t test_vfadd_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2,
+ float rs1, size_t vl) {
  return __riscv_vfadd_vf_f32m1_tu(vd, vs2, rs1, vl);
}

-vfloat32m2_t test_vfadd_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat32m2_t test_vfadd_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2,
+ vfloat32m2_t vs1, size_t vl) {
  return __riscv_vfadd_vv_f32m2_tu(vd, vs2, vs1, vl);
}

-vfloat32m2_t test_vfadd_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat32m2_t test_vfadd_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2,
+ float rs1, size_t vl) {
  return __riscv_vfadd_vf_f32m2_tu(vd, vs2, rs1, vl);
}

-vfloat32m4_t test_vfadd_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat32m4_t test_vfadd_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2,
+ vfloat32m4_t vs1, size_t vl) {
  return __riscv_vfadd_vv_f32m4_tu(vd, vs2, vs1, vl);
}

-vfloat32m4_t test_vfadd_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat32m4_t
test_vfadd_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfadd_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfadd_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfadd_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vfadd_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfadd_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfadd_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfadd_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfadd_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfadd_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfadd_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfadd_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfadd_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfadd_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfadd_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfadd_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfadd_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfadd_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfadd_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfadd_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfadd_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfadd_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return __riscv_vfadd_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfadd_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfadd_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfadd_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfadd_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfadd_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfadd_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfadd_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfadd_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfadd_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfadd_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfadd_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16mf4_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfadd_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfadd_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfadd_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfadd_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) 
{ return __riscv_vfadd_vv_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfadd_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfadd_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfadd_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfadd_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfadd_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfadd_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfadd_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfadd_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfadd_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfadd_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfadd_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfadd_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfadd_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfadd_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfadd_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfadd_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfadd_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfadd_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfadd_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfadd_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfadd_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfadd_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfadd_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfadd_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { 
return __riscv_vfadd_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfadd_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfadd_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfadd_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfadd_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfadd_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfadd_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfadd_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfadd_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfadd_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfadd_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfadd_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfadd_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfadd_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfadd_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfadd_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfadd_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfadd_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfadd_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfadd_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfadd_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfadd_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfadd_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfadd_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfadd_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } 
-vfloat64m4_t test_vfadd_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfadd_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfadd_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfadd_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfadd_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfadd_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfadd_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfadd_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfadd_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfadd_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfadd_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfadd_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfadd_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfadd_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfadd_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfadd_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfadd_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfadd_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfadd_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfadd_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfadd_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfadd_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfadd_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfadd_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return 
__riscv_vfadd_vv_f16m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfadd_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfadd_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfadd_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfadd_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfadd_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfadd_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfadd_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfadd_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfadd_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfadd_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfadd_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfadd_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfadd_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfadd_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfadd_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfadd_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfadd_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfadd_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfadd_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfadd_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfadd_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfadd_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfadd_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfadd_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return 
__riscv_vfadd_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfadd_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfadd_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfadd_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfadd_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfadd_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfadd_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfadd_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfadd_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfadd_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfadd_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfadd_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfadd_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfadd_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfadd_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfadd_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfadd_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfadd_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfadd_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfadd_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfadd_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfadd_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfadd_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfadd_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfadd_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return 
__riscv_vfadd_vv_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfadd_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfadd_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfadd_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfadd_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfadd_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfadd_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfadd_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfadd_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfadd_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfadd_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfadd_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfadd_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfadd_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfadd_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfadd_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfadd_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfadd_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfadd_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfadd_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfadd_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfadd_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfadd_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfadd_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfadd_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfadd_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfadd_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfadd_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfadd_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m1_mu(vm, vd, vs2, vs1, vl); 
} -vfloat32m1_t test_vfadd_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfadd_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfadd_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfadd_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfadd_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfadd_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfadd_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfadd_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfadd_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfadd_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfadd_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfadd_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfadd_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfadd_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfadd_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfadd_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfadd_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfadd_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfadd_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfadd_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfadd_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfadd_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfadd_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfadd_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfadd_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, 
size_t vl) { +vfloat64m4_t test_vfadd_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m4_mu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfadd_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfadd_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfadd_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfadd_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfadd_vf_f64m8_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfadd_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfadd_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfadd_vv_f16mf4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfadd_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfadd_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfadd_vf_f16mf4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfadd_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfadd_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfadd_vv_f16mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfadd_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfadd_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfadd_vf_f16mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfadd_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfadd_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfadd_vv_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfadd_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfadd_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfadd_vf_f16m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfadd_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfadd_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfadd_vv_f16m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfadd_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfadd_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfadd_vf_f16m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfadd_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfadd_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfadd_vv_f16m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfadd_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfadd_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, 
size_t vl) { return __riscv_vfadd_vf_f16m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfadd_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfadd_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfadd_vv_f16m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfadd_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfadd_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfadd_vf_f16m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfadd_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfadd_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfadd_vv_f32mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfadd_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfadd_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfadd_vf_f32mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfadd_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfadd_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfadd_vv_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfadd_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfadd_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfadd_vf_f32m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfadd_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfadd_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfadd_vv_f32m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfadd_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfadd_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfadd_vf_f32m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfadd_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfadd_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfadd_vv_f32m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfadd_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfadd_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfadd_vf_f32m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfadd_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfadd_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vfadd_vv_f32m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfadd_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfadd_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfadd_vf_f32m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfadd_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t 
vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfadd_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfadd_vv_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfadd_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfadd_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfadd_vf_f64m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfadd_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfadd_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfadd_vv_f64m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfadd_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfadd_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfadd_vf_f64m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfadd_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfadd_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return __riscv_vfadd_vv_f64m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfadd_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfadd_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfadd_vf_f64m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfadd_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfadd_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfadd_vv_f64m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfadd_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfadd_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfadd_vf_f64m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfadd_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfadd_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16mf4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfadd_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfadd_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfadd_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfadd_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfadd_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfadd_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t 
test_vfadd_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfadd_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfadd_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfadd_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfadd_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfadd_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfadd_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfadd_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfadd_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfadd_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfadd_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfadd_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfadd_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfadd_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfadd_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfadd_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfadd_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfadd_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfadd_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfadd_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfadd_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfadd_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t 
test_vfadd_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfadd_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfadd_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfadd_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfadd_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfadd_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfadd_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfadd_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfadd_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfadd_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfadd_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfadd_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfadd_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfadd_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfadd_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfadd_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfadd_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfadd_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfadd_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfadd_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfadd_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfadd_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfadd_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, 
vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfadd_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfadd_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfadd_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfadd_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfadd_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfadd_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfadd_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfadd_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfadd_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16mf4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfadd_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfadd_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfadd_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfadd_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfadd_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfadd_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfadd_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfadd_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfadd_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfadd_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfadd_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfadd_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t 
test_vfadd_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfadd_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfadd_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfadd_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfadd_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfadd_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfadd_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfadd_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfadd_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfadd_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfadd_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfadd_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfadd_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfadd_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfadd_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfadd_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfadd_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfadd_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfadd_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfadd_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfadd_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfadd_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t 
test_vfadd_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfadd_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfadd_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfadd_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfadd_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfadd_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfadd_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfadd_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfadd_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfadd_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfadd_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfadd_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfadd_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfadd_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfadd_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfadd_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfadd_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfadd_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfadd_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfadd_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfadd_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfadd_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t 
test_vfadd_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfadd_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfadd_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfadd_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16mf4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfadd_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfadd_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfadd_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfadd_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfadd_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfadd_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfadd_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfadd_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfadd_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfadd_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfadd_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfadd_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfadd_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfadd_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfadd_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfadd_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfadd_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfadd_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t 
test_vfadd_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfadd_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f16m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfadd_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfadd_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfadd_vf_f16m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfadd_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfadd_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfadd_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfadd_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfadd_vf_f32mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfadd_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfadd_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfadd_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfadd_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfadd_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfadd_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfadd_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfadd_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfadd_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfadd_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfadd_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfadd_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfadd_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfadd_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f32m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfadd_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t 
vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfadd_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfadd_vf_f32m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfadd_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfadd_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfadd_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfadd_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfadd_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfadd_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfadd_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfadd_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfadd_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfadd_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfadd_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfadd_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfadd_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfadd_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfadd_vv_f64m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfadd_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfadd_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfadd_vf_f64m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfclass.c b/auto-generated/policy_funcs/llvm-api-tests/vfclass.c index 9e6eb6962..6243dcb90 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfclass.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfclass.c @@ -6,242 +6,302 @@ #include -vuint16mf4_t test_vfclass_v_u16mf4_tu(vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfclass_v_u16mf4_tu(vuint16mf4_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfclass_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vfclass_v_u16mf2_tu(vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfclass_v_u16mf2_tu(vuint16mf2_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfclass_v_u16mf2_tu(vd, vs2, vl); } -vuint16m1_t test_vfclass_v_u16m1_tu(vuint16m1_t 
vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfclass_v_u16m1_tu(vuint16m1_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfclass_v_u16m1_tu(vd, vs2, vl); } -vuint16m2_t test_vfclass_v_u16m2_tu(vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfclass_v_u16m2_tu(vuint16m2_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfclass_v_u16m2_tu(vd, vs2, vl); } -vuint16m4_t test_vfclass_v_u16m4_tu(vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfclass_v_u16m4_tu(vuint16m4_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfclass_v_u16m4_tu(vd, vs2, vl); } -vuint16m8_t test_vfclass_v_u16m8_tu(vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfclass_v_u16m8_tu(vuint16m8_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfclass_v_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vfclass_v_u32mf2_tu(vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfclass_v_u32mf2_tu(vuint32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfclass_v_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vfclass_v_u32m1_tu(vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfclass_v_u32m1_tu(vuint32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfclass_v_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vfclass_v_u32m2_tu(vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfclass_v_u32m2_tu(vuint32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfclass_v_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vfclass_v_u32m4_tu(vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfclass_v_u32m4_tu(vuint32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfclass_v_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vfclass_v_u32m8_tu(vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfclass_v_u32m8_tu(vuint32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfclass_v_u32m8_tu(vd, vs2, vl); } -vuint64m1_t test_vfclass_v_u64m1_tu(vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfclass_v_u64m1_tu(vuint64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfclass_v_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vfclass_v_u64m2_tu(vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfclass_v_u64m2_tu(vuint64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfclass_v_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vfclass_v_u64m4_tu(vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfclass_v_u64m4_tu(vuint64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfclass_v_u64m4_tu(vd, vs2, vl); } -vuint64m8_t test_vfclass_v_u64m8_tu(vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfclass_v_u64m8_tu(vuint64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfclass_v_u64m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vfclass_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfclass_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfclass_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vfclass_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfclass_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfclass_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vfclass_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfclass_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return 
__riscv_vfclass_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vfclass_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfclass_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfclass_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vfclass_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfclass_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfclass_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vfclass_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfclass_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfclass_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vfclass_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfclass_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfclass_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vfclass_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfclass_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfclass_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vfclass_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfclass_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfclass_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vfclass_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfclass_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfclass_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vfclass_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfclass_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfclass_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vfclass_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfclass_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfclass_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vfclass_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfclass_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfclass_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vfclass_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfclass_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfclass_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vfclass_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfclass_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfclass_v_u64m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vfclass_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfclass_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfclass_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vfclass_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfclass_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + 
vfloat16mf2_t vs2, size_t vl) { return __riscv_vfclass_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vfclass_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfclass_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfclass_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vfclass_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfclass_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfclass_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vfclass_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfclass_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfclass_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vfclass_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfclass_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfclass_v_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfclass_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfclass_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfclass_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vfclass_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfclass_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfclass_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vfclass_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfclass_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfclass_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vfclass_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfclass_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfclass_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vfclass_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfclass_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfclass_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vfclass_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfclass_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfclass_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vfclass_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfclass_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfclass_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vfclass_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfclass_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfclass_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vfclass_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfclass_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfclass_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vfclass_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { 
+vuint16mf4_t test_vfclass_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfclass_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vfclass_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfclass_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfclass_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vfclass_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfclass_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfclass_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vfclass_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfclass_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfclass_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vfclass_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfclass_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfclass_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vfclass_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfclass_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfclass_v_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfclass_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfclass_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfclass_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vfclass_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfclass_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfclass_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vfclass_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfclass_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfclass_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vfclass_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfclass_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfclass_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vfclass_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfclass_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfclass_v_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vfclass_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfclass_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfclass_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vfclass_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfclass_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfclass_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vfclass_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfclass_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfclass_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vfclass_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t 
test_vfclass_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfclass_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfcvt.c b/auto-generated/policy_funcs/llvm-api-tests/vfcvt.c index 9ad815e83..2a9bf75d8 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfcvt.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfcvt.c @@ -6,1922 +6,2402 @@ #include -vint16mf4_t test_vfcvt_x_f_v_i16mf4_tu(vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t test_vfcvt_x_f_v_i16mf4_tu(vint16mf4_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16mf4_tu(vd, vs2, vl); } -vint16mf2_t test_vfcvt_x_f_v_i16mf2_tu(vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_x_f_v_i16mf2_tu(vint16mf2_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16mf2_tu(vd, vs2, vl); } -vint16m1_t test_vfcvt_x_f_v_i16m1_tu(vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_x_f_v_i16m1_tu(vint16m1_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16m1_tu(vd, vs2, vl); } -vint16m2_t test_vfcvt_x_f_v_i16m2_tu(vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_x_f_v_i16m2_tu(vint16m2_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16m2_tu(vd, vs2, vl); } -vint16m4_t test_vfcvt_x_f_v_i16m4_tu(vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_x_f_v_i16m4_tu(vint16m4_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16m4_tu(vd, vs2, vl); } -vint16m8_t test_vfcvt_x_f_v_i16m8_tu(vint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vint16m8_t test_vfcvt_x_f_v_i16m8_tu(vint16m8_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_tu(vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_tu(vuint16mf4_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_tu(vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_tu(vuint16mf2_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf2_tu(vd, vs2, vl); } -vuint16m1_t test_vfcvt_xu_f_v_u16m1_tu(vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfcvt_xu_f_v_u16m1_tu(vuint16m1_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16m1_tu(vd, vs2, vl); } -vuint16m2_t test_vfcvt_xu_f_v_u16m2_tu(vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_xu_f_v_u16m2_tu(vuint16m2_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16m2_tu(vd, vs2, vl); } -vuint16m4_t test_vfcvt_xu_f_v_u16m4_tu(vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_xu_f_v_u16m4_tu(vuint16m4_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16m4_tu(vd, vs2, vl); } -vuint16m8_t test_vfcvt_xu_f_v_u16m8_tu(vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_xu_f_v_u16m8_tu(vuint16m8_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16m8_tu(vd, vs2, vl); } -vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_tu(vfloat16mf4_t vd, vint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_tu(vfloat16mf4_t vd, vint16mf4_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16mf4_tu(vd, vs2, vl); } -vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_tu(vfloat16mf2_t vd, vint16mf2_t vs2, size_t vl) { +vfloat16mf2_t 
test_vfcvt_f_x_v_f16mf2_tu(vfloat16mf2_t vd, vint16mf2_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16mf2_tu(vd, vs2, vl); } -vfloat16m1_t test_vfcvt_f_x_v_f16m1_tu(vfloat16m1_t vd, vint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_x_v_f16m1_tu(vfloat16m1_t vd, vint16m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16m1_tu(vd, vs2, vl); } -vfloat16m2_t test_vfcvt_f_x_v_f16m2_tu(vfloat16m2_t vd, vint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_x_v_f16m2_tu(vfloat16m2_t vd, vint16m2_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16m2_tu(vd, vs2, vl); } -vfloat16m4_t test_vfcvt_f_x_v_f16m4_tu(vfloat16m4_t vd, vint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_x_v_f16m4_tu(vfloat16m4_t vd, vint16m4_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16m4_tu(vd, vs2, vl); } -vfloat16m8_t test_vfcvt_f_x_v_f16m8_tu(vfloat16m8_t vd, vint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_x_v_f16m8_tu(vfloat16m8_t vd, vint16m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16m8_tu(vd, vs2, vl); } -vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_tu(vfloat16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_tu(vfloat16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf4_tu(vd, vs2, vl); } -vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_tu(vfloat16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_tu(vfloat16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf2_tu(vd, vs2, vl); } -vfloat16m1_t test_vfcvt_f_xu_v_f16m1_tu(vfloat16m1_t vd, vuint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_xu_v_f16m1_tu(vfloat16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f16m1_tu(vd, vs2, vl); } -vfloat16m2_t test_vfcvt_f_xu_v_f16m2_tu(vfloat16m2_t vd, vuint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_xu_v_f16m2_tu(vfloat16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f16m2_tu(vd, vs2, vl); } -vfloat16m4_t test_vfcvt_f_xu_v_f16m4_tu(vfloat16m4_t vd, vuint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_xu_v_f16m4_tu(vfloat16m4_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f16m4_tu(vd, vs2, vl); } -vfloat16m8_t test_vfcvt_f_xu_v_f16m8_tu(vfloat16m8_t vd, vuint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_xu_v_f16m8_tu(vfloat16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f16m8_tu(vd, vs2, vl); } -vint32mf2_t test_vfcvt_x_f_v_i32mf2_tu(vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_x_f_v_i32mf2_tu(vint32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i32mf2_tu(vd, vs2, vl); } -vint32m1_t test_vfcvt_x_f_v_i32m1_tu(vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_x_f_v_i32m1_tu(vint32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i32m1_tu(vd, vs2, vl); } -vint32m2_t test_vfcvt_x_f_v_i32m2_tu(vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_x_f_v_i32m2_tu(vint32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i32m2_tu(vd, vs2, vl); } -vint32m4_t test_vfcvt_x_f_v_i32m4_tu(vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_x_f_v_i32m4_tu(vint32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i32m4_tu(vd, vs2, vl); } -vint32m8_t test_vfcvt_x_f_v_i32m8_tu(vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_x_f_v_i32m8_tu(vint32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i32m8_tu(vd, vs2, vl); 
} -vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_tu(vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_tu(vuint32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vfcvt_xu_f_v_u32m1_tu(vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_xu_f_v_u32m1_tu(vuint32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vfcvt_xu_f_v_u32m2_tu(vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_xu_f_v_u32m2_tu(vuint32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vfcvt_xu_f_v_u32m4_tu(vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_xu_f_v_u32m4_tu(vuint32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vfcvt_xu_f_v_u32m8_tu(vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_xu_f_v_u32m8_tu(vuint32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u32m8_tu(vd, vs2, vl); } -vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_tu(vfloat32mf2_t vd, vint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_tu(vfloat32mf2_t vd, vint32mf2_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f32mf2_tu(vd, vs2, vl); } -vfloat32m1_t test_vfcvt_f_x_v_f32m1_tu(vfloat32m1_t vd, vint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_x_v_f32m1_tu(vfloat32m1_t vd, vint32m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f32m1_tu(vd, vs2, vl); } -vfloat32m2_t test_vfcvt_f_x_v_f32m2_tu(vfloat32m2_t vd, vint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_x_v_f32m2_tu(vfloat32m2_t vd, vint32m2_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f32m2_tu(vd, vs2, vl); } -vfloat32m4_t test_vfcvt_f_x_v_f32m4_tu(vfloat32m4_t vd, vint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_x_v_f32m4_tu(vfloat32m4_t vd, vint32m4_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f32m4_tu(vd, vs2, vl); } -vfloat32m8_t test_vfcvt_f_x_v_f32m8_tu(vfloat32m8_t vd, vint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_x_v_f32m8_tu(vfloat32m8_t vd, vint32m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f32m8_tu(vd, vs2, vl); } -vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_tu(vfloat32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_tu(vfloat32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f32mf2_tu(vd, vs2, vl); } -vfloat32m1_t test_vfcvt_f_xu_v_f32m1_tu(vfloat32m1_t vd, vuint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_xu_v_f32m1_tu(vfloat32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f32m1_tu(vd, vs2, vl); } -vfloat32m2_t test_vfcvt_f_xu_v_f32m2_tu(vfloat32m2_t vd, vuint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_xu_v_f32m2_tu(vfloat32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f32m2_tu(vd, vs2, vl); } -vfloat32m4_t test_vfcvt_f_xu_v_f32m4_tu(vfloat32m4_t vd, vuint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_xu_v_f32m4_tu(vfloat32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f32m4_tu(vd, vs2, vl); } -vfloat32m8_t test_vfcvt_f_xu_v_f32m8_tu(vfloat32m8_t vd, vuint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_xu_v_f32m8_tu(vfloat32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f32m8_tu(vd, vs2, vl); } -vint64m1_t test_vfcvt_x_f_v_i64m1_tu(vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t 
test_vfcvt_x_f_v_i64m1_tu(vint64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i64m1_tu(vd, vs2, vl); } -vint64m2_t test_vfcvt_x_f_v_i64m2_tu(vint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_x_f_v_i64m2_tu(vint64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i64m2_tu(vd, vs2, vl); } -vint64m4_t test_vfcvt_x_f_v_i64m4_tu(vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_x_f_v_i64m4_tu(vint64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i64m4_tu(vd, vs2, vl); } -vint64m8_t test_vfcvt_x_f_v_i64m8_tu(vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_x_f_v_i64m8_tu(vint64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i64m8_tu(vd, vs2, vl); } -vuint64m1_t test_vfcvt_xu_f_v_u64m1_tu(vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_xu_f_v_u64m1_tu(vuint64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vfcvt_xu_f_v_u64m2_tu(vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_xu_f_v_u64m2_tu(vuint64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vfcvt_xu_f_v_u64m4_tu(vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_xu_f_v_u64m4_tu(vuint64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u64m4_tu(vd, vs2, vl); } -vuint64m8_t test_vfcvt_xu_f_v_u64m8_tu(vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_xu_f_v_u64m8_tu(vuint64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u64m8_tu(vd, vs2, vl); } -vfloat64m1_t test_vfcvt_f_x_v_f64m1_tu(vfloat64m1_t vd, vint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_x_v_f64m1_tu(vfloat64m1_t vd, vint64m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f64m1_tu(vd, vs2, vl); } -vfloat64m2_t test_vfcvt_f_x_v_f64m2_tu(vfloat64m2_t vd, vint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_x_v_f64m2_tu(vfloat64m2_t vd, vint64m2_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f64m2_tu(vd, vs2, vl); } -vfloat64m4_t test_vfcvt_f_x_v_f64m4_tu(vfloat64m4_t vd, vint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_x_v_f64m4_tu(vfloat64m4_t vd, vint64m4_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f64m4_tu(vd, vs2, vl); } -vfloat64m8_t test_vfcvt_f_x_v_f64m8_tu(vfloat64m8_t vd, vint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_x_v_f64m8_tu(vfloat64m8_t vd, vint64m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f64m8_tu(vd, vs2, vl); } -vfloat64m1_t test_vfcvt_f_xu_v_f64m1_tu(vfloat64m1_t vd, vuint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_xu_v_f64m1_tu(vfloat64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f64m1_tu(vd, vs2, vl); } -vfloat64m2_t test_vfcvt_f_xu_v_f64m2_tu(vfloat64m2_t vd, vuint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_xu_v_f64m2_tu(vfloat64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f64m2_tu(vd, vs2, vl); } -vfloat64m4_t test_vfcvt_f_xu_v_f64m4_tu(vfloat64m4_t vd, vuint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_xu_v_f64m4_tu(vfloat64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f64m4_tu(vd, vs2, vl); } -vfloat64m8_t test_vfcvt_f_xu_v_f64m8_tu(vfloat64m8_t vd, vuint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_xu_v_f64m8_tu(vfloat64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f64m8_tu(vd, vs2, vl); } -vint16mf4_t 
test_vfcvt_x_f_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t test_vfcvt_x_f_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf4_tum(vm, vd, vs2, vl); } -vint16mf2_t test_vfcvt_x_f_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_x_f_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf2_tum(vm, vd, vs2, vl); } -vint16m1_t test_vfcvt_x_f_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_x_f_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m1_tum(vm, vd, vs2, vl); } -vint16m2_t test_vfcvt_x_f_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_x_f_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m2_tum(vm, vd, vs2, vl); } -vint16m4_t test_vfcvt_x_f_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_x_f_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m4_tum(vm, vd, vs2, vl); } -vint16m8_t test_vfcvt_x_f_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vint16m8_t test_vfcvt_x_f_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vfcvt_xu_f_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfcvt_xu_f_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vfcvt_xu_f_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_xu_f_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vfcvt_xu_f_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_xu_f_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vfcvt_xu_f_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_xu_f_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m8_tum(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf4_tum(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vint16mf2_t vs2, size_t vl) { +vfloat16mf2_t 
test_vfcvt_f_x_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf2_tum(vm, vd, vs2, vl); } -vfloat16m1_t test_vfcvt_f_x_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_x_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m1_tum(vm, vd, vs2, vl); } -vfloat16m2_t test_vfcvt_f_x_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_x_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m2_tum(vm, vd, vs2, vl); } -vfloat16m4_t test_vfcvt_f_x_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_x_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m4_tum(vm, vd, vs2, vl); } -vfloat16m8_t test_vfcvt_f_x_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_x_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m8_tum(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf4_tum(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf2_tum(vm, vd, vs2, vl); } -vfloat16m1_t test_vfcvt_f_xu_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vuint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_xu_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m1_tum(vm, vd, vs2, vl); } -vfloat16m2_t test_vfcvt_f_xu_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vuint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_xu_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m2_tum(vm, vd, vs2, vl); } -vfloat16m4_t test_vfcvt_f_xu_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vuint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_xu_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m4_tum(vm, vd, vs2, vl); } -vfloat16m8_t test_vfcvt_f_xu_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vuint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_xu_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m8_tum(vm, vd, vs2, vl); } -vint32mf2_t test_vfcvt_x_f_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_x_f_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32mf2_tum(vm, vd, vs2, vl); } -vint32m1_t test_vfcvt_x_f_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_x_f_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m1_tum(vm, vd, vs2, vl); } -vint32m2_t test_vfcvt_x_f_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_x_f_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return 
__riscv_vfcvt_x_f_v_i32m2_tum(vm, vd, vs2, vl); } -vint32m4_t test_vfcvt_x_f_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_x_f_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m4_tum(vm, vd, vs2, vl); } -vint32m8_t test_vfcvt_x_f_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_x_f_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vfcvt_xu_f_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_xu_f_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vfcvt_xu_f_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_xu_f_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vfcvt_xu_f_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_xu_f_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vfcvt_xu_f_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_xu_f_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m8_tum(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32mf2_tum(vm, vd, vs2, vl); } -vfloat32m1_t test_vfcvt_f_x_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_x_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m1_tum(vm, vd, vs2, vl); } -vfloat32m2_t test_vfcvt_f_x_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_x_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m2_tum(vm, vd, vs2, vl); } -vfloat32m4_t test_vfcvt_f_x_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_x_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m4_tum(vm, vd, vs2, vl); } -vfloat32m8_t test_vfcvt_f_x_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_x_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m8_tum(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32mf2_tum(vm, vd, vs2, vl); } -vfloat32m1_t test_vfcvt_f_xu_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vuint32m1_t 
vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_xu_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m1_tum(vm, vd, vs2, vl); } -vfloat32m2_t test_vfcvt_f_xu_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vuint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_xu_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m2_tum(vm, vd, vs2, vl); } -vfloat32m4_t test_vfcvt_f_xu_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vuint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_xu_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m4_tum(vm, vd, vs2, vl); } -vfloat32m8_t test_vfcvt_f_xu_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vuint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_xu_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m8_tum(vm, vd, vs2, vl); } -vint64m1_t test_vfcvt_x_f_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t test_vfcvt_x_f_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m1_tum(vm, vd, vs2, vl); } -vint64m2_t test_vfcvt_x_f_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_x_f_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m2_tum(vm, vd, vs2, vl); } -vint64m4_t test_vfcvt_x_f_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_x_f_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m4_tum(vm, vd, vs2, vl); } -vint64m8_t test_vfcvt_x_f_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_x_f_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vfcvt_xu_f_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_xu_f_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vfcvt_xu_f_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_xu_f_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vfcvt_xu_f_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_xu_f_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vfcvt_xu_f_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_xu_f_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m8_tum(vm, vd, vs2, vl); } -vfloat64m1_t test_vfcvt_f_x_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_x_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m1_tum(vm, vd, vs2, vl); } -vfloat64m2_t test_vfcvt_f_x_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_x_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vint64m2_t vs2, size_t vl) { return 
__riscv_vfcvt_f_x_v_f64m2_tum(vm, vd, vs2, vl); } -vfloat64m4_t test_vfcvt_f_x_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_x_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m4_tum(vm, vd, vs2, vl); } -vfloat64m8_t test_vfcvt_f_x_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_x_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m8_tum(vm, vd, vs2, vl); } -vfloat64m1_t test_vfcvt_f_xu_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vuint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_xu_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m1_tum(vm, vd, vs2, vl); } -vfloat64m2_t test_vfcvt_f_xu_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vuint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_xu_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m2_tum(vm, vd, vs2, vl); } -vfloat64m4_t test_vfcvt_f_xu_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vuint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_xu_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m4_tum(vm, vd, vs2, vl); } -vfloat64m8_t test_vfcvt_f_xu_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vuint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_xu_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m8_tum(vm, vd, vs2, vl); } -vint16mf4_t test_vfcvt_x_f_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t test_vfcvt_x_f_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf4_tumu(vm, vd, vs2, vl); } -vint16mf2_t test_vfcvt_x_f_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_x_f_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf2_tumu(vm, vd, vs2, vl); } -vint16m1_t test_vfcvt_x_f_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_x_f_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m1_tumu(vm, vd, vs2, vl); } -vint16m2_t test_vfcvt_x_f_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_x_f_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m2_tumu(vm, vd, vs2, vl); } -vint16m4_t test_vfcvt_x_f_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_x_f_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m4_tumu(vm, vd, vs2, vl); } -vint16m8_t test_vfcvt_x_f_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vint16m8_t test_vfcvt_x_f_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, 
vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vfcvt_xu_f_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfcvt_xu_f_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vfcvt_xu_f_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_xu_f_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vfcvt_xu_f_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_xu_f_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vfcvt_xu_f_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_xu_f_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m8_tumu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf4_tumu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf2_tumu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfcvt_f_x_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_x_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m1_tumu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfcvt_f_x_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_x_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m2_tumu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfcvt_f_x_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_x_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m4_tumu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfcvt_f_x_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_x_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m8_tumu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf4_tumu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf2_tumu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfcvt_f_xu_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vuint16m1_t vs2, size_t vl) { +vfloat16m1_t 
test_vfcvt_f_xu_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m1_tumu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfcvt_f_xu_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vuint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_xu_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m2_tumu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfcvt_f_xu_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vuint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_xu_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m4_tumu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfcvt_f_xu_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vuint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_xu_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m8_tumu(vm, vd, vs2, vl); } -vint32mf2_t test_vfcvt_x_f_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_x_f_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32mf2_tumu(vm, vd, vs2, vl); } -vint32m1_t test_vfcvt_x_f_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_x_f_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m1_tumu(vm, vd, vs2, vl); } -vint32m2_t test_vfcvt_x_f_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_x_f_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m2_tumu(vm, vd, vs2, vl); } -vint32m4_t test_vfcvt_x_f_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_x_f_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m4_tumu(vm, vd, vs2, vl); } -vint32m8_t test_vfcvt_x_f_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_x_f_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vfcvt_xu_f_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_xu_f_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vfcvt_xu_f_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_xu_f_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vfcvt_xu_f_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_xu_f_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vfcvt_xu_f_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_xu_f_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return 
__riscv_vfcvt_xu_f_v_u32m8_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfcvt_f_x_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_x_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfcvt_f_x_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_x_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfcvt_f_x_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_x_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfcvt_f_x_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_x_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m8_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfcvt_f_xu_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vuint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_xu_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfcvt_f_xu_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vuint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_xu_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfcvt_f_xu_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vuint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_xu_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfcvt_f_xu_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vuint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_xu_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m8_tumu(vm, vd, vs2, vl); } -vint64m1_t test_vfcvt_x_f_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t test_vfcvt_x_f_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m1_tumu(vm, vd, vs2, vl); } -vint64m2_t test_vfcvt_x_f_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_x_f_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m2_tumu(vm, vd, vs2, vl); } -vint64m4_t test_vfcvt_x_f_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_x_f_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m4_tumu(vm, vd, vs2, vl); } -vint64m8_t 
test_vfcvt_x_f_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_x_f_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vfcvt_xu_f_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_xu_f_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vfcvt_xu_f_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_xu_f_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vfcvt_xu_f_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_xu_f_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vfcvt_xu_f_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_xu_f_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m8_tumu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfcvt_f_x_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_x_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m1_tumu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfcvt_f_x_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_x_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m2_tumu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfcvt_f_x_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_x_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m4_tumu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfcvt_f_x_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_x_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m8_tumu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfcvt_f_xu_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vuint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_xu_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m1_tumu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfcvt_f_xu_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vuint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_xu_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m2_tumu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfcvt_f_xu_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vuint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_xu_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m4_tumu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfcvt_f_xu_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vuint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_xu_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m8_tumu(vm, vd, vs2, vl); } -vint16mf4_t test_vfcvt_x_f_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t 
test_vfcvt_x_f_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf4_mu(vm, vd, vs2, vl); } -vint16mf2_t test_vfcvt_x_f_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_x_f_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf2_mu(vm, vd, vs2, vl); } -vint16m1_t test_vfcvt_x_f_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_x_f_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m1_mu(vm, vd, vs2, vl); } -vint16m2_t test_vfcvt_x_f_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_x_f_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m2_mu(vm, vd, vs2, vl); } -vint16m4_t test_vfcvt_x_f_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_x_f_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m4_mu(vm, vd, vs2, vl); } -vint16m8_t test_vfcvt_x_f_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vint16m8_t test_vfcvt_x_f_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vfcvt_xu_f_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfcvt_xu_f_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vfcvt_xu_f_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_xu_f_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vfcvt_xu_f_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_xu_f_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vfcvt_xu_f_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_xu_f_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m8_mu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t 
test_vfcvt_f_x_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_x_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfcvt_f_x_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_x_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfcvt_f_x_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_x_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m4_mu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfcvt_f_x_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_x_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m8_mu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfcvt_f_xu_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vuint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_xu_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfcvt_f_xu_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vuint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_xu_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfcvt_f_xu_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vuint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_xu_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m4_mu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfcvt_f_xu_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vuint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_xu_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m8_mu(vm, vd, vs2, vl); } -vint32mf2_t test_vfcvt_x_f_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_x_f_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32mf2_mu(vm, vd, vs2, vl); } -vint32m1_t test_vfcvt_x_f_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_x_f_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m1_mu(vm, vd, vs2, vl); } -vint32m2_t test_vfcvt_x_f_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_x_f_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m2_mu(vm, vd, vs2, vl); } -vint32m4_t test_vfcvt_x_f_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_x_f_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + vfloat32m4_t 
vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m4_mu(vm, vd, vs2, vl); } -vint32m8_t test_vfcvt_x_f_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_x_f_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vfcvt_xu_f_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_xu_f_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vfcvt_xu_f_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_xu_f_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vfcvt_xu_f_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_xu_f_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vfcvt_xu_f_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_xu_f_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m8_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfcvt_f_x_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_x_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfcvt_f_x_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_x_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfcvt_f_x_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_x_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m4_mu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfcvt_f_x_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_x_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m8_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfcvt_f_xu_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vuint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_xu_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfcvt_f_xu_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vuint32m2_t vs2, 
size_t vl) { +vfloat32m2_t test_vfcvt_f_xu_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfcvt_f_xu_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vuint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_xu_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m4_mu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfcvt_f_xu_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vuint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_xu_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m8_mu(vm, vd, vs2, vl); } -vint64m1_t test_vfcvt_x_f_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t test_vfcvt_x_f_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m1_mu(vm, vd, vs2, vl); } -vint64m2_t test_vfcvt_x_f_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_x_f_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m2_mu(vm, vd, vs2, vl); } -vint64m4_t test_vfcvt_x_f_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_x_f_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m4_mu(vm, vd, vs2, vl); } -vint64m8_t test_vfcvt_x_f_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_x_f_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vfcvt_xu_f_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_xu_f_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vfcvt_xu_f_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_xu_f_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vfcvt_xu_f_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_xu_f_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vfcvt_xu_f_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_xu_f_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m8_mu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfcvt_f_x_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_x_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m1_mu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfcvt_f_x_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_x_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m2_mu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfcvt_f_x_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_x_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m4_mu(vm, vd, vs2, vl); } -vfloat64m8_t 
test_vfcvt_f_x_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_x_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m8_mu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfcvt_f_xu_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vuint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_xu_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m1_mu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfcvt_f_xu_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vuint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_xu_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m2_mu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfcvt_f_xu_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vuint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_xu_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m4_mu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfcvt_f_xu_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vuint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_xu_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m8_mu(vm, vd, vs2, vl); } -vint16mf4_t test_vfcvt_x_f_v_i16mf4_rm_tu(vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t test_vfcvt_x_f_v_i16mf4_rm_tu(vint16mf4_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf2_t test_vfcvt_x_f_v_i16mf2_rm_tu(vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_x_f_v_i16mf2_rm_tu(vint16mf2_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m1_t test_vfcvt_x_f_v_i16m1_rm_tu(vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_x_f_v_i16m1_rm_tu(vint16m1_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m2_t test_vfcvt_x_f_v_i16m2_rm_tu(vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_x_f_v_i16m2_rm_tu(vint16m2_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m4_t test_vfcvt_x_f_v_i16m4_rm_tu(vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_x_f_v_i16m4_rm_tu(vint16m4_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m8_t test_vfcvt_x_f_v_i16m8_rm_tu(vint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vint16m8_t test_vfcvt_x_f_v_i16m8_rm_tu(vint16m8_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i16m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_rm_tu(vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_rm_tu(vuint16mf4_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_rm_tu(vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_rm_tu(vuint16mf2_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m1_t test_vfcvt_xu_f_v_u16m1_rm_tu(vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfcvt_xu_f_v_u16m1_rm_tu(vuint16m1_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16m1_rm_tu(vd, 
vs2, __RISCV_FRM_RNE, vl); } -vuint16m2_t test_vfcvt_xu_f_v_u16m2_rm_tu(vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_xu_f_v_u16m2_rm_tu(vuint16m2_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m4_t test_vfcvt_xu_f_v_u16m4_rm_tu(vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_xu_f_v_u16m4_rm_tu(vuint16m4_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m8_t test_vfcvt_xu_f_v_u16m8_rm_tu(vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_xu_f_v_u16m8_rm_tu(vuint16m8_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u16m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_rm_tu(vfloat16mf4_t vd, vint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_rm_tu(vfloat16mf4_t vd, vint16mf4_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_rm_tu(vfloat16mf2_t vd, vint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_rm_tu(vfloat16mf2_t vd, vint16mf2_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfcvt_f_x_v_f16m1_rm_tu(vfloat16m1_t vd, vint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_x_v_f16m1_rm_tu(vfloat16m1_t vd, vint16m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfcvt_f_x_v_f16m2_rm_tu(vfloat16m2_t vd, vint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_x_v_f16m2_rm_tu(vfloat16m2_t vd, vint16m2_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfcvt_f_x_v_f16m4_rm_tu(vfloat16m4_t vd, vint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_x_v_f16m4_rm_tu(vfloat16m4_t vd, vint16m4_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfcvt_f_x_v_f16m8_rm_tu(vfloat16m8_t vd, vint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_x_v_f16m8_rm_tu(vfloat16m8_t vd, vint16m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f16m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_rm_tu(vfloat16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_rm_tu(vfloat16mf4_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_rm_tu(vfloat16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_rm_tu(vfloat16mf2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfcvt_f_xu_v_f16m1_rm_tu(vfloat16m1_t vd, vuint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_xu_v_f16m1_rm_tu(vfloat16m1_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfcvt_f_xu_v_f16m2_rm_tu(vfloat16m2_t vd, vuint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_xu_v_f16m2_rm_tu(vfloat16m2_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfcvt_f_xu_v_f16m4_rm_tu(vfloat16m4_t vd, vuint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_xu_v_f16m4_rm_tu(vfloat16m4_t vd, vuint16m4_t vs2, + size_t vl) { 
return __riscv_vfcvt_f_xu_v_f16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfcvt_f_xu_v_f16m8_rm_tu(vfloat16m8_t vd, vuint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_xu_v_f16m8_rm_tu(vfloat16m8_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f16m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32mf2_t test_vfcvt_x_f_v_i32mf2_rm_tu(vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_x_f_v_i32mf2_rm_tu(vint32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfcvt_x_f_v_i32m1_rm_tu(vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_x_f_v_i32m1_rm_tu(vint32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfcvt_x_f_v_i32m2_rm_tu(vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_x_f_v_i32m2_rm_tu(vint32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfcvt_x_f_v_i32m4_rm_tu(vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_x_f_v_i32m4_rm_tu(vint32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m8_t test_vfcvt_x_f_v_i32m8_rm_tu(vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_x_f_v_i32m8_rm_tu(vint32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i32m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_rm_tu(vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_rm_tu(vuint32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfcvt_xu_f_v_u32m1_rm_tu(vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_xu_f_v_u32m1_rm_tu(vuint32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfcvt_xu_f_v_u32m2_rm_tu(vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_xu_f_v_u32m2_rm_tu(vuint32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfcvt_xu_f_v_u32m4_rm_tu(vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_xu_f_v_u32m4_rm_tu(vuint32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m8_t test_vfcvt_xu_f_v_u32m8_rm_tu(vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_xu_f_v_u32m8_rm_tu(vuint32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u32m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_rm_tu(vfloat32mf2_t vd, vint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_rm_tu(vfloat32mf2_t vd, vint32mf2_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfcvt_f_x_v_f32m1_rm_tu(vfloat32m1_t vd, vint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_x_v_f32m1_rm_tu(vfloat32m1_t vd, vint32m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfcvt_f_x_v_f32m2_rm_tu(vfloat32m2_t vd, vint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_x_v_f32m2_rm_tu(vfloat32m2_t vd, vint32m2_t vs2, + size_t vl) 
{ return __riscv_vfcvt_f_x_v_f32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfcvt_f_x_v_f32m4_rm_tu(vfloat32m4_t vd, vint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_x_v_f32m4_rm_tu(vfloat32m4_t vd, vint32m4_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfcvt_f_x_v_f32m8_rm_tu(vfloat32m8_t vd, vint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_x_v_f32m8_rm_tu(vfloat32m8_t vd, vint32m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f32m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_rm_tu(vfloat32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_rm_tu(vfloat32mf2_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfcvt_f_xu_v_f32m1_rm_tu(vfloat32m1_t vd, vuint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_xu_v_f32m1_rm_tu(vfloat32m1_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfcvt_f_xu_v_f32m2_rm_tu(vfloat32m2_t vd, vuint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_xu_v_f32m2_rm_tu(vfloat32m2_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfcvt_f_xu_v_f32m4_rm_tu(vfloat32m4_t vd, vuint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_xu_v_f32m4_rm_tu(vfloat32m4_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfcvt_f_xu_v_f32m8_rm_tu(vfloat32m8_t vd, vuint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_xu_v_f32m8_rm_tu(vfloat32m8_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f32m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m1_t test_vfcvt_x_f_v_i64m1_rm_tu(vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t test_vfcvt_x_f_v_i64m1_rm_tu(vint64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i64m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m2_t test_vfcvt_x_f_v_i64m2_rm_tu(vint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_x_f_v_i64m2_rm_tu(vint64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i64m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m4_t test_vfcvt_x_f_v_i64m4_rm_tu(vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_x_f_v_i64m4_rm_tu(vint64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i64m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m8_t test_vfcvt_x_f_v_i64m8_rm_tu(vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_x_f_v_i64m8_rm_tu(vint64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfcvt_x_f_v_i64m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m1_t test_vfcvt_xu_f_v_u64m1_rm_tu(vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_xu_f_v_u64m1_rm_tu(vuint64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u64m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m2_t test_vfcvt_xu_f_v_u64m2_rm_tu(vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_xu_f_v_u64m2_rm_tu(vuint64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u64m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m4_t test_vfcvt_xu_f_v_u64m4_rm_tu(vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_xu_f_v_u64m4_rm_tu(vuint64m4_t vd, vfloat64m4_t vs2, + size_t vl) 
{ return __riscv_vfcvt_xu_f_v_u64m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m8_t test_vfcvt_xu_f_v_u64m8_rm_tu(vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_xu_f_v_u64m8_rm_tu(vuint64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfcvt_xu_f_v_u64m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfcvt_f_x_v_f64m1_rm_tu(vfloat64m1_t vd, vint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_x_v_f64m1_rm_tu(vfloat64m1_t vd, vint64m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f64m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfcvt_f_x_v_f64m2_rm_tu(vfloat64m2_t vd, vint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_x_v_f64m2_rm_tu(vfloat64m2_t vd, vint64m2_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f64m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfcvt_f_x_v_f64m4_rm_tu(vfloat64m4_t vd, vint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_x_v_f64m4_rm_tu(vfloat64m4_t vd, vint64m4_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f64m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfcvt_f_x_v_f64m8_rm_tu(vfloat64m8_t vd, vint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_x_v_f64m8_rm_tu(vfloat64m8_t vd, vint64m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_x_v_f64m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfcvt_f_xu_v_f64m1_rm_tu(vfloat64m1_t vd, vuint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_xu_v_f64m1_rm_tu(vfloat64m1_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f64m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfcvt_f_xu_v_f64m2_rm_tu(vfloat64m2_t vd, vuint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_xu_v_f64m2_rm_tu(vfloat64m2_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f64m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfcvt_f_xu_v_f64m4_rm_tu(vfloat64m4_t vd, vuint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_xu_v_f64m4_rm_tu(vfloat64m4_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f64m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfcvt_f_xu_v_f64m8_rm_tu(vfloat64m8_t vd, vuint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_xu_v_f64m8_rm_tu(vfloat64m8_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vfcvt_f_xu_v_f64m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf4_t test_vfcvt_x_f_v_i16mf4_rm_tum(vbool64_t vm, vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t test_vfcvt_x_f_v_i16mf4_rm_tum(vbool64_t vm, vint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf2_t test_vfcvt_x_f_v_i16mf2_rm_tum(vbool32_t vm, vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_x_f_v_i16mf2_rm_tum(vbool32_t vm, vint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m1_t test_vfcvt_x_f_v_i16m1_rm_tum(vbool16_t vm, vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_x_f_v_i16m1_rm_tum(vbool16_t vm, vint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m2_t test_vfcvt_x_f_v_i16m2_rm_tum(vbool8_t vm, vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_x_f_v_i16m2_rm_tum(vbool8_t vm, vint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m4_t 
test_vfcvt_x_f_v_i16m4_rm_tum(vbool4_t vm, vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_x_f_v_i16m4_rm_tum(vbool4_t vm, vint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m8_t test_vfcvt_x_f_v_i16m8_rm_tum(vbool2_t vm, vint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vint16m8_t test_vfcvt_x_f_v_i16m8_rm_tum(vbool2_t vm, vint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_rm_tum(vbool64_t vm, vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_rm_tum(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_rm_tum(vbool32_t vm, vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_rm_tum(vbool32_t vm, vuint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m1_t test_vfcvt_xu_f_v_u16m1_rm_tum(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfcvt_xu_f_v_u16m1_rm_tum(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m2_t test_vfcvt_xu_f_v_u16m2_rm_tum(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_xu_f_v_u16m2_rm_tum(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m4_t test_vfcvt_xu_f_v_u16m4_rm_tum(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_xu_f_v_u16m4_rm_tum(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m8_t test_vfcvt_xu_f_v_u16m8_rm_tum(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_xu_f_v_u16m8_rm_tum(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfcvt_f_x_v_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_x_v_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfcvt_f_x_v_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_x_v_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfcvt_f_x_v_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vint16m4_t 
vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_x_v_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfcvt_f_x_v_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_x_v_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfcvt_f_xu_v_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vuint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_xu_v_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfcvt_f_xu_v_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vuint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_xu_v_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfcvt_f_xu_v_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vuint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_xu_v_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfcvt_f_xu_v_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vuint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_xu_v_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32mf2_t test_vfcvt_x_f_v_i32mf2_rm_tum(vbool64_t vm, vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_x_f_v_i32mf2_rm_tum(vbool64_t vm, vint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfcvt_x_f_v_i32m1_rm_tum(vbool32_t vm, vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_x_f_v_i32m1_rm_tum(vbool32_t vm, vint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfcvt_x_f_v_i32m2_rm_tum(vbool16_t vm, vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_x_f_v_i32m2_rm_tum(vbool16_t vm, vint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfcvt_x_f_v_i32m4_rm_tum(vbool8_t vm, vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_x_f_v_i32m4_rm_tum(vbool8_t vm, vint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m8_t test_vfcvt_x_f_v_i32m8_rm_tum(vbool4_t vm, vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_x_f_v_i32m8_rm_tum(vbool4_t vm, vint32m8_t 
vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_rm_tum(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_rm_tum(vbool64_t vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfcvt_xu_f_v_u32m1_rm_tum(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_xu_f_v_u32m1_rm_tum(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfcvt_xu_f_v_u32m2_rm_tum(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_xu_f_v_u32m2_rm_tum(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfcvt_xu_f_v_u32m4_rm_tum(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_xu_f_v_u32m4_rm_tum(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m8_t test_vfcvt_xu_f_v_u32m8_rm_tum(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_xu_f_v_u32m8_rm_tum(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfcvt_f_x_v_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_x_v_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfcvt_f_x_v_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_x_v_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfcvt_f_x_v_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_x_v_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfcvt_f_x_v_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_x_v_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfcvt_f_xu_v_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vuint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_xu_v_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vuint32m1_t vs2, size_t vl) { return 
__riscv_vfcvt_f_xu_v_f32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfcvt_f_xu_v_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vuint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_xu_v_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfcvt_f_xu_v_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vuint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_xu_v_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfcvt_f_xu_v_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vuint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_xu_v_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m1_t test_vfcvt_x_f_v_i64m1_rm_tum(vbool64_t vm, vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t test_vfcvt_x_f_v_i64m1_rm_tum(vbool64_t vm, vint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m2_t test_vfcvt_x_f_v_i64m2_rm_tum(vbool32_t vm, vint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_x_f_v_i64m2_rm_tum(vbool32_t vm, vint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m4_t test_vfcvt_x_f_v_i64m4_rm_tum(vbool16_t vm, vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_x_f_v_i64m4_rm_tum(vbool16_t vm, vint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m8_t test_vfcvt_x_f_v_i64m8_rm_tum(vbool8_t vm, vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_x_f_v_i64m8_rm_tum(vbool8_t vm, vint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m1_t test_vfcvt_xu_f_v_u64m1_rm_tum(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_xu_f_v_u64m1_rm_tum(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m2_t test_vfcvt_xu_f_v_u64m2_rm_tum(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_xu_f_v_u64m2_rm_tum(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m4_t test_vfcvt_xu_f_v_u64m4_rm_tum(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_xu_f_v_u64m4_rm_tum(vbool16_t vm, vuint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m8_t test_vfcvt_xu_f_v_u64m8_rm_tum(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_xu_f_v_u64m8_rm_tum(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfcvt_f_x_v_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_x_v_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t 
test_vfcvt_f_x_v_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_x_v_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfcvt_f_x_v_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_x_v_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfcvt_f_x_v_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_x_v_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfcvt_f_xu_v_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vuint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_xu_v_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfcvt_f_xu_v_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vuint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_xu_v_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfcvt_f_xu_v_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vuint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_xu_v_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfcvt_f_xu_v_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vuint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_xu_v_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf4_t test_vfcvt_x_f_v_i16mf4_rm_tumu(vbool64_t vm, vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t test_vfcvt_x_f_v_i16mf4_rm_tumu(vbool64_t vm, vint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf2_t test_vfcvt_x_f_v_i16mf2_rm_tumu(vbool32_t vm, vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_x_f_v_i16mf2_rm_tumu(vbool32_t vm, vint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m1_t test_vfcvt_x_f_v_i16m1_rm_tumu(vbool16_t vm, vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_x_f_v_i16m1_rm_tumu(vbool16_t vm, vint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m2_t test_vfcvt_x_f_v_i16m2_rm_tumu(vbool8_t vm, vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_x_f_v_i16m2_rm_tumu(vbool8_t vm, vint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m4_t test_vfcvt_x_f_v_i16m4_rm_tumu(vbool4_t vm, vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_x_f_v_i16m4_rm_tumu(vbool4_t vm, vint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m8_t test_vfcvt_x_f_v_i16m8_rm_tumu(vbool2_t vm, vint16m8_t vd, vfloat16m8_t vs2, size_t vl) 
{ +vint16m8_t test_vfcvt_x_f_v_i16m8_rm_tumu(vbool2_t vm, vint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_rm_tumu(vbool64_t vm, vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_rm_tumu(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_rm_tumu(vbool32_t vm, vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_rm_tumu(vbool32_t vm, vuint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m1_t test_vfcvt_xu_f_v_u16m1_rm_tumu(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfcvt_xu_f_v_u16m1_rm_tumu(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m2_t test_vfcvt_xu_f_v_u16m2_rm_tumu(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_xu_f_v_u16m2_rm_tumu(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m4_t test_vfcvt_xu_f_v_u16m4_rm_tumu(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_xu_f_v_u16m4_rm_tumu(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m8_t test_vfcvt_xu_f_v_u16m8_rm_tumu(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_xu_f_v_u16m8_rm_tumu(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfcvt_f_x_v_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_x_v_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfcvt_f_x_v_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_x_v_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfcvt_f_x_v_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_x_v_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfcvt_f_x_v_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vint16m8_t vs2, size_t vl) { +vfloat16m8_t 
test_vfcvt_f_x_v_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfcvt_f_xu_v_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vuint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_xu_v_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfcvt_f_xu_v_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vuint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_xu_v_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfcvt_f_xu_v_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vuint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_xu_v_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfcvt_f_xu_v_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vuint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_xu_v_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32mf2_t test_vfcvt_x_f_v_i32mf2_rm_tumu(vbool64_t vm, vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_x_f_v_i32mf2_rm_tumu(vbool64_t vm, vint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfcvt_x_f_v_i32m1_rm_tumu(vbool32_t vm, vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_x_f_v_i32m1_rm_tumu(vbool32_t vm, vint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfcvt_x_f_v_i32m2_rm_tumu(vbool16_t vm, vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_x_f_v_i32m2_rm_tumu(vbool16_t vm, vint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfcvt_x_f_v_i32m4_rm_tumu(vbool8_t vm, vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_x_f_v_i32m4_rm_tumu(vbool8_t vm, vint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m8_t test_vfcvt_x_f_v_i32m8_rm_tumu(vbool4_t vm, vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_x_f_v_i32m8_rm_tumu(vbool4_t vm, vint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_rm_tumu(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_rm_tumu(vbool64_t 
vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfcvt_xu_f_v_u32m1_rm_tumu(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_xu_f_v_u32m1_rm_tumu(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfcvt_xu_f_v_u32m2_rm_tumu(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_xu_f_v_u32m2_rm_tumu(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfcvt_xu_f_v_u32m4_rm_tumu(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_xu_f_v_u32m4_rm_tumu(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m8_t test_vfcvt_xu_f_v_u32m8_rm_tumu(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_xu_f_v_u32m8_rm_tumu(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfcvt_f_x_v_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_x_v_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfcvt_f_x_v_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_x_v_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfcvt_f_x_v_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_x_v_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfcvt_f_x_v_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_x_v_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfcvt_f_xu_v_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vuint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_xu_v_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfcvt_f_xu_v_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vuint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_xu_v_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vuint32m2_t 
vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfcvt_f_xu_v_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vuint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_xu_v_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfcvt_f_xu_v_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vuint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_xu_v_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m1_t test_vfcvt_x_f_v_i64m1_rm_tumu(vbool64_t vm, vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t test_vfcvt_x_f_v_i64m1_rm_tumu(vbool64_t vm, vint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m2_t test_vfcvt_x_f_v_i64m2_rm_tumu(vbool32_t vm, vint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_x_f_v_i64m2_rm_tumu(vbool32_t vm, vint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m4_t test_vfcvt_x_f_v_i64m4_rm_tumu(vbool16_t vm, vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_x_f_v_i64m4_rm_tumu(vbool16_t vm, vint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m8_t test_vfcvt_x_f_v_i64m8_rm_tumu(vbool8_t vm, vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_x_f_v_i64m8_rm_tumu(vbool8_t vm, vint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m1_t test_vfcvt_xu_f_v_u64m1_rm_tumu(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_xu_f_v_u64m1_rm_tumu(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m2_t test_vfcvt_xu_f_v_u64m2_rm_tumu(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_xu_f_v_u64m2_rm_tumu(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m4_t test_vfcvt_xu_f_v_u64m4_rm_tumu(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_xu_f_v_u64m4_rm_tumu(vbool16_t vm, vuint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m8_t test_vfcvt_xu_f_v_u64m8_rm_tumu(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_xu_f_v_u64m8_rm_tumu(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfcvt_f_x_v_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_x_v_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfcvt_f_x_v_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_x_v_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m2_rm_tumu(vm, vd, vs2, 
__RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfcvt_f_x_v_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_x_v_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfcvt_f_x_v_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_x_v_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfcvt_f_xu_v_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vuint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_xu_v_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfcvt_f_xu_v_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vuint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_xu_v_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfcvt_f_xu_v_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vuint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_xu_v_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfcvt_f_xu_v_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vuint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_xu_v_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf4_t test_vfcvt_x_f_v_i16mf4_rm_mu(vbool64_t vm, vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t test_vfcvt_x_f_v_i16mf4_rm_mu(vbool64_t vm, vint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf2_t test_vfcvt_x_f_v_i16mf2_rm_mu(vbool32_t vm, vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_x_f_v_i16mf2_rm_mu(vbool32_t vm, vint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m1_t test_vfcvt_x_f_v_i16m1_rm_mu(vbool16_t vm, vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_x_f_v_i16m1_rm_mu(vbool16_t vm, vint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m2_t test_vfcvt_x_f_v_i16m2_rm_mu(vbool8_t vm, vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_x_f_v_i16m2_rm_mu(vbool8_t vm, vint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m4_t test_vfcvt_x_f_v_i16m4_rm_mu(vbool4_t vm, vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_x_f_v_i16m4_rm_mu(vbool4_t vm, vint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m8_t test_vfcvt_x_f_v_i16m8_rm_mu(vbool2_t vm, vint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vint16m8_t test_vfcvt_x_f_v_i16m8_rm_mu(vbool2_t vm, vint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i16m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_rm_mu(vbool64_t vm, vuint16mf4_t vd, 
vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_xu_f_v_u16mf4_rm_mu(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_rm_mu(vbool32_t vm, vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_xu_f_v_u16mf2_rm_mu(vbool32_t vm, vuint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m1_t test_vfcvt_xu_f_v_u16m1_rm_mu(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfcvt_xu_f_v_u16m1_rm_mu(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m2_t test_vfcvt_xu_f_v_u16m2_rm_mu(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_xu_f_v_u16m2_rm_mu(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m4_t test_vfcvt_xu_f_v_u16m4_rm_mu(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_xu_f_v_u16m4_rm_mu(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m8_t test_vfcvt_xu_f_v_u16m8_rm_mu(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_xu_f_v_u16m8_rm_mu(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u16m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_x_v_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_x_v_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfcvt_f_x_v_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_x_v_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfcvt_f_x_v_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_x_v_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfcvt_f_x_v_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_x_v_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfcvt_f_x_v_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_x_v_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f16m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vuint16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfcvt_f_xu_v_f16mf4_rm_mu(vbool64_t vm, 
vfloat16mf4_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vuint16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfcvt_f_xu_v_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfcvt_f_xu_v_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vuint16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfcvt_f_xu_v_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfcvt_f_xu_v_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vuint16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfcvt_f_xu_v_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfcvt_f_xu_v_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vuint16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfcvt_f_xu_v_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfcvt_f_xu_v_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vuint16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfcvt_f_xu_v_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f16m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32mf2_t test_vfcvt_x_f_v_i32mf2_rm_mu(vbool64_t vm, vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_x_f_v_i32mf2_rm_mu(vbool64_t vm, vint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfcvt_x_f_v_i32m1_rm_mu(vbool32_t vm, vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_x_f_v_i32m1_rm_mu(vbool32_t vm, vint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfcvt_x_f_v_i32m2_rm_mu(vbool16_t vm, vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_x_f_v_i32m2_rm_mu(vbool16_t vm, vint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfcvt_x_f_v_i32m4_rm_mu(vbool8_t vm, vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_x_f_v_i32m4_rm_mu(vbool8_t vm, vint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m8_t test_vfcvt_x_f_v_i32m8_rm_mu(vbool4_t vm, vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_x_f_v_i32m8_rm_mu(vbool4_t vm, vint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i32m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_rm_mu(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_xu_f_v_u32mf2_rm_mu(vbool64_t vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfcvt_xu_f_v_u32m1_rm_mu(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_xu_f_v_u32m1_rm_mu(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m1_rm_mu(vm, vd, vs2, 
__RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfcvt_xu_f_v_u32m2_rm_mu(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_xu_f_v_u32m2_rm_mu(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfcvt_xu_f_v_u32m4_rm_mu(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_xu_f_v_u32m4_rm_mu(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m8_t test_vfcvt_xu_f_v_u32m8_rm_mu(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_xu_f_v_u32m8_rm_mu(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u32m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_x_v_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfcvt_f_x_v_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_x_v_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfcvt_f_x_v_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_x_v_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfcvt_f_x_v_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_x_v_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfcvt_f_x_v_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vint32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_x_v_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f32m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfcvt_f_xu_v_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfcvt_f_xu_v_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vuint32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfcvt_f_xu_v_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfcvt_f_xu_v_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vuint32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfcvt_f_xu_v_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfcvt_f_xu_v_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vuint32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfcvt_f_xu_v_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfcvt_f_xu_v_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vuint32m8_t 
vs2, size_t vl) { +vfloat32m8_t test_vfcvt_f_xu_v_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f32m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m1_t test_vfcvt_x_f_v_i64m1_rm_mu(vbool64_t vm, vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t test_vfcvt_x_f_v_i64m1_rm_mu(vbool64_t vm, vint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m2_t test_vfcvt_x_f_v_i64m2_rm_mu(vbool32_t vm, vint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_x_f_v_i64m2_rm_mu(vbool32_t vm, vint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m4_t test_vfcvt_x_f_v_i64m4_rm_mu(vbool16_t vm, vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_x_f_v_i64m4_rm_mu(vbool16_t vm, vint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m8_t test_vfcvt_x_f_v_i64m8_rm_mu(vbool8_t vm, vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_x_f_v_i64m8_rm_mu(vbool8_t vm, vint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_x_f_v_i64m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m1_t test_vfcvt_xu_f_v_u64m1_rm_mu(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_xu_f_v_u64m1_rm_mu(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m2_t test_vfcvt_xu_f_v_u64m2_rm_mu(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_xu_f_v_u64m2_rm_mu(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m4_t test_vfcvt_xu_f_v_u64m4_rm_mu(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_xu_f_v_u64m4_rm_mu(vbool16_t vm, vuint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m8_t test_vfcvt_xu_f_v_u64m8_rm_mu(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_xu_f_v_u64m8_rm_mu(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_xu_f_v_u64m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfcvt_f_x_v_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_x_v_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfcvt_f_x_v_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_x_v_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfcvt_f_x_v_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_x_v_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_x_v_f64m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfcvt_f_x_v_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_x_v_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vint64m8_t vs2, size_t vl) { return 
__riscv_vfcvt_f_x_v_f64m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfcvt_f_xu_v_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vuint64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfcvt_f_xu_v_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfcvt_f_xu_v_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vuint64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfcvt_f_xu_v_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfcvt_f_xu_v_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vuint64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfcvt_f_xu_v_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfcvt_f_xu_v_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vuint64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfcvt_f_xu_v_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfcvt_f_xu_v_f64m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfcvt_rtz.c b/auto-generated/policy_funcs/llvm-api-tests/vfcvt_rtz.c index a2ece9d38..3d5bad3c1 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfcvt_rtz.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfcvt_rtz.c @@ -6,482 +6,602 @@ #include <riscv_vector.h> -vint16mf4_t test_vfcvt_rtz_x_f_v_i16mf4_tu(vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t test_vfcvt_rtz_x_f_v_i16mf4_tu(vint16mf4_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16mf4_tu(vd, vs2, vl); } -vint16mf2_t test_vfcvt_rtz_x_f_v_i16mf2_tu(vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_rtz_x_f_v_i16mf2_tu(vint16mf2_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16mf2_tu(vd, vs2, vl); } -vint16m1_t test_vfcvt_rtz_x_f_v_i16m1_tu(vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_rtz_x_f_v_i16m1_tu(vint16m1_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m1_tu(vd, vs2, vl); } -vint16m2_t test_vfcvt_rtz_x_f_v_i16m2_tu(vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_rtz_x_f_v_i16m2_tu(vint16m2_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m2_tu(vd, vs2, vl); } -vint16m4_t test_vfcvt_rtz_x_f_v_i16m4_tu(vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_rtz_x_f_v_i16m4_tu(vint16m4_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m4_tu(vd, vs2, vl); } -vint16m8_t test_vfcvt_rtz_x_f_v_i16m8_tu(vint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vint16m8_t test_vfcvt_rtz_x_f_v_i16m8_tu(vint16m8_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vfcvt_rtz_xu_f_v_u16mf4_tu(vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_rtz_xu_f_v_u16mf4_tu(vuint16mf4_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vfcvt_rtz_xu_f_v_u16mf2_tu(vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_rtz_xu_f_v_u16mf2_tu(vuint16mf2_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16mf2_tu(vd, vs2, vl); } -vuint16m1_t test_vfcvt_rtz_xu_f_v_u16m1_tu(vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) {
+vuint16m1_t test_vfcvt_rtz_xu_f_v_u16m1_tu(vuint16m1_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m1_tu(vd, vs2, vl); } -vuint16m2_t test_vfcvt_rtz_xu_f_v_u16m2_tu(vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_rtz_xu_f_v_u16m2_tu(vuint16m2_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m2_tu(vd, vs2, vl); } -vuint16m4_t test_vfcvt_rtz_xu_f_v_u16m4_tu(vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_rtz_xu_f_v_u16m4_tu(vuint16m4_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m4_tu(vd, vs2, vl); } -vuint16m8_t test_vfcvt_rtz_xu_f_v_u16m8_tu(vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_rtz_xu_f_v_u16m8_tu(vuint16m8_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m8_tu(vd, vs2, vl); } -vint32mf2_t test_vfcvt_rtz_x_f_v_i32mf2_tu(vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_rtz_x_f_v_i32mf2_tu(vint32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32mf2_tu(vd, vs2, vl); } -vint32m1_t test_vfcvt_rtz_x_f_v_i32m1_tu(vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_rtz_x_f_v_i32m1_tu(vint32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m1_tu(vd, vs2, vl); } -vint32m2_t test_vfcvt_rtz_x_f_v_i32m2_tu(vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_rtz_x_f_v_i32m2_tu(vint32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m2_tu(vd, vs2, vl); } -vint32m4_t test_vfcvt_rtz_x_f_v_i32m4_tu(vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_rtz_x_f_v_i32m4_tu(vint32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m4_tu(vd, vs2, vl); } -vint32m8_t test_vfcvt_rtz_x_f_v_i32m8_tu(vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_rtz_x_f_v_i32m8_tu(vint32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vfcvt_rtz_xu_f_v_u32mf2_tu(vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_rtz_xu_f_v_u32mf2_tu(vuint32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vfcvt_rtz_xu_f_v_u32m1_tu(vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_rtz_xu_f_v_u32m1_tu(vuint32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vfcvt_rtz_xu_f_v_u32m2_tu(vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_rtz_xu_f_v_u32m2_tu(vuint32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vfcvt_rtz_xu_f_v_u32m4_tu(vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_rtz_xu_f_v_u32m4_tu(vuint32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vfcvt_rtz_xu_f_v_u32m8_tu(vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_rtz_xu_f_v_u32m8_tu(vuint32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m8_tu(vd, vs2, vl); } -vint64m1_t test_vfcvt_rtz_x_f_v_i64m1_tu(vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t test_vfcvt_rtz_x_f_v_i64m1_tu(vint64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m1_tu(vd, vs2, vl); } -vint64m2_t test_vfcvt_rtz_x_f_v_i64m2_tu(vint64m2_t 
vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_rtz_x_f_v_i64m2_tu(vint64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m2_tu(vd, vs2, vl); } -vint64m4_t test_vfcvt_rtz_x_f_v_i64m4_tu(vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_rtz_x_f_v_i64m4_tu(vint64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m4_tu(vd, vs2, vl); } -vint64m8_t test_vfcvt_rtz_x_f_v_i64m8_tu(vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_rtz_x_f_v_i64m8_tu(vint64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m8_tu(vd, vs2, vl); } -vuint64m1_t test_vfcvt_rtz_xu_f_v_u64m1_tu(vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_rtz_xu_f_v_u64m1_tu(vuint64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vfcvt_rtz_xu_f_v_u64m2_tu(vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_rtz_xu_f_v_u64m2_tu(vuint64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vfcvt_rtz_xu_f_v_u64m4_tu(vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_rtz_xu_f_v_u64m4_tu(vuint64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m4_tu(vd, vs2, vl); } -vuint64m8_t test_vfcvt_rtz_xu_f_v_u64m8_tu(vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_rtz_xu_f_v_u64m8_tu(vuint64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m8_tu(vd, vs2, vl); } -vint16mf4_t test_vfcvt_rtz_x_f_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t test_vfcvt_rtz_x_f_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16mf4_tum(vm, vd, vs2, vl); } -vint16mf2_t test_vfcvt_rtz_x_f_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_rtz_x_f_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16mf2_tum(vm, vd, vs2, vl); } -vint16m1_t test_vfcvt_rtz_x_f_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_rtz_x_f_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m1_tum(vm, vd, vs2, vl); } -vint16m2_t test_vfcvt_rtz_x_f_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_rtz_x_f_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m2_tum(vm, vd, vs2, vl); } -vint16m4_t test_vfcvt_rtz_x_f_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_rtz_x_f_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m4_tum(vm, vd, vs2, vl); } -vint16m8_t test_vfcvt_rtz_x_f_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vint16m8_t test_vfcvt_rtz_x_f_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vfcvt_rtz_xu_f_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_rtz_xu_f_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t 
test_vfcvt_rtz_xu_f_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_rtz_xu_f_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vfcvt_rtz_xu_f_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfcvt_rtz_xu_f_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vfcvt_rtz_xu_f_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_rtz_xu_f_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vfcvt_rtz_xu_f_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_rtz_xu_f_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vfcvt_rtz_xu_f_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_rtz_xu_f_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m8_tum(vm, vd, vs2, vl); } -vint32mf2_t test_vfcvt_rtz_x_f_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_rtz_x_f_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32mf2_tum(vm, vd, vs2, vl); } -vint32m1_t test_vfcvt_rtz_x_f_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_rtz_x_f_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m1_tum(vm, vd, vs2, vl); } -vint32m2_t test_vfcvt_rtz_x_f_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_rtz_x_f_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m2_tum(vm, vd, vs2, vl); } -vint32m4_t test_vfcvt_rtz_x_f_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_rtz_x_f_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m4_tum(vm, vd, vs2, vl); } -vint32m8_t test_vfcvt_rtz_x_f_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_rtz_x_f_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vfcvt_rtz_xu_f_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_rtz_xu_f_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vfcvt_rtz_xu_f_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_rtz_xu_f_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vfcvt_rtz_xu_f_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_rtz_xu_f_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m2_tum(vm, vd, vs2, vl); 
} -vuint32m4_t test_vfcvt_rtz_xu_f_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_rtz_xu_f_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vfcvt_rtz_xu_f_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_rtz_xu_f_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m8_tum(vm, vd, vs2, vl); } -vint64m1_t test_vfcvt_rtz_x_f_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t test_vfcvt_rtz_x_f_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m1_tum(vm, vd, vs2, vl); } -vint64m2_t test_vfcvt_rtz_x_f_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_rtz_x_f_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m2_tum(vm, vd, vs2, vl); } -vint64m4_t test_vfcvt_rtz_x_f_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_rtz_x_f_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m4_tum(vm, vd, vs2, vl); } -vint64m8_t test_vfcvt_rtz_x_f_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_rtz_x_f_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vfcvt_rtz_xu_f_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_rtz_xu_f_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vfcvt_rtz_xu_f_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_rtz_xu_f_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vfcvt_rtz_xu_f_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_rtz_xu_f_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vfcvt_rtz_xu_f_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_rtz_xu_f_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m8_tum(vm, vd, vs2, vl); } -vint16mf4_t test_vfcvt_rtz_x_f_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t test_vfcvt_rtz_x_f_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16mf4_tumu(vm, vd, vs2, vl); } -vint16mf2_t test_vfcvt_rtz_x_f_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_rtz_x_f_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16mf2_tumu(vm, vd, vs2, vl); } -vint16m1_t test_vfcvt_rtz_x_f_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_rtz_x_f_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m1_tumu(vm, vd, vs2, 
vl); } -vint16m2_t test_vfcvt_rtz_x_f_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_rtz_x_f_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m2_tumu(vm, vd, vs2, vl); } -vint16m4_t test_vfcvt_rtz_x_f_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_rtz_x_f_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m4_tumu(vm, vd, vs2, vl); } -vint16m8_t test_vfcvt_rtz_x_f_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vint16m8_t test_vfcvt_rtz_x_f_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vfcvt_rtz_xu_f_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_rtz_xu_f_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vfcvt_rtz_xu_f_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_rtz_xu_f_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vfcvt_rtz_xu_f_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfcvt_rtz_xu_f_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vfcvt_rtz_xu_f_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_rtz_xu_f_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vfcvt_rtz_xu_f_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_rtz_xu_f_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vfcvt_rtz_xu_f_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_rtz_xu_f_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m8_tumu(vm, vd, vs2, vl); } -vint32mf2_t test_vfcvt_rtz_x_f_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_rtz_x_f_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32mf2_tumu(vm, vd, vs2, vl); } -vint32m1_t test_vfcvt_rtz_x_f_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_rtz_x_f_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m1_tumu(vm, vd, vs2, vl); } -vint32m2_t test_vfcvt_rtz_x_f_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_rtz_x_f_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m2_tumu(vm, vd, vs2, vl); } -vint32m4_t test_vfcvt_rtz_x_f_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_rtz_x_f_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return 
__riscv_vfcvt_rtz_x_f_v_i32m4_tumu(vm, vd, vs2, vl); } -vint32m8_t test_vfcvt_rtz_x_f_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_rtz_x_f_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfcvt_rtz_xu_f_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_rtz_xu_f_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vfcvt_rtz_xu_f_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_rtz_xu_f_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vfcvt_rtz_xu_f_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_rtz_xu_f_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vfcvt_rtz_xu_f_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_rtz_xu_f_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vfcvt_rtz_xu_f_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_rtz_xu_f_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m8_tumu(vm, vd, vs2, vl); } -vint64m1_t test_vfcvt_rtz_x_f_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t test_vfcvt_rtz_x_f_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m1_tumu(vm, vd, vs2, vl); } -vint64m2_t test_vfcvt_rtz_x_f_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_rtz_x_f_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m2_tumu(vm, vd, vs2, vl); } -vint64m4_t test_vfcvt_rtz_x_f_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_rtz_x_f_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m4_tumu(vm, vd, vs2, vl); } -vint64m8_t test_vfcvt_rtz_x_f_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_rtz_x_f_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vfcvt_rtz_xu_f_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_rtz_xu_f_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vfcvt_rtz_xu_f_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_rtz_xu_f_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vfcvt_rtz_xu_f_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_rtz_xu_f_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + 
vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vfcvt_rtz_xu_f_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_rtz_xu_f_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m8_tumu(vm, vd, vs2, vl); } -vint16mf4_t test_vfcvt_rtz_x_f_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vint16mf4_t test_vfcvt_rtz_x_f_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16mf4_mu(vm, vd, vs2, vl); } -vint16mf2_t test_vfcvt_rtz_x_f_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vint16mf2_t test_vfcvt_rtz_x_f_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16mf2_mu(vm, vd, vs2, vl); } -vint16m1_t test_vfcvt_rtz_x_f_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vint16m1_t test_vfcvt_rtz_x_f_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m1_mu(vm, vd, vs2, vl); } -vint16m2_t test_vfcvt_rtz_x_f_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vint16m2_t test_vfcvt_rtz_x_f_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m2_mu(vm, vd, vs2, vl); } -vint16m4_t test_vfcvt_rtz_x_f_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vint16m4_t test_vfcvt_rtz_x_f_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m4_mu(vm, vd, vs2, vl); } -vint16m8_t test_vfcvt_rtz_x_f_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vint16m8_t test_vfcvt_rtz_x_f_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i16m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vfcvt_rtz_xu_f_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vfcvt_rtz_xu_f_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vfcvt_rtz_xu_f_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vfcvt_rtz_xu_f_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vfcvt_rtz_xu_f_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vuint16m1_t test_vfcvt_rtz_xu_f_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vfcvt_rtz_xu_f_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vuint16m2_t test_vfcvt_rtz_xu_f_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vfcvt_rtz_xu_f_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vuint16m4_t test_vfcvt_rtz_xu_f_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vfcvt_rtz_xu_f_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vuint16m8_t test_vfcvt_rtz_xu_f_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vfloat16m8_t vs2, 
size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u16m8_mu(vm, vd, vs2, vl); } -vint32mf2_t test_vfcvt_rtz_x_f_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vint32mf2_t test_vfcvt_rtz_x_f_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32mf2_mu(vm, vd, vs2, vl); } -vint32m1_t test_vfcvt_rtz_x_f_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vint32m1_t test_vfcvt_rtz_x_f_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m1_mu(vm, vd, vs2, vl); } -vint32m2_t test_vfcvt_rtz_x_f_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vint32m2_t test_vfcvt_rtz_x_f_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m2_mu(vm, vd, vs2, vl); } -vint32m4_t test_vfcvt_rtz_x_f_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vint32m4_t test_vfcvt_rtz_x_f_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m4_mu(vm, vd, vs2, vl); } -vint32m8_t test_vfcvt_rtz_x_f_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vint32m8_t test_vfcvt_rtz_x_f_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i32m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfcvt_rtz_xu_f_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vfcvt_rtz_xu_f_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vfcvt_rtz_xu_f_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vuint32m1_t test_vfcvt_rtz_xu_f_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vfcvt_rtz_xu_f_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vuint32m2_t test_vfcvt_rtz_xu_f_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vfcvt_rtz_xu_f_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vuint32m4_t test_vfcvt_rtz_xu_f_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vfcvt_rtz_xu_f_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vuint32m8_t test_vfcvt_rtz_xu_f_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u32m8_mu(vm, vd, vs2, vl); } -vint64m1_t test_vfcvt_rtz_x_f_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vint64m1_t test_vfcvt_rtz_x_f_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m1_mu(vm, vd, vs2, vl); } -vint64m2_t test_vfcvt_rtz_x_f_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vint64m2_t test_vfcvt_rtz_x_f_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m2_mu(vm, vd, vs2, vl); } -vint64m4_t test_vfcvt_rtz_x_f_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vint64m4_t test_vfcvt_rtz_x_f_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return 
__riscv_vfcvt_rtz_x_f_v_i64m4_mu(vm, vd, vs2, vl); } -vint64m8_t test_vfcvt_rtz_x_f_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vint64m8_t test_vfcvt_rtz_x_f_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_x_f_v_i64m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vfcvt_rtz_xu_f_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vuint64m1_t test_vfcvt_rtz_xu_f_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vfcvt_rtz_xu_f_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vuint64m2_t test_vfcvt_rtz_xu_f_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vfcvt_rtz_xu_f_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vuint64m4_t test_vfcvt_rtz_xu_f_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vfcvt_rtz_xu_f_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vuint64m8_t test_vfcvt_rtz_xu_f_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfcvt_rtz_xu_f_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfdiv.c b/auto-generated/policy_funcs/llvm-api-tests/vfdiv.c index 46c1039dd..6b39dc20a 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfdiv.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfdiv.c @@ -6,962 +6,1349 @@ #include <riscv_vector.h> -vfloat16mf4_t test_vfdiv_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16mf4_tu(vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfdiv_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16mf4_tu(vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfdiv_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16mf2_tu(vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfdiv_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16mf2_tu(vd, vs2, rs1, vl); } -vfloat16m1_t test_vfdiv_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfdiv_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16m1_tu(vd, vs2, vs1, vl); } -vfloat16m1_t test_vfdiv_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfdiv_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m1_tu(vd, vs2, rs1, vl); } -vfloat16m2_t test_vfdiv_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfdiv_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16m2_tu(vd, vs2, vs1, vl); }
-vfloat16m2_t test_vfdiv_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfdiv_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m2_tu(vd, vs2, rs1, vl); } -vfloat16m4_t test_vfdiv_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfdiv_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vfdiv_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfdiv_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m4_tu(vd, vs2, rs1, vl); } -vfloat16m8_t test_vfdiv_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfdiv_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16m8_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vfdiv_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfdiv_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfdiv_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfdiv_vv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfdiv_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfdiv_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfdiv_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfdiv_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfdiv_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfdiv_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfdiv_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfdiv_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfdiv_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfdiv_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfdiv_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfdiv_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfdiv_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfdiv_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfdiv_vv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vfdiv_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfdiv_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfdiv_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfdiv_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { 
return __riscv_vfdiv_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfdiv_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfdiv_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfdiv_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfdiv_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfdiv_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfdiv_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfdiv_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfdiv_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfdiv_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfdiv_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfdiv_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfdiv_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfdiv_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfdiv_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return __riscv_vfdiv_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfdiv_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfdiv_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfdiv_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfdiv_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfdiv_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfdiv_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfdiv_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfdiv_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf4_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfdiv_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfdiv_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfdiv_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf2_tum(vm, vd, 
vs2, rs1, vl); } -vfloat16m1_t test_vfdiv_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfdiv_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfdiv_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfdiv_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfdiv_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfdiv_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfdiv_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfdiv_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfdiv_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfdiv_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfdiv_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfdiv_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfdiv_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfdiv_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfdiv_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfdiv_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfdiv_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfdiv_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfdiv_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfdiv_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfdiv_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfdiv_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } 
-vfloat32m2_t test_vfdiv_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfdiv_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfdiv_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfdiv_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfdiv_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfdiv_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfdiv_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfdiv_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfdiv_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfdiv_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfdiv_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfdiv_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfdiv_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfdiv_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfdiv_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfdiv_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfdiv_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfdiv_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfdiv_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfdiv_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfdiv_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfdiv_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfdiv_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfdiv_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfdiv_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t 
vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfdiv_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfdiv_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfdiv_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfdiv_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfdiv_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfdiv_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfdiv_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfdiv_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfdiv_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfdiv_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfdiv_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfdiv_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfdiv_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfdiv_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfdiv_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfdiv_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfdiv_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfdiv_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfdiv_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t 
test_vfdiv_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfdiv_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfdiv_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfdiv_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfdiv_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfdiv_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfdiv_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfdiv_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfdiv_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfdiv_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfdiv_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfdiv_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfdiv_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfdiv_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfdiv_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfdiv_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfdiv_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfdiv_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfdiv_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfdiv_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfdiv_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfdiv_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t 
test_vfdiv_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfdiv_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfdiv_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfdiv_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfdiv_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfdiv_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfdiv_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfdiv_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfdiv_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfdiv_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfdiv_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfdiv_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfdiv_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfdiv_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfdiv_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfdiv_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfdiv_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfdiv_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfdiv_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfdiv_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } 
-vfloat16m1_t test_vfdiv_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfdiv_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfdiv_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfdiv_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfdiv_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfdiv_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfdiv_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfdiv_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfdiv_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfdiv_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfdiv_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfdiv_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfdiv_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfdiv_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfdiv_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfdiv_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfdiv_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfdiv_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfdiv_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfdiv_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfdiv_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfdiv_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfdiv_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t 
vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfdiv_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfdiv_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfdiv_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfdiv_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfdiv_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfdiv_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfdiv_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfdiv_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfdiv_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfdiv_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfdiv_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfdiv_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfdiv_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfdiv_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfdiv_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfdiv_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfdiv_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfdiv_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfdiv_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfdiv_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfdiv_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfdiv_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfdiv_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m4_mu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfdiv_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfdiv_vv_f64m8_mu(vbool8_t 
vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfdiv_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfdiv_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m8_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfdiv_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16mf4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfdiv_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16mf4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfdiv_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfdiv_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfdiv_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfdiv_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfdiv_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfdiv_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfdiv_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfdiv_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfdiv_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfdiv_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfdiv_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfdiv_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfdiv_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfdiv_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfdiv_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfdiv_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfdiv_vv_f16m8_rm_tu(vd, vs2, 
vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfdiv_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfdiv_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfdiv_vf_f16m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfdiv_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfdiv_vv_f32mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfdiv_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfdiv_vf_f32mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfdiv_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfdiv_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfdiv_vv_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfdiv_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfdiv_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfdiv_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfdiv_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfdiv_vv_f32m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfdiv_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfdiv_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfdiv_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfdiv_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfdiv_vv_f32m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfdiv_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfdiv_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfdiv_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfdiv_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vfdiv_vv_f32m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfdiv_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfdiv_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfdiv_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfdiv_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfdiv_vv_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfdiv_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t 
test_vfdiv_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfdiv_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfdiv_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfdiv_vv_f64m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfdiv_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfdiv_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfdiv_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfdiv_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return __riscv_vfdiv_vv_f64m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfdiv_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfdiv_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfdiv_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfdiv_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfdiv_vv_f64m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfdiv_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfdiv_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfdiv_vf_f64m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfdiv_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfdiv_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfdiv_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfdiv_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfdiv_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfdiv_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfdiv_vf_f16m1_rm_tum(vbool16_t 
vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfdiv_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfdiv_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfdiv_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfdiv_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfdiv_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfdiv_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfdiv_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfdiv_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfdiv_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfdiv_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfdiv_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfdiv_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfdiv_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfdiv_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfdiv_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfdiv_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfdiv_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfdiv_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfdiv_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfdiv_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, 
vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfdiv_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfdiv_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfdiv_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfdiv_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfdiv_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfdiv_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfdiv_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfdiv_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfdiv_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfdiv_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfdiv_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfdiv_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfdiv_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfdiv_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfdiv_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfdiv_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfdiv_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfdiv_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfdiv_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfdiv_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfdiv_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfdiv_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { 
+vfloat64m4_t test_vfdiv_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfdiv_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfdiv_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfdiv_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfdiv_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfdiv_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfdiv_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfdiv_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfdiv_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfdiv_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfdiv_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfdiv_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfdiv_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfdiv_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfdiv_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfdiv_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfdiv_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfdiv_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, 
vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfdiv_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfdiv_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfdiv_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfdiv_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfdiv_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfdiv_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfdiv_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfdiv_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfdiv_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfdiv_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfdiv_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfdiv_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfdiv_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfdiv_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfdiv_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfdiv_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfdiv_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfdiv_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfdiv_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfdiv_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t 
vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfdiv_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfdiv_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfdiv_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfdiv_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfdiv_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfdiv_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfdiv_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfdiv_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfdiv_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfdiv_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfdiv_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfdiv_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfdiv_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfdiv_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfdiv_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfdiv_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfdiv_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfdiv_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfdiv_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfdiv_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfdiv_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfdiv_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, 
size_t vl) { +vfloat16mf4_t test_vfdiv_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfdiv_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfdiv_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfdiv_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfdiv_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfdiv_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfdiv_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfdiv_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfdiv_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfdiv_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfdiv_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfdiv_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfdiv_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfdiv_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfdiv_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfdiv_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfdiv_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfdiv_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfdiv_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfdiv_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f16m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfdiv_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t 
test_vfdiv_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfdiv_vf_f16m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfdiv_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfdiv_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfdiv_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfdiv_vf_f32mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfdiv_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfdiv_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfdiv_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfdiv_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfdiv_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfdiv_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfdiv_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfdiv_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfdiv_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfdiv_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfdiv_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfdiv_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfdiv_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfdiv_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f32m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfdiv_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfdiv_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfdiv_vf_f32m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfdiv_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfdiv_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, 
vfloat64m1_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfdiv_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfdiv_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfdiv_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfdiv_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfdiv_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfdiv_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfdiv_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfdiv_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfdiv_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfdiv_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfdiv_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfdiv_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfdiv_vv_f64m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfdiv_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfdiv_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfdiv_vf_f64m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vfmacc.c index 98c597f81..567b7dadb 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfmacc.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfmacc.c @@ -6,962 +6,1362 @@ #include <riscv_vector.h> -vfloat16mf4_t test_vfmacc_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16mf4_tu(vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmacc_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16mf4_tu(vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmacc_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16mf2_tu(vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmacc_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t
test_vfmacc_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16mf2_tu(vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmacc_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16m1_tu(vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmacc_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16m1_tu(vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmacc_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16m2_tu(vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmacc_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16m2_tu(vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmacc_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16m4_tu(vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmacc_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16m4_tu(vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmacc_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16m8_tu(vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmacc_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16m8_tu(vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmacc_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmacc_vv_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmacc_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32mf2_tu(vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmacc_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmacc_vv_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmacc_vf_f32m1_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vf_f32m1_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m1_tu(vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmacc_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmacc_vv_f32m2_tu(vd, vs1, vs2, vl); } 
-vfloat32m2_t test_vfmacc_vf_f32m2_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vf_f32m2_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m2_tu(vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmacc_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmacc_vv_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmacc_vf_f32m4_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vf_f32m4_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m4_tu(vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmacc_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmacc_vv_f32m8_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmacc_vf_f32m8_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vf_f32m8_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m8_tu(vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmacc_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmacc_vv_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmacc_vf_f64m1_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vf_f64m1_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m1_tu(vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmacc_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmacc_vv_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmacc_vf_f64m2_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vf_f64m2_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m2_tu(vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmacc_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmacc_vv_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmacc_vf_f64m4_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vf_f64m4_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m4_tu(vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmacc_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmacc_vv_f64m8_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmacc_vf_f64m8_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vf_f64m8_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m8_tu(vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmacc_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + 
vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16mf4_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmacc_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16mf4_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmacc_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmacc_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmacc_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m1_tum(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmacc_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m1_tum(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmacc_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m2_tum(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmacc_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmacc_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m4_tum(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmacc_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m4_tum(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmacc_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m8_tum(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmacc_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m8_tum(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmacc_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t 
test_vfmacc_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmacc_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmacc_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmacc_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m1_tum(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmacc_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmacc_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmacc_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmacc_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m4_tum(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmacc_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmacc_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m8_tum(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmacc_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmacc_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m1_tum(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmacc_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t 
test_vfmacc_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmacc_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m2_tum(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmacc_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmacc_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m4_tum(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmacc_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmacc_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m8_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmacc_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16mf4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmacc_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16mf4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmacc_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmacc_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmacc_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmacc_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmacc_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, 
vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmacc_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmacc_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmacc_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmacc_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmacc_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmacc_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmacc_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmacc_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmacc_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmacc_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmacc_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t 
test_vfmacc_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmacc_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmacc_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmacc_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmacc_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmacc_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmacc_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmacc_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmacc_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmacc_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmacc_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmacc_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m8_tumu(vm, vd, rs1, vs2, vl); } 
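// ---------------------------------------------------------------------------
// Illustrative usage sketch, not part of the generated tests above. It assumes
// the standard RVV intrinsics policy naming (_tumu = tail undisturbed, mask
// undisturbed; _mu = mask undisturbed, tail agnostic); the helper name is
// hypothetical.
#include <riscv_vector.h>

// acc[i] += a[i] * b[i] for the elements selected by vm; with the _tumu
// variant, both masked-off elements and tail elements keep their previous
// values from acc.
static vfloat32m1_t masked_fma_keep_inactive(vbool32_t vm, vfloat32m1_t acc,
                                             vfloat32m1_t a, vfloat32m1_t b,
                                             size_t vl) {
  return __riscv_vfmacc_vv_f32m1_tumu(vm, acc, a, b, vl);
}
// ---------------------------------------------------------------------------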
-vfloat16mf4_t test_vfmacc_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16mf4_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmacc_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16mf4_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmacc_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmacc_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmacc_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m1_mu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmacc_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m1_mu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmacc_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m2_mu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmacc_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16m2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmacc_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m4_mu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmacc_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16m4_mu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmacc_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m8_mu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmacc_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16m8_mu(vm, vd, rs1, vs2, 
vl); } -vfloat32mf2_t test_vfmacc_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmacc_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmacc_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmacc_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m1_mu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmacc_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmacc_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmacc_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmacc_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m4_mu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmacc_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmacc_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m8_mu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmacc_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmacc_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m1_mu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t 
test_vfmacc_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmacc_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m2_mu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmacc_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmacc_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m4_mu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmacc_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmacc_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m8_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmacc_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16mf4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmacc_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16mf4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmacc_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmacc_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmacc_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmacc_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmacc_vv_f16m2_rm_tu(vfloat16m2_t vd, 
vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmacc_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmacc_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmacc_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmacc_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmacc_vv_f16m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmacc_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f16m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmacc_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmacc_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmacc_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmacc_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmacc_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmacc_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmacc_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmacc_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmacc_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmacc_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vv_f32m4_rm_tu(vfloat32m4_t vd, 
vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmacc_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmacc_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmacc_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmacc_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmacc_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmacc_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmacc_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmacc_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmacc_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmacc_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmacc_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmacc_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmacc_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmacc_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmacc_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmacc_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmacc_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f64m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmacc_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16mf4_rm_tum(vm, 
vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmacc_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmacc_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmacc_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmacc_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmacc_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmacc_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmacc_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmacc_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmacc_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmacc_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmacc_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return 
__riscv_vfmacc_vf_f16m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmacc_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmacc_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmacc_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmacc_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmacc_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmacc_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmacc_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmacc_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmacc_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmacc_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmacc_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return 
__riscv_vfmacc_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmacc_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmacc_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmacc_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmacc_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmacc_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmacc_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmacc_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmacc_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfmacc_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfmacc_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfmacc_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfmacc_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfmacc_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfmacc_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfmacc_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfmacc_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfmacc_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfmacc_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfmacc_vv_f16mf2_rm_tumu(vm, vd, 
vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfmacc_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfmacc_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfmacc_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfmacc_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfmacc_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmacc_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmacc_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmacc_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmacc_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmacc_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmacc_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmacc_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmacc_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfmacc_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfmacc_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfmacc_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfmacc_vf_f32mf2_rm_tumu(vbool64_t vm, 
vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfmacc_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfmacc_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfmacc_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfmacc_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmacc_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmacc_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmacc_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmacc_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmacc_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmacc_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmacc_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmacc_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmacc_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m1_rm_tumu(vm, vd, 
rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmacc_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmacc_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmacc_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmacc_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmacc_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmacc_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmacc_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmacc_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16mf4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmacc_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmacc_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16mf4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmacc_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmacc_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmacc_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmacc_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { 
return __riscv_vfmacc_vv_f16m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmacc_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmacc_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmacc_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmacc_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmacc_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmacc_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmacc_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmacc_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmacc_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f16m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmacc_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmacc_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f16m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmacc_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmacc_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmacc_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmacc_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmacc_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmacc_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return 
__riscv_vfmacc_vf_f32m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmacc_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmacc_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmacc_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f32m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmacc_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmacc_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmacc_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmacc_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmacc_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmacc_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmacc_vf_f32m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmacc_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmacc_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmacc_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmacc_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmacc_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmacc_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmacc_vf_f64m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmacc_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmacc_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmacc_vv_f64m4_rm_mu(vm, vd, vs1, vs2, 
__RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfmacc_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfmacc_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                        double rs1, vfloat64m4_t vs2,
+                                        size_t vl) {
   return __riscv_vfmacc_vf_f64m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfmacc_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfmacc_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                        vfloat64m8_t vs1, vfloat64m8_t vs2,
+                                        size_t vl) {
   return __riscv_vfmacc_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfmacc_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfmacc_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                        double rs1, vfloat64m8_t vs2,
+                                        size_t vl) {
   return __riscv_vfmacc_vf_f64m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmadd.c b/auto-generated/policy_funcs/llvm-api-tests/vfmadd.c
index 695b9f505..0f3e25db7 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmadd.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmadd.c
@@ -6,962 +6,1362 @@
 #include <riscv_vector.h>
 
-vfloat16mf4_t test_vfmadd_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfmadd_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1,
+                                       vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfmadd_vv_f16mf4_tu(vd, vs1, vs2, vl);
 }
-vfloat16mf4_t test_vfmadd_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfmadd_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1,
+                                       vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfmadd_vf_f16mf4_tu(vd, rs1, vs2, vl);
 }
-vfloat16mf2_t test_vfmadd_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfmadd_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1,
+                                       vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfmadd_vv_f16mf2_tu(vd, vs1, vs2, vl);
 }
-vfloat16mf2_t test_vfmadd_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfmadd_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1,
+                                       vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfmadd_vf_f16mf2_tu(vd, rs1, vs2, vl);
 }
-vfloat16m1_t test_vfmadd_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfmadd_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1,
+                                     vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfmadd_vv_f16m1_tu(vd, vs1, vs2, vl);
 }
-vfloat16m1_t test_vfmadd_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfmadd_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1,
+                                     vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfmadd_vf_f16m1_tu(vd, rs1, vs2, vl);
 }
-vfloat16m2_t test_vfmadd_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfmadd_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1,
+                                     vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfmadd_vv_f16m2_tu(vd, vs1, vs2, vl);
 }
-vfloat16m2_t test_vfmadd_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfmadd_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1,
+                                     vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfmadd_vf_f16m2_tu(vd, rs1, vs2, vl);
 }
-vfloat16m4_t test_vfmadd_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfmadd_vv_f16m4_tu(vfloat16m4_t
vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmadd_vv_f16m4_tu(vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmadd_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f16m4_tu(vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmadd_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmadd_vv_f16m8_tu(vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmadd_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f16m8_tu(vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmadd_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmadd_vv_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmadd_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32mf2_tu(vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmadd_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmadd_vv_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmadd_vf_f32m1_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vf_f32m1_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m1_tu(vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmadd_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmadd_vv_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmadd_vf_f32m2_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vf_f32m2_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m2_tu(vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmadd_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmadd_vv_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmadd_vf_f32m4_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vf_f32m4_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m4_tu(vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmadd_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmadd_vv_f32m8_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmadd_vf_f32m8_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vf_f32m8_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m8_tu(vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmadd_vv_f64m1_tu(vfloat64m1_t vd, 
vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmadd_vv_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmadd_vf_f64m1_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vf_f64m1_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m1_tu(vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmadd_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmadd_vv_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmadd_vf_f64m2_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vf_f64m2_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m2_tu(vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmadd_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmadd_vv_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmadd_vf_f64m4_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vf_f64m4_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m4_tu(vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmadd_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmadd_vv_f64m8_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmadd_vf_f64m8_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vf_f64m8_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m8_tu(vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmadd_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16mf4_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmadd_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16mf4_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmadd_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmadd_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmadd_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m1_tum(vm, vd, vs1, vs2, vl); } 
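// ---------------------------------------------------------------------------
// Illustrative sketch, not part of the generated tests: the operand roles that
// distinguish the two families under test, assuming standard RVV semantics
// (vfmacc: vd = vs1 * vs2 + vd; vfmadd: vd = vs1 * vd + vs2). The helper name
// is hypothetical.
#include <riscv_vector.h>

static vfloat16m1_t scale_then_add(vfloat16m1_t vd, vfloat16m1_t vs1,
                                   vfloat16m1_t vs2, size_t vl) {
  // vd[i] = vs1[i] * vd[i] + vs2[i]; the _tu variant leaves tail elements of
  // vd undisturbed.
  return __riscv_vfmadd_vv_f16m1_tu(vd, vs1, vs2, vl);
}
// ---------------------------------------------------------------------------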
-vfloat16m1_t test_vfmadd_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m1_tum(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmadd_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m2_tum(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmadd_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmadd_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m4_tum(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmadd_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m4_tum(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmadd_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m8_tum(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmadd_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m8_tum(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmadd_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmadd_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f32mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmadd_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmadd_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m1_tum(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmadd_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m2_tum(vm, vd, 
vs1, vs2, vl); } -vfloat32m2_t test_vfmadd_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmadd_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmadd_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m4_tum(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmadd_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmadd_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m8_tum(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmadd_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmadd_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m1_tum(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmadd_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmadd_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m2_tum(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmadd_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmadd_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m4_tum(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmadd_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t 
test_vfmadd_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m8_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmadd_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16mf4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmadd_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16mf4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmadd_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmadd_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmadd_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmadd_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmadd_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmadd_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmadd_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmadd_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmadd_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) 
{ return __riscv_vfmadd_vv_f16m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmadd_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmadd_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmadd_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f32mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmadd_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmadd_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmadd_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmadd_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmadd_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmadd_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmadd_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmadd_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmadd_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, 
vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmadd_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f64m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmadd_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmadd_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f64m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmadd_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmadd_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f64m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmadd_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmadd_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmadd_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16mf4_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmadd_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16mf4_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmadd_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmadd_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmadd_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t 
test_vfmadd_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m1_mu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmadd_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m1_mu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmadd_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m2_mu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmadd_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f16m2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmadd_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m4_mu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmadd_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f16m4_mu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmadd_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m8_mu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmadd_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f16m8_mu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmadd_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmadd_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f32mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmadd_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmadd_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m1_mu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmadd_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t 
test_vfmadd_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmadd_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmadd_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmadd_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m4_mu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmadd_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmadd_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m8_mu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmadd_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmadd_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m1_mu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmadd_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmadd_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m2_mu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmadd_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmadd_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m4_mu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmadd_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + 
vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmadd_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m8_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmadd_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfmadd_vv_f16mf4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmadd_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f16mf4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmadd_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfmadd_vv_f16mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmadd_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f16mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmadd_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmadd_vv_f16m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmadd_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmadd_vf_f16m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmadd_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmadd_vv_f16m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmadd_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f16m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmadd_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmadd_vv_f16m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmadd_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f16m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmadd_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return 
__riscv_vfmadd_vv_f16m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmadd_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f16m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmadd_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmadd_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmadd_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmadd_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmadd_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmadd_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmadd_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmadd_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmadd_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmadd_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmadd_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmadd_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmadd_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmadd_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmadd_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f32m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmadd_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmadd_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t 
test_vfmadd_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmadd_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmadd_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmadd_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmadd_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmadd_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmadd_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmadd_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmadd_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmadd_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmadd_vf_f64m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmadd_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16mf4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmadd_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmadd_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmadd_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmadd_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t 
vl) { return __riscv_vfmadd_vv_f16m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmadd_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmadd_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmadd_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmadd_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmadd_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmadd_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmadd_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmadd_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmadd_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmadd_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmadd_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmadd_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + 
size_t vl) { return __riscv_vfmadd_vf_f32m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmadd_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmadd_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f32m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmadd_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmadd_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f32m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmadd_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmadd_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f32m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmadd_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmadd_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f64m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmadd_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmadd_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f64m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmadd_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return 
__riscv_vfmadd_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmadd_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f64m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmadd_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmadd_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f64m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmadd_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfmadd_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfmadd_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfmadd_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfmadd_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfmadd_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfmadd_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfmadd_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfmadd_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfmadd_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfmadd_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfmadd_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfmadd_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfmadd_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfmadd_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfmadd_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfmadd_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmadd_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmadd_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vv_f16m2_rm_tumu(vbool8_t vm, 
vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmadd_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmadd_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmadd_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmadd_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmadd_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmadd_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmadd_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfmadd_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfmadd_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfmadd_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfmadd_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfmadd_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfmadd_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfmadd_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfmadd_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmadd_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmadd_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmadd_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m2_rm_tumu(vm, 
vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmadd_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmadd_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmadd_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmadd_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmadd_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmadd_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmadd_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmadd_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmadd_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmadd_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmadd_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f64m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmadd_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmadd_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmadd_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmadd_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmadd_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmadd_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f64m4_rm_tumu(vm, 
vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmadd_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmadd_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmadd_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f64m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmadd_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16mf4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmadd_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmadd_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16mf4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmadd_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmadd_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmadd_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmadd_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmadd_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmadd_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmadd_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmadd_vv_f16m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmadd_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmadd_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmadd_vf_f16m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmadd_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmadd_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return 
__riscv_vfmadd_vv_f16m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat16m4_t test_vfmadd_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfmadd_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd,
+                                        _Float16 rs1, vfloat16m4_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vf_f16m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat16m8_t test_vfmadd_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfmadd_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd,
+                                        vfloat16m8_t vs1, vfloat16m8_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vv_f16m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat16m8_t test_vfmadd_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfmadd_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd,
+                                        _Float16 rs1, vfloat16m8_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vf_f16m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfmadd_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfmadd_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                          vfloat32mf2_t vs1, vfloat32mf2_t vs2,
+                                          size_t vl) {
   return __riscv_vfmadd_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfmadd_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfmadd_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                          float rs1, vfloat32mf2_t vs2,
+                                          size_t vl) {
   return __riscv_vfmadd_vf_f32mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfmadd_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfmadd_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+                                        vfloat32m1_t vs1, vfloat32m1_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfmadd_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfmadd_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+                                        float rs1, vfloat32m1_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vf_f32m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfmadd_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfmadd_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+                                        vfloat32m2_t vs1, vfloat32m2_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfmadd_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfmadd_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+                                        float rs1, vfloat32m2_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vf_f32m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfmadd_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfmadd_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+                                        vfloat32m4_t vs1, vfloat32m4_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfmadd_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfmadd_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1,
+                                        vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfmadd_vf_f32m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfmadd_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfmadd_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                        vfloat32m8_t vs1, vfloat32m8_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfmadd_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfmadd_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1,
+                                        vfloat32m8_t vs2, size_t vl) {
   return __riscv_vfmadd_vf_f32m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfmadd_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfmadd_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+                                        vfloat64m1_t vs1, vfloat64m1_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfmadd_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfmadd_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+                                        double rs1, vfloat64m1_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vf_f64m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfmadd_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfmadd_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+                                        vfloat64m2_t vs1, vfloat64m2_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfmadd_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfmadd_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+                                        double rs1, vfloat64m2_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vf_f64m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfmadd_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfmadd_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                        vfloat64m4_t vs1, vfloat64m4_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vv_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfmadd_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfmadd_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                        double rs1, vfloat64m4_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vf_f64m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfmadd_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfmadd_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                        vfloat64m8_t vs1, vfloat64m8_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfmadd_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfmadd_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                        double rs1, vfloat64m8_t vs2,
+                                        size_t vl) {
   return __riscv_vfmadd_vf_f64m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmax.c b/auto-generated/policy_funcs/llvm-api-tests/vfmax.c
index 050592bfb..eb2c92fc1 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmax.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmax.c
@@ -6,482 +6,663 @@
 #include <riscv_vector.h>

-vfloat16mf4_t test_vfmax_vv_f16mf4_tu(vfloat16mf4_t
vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmax_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfmax_vv_f16mf4_tu(vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmax_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmax_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmax_vf_f16mf4_tu(vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmax_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmax_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfmax_vv_f16mf2_tu(vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmax_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmax_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmax_vf_f16mf2_tu(vd, vs2, rs1, vl); } -vfloat16m1_t test_vfmax_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmax_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfmax_vv_f16m1_tu(vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmax_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmax_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmax_vf_f16m1_tu(vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmax_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmax_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfmax_vv_f16m2_tu(vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmax_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmax_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmax_vf_f16m2_tu(vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmax_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmax_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfmax_vv_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vfmax_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmax_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmax_vf_f16m4_tu(vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmax_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmax_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfmax_vv_f16m8_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmax_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmax_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmax_vf_f16m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmax_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmax_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfmax_vv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfmax_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmax_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t 
vl) { return __riscv_vfmax_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmax_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmax_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfmax_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfmax_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmax_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfmax_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmax_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmax_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfmax_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmax_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmax_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfmax_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmax_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmax_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfmax_vv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmax_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmax_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfmax_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmax_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmax_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vfmax_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmax_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmax_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfmax_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfmax_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmax_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfmax_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfmax_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmax_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfmax_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfmax_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmax_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfmax_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfmax_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmax_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfmax_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfmax_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmax_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return __riscv_vfmax_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfmax_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmax_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t 
vs2, + double rs1, size_t vl) { return __riscv_vfmax_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfmax_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmax_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfmax_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfmax_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmax_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfmax_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfmax_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmax_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16mf4_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmax_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmax_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmax_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmax_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmax_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmax_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfmax_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmax_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmax_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmax_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmax_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmax_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmax_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmax_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmax_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmax_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfmax_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmax_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return 
__riscv_vfmax_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmax_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmax_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmax_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmax_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmax_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmax_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfmax_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmax_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfmax_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmax_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmax_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfmax_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmax_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmax_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmax_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmax_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmax_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmax_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmax_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmax_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmax_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmax_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmax_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmax_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmax_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32m8_tum(vm, vd, vs2, rs1, 
vl); } -vfloat64m1_t test_vfmax_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmax_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfmax_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmax_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfmax_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfmax_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmax_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfmax_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmax_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfmax_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfmax_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmax_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfmax_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmax_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfmax_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfmax_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmax_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfmax_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmax_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfmax_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfmax_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmax_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmax_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmax_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmax_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmax_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmax_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmax_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16mf2_tumu(vm, vd, 
vs2, rs1, vl); } -vfloat16m1_t test_vfmax_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmax_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmax_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmax_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmax_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmax_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmax_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmax_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmax_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmax_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfmax_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmax_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmax_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmax_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmax_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmax_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmax_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmax_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfmax_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmax_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfmax_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmax_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmax_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfmax_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmax_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return 
__riscv_vfmax_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmax_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmax_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmax_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmax_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmax_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmax_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmax_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmax_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmax_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmax_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmax_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmax_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfmax_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmax_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfmax_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmax_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfmax_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfmax_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmax_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfmax_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmax_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfmax_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfmax_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmax_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfmax_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmax_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return 
__riscv_vfmax_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfmax_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmax_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfmax_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmax_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfmax_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfmax_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmax_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmax_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmax_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmax_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmax_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmax_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmax_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmax_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfmax_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmax_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmax_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmax_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfmax_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmax_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmax_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmax_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmax_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfmax_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmax_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmax_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfmax_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmax_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, size_t vl) { return 
__riscv_vfmax_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmax_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmax_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmax_vv_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmax_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmax_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfmax_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmax_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmax_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfmax_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmax_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmax_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmax_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfmax_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmax_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmax_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmax_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmax_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmax_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmax_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmax_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmax_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmax_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmax_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmax_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmax_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmax_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmax_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfmax_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t 
test_vfmax_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmax_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfmax_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmax_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfmax_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfmax_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmax_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfmax_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmax_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfmax_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfmax_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmax_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfmax_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmax_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfmax_vf_f64m4_mu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfmax_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmax_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfmax_vv_f64m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfmax_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmax_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfmax_vf_f64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmerge.c b/auto-generated/policy_funcs/llvm-api-tests/vfmerge.c index c2340518c..f86d83a86 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfmerge.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfmerge.c @@ -6,62 +6,79 @@ #include -vfloat16mf4_t test_vfmerge_vfm_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, vbool64_t v0, size_t vl) { +vfloat16mf4_t test_vfmerge_vfm_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, vbool64_t v0, + size_t vl) { return __riscv_vfmerge_vfm_f16mf4_tu(vd, vs2, rs1, v0, vl); } -vfloat16mf2_t test_vfmerge_vfm_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, vbool32_t v0, size_t vl) { +vfloat16mf2_t test_vfmerge_vfm_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, vbool32_t v0, + size_t vl) { return __riscv_vfmerge_vfm_f16mf2_tu(vd, vs2, rs1, v0, vl); } -vfloat16m1_t test_vfmerge_vfm_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, vbool16_t v0, size_t vl) { +vfloat16m1_t test_vfmerge_vfm_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, vbool16_t v0, size_t vl) { return __riscv_vfmerge_vfm_f16m1_tu(vd, vs2, rs1, v0, vl); } -vfloat16m2_t test_vfmerge_vfm_f16m2_tu(vfloat16m2_t 
vd, vfloat16m2_t vs2, _Float16 rs1, vbool8_t v0, size_t vl) { +vfloat16m2_t test_vfmerge_vfm_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, vbool8_t v0, size_t vl) { return __riscv_vfmerge_vfm_f16m2_tu(vd, vs2, rs1, v0, vl); } -vfloat16m4_t test_vfmerge_vfm_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, vbool4_t v0, size_t vl) { +vfloat16m4_t test_vfmerge_vfm_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, vbool4_t v0, size_t vl) { return __riscv_vfmerge_vfm_f16m4_tu(vd, vs2, rs1, v0, vl); } -vfloat16m8_t test_vfmerge_vfm_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, vbool2_t v0, size_t vl) { +vfloat16m8_t test_vfmerge_vfm_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, vbool2_t v0, size_t vl) { return __riscv_vfmerge_vfm_f16m8_tu(vd, vs2, rs1, v0, vl); } -vfloat32mf2_t test_vfmerge_vfm_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, vbool64_t v0, size_t vl) { +vfloat32mf2_t test_vfmerge_vfm_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, vbool64_t v0, size_t vl) { return __riscv_vfmerge_vfm_f32mf2_tu(vd, vs2, rs1, v0, vl); } -vfloat32m1_t test_vfmerge_vfm_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, vbool32_t v0, size_t vl) { +vfloat32m1_t test_vfmerge_vfm_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, vbool32_t v0, size_t vl) { return __riscv_vfmerge_vfm_f32m1_tu(vd, vs2, rs1, v0, vl); } -vfloat32m2_t test_vfmerge_vfm_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, vbool16_t v0, size_t vl) { +vfloat32m2_t test_vfmerge_vfm_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, vbool16_t v0, size_t vl) { return __riscv_vfmerge_vfm_f32m2_tu(vd, vs2, rs1, v0, vl); } -vfloat32m4_t test_vfmerge_vfm_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, vbool8_t v0, size_t vl) { +vfloat32m4_t test_vfmerge_vfm_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, vbool8_t v0, size_t vl) { return __riscv_vfmerge_vfm_f32m4_tu(vd, vs2, rs1, v0, vl); } -vfloat32m8_t test_vfmerge_vfm_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, vbool4_t v0, size_t vl) { +vfloat32m8_t test_vfmerge_vfm_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, vbool4_t v0, size_t vl) { return __riscv_vfmerge_vfm_f32m8_tu(vd, vs2, rs1, v0, vl); } -vfloat64m1_t test_vfmerge_vfm_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, vbool64_t v0, size_t vl) { +vfloat64m1_t test_vfmerge_vfm_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, vbool64_t v0, size_t vl) { return __riscv_vfmerge_vfm_f64m1_tu(vd, vs2, rs1, v0, vl); } -vfloat64m2_t test_vfmerge_vfm_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, vbool32_t v0, size_t vl) { +vfloat64m2_t test_vfmerge_vfm_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, vbool32_t v0, size_t vl) { return __riscv_vfmerge_vfm_f64m2_tu(vd, vs2, rs1, v0, vl); } -vfloat64m4_t test_vfmerge_vfm_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, vbool16_t v0, size_t vl) { +vfloat64m4_t test_vfmerge_vfm_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, vbool16_t v0, size_t vl) { return __riscv_vfmerge_vfm_f64m4_tu(vd, vs2, rs1, v0, vl); } -vfloat64m8_t test_vfmerge_vfm_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, vbool8_t v0, size_t vl) { +vfloat64m8_t test_vfmerge_vfm_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, vbool8_t v0, size_t vl) { return __riscv_vfmerge_vfm_f64m8_tu(vd, vs2, rs1, v0, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmin.c b/auto-generated/policy_funcs/llvm-api-tests/vfmin.c 
index ca65682f4..860c77008 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfmin.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfmin.c @@ -6,482 +6,663 @@ #include -vfloat16mf4_t test_vfmin_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmin_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfmin_vv_f16mf4_tu(vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmin_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmin_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmin_vf_f16mf4_tu(vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmin_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmin_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfmin_vv_f16mf2_tu(vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmin_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmin_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmin_vf_f16mf2_tu(vd, vs2, rs1, vl); } -vfloat16m1_t test_vfmin_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmin_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfmin_vv_f16m1_tu(vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmin_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmin_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmin_vf_f16m1_tu(vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmin_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmin_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfmin_vv_f16m2_tu(vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmin_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmin_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmin_vf_f16m2_tu(vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmin_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmin_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfmin_vv_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vfmin_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmin_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmin_vf_f16m4_tu(vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmin_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmin_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfmin_vv_f16m8_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmin_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmin_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmin_vf_f16m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmin_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmin_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return 
__riscv_vfmin_vv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfmin_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmin_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfmin_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmin_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmin_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfmin_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfmin_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmin_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfmin_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmin_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmin_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfmin_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmin_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmin_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfmin_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmin_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmin_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfmin_vv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmin_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmin_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfmin_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmin_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmin_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vfmin_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmin_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmin_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfmin_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfmin_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmin_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfmin_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfmin_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmin_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfmin_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfmin_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmin_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfmin_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfmin_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmin_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfmin_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfmin_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmin_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + 
vfloat64m4_t vs1, size_t vl) { return __riscv_vfmin_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfmin_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmin_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfmin_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfmin_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmin_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfmin_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfmin_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmin_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfmin_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfmin_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmin_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16mf4_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmin_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmin_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmin_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmin_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmin_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmin_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfmin_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmin_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmin_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmin_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmin_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmin_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmin_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmin_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmin_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmin_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t 
test_vfmin_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmin_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmin_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmin_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmin_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmin_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmin_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmin_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfmin_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmin_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfmin_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmin_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmin_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfmin_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmin_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmin_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmin_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmin_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmin_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmin_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmin_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmin_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmin_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmin_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmin_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmin_vf_f32m8_tum(vbool4_t vm, 
vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmin_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfmin_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmin_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfmin_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfmin_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmin_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfmin_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfmin_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmin_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfmin_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmin_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfmin_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfmin_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmin_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfmin_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfmin_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmin_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfmin_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfmin_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmin_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfmin_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfmin_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmin_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfmin_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfmin_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmin_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmin_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmin_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmin_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmin_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmin_vf_f16mf2_tumu(vbool32_t vm, 
vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmin_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfmin_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmin_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmin_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmin_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmin_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmin_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmin_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmin_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmin_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmin_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfmin_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmin_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmin_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmin_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmin_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmin_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmin_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmin_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfmin_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmin_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfmin_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmin_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmin_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t 
test_vfmin_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmin_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmin_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmin_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmin_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmin_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmin_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmin_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmin_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmin_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmin_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmin_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmin_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmin_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfmin_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmin_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfmin_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfmin_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmin_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfmin_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfmin_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmin_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfmin_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmin_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfmin_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfmin_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmin_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfmin_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t 
test_vfmin_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmin_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfmin_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfmin_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmin_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfmin_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfmin_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmin_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfmin_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfmin_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmin_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmin_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmin_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmin_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmin_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmin_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmin_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmin_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfmin_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmin_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmin_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmin_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfmin_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmin_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmin_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmin_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmin_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfmin_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmin_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmin_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t 
test_vfmin_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmin_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfmin_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmin_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmin_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmin_vv_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmin_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmin_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfmin_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmin_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmin_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfmin_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmin_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmin_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmin_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfmin_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmin_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmin_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmin_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmin_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmin_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmin_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmin_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmin_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmin_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfmin_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmin_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmin_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmin_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmin_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float 
rs1, size_t vl) {
+vfloat32m8_t test_vfmin_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+                                    vfloat32m8_t vs2, float rs1, size_t vl) {
   return __riscv_vfmin_vf_f32m8_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m1_t test_vfmin_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfmin_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+                                    vfloat64m1_t vs2, vfloat64m1_t vs1,
+                                    size_t vl) {
   return __riscv_vfmin_vv_f64m1_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m1_t test_vfmin_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) {
+vfloat64m1_t test_vfmin_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+                                    vfloat64m1_t vs2, double rs1, size_t vl) {
   return __riscv_vfmin_vf_f64m1_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m2_t test_vfmin_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) {
+vfloat64m2_t test_vfmin_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+                                    vfloat64m2_t vs2, vfloat64m2_t vs1,
+                                    size_t vl) {
   return __riscv_vfmin_vv_f64m2_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m2_t test_vfmin_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) {
+vfloat64m2_t test_vfmin_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+                                    vfloat64m2_t vs2, double rs1, size_t vl) {
   return __riscv_vfmin_vf_f64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m4_t test_vfmin_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) {
+vfloat64m4_t test_vfmin_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                    vfloat64m4_t vs2, vfloat64m4_t vs1,
+                                    size_t vl) {
   return __riscv_vfmin_vv_f64m4_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m4_t test_vfmin_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) {
+vfloat64m4_t test_vfmin_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                    vfloat64m4_t vs2, double rs1, size_t vl) {
   return __riscv_vfmin_vf_f64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m8_t test_vfmin_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) {
+vfloat64m8_t test_vfmin_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                    vfloat64m8_t vs2, vfloat64m8_t vs1,
+                                    size_t vl) {
   return __riscv_vfmin_vv_f64m8_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m8_t test_vfmin_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) {
+vfloat64m8_t test_vfmin_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                    vfloat64m8_t vs2, double rs1, size_t vl) {
   return __riscv_vfmin_vf_f64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmsac.c b/auto-generated/policy_funcs/llvm-api-tests/vfmsac.c
index 72f117c72..ad2d0f6db 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmsac.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmsac.c
@@ -6,962 +6,1362 @@
 #include <riscv_vector.h>

-vfloat16mf4_t test_vfmsac_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfmsac_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1,
+                                       vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfmsac_vv_f16mf4_tu(vd, vs1, vs2, vl);
 }

-vfloat16mf4_t test_vfmsac_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfmsac_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1,
+                                       vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfmsac_vf_f16mf4_tu(vd, rs1, vs2, vl);
 }

-vfloat16mf2_t test_vfmsac_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfmsac_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1,
+                                       vfloat16mf2_t vs2, size_t vl)
{ return __riscv_vfmsac_vv_f16mf2_tu(vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmsac_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16mf2_tu(vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmsac_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmsac_vv_f16m1_tu(vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmsac_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16m1_tu(vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmsac_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmsac_vv_f16m2_tu(vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmsac_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16m2_tu(vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmsac_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmsac_vv_f16m4_tu(vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmsac_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16m4_tu(vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmsac_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmsac_vv_f16m8_tu(vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmsac_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16m8_tu(vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmsac_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmsac_vv_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmsac_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32mf2_tu(vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmsac_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmsac_vv_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmsac_vf_f32m1_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vf_f32m1_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m1_tu(vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmsac_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, 
size_t vl) { +vfloat32m2_t test_vfmsac_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsac_vv_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmsac_vf_f32m2_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vf_f32m2_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m2_tu(vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmsac_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsac_vv_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmsac_vf_f32m4_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vf_f32m4_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m4_tu(vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmsac_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsac_vv_f32m8_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmsac_vf_f32m8_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vf_f32m8_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m8_tu(vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmsac_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmsac_vv_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmsac_vf_f64m1_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vf_f64m1_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m1_tu(vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmsac_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsac_vv_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmsac_vf_f64m2_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vf_f64m2_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m2_tu(vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmsac_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsac_vv_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmsac_vf_f64m4_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vf_f64m4_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m4_tu(vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmsac_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsac_vv_f64m8_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmsac_vf_f64m8_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vf_f64m8_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m8_tu(vd, rs1, vs2, vl); } -vfloat16mf4_t 
test_vfmsac_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsac_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16mf4_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmsac_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsac_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16mf4_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmsac_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmsac_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmsac_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m1_tum(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmsac_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m1_tum(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmsac_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m2_tum(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmsac_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmsac_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m4_tum(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmsac_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m4_tum(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmsac_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m8_tum(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmsac_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return 
__riscv_vfmsac_vf_f16m8_tum(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmsac_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmsac_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmsac_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmsac_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m1_tum(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmsac_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmsac_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmsac_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmsac_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m4_tum(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmsac_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmsac_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m8_tum(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmsac_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmsac_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return 
__riscv_vfmsac_vf_f64m1_tum(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmsac_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmsac_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m2_tum(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmsac_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmsac_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m4_tum(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmsac_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmsac_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m8_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmsac_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsac_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16mf4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmsac_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsac_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16mf4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmsac_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmsac_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmsac_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmsac_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, 
+ _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmsac_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmsac_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmsac_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmsac_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmsac_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmsac_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmsac_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmsac_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmsac_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmsac_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmsac_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmsac_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t 
test_vfmsac_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmsac_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmsac_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmsac_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmsac_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmsac_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmsac_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmsac_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmsac_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmsac_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmsac_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmsac_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmsac_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { 
+vfloat64m8_t test_vfmsac_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmsac_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsac_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16mf4_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmsac_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsac_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16mf4_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmsac_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmsac_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmsac_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m1_mu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmsac_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m1_mu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmsac_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m2_mu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmsac_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16m2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmsac_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m4_mu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmsac_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16m4_mu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmsac_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m8_mu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmsac_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, 
size_t vl) { +vfloat16m8_t test_vfmsac_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16m8_mu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmsac_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmsac_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmsac_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmsac_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m1_mu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmsac_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmsac_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmsac_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmsac_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m4_mu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmsac_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmsac_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m8_mu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmsac_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmsac_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t 
test_vfmsac_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m1_mu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmsac_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmsac_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m2_mu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmsac_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmsac_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m4_mu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmsac_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmsac_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m8_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmsac_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsac_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfmsac_vv_f16mf4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsac_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsac_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16mf4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsac_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfmsac_vv_f16mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsac_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsac_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmsac_vv_f16m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsac_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vf_f16m1_rm_tu(vfloat16m1_t vd, 
_Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsac_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmsac_vv_f16m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsac_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsac_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmsac_vv_f16m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsac_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsac_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmsac_vv_f16m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsac_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f16m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsac_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmsac_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsac_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsac_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmsac_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsac_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsac_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsac_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsac_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m2_rm_tu(vd, rs1, vs2, 
__RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsac_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsac_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsac_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsac_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsac_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsac_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsac_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmsac_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsac_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsac_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsac_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsac_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsac_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsac_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsac_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsac_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsac_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsac_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f64m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsac_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, 
size_t vl) { +vfloat16mf4_t test_vfmsac_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16mf4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsac_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsac_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsac_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsac_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsac_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsac_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsac_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsac_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsac_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsac_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsac_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsac_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t 
vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsac_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsac_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsac_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsac_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsac_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsac_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsac_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsac_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsac_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsac_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsac_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, 
vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsac_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsac_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsac_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsac_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsac_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsac_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsac_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsac_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfmsac_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfmsac_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfmsac_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfmsac_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfmsac_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfmsac_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfmsac_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfmsac_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfmsac_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); 
+vfloat16mf2_t test_vfmsac_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfmsac_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfmsac_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfmsac_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfmsac_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfmsac_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfmsac_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsac_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsac_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsac_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsac_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsac_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsac_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsac_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsac_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfmsac_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfmsac_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, + 
vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfmsac_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfmsac_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfmsac_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfmsac_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfmsac_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfmsac_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsac_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsac_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsac_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsac_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsac_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsac_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsac_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsac_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsac_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, 
size_t vl) { +vfloat64m1_t test_vfmsac_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsac_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsac_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsac_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsac_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsac_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsac_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsac_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsac_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsac_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsac_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16mf4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsac_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsac_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16mf4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsac_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsac_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsac_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsac_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t 
vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsac_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsac_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsac_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsac_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsac_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsac_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsac_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsac_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsac_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f16m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsac_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsac_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f16m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsac_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsac_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsac_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsac_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsac_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, 
vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsac_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsac_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsac_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsac_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f32m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsac_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsac_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsac_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsac_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsac_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsac_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsac_vf_f32m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsac_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsac_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsac_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsac_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsac_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsac_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsac_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsac_vf_f64m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsac_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t 
test_vfmsac_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                        vfloat64m4_t vs1, vfloat64m4_t vs2,
+                                        size_t vl) {
   return __riscv_vfmsac_vv_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfmsac_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfmsac_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                        double rs1, vfloat64m4_t vs2,
+                                        size_t vl) {
   return __riscv_vfmsac_vf_f64m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfmsac_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfmsac_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                        vfloat64m8_t vs1, vfloat64m8_t vs2,
+                                        size_t vl) {
   return __riscv_vfmsac_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfmsac_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfmsac_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                        double rs1, vfloat64m8_t vs2,
+                                        size_t vl) {
   return __riscv_vfmsac_vf_f64m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmsub.c b/auto-generated/policy_funcs/llvm-api-tests/vfmsub.c
index b41d363ba..a2a0e463d 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmsub.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmsub.c
@@ -6,962 +6,1362 @@
 #include <riscv_vector.h>
-vfloat16mf4_t test_vfmsub_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfmsub_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1,
+                                       vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfmsub_vv_f16mf4_tu(vd, vs1, vs2, vl);
 }
-vfloat16mf4_t test_vfmsub_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfmsub_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1,
+                                       vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfmsub_vf_f16mf4_tu(vd, rs1, vs2, vl);
 }
-vfloat16mf2_t test_vfmsub_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfmsub_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1,
+                                       vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfmsub_vv_f16mf2_tu(vd, vs1, vs2, vl);
 }
-vfloat16mf2_t test_vfmsub_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfmsub_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1,
+                                       vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfmsub_vf_f16mf2_tu(vd, rs1, vs2, vl);
 }
-vfloat16m1_t test_vfmsub_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfmsub_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1,
+                                     vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfmsub_vv_f16m1_tu(vd, vs1, vs2, vl);
 }
-vfloat16m1_t test_vfmsub_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfmsub_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1,
+                                     vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfmsub_vf_f16m1_tu(vd, rs1, vs2, vl);
 }
-vfloat16m2_t test_vfmsub_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfmsub_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1,
+                                     vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfmsub_vv_f16m2_tu(vd, vs1, vs2, vl);
 }
-vfloat16m2_t test_vfmsub_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfmsub_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1,
+                                     vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfmsub_vf_f16m2_tu(vd, rs1, vs2, vl);
 }
-vfloat16m4_t test_vfmsub_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfmsub_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1,
+                                     vfloat16m4_t vs2, size_t vl) {
   return __riscv_vfmsub_vv_f16m4_tu(vd, vs1, vs2, vl);
 }
-vfloat16m4_t test_vfmsub_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfmsub_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1,
+                                     vfloat16m4_t vs2, size_t vl) {
   return __riscv_vfmsub_vf_f16m4_tu(vd, rs1, vs2, vl);
 }
-vfloat16m8_t test_vfmsub_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfmsub_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1,
+                                     vfloat16m8_t vs2, size_t vl) {
   return __riscv_vfmsub_vv_f16m8_tu(vd, vs1, vs2, vl);
 }
-vfloat16m8_t test_vfmsub_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfmsub_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1,
+                                     vfloat16m8_t vs2, size_t vl) {
   return __riscv_vfmsub_vf_f16m8_tu(vd, rs1, vs2, vl);
 }
-vfloat32mf2_t test_vfmsub_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfmsub_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1,
+                                       vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfmsub_vv_f32mf2_tu(vd, vs1, vs2, vl);
 }
-vfloat32mf2_t test_vfmsub_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfmsub_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1,
+                                       vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfmsub_vf_f32mf2_tu(vd, rs1, vs2, vl);
 }
-vfloat32m1_t test_vfmsub_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfmsub_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1,
+                                     vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfmsub_vv_f32m1_tu(vd, vs1, vs2, vl);
 }
-vfloat32m1_t test_vfmsub_vf_f32m1_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfmsub_vf_f32m1_tu(vfloat32m1_t vd, float rs1,
+                                     vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfmsub_vf_f32m1_tu(vd, rs1, vs2, vl);
 }
-vfloat32m2_t test_vfmsub_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfmsub_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1,
+                                     vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfmsub_vv_f32m2_tu(vd, vs1, vs2, vl);
 }
-vfloat32m2_t test_vfmsub_vf_f32m2_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfmsub_vf_f32m2_tu(vfloat32m2_t vd, float rs1,
+                                     vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfmsub_vf_f32m2_tu(vd, rs1, vs2, vl);
 }
-vfloat32m4_t test_vfmsub_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfmsub_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1,
+                                     vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfmsub_vv_f32m4_tu(vd, vs1, vs2, vl);
 }
-vfloat32m4_t test_vfmsub_vf_f32m4_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfmsub_vf_f32m4_tu(vfloat32m4_t vd, float rs1,
+                                     vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfmsub_vf_f32m4_tu(vd, rs1, vs2, vl);
 }
-vfloat32m8_t test_vfmsub_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfmsub_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1,
+                                     vfloat32m8_t vs2, size_t vl) {
   return __riscv_vfmsub_vv_f32m8_tu(vd, vs1, vs2, vl);
 }
-vfloat32m8_t test_vfmsub_vf_f32m8_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfmsub_vf_f32m8_tu(vfloat32m8_t
vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m8_tu(vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmsub_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmsub_vv_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmsub_vf_f64m1_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vf_f64m1_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m1_tu(vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmsub_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsub_vv_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmsub_vf_f64m2_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vf_f64m2_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m2_tu(vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmsub_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsub_vv_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmsub_vf_f64m4_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vf_f64m4_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m4_tu(vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmsub_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsub_vv_f64m8_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmsub_vf_f64m8_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vf_f64m8_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m8_tu(vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmsub_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16mf4_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmsub_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16mf4_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmsub_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmsub_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmsub_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t 
test_vfmsub_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m1_tum(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmsub_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m1_tum(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmsub_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m2_tum(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmsub_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmsub_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m4_tum(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmsub_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m4_tum(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmsub_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m8_tum(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmsub_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m8_tum(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmsub_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmsub_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmsub_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmsub_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m1_tum(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmsub_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) 
{ +vfloat32m2_t test_vfmsub_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmsub_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmsub_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmsub_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m4_tum(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmsub_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmsub_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m8_tum(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmsub_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmsub_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m1_tum(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmsub_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmsub_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m2_tum(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmsub_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmsub_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m4_tum(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmsub_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t 
test_vfmsub_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmsub_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m8_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmsub_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16mf4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmsub_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16mf4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmsub_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmsub_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmsub_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmsub_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmsub_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmsub_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmsub_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmsub_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmsub_vv_f16m8_tumu(vbool2_t vm, 
vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmsub_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmsub_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmsub_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmsub_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmsub_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmsub_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmsub_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmsub_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmsub_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmsub_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmsub_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t 
test_vfmsub_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmsub_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmsub_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmsub_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmsub_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmsub_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfmsub_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmsub_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmsub_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16mf4_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfmsub_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16mf4_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfmsub_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfmsub_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return 
__riscv_vfmsub_vf_f16mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfmsub_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m1_mu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfmsub_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m1_mu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfmsub_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m2_mu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfmsub_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f16m2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfmsub_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m4_mu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfmsub_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f16m4_mu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfmsub_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m8_mu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfmsub_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f16m8_mu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfmsub_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfmsub_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfmsub_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfmsub_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return 
__riscv_vfmsub_vf_f32m1_mu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfmsub_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfmsub_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfmsub_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfmsub_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m4_mu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfmsub_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfmsub_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m8_mu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfmsub_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfmsub_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m1_mu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfmsub_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfmsub_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m2_mu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfmsub_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfmsub_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m4_mu(vm, vd, rs1, vs2, vl); } 
-vfloat64m8_t test_vfmsub_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfmsub_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m8_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfmsub_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfmsub_vv_f16mf4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsub_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f16mf4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsub_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfmsub_vv_f16mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsub_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f16mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsub_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmsub_vv_f16m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsub_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfmsub_vf_f16m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsub_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmsub_vv_f16m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsub_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f16m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsub_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmsub_vv_f16m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsub_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f16m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsub_vv_f16m8_rm_tu(vfloat16m8_t vd, 
vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmsub_vv_f16m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsub_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f16m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsub_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmsub_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsub_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsub_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmsub_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsub_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsub_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsub_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsub_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsub_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsub_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsub_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsub_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsub_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsub_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsub_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t 
vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmsub_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsub_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsub_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsub_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsub_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsub_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsub_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsub_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsub_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsub_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsub_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f64m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsub_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16mf4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsub_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsub_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsub_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsub_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t 
vl) { +vfloat16m1_t test_vfmsub_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsub_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsub_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsub_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsub_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsub_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsub_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsub_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsub_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsub_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsub_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsub_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t 
vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsub_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsub_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsub_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsub_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsub_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsub_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsub_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsub_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsub_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsub_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsub_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { 
+vfloat64m4_t test_vfmsub_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsub_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsub_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsub_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsub_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfmsub_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfmsub_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfmsub_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfmsub_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfmsub_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfmsub_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfmsub_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfmsub_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfmsub_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfmsub_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfmsub_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfmsub_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfmsub_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfmsub_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfmsub_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfmsub_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsub_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t 
test_vfmsub_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsub_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsub_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsub_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsub_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsub_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsub_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfmsub_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfmsub_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfmsub_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfmsub_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfmsub_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfmsub_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfmsub_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfmsub_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsub_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsub_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { 
+vfloat32m2_t test_vfmsub_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsub_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsub_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsub_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsub_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsub_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsub_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsub_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsub_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsub_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsub_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsub_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t 
vl) { +vfloat64m4_t test_vfmsub_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsub_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsub_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsub_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16mf4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmsub_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfmsub_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16mf4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsub_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmsub_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfmsub_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsub_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmsub_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfmsub_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsub_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmsub_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfmsub_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsub_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, 
vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmsub_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfmsub_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsub_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f16m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmsub_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfmsub_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f16m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsub_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmsub_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfmsub_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsub_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmsub_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfmsub_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsub_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmsub_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfmsub_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f32m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsub_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfmsub_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmsub_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { 
+vfloat32m4_t test_vfmsub_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsub_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmsub_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfmsub_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfmsub_vf_f32m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsub_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmsub_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfmsub_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsub_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmsub_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfmsub_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsub_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmsub_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfmsub_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsub_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsub_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmsub_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfmsub_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfmsub_vf_f64m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmul.c b/auto-generated/policy_funcs/llvm-api-tests/vfmul.c index ab044a8c1..59c79f7b5 100644 --- 
a/auto-generated/policy_funcs/llvm-api-tests/vfmul.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfmul.c @@ -6,962 +6,1349 @@ #include <riscv_vector.h> -vfloat16mf4_t test_vfmul_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmul_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfmul_vv_f16mf4_tu(vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmul_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmul_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16mf4_tu(vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmul_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmul_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfmul_vv_f16mf2_tu(vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmul_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmul_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16mf2_tu(vd, vs2, rs1, vl); } -vfloat16m1_t test_vfmul_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmul_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfmul_vv_f16m1_tu(vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmul_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmul_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m1_tu(vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmul_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmul_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfmul_vv_f16m2_tu(vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmul_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmul_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m2_tu(vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmul_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmul_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfmul_vv_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vfmul_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmul_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m4_tu(vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmul_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmul_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfmul_vv_f16m8_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmul_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmul_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmul_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmul_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfmul_vv_f32mf2_tu(vd, vs2, vs1, vl); }
-vfloat32mf2_t test_vfmul_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmul_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfmul_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmul_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmul_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfmul_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfmul_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmul_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfmul_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmul_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmul_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfmul_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmul_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmul_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfmul_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmul_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmul_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfmul_vv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmul_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmul_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfmul_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmul_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmul_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vfmul_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmul_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmul_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfmul_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfmul_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmul_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfmul_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfmul_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmul_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfmul_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfmul_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmul_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfmul_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfmul_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmul_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfmul_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfmul_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmul_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return 
__riscv_vfmul_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfmul_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmul_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfmul_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfmul_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmul_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfmul_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfmul_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmul_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfmul_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfmul_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmul_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf4_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmul_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmul_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmul_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmul_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmul_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmul_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfmul_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmul_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmul_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmul_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmul_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmul_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmul_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmul_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmul_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmul_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfmul_vf_f16m4_tum(vbool4_t vm, 
vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmul_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmul_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmul_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmul_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmul_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmul_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmul_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfmul_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmul_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmul_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmul_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfmul_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmul_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmul_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmul_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmul_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmul_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmul_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmul_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmul_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmul_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmul_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmul_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmul_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, 
float rs1, size_t vl) { +vfloat32m8_t test_vfmul_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfmul_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmul_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfmul_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmul_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfmul_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmul_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfmul_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmul_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfmul_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmul_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfmul_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmul_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfmul_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmul_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfmul_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmul_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfmul_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmul_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmul_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmul_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmul_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmul_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmul_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, 
_Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmul_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfmul_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmul_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmul_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmul_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmul_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmul_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmul_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmul_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmul_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmul_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfmul_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmul_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmul_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmul_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmul_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmul_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmul_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmul_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfmul_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmul_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmul_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmul_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfmul_vf_f32m1_tumu(vbool32_t vm, 
vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmul_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmul_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmul_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmul_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmul_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmul_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmul_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmul_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmul_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmul_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmul_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmul_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmul_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfmul_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmul_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfmul_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmul_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfmul_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmul_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfmul_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmul_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfmul_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmul_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfmul_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t 
vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmul_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfmul_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmul_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfmul_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmul_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfmul_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmul_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfmul_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmul_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfmul_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmul_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfmul_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmul_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfmul_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmul_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfmul_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmul_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfmul_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmul_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfmul_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmul_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfmul_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmul_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfmul_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, 
size_t vl) { +vfloat16m4_t test_vfmul_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfmul_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmul_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfmul_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmul_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfmul_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmul_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfmul_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmul_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfmul_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmul_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfmul_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmul_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfmul_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmul_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfmul_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmul_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfmul_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmul_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfmul_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmul_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfmul_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmul_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfmul_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmul_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + 
vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfmul_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmul_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfmul_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmul_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfmul_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmul_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfmul_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmul_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfmul_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmul_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfmul_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmul_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m4_mu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfmul_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmul_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfmul_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmul_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfmul_vf_f64m8_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfmul_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmul_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfmul_vv_f16mf4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmul_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmul_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16mf4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmul_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmul_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfmul_vv_f16mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmul_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmul_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16mf2_rm_tu(vd, vs2, 
rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmul_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmul_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfmul_vv_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmul_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmul_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmul_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmul_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfmul_vv_f16m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmul_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmul_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmul_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmul_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfmul_vv_f16m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmul_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmul_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmul_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmul_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfmul_vv_f16m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmul_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmul_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfmul_vf_f16m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmul_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmul_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfmul_vv_f32mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmul_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmul_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfmul_vf_f32mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmul_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmul_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfmul_vv_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmul_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmul_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfmul_vf_f32m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmul_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { 
+vfloat32m2_t test_vfmul_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfmul_vv_f32m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmul_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmul_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfmul_vf_f32m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmul_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmul_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfmul_vv_f32m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmul_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmul_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfmul_vf_f32m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmul_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmul_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vfmul_vv_f32m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmul_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmul_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfmul_vf_f32m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmul_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmul_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfmul_vv_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmul_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmul_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfmul_vf_f64m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmul_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmul_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfmul_vv_f64m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmul_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmul_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfmul_vf_f64m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmul_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmul_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return __riscv_vfmul_vv_f64m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmul_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmul_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfmul_vf_f64m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmul_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmul_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfmul_vv_f64m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, 
vl); } -vfloat64m8_t test_vfmul_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmul_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfmul_vf_f64m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmul_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmul_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmul_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmul_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmul_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmul_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmul_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmul_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmul_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmul_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmul_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmul_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmul_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmul_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmul_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmul_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmul_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmul_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmul_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmul_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t 
test_vfmul_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmul_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmul_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmul_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmul_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmul_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmul_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmul_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmul_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmul_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmul_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmul_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmul_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmul_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmul_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmul_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmul_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmul_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmul_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmul_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmul_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmul_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t 
test_vfmul_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmul_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmul_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmul_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmul_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmul_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmul_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmul_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmul_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmul_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmul_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmul_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmul_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmul_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmul_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmul_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmul_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmul_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmul_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmul_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmul_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmul_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t 
test_vfmul_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmul_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmul_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmul_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmul_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmul_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmul_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmul_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmul_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmul_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmul_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmul_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmul_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmul_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmul_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmul_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmul_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmul_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmul_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmul_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmul_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmul_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32mf2_rm_tumu(vm, vd, vs2, vs1, 
__RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmul_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmul_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmul_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmul_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmul_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmul_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmul_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmul_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmul_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmul_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmul_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmul_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmul_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmul_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmul_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmul_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmul_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmul_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmul_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmul_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmul_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmul_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } 
-vfloat64m2_t test_vfmul_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmul_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmul_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmul_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmul_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmul_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmul_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfmul_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmul_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmul_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmul_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmul_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmul_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfmul_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfmul_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmul_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmul_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfmul_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfmul_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmul_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfmul_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfmul_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, 
vl); } -vfloat16m1_t test_vfmul_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfmul_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmul_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfmul_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfmul_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfmul_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmul_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfmul_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfmul_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfmul_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmul_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfmul_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f16m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfmul_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfmul_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfmul_vf_f16m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmul_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfmul_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfmul_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfmul_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfmul_vf_f32mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmul_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfmul_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfmul_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfmul_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmul_vv_f32m2_rm_mu(vbool16_t vm, 
vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfmul_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfmul_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfmul_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmul_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfmul_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfmul_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfmul_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmul_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfmul_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f32m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfmul_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfmul_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfmul_vf_f32m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmul_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfmul_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfmul_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfmul_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmul_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfmul_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfmul_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfmul_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmul_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfmul_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfmul_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t 
test_vfmul_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmul_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfmul_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfmul_vv_f64m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfmul_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfmul_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfmul_vf_f64m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmv.c b/auto-generated/policy_funcs/llvm-api-tests/vfmv.c index dedd6a4f5..ed8d7b31a 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfmv.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfmv.c @@ -6,11 +6,13 @@ #include -vfloat16mf4_t test_vfmv_v_f_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmv_v_f_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, + size_t vl) { return __riscv_vfmv_v_f_f16mf4_tu(vd, rs1, vl); } -vfloat16mf2_t test_vfmv_v_f_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmv_v_f_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, + size_t vl) { return __riscv_vfmv_v_f_f16mf2_tu(vd, rs1, vl); } @@ -66,11 +68,13 @@ vfloat64m8_t test_vfmv_v_f_f64m8_tu(vfloat64m8_t vd, double rs1, size_t vl) { return __riscv_vfmv_v_f_f64m8_tu(vd, rs1, vl); } -vfloat16mf4_t test_vfmv_s_f_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfmv_s_f_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, + size_t vl) { return __riscv_vfmv_s_f_f16mf4_tu(vd, rs1, vl); } -vfloat16mf2_t test_vfmv_s_f_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfmv_s_f_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, + size_t vl) { return __riscv_vfmv_s_f_f16mf2_tu(vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfncvt.c b/auto-generated/policy_funcs/llvm-api-tests/vfncvt.c index 95f962411..8f557fa84 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfncvt.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfncvt.c @@ -6,15 +6,18 @@ #include -vint8mf8_t test_vfncvt_x_f_w_i8mf8_tu(vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_x_f_w_i8mf8_tu(vint8mf8_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8mf8_tu(vd, vs2, vl); } -vint8mf4_t test_vfncvt_x_f_w_i8mf4_tu(vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_x_f_w_i8mf4_tu(vint8mf4_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8mf4_tu(vd, vs2, vl); } -vint8mf2_t test_vfncvt_x_f_w_i8mf2_tu(vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_x_f_w_i8mf2_tu(vint8mf2_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8mf2_tu(vd, vs2, vl); } @@ -30,1802 +33,2252 @@ vint8m4_t test_vfncvt_x_f_w_i8m4_tu(vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m4_tu(vd, vs2, vl); } -vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_tu(vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_tu(vuint8mf8_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf8_tu(vd, vs2, vl); } -vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_tu(vuint8mf4_t vd, 
vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_tu(vuint8mf4_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf4_tu(vd, vs2, vl); } -vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_tu(vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_tu(vuint8mf2_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf2_tu(vd, vs2, vl); } -vuint8m1_t test_vfncvt_xu_f_w_u8m1_tu(vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t test_vfncvt_xu_f_w_u8m1_tu(vuint8m1_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8m1_tu(vd, vs2, vl); } -vuint8m2_t test_vfncvt_xu_f_w_u8m2_tu(vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_xu_f_w_u8m2_tu(vuint8m2_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8m2_tu(vd, vs2, vl); } -vuint8m4_t test_vfncvt_xu_f_w_u8m4_tu(vuint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vuint8m4_t test_vfncvt_xu_f_w_u8m4_tu(vuint8m4_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8m4_tu(vd, vs2, vl); } -vint16mf4_t test_vfncvt_x_f_w_i16mf4_tu(vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_x_f_w_i16mf4_tu(vint16mf4_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i16mf4_tu(vd, vs2, vl); } -vint16mf2_t test_vfncvt_x_f_w_i16mf2_tu(vint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vint16mf2_t test_vfncvt_x_f_w_i16mf2_tu(vint16mf2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i16mf2_tu(vd, vs2, vl); } -vint16m1_t test_vfncvt_x_f_w_i16m1_tu(vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_x_f_w_i16m1_tu(vint16m1_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i16m1_tu(vd, vs2, vl); } -vint16m2_t test_vfncvt_x_f_w_i16m2_tu(vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_x_f_w_i16m2_tu(vint16m2_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i16m2_tu(vd, vs2, vl); } -vint16m4_t test_vfncvt_x_f_w_i16m4_tu(vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_x_f_w_i16m4_tu(vint16m4_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i16m4_tu(vd, vs2, vl); } -vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_tu(vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_tu(vuint16mf4_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_tu(vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_tu(vuint16mf2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf2_tu(vd, vs2, vl); } -vuint16m1_t test_vfncvt_xu_f_w_u16m1_tu(vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_xu_f_w_u16m1_tu(vuint16m1_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u16m1_tu(vd, vs2, vl); } -vuint16m2_t test_vfncvt_xu_f_w_u16m2_tu(vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_xu_f_w_u16m2_tu(vuint16m2_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u16m2_tu(vd, vs2, vl); } -vuint16m4_t test_vfncvt_xu_f_w_u16m4_tu(vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_xu_f_w_u16m4_tu(vuint16m4_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u16m4_tu(vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_tu(vfloat16mf4_t vd, vint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_tu(vfloat16mf4_t vd, 
vint32mf2_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f16mf4_tu(vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_tu(vfloat16mf2_t vd, vint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_tu(vfloat16mf2_t vd, vint32m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f16mf2_tu(vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_x_w_f16m1_tu(vfloat16m1_t vd, vint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_x_w_f16m1_tu(vfloat16m1_t vd, vint32m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f16m1_tu(vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_x_w_f16m2_tu(vfloat16m2_t vd, vint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_x_w_f16m2_tu(vfloat16m2_t vd, vint32m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f16m2_tu(vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_x_w_f16m4_tu(vfloat16m4_t vd, vint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_x_w_f16m4_tu(vfloat16m4_t vd, vint32m8_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f16m4_tu(vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_tu(vfloat16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_tu(vfloat16mf4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf4_tu(vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_tu(vfloat16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_tu(vfloat16mf2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf2_tu(vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_xu_w_f16m1_tu(vfloat16m1_t vd, vuint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_xu_w_f16m1_tu(vfloat16m1_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f16m1_tu(vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_xu_w_f16m2_tu(vfloat16m2_t vd, vuint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_xu_w_f16m2_tu(vfloat16m2_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f16m2_tu(vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_xu_w_f16m4_tu(vfloat16m4_t vd, vuint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_xu_w_f16m4_tu(vfloat16m4_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f16m4_tu(vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_tu(vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_tu(vfloat16mf4_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f16mf4_tu(vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_tu(vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_tu(vfloat16mf2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f16mf2_tu(vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_f_w_f16m1_tu(vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_f_w_f16m1_tu(vfloat16m1_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f16m1_tu(vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_f_w_f16m2_tu(vfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_f_w_f16m2_tu(vfloat16m2_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f16m2_tu(vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_f_w_f16m4_tu(vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_f_w_f16m4_tu(vfloat16m4_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f16m4_tu(vd, vs2, vl); } -vint32mf2_t test_vfncvt_x_f_w_i32mf2_tu(vint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_x_f_w_i32mf2_tu(vint32mf2_t vd, vfloat64m1_t vs2, + size_t vl) { 
return __riscv_vfncvt_x_f_w_i32mf2_tu(vd, vs2, vl); } -vint32m1_t test_vfncvt_x_f_w_i32m1_tu(vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_x_f_w_i32m1_tu(vint32m1_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i32m1_tu(vd, vs2, vl); } -vint32m2_t test_vfncvt_x_f_w_i32m2_tu(vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_x_f_w_i32m2_tu(vint32m2_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i32m2_tu(vd, vs2, vl); } -vint32m4_t test_vfncvt_x_f_w_i32m4_tu(vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t test_vfncvt_x_f_w_i32m4_tu(vint32m4_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i32m4_tu(vd, vs2, vl); } -vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_tu(vuint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_tu(vuint32mf2_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vfncvt_xu_f_w_u32m1_tu(vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_xu_f_w_u32m1_tu(vuint32m1_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vfncvt_xu_f_w_u32m2_tu(vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_xu_f_w_u32m2_tu(vuint32m2_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vfncvt_xu_f_w_u32m4_tu(vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_xu_f_w_u32m4_tu(vuint32m4_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u32m4_tu(vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_tu(vfloat32mf2_t vd, vint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_tu(vfloat32mf2_t vd, vint64m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f32mf2_tu(vd, vs2, vl); } -vfloat32m1_t test_vfncvt_f_x_w_f32m1_tu(vfloat32m1_t vd, vint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_x_w_f32m1_tu(vfloat32m1_t vd, vint64m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f32m1_tu(vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_x_w_f32m2_tu(vfloat32m2_t vd, vint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_x_w_f32m2_tu(vfloat32m2_t vd, vint64m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f32m2_tu(vd, vs2, vl); } -vfloat32m4_t test_vfncvt_f_x_w_f32m4_tu(vfloat32m4_t vd, vint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_x_w_f32m4_tu(vfloat32m4_t vd, vint64m8_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f32m4_tu(vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_tu(vfloat32mf2_t vd, vuint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_tu(vfloat32mf2_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f32mf2_tu(vd, vs2, vl); } -vfloat32m1_t test_vfncvt_f_xu_w_f32m1_tu(vfloat32m1_t vd, vuint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_xu_w_f32m1_tu(vfloat32m1_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f32m1_tu(vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_xu_w_f32m2_tu(vfloat32m2_t vd, vuint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_xu_w_f32m2_tu(vfloat32m2_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f32m2_tu(vd, vs2, vl); } -vfloat32m4_t test_vfncvt_f_xu_w_f32m4_tu(vfloat32m4_t vd, vuint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_xu_w_f32m4_tu(vfloat32m4_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f32m4_tu(vd, vs2, vl); } -vfloat32mf2_t 
test_vfncvt_f_f_w_f32mf2_tu(vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_tu(vfloat32mf2_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f32mf2_tu(vd, vs2, vl); } -vfloat32m1_t test_vfncvt_f_f_w_f32m1_tu(vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_f_w_f32m1_tu(vfloat32m1_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f32m1_tu(vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_f_w_f32m2_tu(vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_f_w_f32m2_tu(vfloat32m2_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f32m2_tu(vd, vs2, vl); } -vfloat32m4_t test_vfncvt_f_f_w_f32m4_tu(vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_f_w_f32m4_tu(vfloat32m4_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f32m4_tu(vd, vs2, vl); } -vint8mf8_t test_vfncvt_x_f_w_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_x_f_w_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf8_tum(vm, vd, vs2, vl); } -vint8mf4_t test_vfncvt_x_f_w_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_x_f_w_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf4_tum(vm, vd, vs2, vl); } -vint8mf2_t test_vfncvt_x_f_w_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_x_f_w_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf2_tum(vm, vd, vs2, vl); } -vint8m1_t test_vfncvt_x_f_w_i8m1_tum(vbool8_t vm, vint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vint8m1_t test_vfncvt_x_f_w_i8m1_tum(vbool8_t vm, vint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m1_tum(vm, vd, vs2, vl); } -vint8m2_t test_vfncvt_x_f_w_i8m2_tum(vbool4_t vm, vint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vint8m2_t test_vfncvt_x_f_w_i8m2_tum(vbool4_t vm, vint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m2_tum(vm, vd, vs2, vl); } -vint8m4_t test_vfncvt_x_f_w_i8m4_tum(vbool2_t vm, vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vint8m4_t test_vfncvt_x_f_w_i8m4_tum(vbool2_t vm, vint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m4_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vfncvt_xu_f_w_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t test_vfncvt_xu_f_w_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t 
test_vfncvt_xu_f_w_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_xu_f_w_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vfncvt_xu_f_w_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vuint8m4_t test_vfncvt_xu_f_w_u8m4_tum(vbool2_t vm, vuint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m4_tum(vm, vd, vs2, vl); } -vint16mf4_t test_vfncvt_x_f_w_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_x_f_w_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf4_tum(vm, vd, vs2, vl); } -vint16mf2_t test_vfncvt_x_f_w_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vint16mf2_t test_vfncvt_x_f_w_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf2_tum(vm, vd, vs2, vl); } -vint16m1_t test_vfncvt_x_f_w_i16m1_tum(vbool16_t vm, vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_x_f_w_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m1_tum(vm, vd, vs2, vl); } -vint16m2_t test_vfncvt_x_f_w_i16m2_tum(vbool8_t vm, vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_x_f_w_i16m2_tum(vbool8_t vm, vint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m2_tum(vm, vd, vs2, vl); } -vint16m4_t test_vfncvt_x_f_w_i16m4_tum(vbool4_t vm, vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_x_f_w_i16m4_tum(vbool4_t vm, vint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m4_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vfncvt_xu_f_w_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_xu_f_w_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vfncvt_xu_f_w_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_xu_f_w_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vfncvt_xu_f_w_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_xu_f_w_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m4_tum(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf4_tum(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vint32m1_t vs2, size_t vl) { 
+vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf2_tum(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_x_w_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_x_w_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m1_tum(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_x_w_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_x_w_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m2_tum(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_x_w_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_x_w_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m4_tum(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf4_tum(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf2_tum(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_xu_w_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vuint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_xu_w_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m1_tum(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_xu_w_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vuint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_xu_w_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m2_tum(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_xu_w_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vuint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_xu_w_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m4_tum(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf4_tum(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf2_tum(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_f_w_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_f_w_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m1_tum(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_f_w_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_f_w_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m2_tum(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_f_w_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_f_w_f16m4_tum(vbool4_t vm, 
vfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m4_tum(vm, vd, vs2, vl); } -vint32mf2_t test_vfncvt_x_f_w_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_x_f_w_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32mf2_tum(vm, vd, vs2, vl); } -vint32m1_t test_vfncvt_x_f_w_i32m1_tum(vbool32_t vm, vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_x_f_w_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m1_tum(vm, vd, vs2, vl); } -vint32m2_t test_vfncvt_x_f_w_i32m2_tum(vbool16_t vm, vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_x_f_w_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m2_tum(vm, vd, vs2, vl); } -vint32m4_t test_vfncvt_x_f_w_i32m4_tum(vbool8_t vm, vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t test_vfncvt_x_f_w_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m4_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vfncvt_xu_f_w_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_xu_f_w_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vfncvt_xu_f_w_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_xu_f_w_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vfncvt_xu_f_w_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_xu_f_w_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m4_tum(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32mf2_tum(vm, vd, vs2, vl); } -vfloat32m1_t test_vfncvt_f_x_w_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_x_w_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m1_tum(vm, vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_x_w_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_x_w_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m2_tum(vm, vd, vs2, vl); } -vfloat32m4_t test_vfncvt_f_x_w_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_x_w_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m4_tum(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vuint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32mf2_tum(vm, vd, 
vs2, vl); } -vfloat32m1_t test_vfncvt_f_xu_w_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vuint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_xu_w_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m1_tum(vm, vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_xu_w_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vuint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_xu_w_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m2_tum(vm, vd, vs2, vl); } -vfloat32m4_t test_vfncvt_f_xu_w_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vuint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_xu_w_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m4_tum(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32mf2_tum(vm, vd, vs2, vl); } -vfloat32m1_t test_vfncvt_f_f_w_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_f_w_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m1_tum(vm, vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_f_w_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_f_w_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m2_tum(vm, vd, vs2, vl); } -vfloat32m4_t test_vfncvt_f_f_w_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_f_w_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m4_tum(vm, vd, vs2, vl); } -vint8mf8_t test_vfncvt_x_f_w_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_x_f_w_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf8_tumu(vm, vd, vs2, vl); } -vint8mf4_t test_vfncvt_x_f_w_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_x_f_w_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf4_tumu(vm, vd, vs2, vl); } -vint8mf2_t test_vfncvt_x_f_w_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_x_f_w_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf2_tumu(vm, vd, vs2, vl); } -vint8m1_t test_vfncvt_x_f_w_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vint8m1_t test_vfncvt_x_f_w_i8m1_tumu(vbool8_t vm, vint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m1_tumu(vm, vd, vs2, vl); } -vint8m2_t test_vfncvt_x_f_w_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vint8m2_t test_vfncvt_x_f_w_i8m2_tumu(vbool4_t vm, vint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m2_tumu(vm, vd, vs2, vl); } -vint8m4_t test_vfncvt_x_f_w_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vint8m4_t test_vfncvt_x_f_w_i8m4_tumu(vbool2_t vm, vint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m4_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { 
+vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vfncvt_xu_f_w_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t test_vfncvt_xu_f_w_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vfncvt_xu_f_w_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_xu_f_w_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vfncvt_xu_f_w_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vuint8m4_t test_vfncvt_xu_f_w_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m4_tumu(vm, vd, vs2, vl); } -vint16mf4_t test_vfncvt_x_f_w_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_x_f_w_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf4_tumu(vm, vd, vs2, vl); } -vint16mf2_t test_vfncvt_x_f_w_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vint16mf2_t test_vfncvt_x_f_w_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf2_tumu(vm, vd, vs2, vl); } -vint16m1_t test_vfncvt_x_f_w_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_x_f_w_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m1_tumu(vm, vd, vs2, vl); } -vint16m2_t test_vfncvt_x_f_w_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_x_f_w_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m2_tumu(vm, vd, vs2, vl); } -vint16m4_t test_vfncvt_x_f_w_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_x_f_w_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m4_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vfncvt_xu_f_w_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_xu_f_w_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + 
vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vfncvt_xu_f_w_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_xu_f_w_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vfncvt_xu_f_w_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_xu_f_w_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m4_tumu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf4_tumu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf2_tumu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_x_w_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_x_w_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m1_tumu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_x_w_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_x_w_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m2_tumu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_x_w_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_x_w_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m4_tumu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf4_tumu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf2_tumu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_xu_w_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vuint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_xu_w_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m1_tumu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_xu_w_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vuint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_xu_w_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m2_tumu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_xu_w_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vuint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_xu_w_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m4_tumu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat32mf2_t vs2, 
size_t vl) { return __riscv_vfncvt_f_f_w_f16mf4_tumu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf2_tumu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_f_w_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_f_w_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m1_tumu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_f_w_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_f_w_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m2_tumu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_f_w_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_f_w_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m4_tumu(vm, vd, vs2, vl); } -vint32mf2_t test_vfncvt_x_f_w_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_x_f_w_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32mf2_tumu(vm, vd, vs2, vl); } -vint32m1_t test_vfncvt_x_f_w_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_x_f_w_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m1_tumu(vm, vd, vs2, vl); } -vint32m2_t test_vfncvt_x_f_w_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_x_f_w_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m2_tumu(vm, vd, vs2, vl); } -vint32m4_t test_vfncvt_x_f_w_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t test_vfncvt_x_f_w_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m4_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vfncvt_xu_f_w_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_xu_f_w_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vfncvt_xu_f_w_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_xu_f_w_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vfncvt_xu_f_w_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_xu_f_w_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m4_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vint64m1_t vs2, size_t vl) { return 
__riscv_vfncvt_f_x_w_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfncvt_f_x_w_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_x_w_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_x_w_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_x_w_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfncvt_f_x_w_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_x_w_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vuint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfncvt_f_xu_w_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vuint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_xu_w_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_xu_w_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vuint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_xu_w_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfncvt_f_xu_w_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vuint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_xu_w_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfncvt_f_f_w_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_f_w_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_f_w_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_f_w_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfncvt_f_f_w_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_f_w_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m4_tumu(vm, vd, vs2, vl); } -vint8mf8_t test_vfncvt_x_f_w_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_x_f_w_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf8_mu(vm, vd, vs2, vl); } -vint8mf4_t test_vfncvt_x_f_w_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_x_f_w_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf4_mu(vm, vd, vs2, vl); } 
-vint8mf2_t test_vfncvt_x_f_w_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_x_f_w_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf2_mu(vm, vd, vs2, vl); } -vint8m1_t test_vfncvt_x_f_w_i8m1_mu(vbool8_t vm, vint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vint8m1_t test_vfncvt_x_f_w_i8m1_mu(vbool8_t vm, vint8m1_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8m1_mu(vm, vd, vs2, vl); } -vint8m2_t test_vfncvt_x_f_w_i8m2_mu(vbool4_t vm, vint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vint8m2_t test_vfncvt_x_f_w_i8m2_mu(vbool4_t vm, vint8m2_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8m2_mu(vm, vd, vs2, vl); } -vint8m4_t test_vfncvt_x_f_w_i8m4_mu(vbool2_t vm, vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vint8m4_t test_vfncvt_x_f_w_i8m4_mu(vbool2_t vm, vint8m4_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8m4_mu(vm, vd, vs2, vl); } -vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vfncvt_xu_f_w_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t test_vfncvt_xu_f_w_u8m1_mu(vbool8_t vm, vuint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vfncvt_xu_f_w_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_xu_f_w_u8m2_mu(vbool4_t vm, vuint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vfncvt_xu_f_w_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vuint8m4_t test_vfncvt_xu_f_w_u8m4_mu(vbool2_t vm, vuint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m4_mu(vm, vd, vs2, vl); } -vint16mf4_t test_vfncvt_x_f_w_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_x_f_w_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf4_mu(vm, vd, vs2, vl); } -vint16mf2_t test_vfncvt_x_f_w_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vint16mf2_t test_vfncvt_x_f_w_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf2_mu(vm, vd, vs2, vl); } -vint16m1_t test_vfncvt_x_f_w_i16m1_mu(vbool16_t vm, vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_x_f_w_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m1_mu(vm, vd, vs2, vl); } -vint16m2_t test_vfncvt_x_f_w_i16m2_mu(vbool8_t vm, vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_x_f_w_i16m2_mu(vbool8_t vm, vint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return 
__riscv_vfncvt_x_f_w_i16m2_mu(vm, vd, vs2, vl); } -vint16m4_t test_vfncvt_x_f_w_i16m4_mu(vbool4_t vm, vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_x_f_w_i16m4_mu(vbool4_t vm, vint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m4_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vfncvt_xu_f_w_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_xu_f_w_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vfncvt_xu_f_w_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_xu_f_w_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vfncvt_xu_f_w_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_xu_f_w_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m4_mu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_x_w_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_x_w_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_x_w_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_x_w_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_x_w_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_x_w_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m4_mu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_xu_w_f16m1_mu(vbool16_t 
vm, vfloat16m1_t vd, vuint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_xu_w_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_xu_w_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vuint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_xu_w_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_xu_w_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vuint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_xu_w_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m4_mu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_f_f_w_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_f_w_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_f_f_w_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_f_w_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_f_f_w_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_f_w_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m4_mu(vm, vd, vs2, vl); } -vint32mf2_t test_vfncvt_x_f_w_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_x_f_w_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32mf2_mu(vm, vd, vs2, vl); } -vint32m1_t test_vfncvt_x_f_w_i32m1_mu(vbool32_t vm, vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_x_f_w_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m1_mu(vm, vd, vs2, vl); } -vint32m2_t test_vfncvt_x_f_w_i32m2_mu(vbool16_t vm, vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_x_f_w_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m2_mu(vm, vd, vs2, vl); } -vint32m4_t test_vfncvt_x_f_w_i32m4_mu(vbool8_t vm, vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t test_vfncvt_x_f_w_i32m4_mu(vbool8_t vm, vint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m4_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vfncvt_xu_f_w_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_xu_f_w_u32m1_mu(vbool32_t vm, vuint32m1_t 
vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vfncvt_xu_f_w_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_xu_f_w_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vfncvt_xu_f_w_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_xu_f_w_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m4_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfncvt_f_x_w_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_x_w_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_x_w_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_x_w_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfncvt_f_x_w_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_x_w_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m4_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vuint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfncvt_f_xu_w_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vuint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_xu_w_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_xu_w_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vuint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_xu_w_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfncvt_f_xu_w_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vuint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_xu_w_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m4_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfncvt_f_f_w_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_f_w_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfncvt_f_f_w_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_f_w_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t 
test_vfncvt_f_f_w_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_f_w_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m4_mu(vm, vd, vs2, vl); } -vint8mf8_t test_vfncvt_x_f_w_i8mf8_rm_tu(vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_x_f_w_i8mf8_rm_tu(vint8mf8_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8mf8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint8mf4_t test_vfncvt_x_f_w_i8mf4_rm_tu(vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_x_f_w_i8mf4_rm_tu(vint8mf4_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint8mf2_t test_vfncvt_x_f_w_i8mf2_rm_tu(vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_x_f_w_i8mf2_rm_tu(vint8mf2_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m1_t test_vfncvt_x_f_w_i8m1_rm_tu(vint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vint8m1_t test_vfncvt_x_f_w_i8m1_rm_tu(vint8m1_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m2_t test_vfncvt_x_f_w_i8m2_rm_tu(vint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vint8m2_t test_vfncvt_x_f_w_i8m2_rm_tu(vint8m2_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m4_t test_vfncvt_x_f_w_i8m4_rm_tu(vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vint8m4_t test_vfncvt_x_f_w_i8m4_rm_tu(vint8m4_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i8m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_rm_tu(vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_rm_tu(vuint8mf8_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_rm_tu(vuint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_rm_tu(vuint8mf4_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_rm_tu(vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_rm_tu(vuint8mf2_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m1_t test_vfncvt_xu_f_w_u8m1_rm_tu(vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t test_vfncvt_xu_f_w_u8m1_rm_tu(vuint8m1_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m2_t test_vfncvt_xu_f_w_u8m2_rm_tu(vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_xu_f_w_u8m2_rm_tu(vuint8m2_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m4_t test_vfncvt_xu_f_w_u8m4_rm_tu(vuint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vuint8m4_t test_vfncvt_xu_f_w_u8m4_rm_tu(vuint8m4_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u8m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf4_t test_vfncvt_x_f_w_i16mf4_rm_tu(vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_x_f_w_i16mf4_rm_tu(vint16mf4_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf2_t 
test_vfncvt_x_f_w_i16mf2_rm_tu(vint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vint16mf2_t test_vfncvt_x_f_w_i16mf2_rm_tu(vint16mf2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m1_t test_vfncvt_x_f_w_i16m1_rm_tu(vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_x_f_w_i16m1_rm_tu(vint16m1_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m2_t test_vfncvt_x_f_w_i16m2_rm_tu(vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_x_f_w_i16m2_rm_tu(vint16m2_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m4_t test_vfncvt_x_f_w_i16m4_rm_tu(vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_x_f_w_i16m4_rm_tu(vint16m4_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_rm_tu(vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_rm_tu(vuint16mf4_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_rm_tu(vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_rm_tu(vuint16mf2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m1_t test_vfncvt_xu_f_w_u16m1_rm_tu(vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_xu_f_w_u16m1_rm_tu(vuint16m1_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m2_t test_vfncvt_xu_f_w_u16m2_rm_tu(vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_xu_f_w_u16m2_rm_tu(vuint16m2_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m4_t test_vfncvt_xu_f_w_u16m4_rm_tu(vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_xu_f_w_u16m4_rm_tu(vuint16m4_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_rm_tu(vfloat16mf4_t vd, vint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_rm_tu(vfloat16mf4_t vd, vint32mf2_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_rm_tu(vfloat16mf2_t vd, vint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_rm_tu(vfloat16mf2_t vd, vint32m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfncvt_f_x_w_f16m1_rm_tu(vfloat16m1_t vd, vint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_x_w_f16m1_rm_tu(vfloat16m1_t vd, vint32m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_x_w_f16m2_rm_tu(vfloat16m2_t vd, vint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_x_w_f16m2_rm_tu(vfloat16m2_t vd, vint32m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_x_w_f16m4_rm_tu(vfloat16m4_t vd, vint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_x_w_f16m4_rm_tu(vfloat16m4_t vd, vint32m8_t vs2, + size_t vl) { return 
__riscv_vfncvt_f_x_w_f16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_rm_tu(vfloat16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_rm_tu(vfloat16mf4_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_rm_tu(vfloat16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_rm_tu(vfloat16mf2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfncvt_f_xu_w_f16m1_rm_tu(vfloat16m1_t vd, vuint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_xu_w_f16m1_rm_tu(vfloat16m1_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_xu_w_f16m2_rm_tu(vfloat16m2_t vd, vuint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_xu_w_f16m2_rm_tu(vfloat16m2_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_xu_w_f16m4_rm_tu(vfloat16m4_t vd, vuint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_xu_w_f16m4_rm_tu(vfloat16m4_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_rm_tu(vfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfncvt_f_f_w_f16m1_rm_tu(vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_f_w_f16m1_rm_tu(vfloat16m1_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_f_w_f16m2_rm_tu(vfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_f_w_f16m2_rm_tu(vfloat16m2_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_f_w_f16m4_rm_tu(vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_f_w_f16m4_rm_tu(vfloat16m4_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32mf2_t test_vfncvt_x_f_w_i32mf2_rm_tu(vint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_x_f_w_i32mf2_rm_tu(vint32mf2_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfncvt_x_f_w_i32m1_rm_tu(vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_x_f_w_i32m1_rm_tu(vint32m1_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfncvt_x_f_w_i32m2_rm_tu(vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_x_f_w_i32m2_rm_tu(vint32m2_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfncvt_x_f_w_i32m4_rm_tu(vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t 
test_vfncvt_x_f_w_i32m4_rm_tu(vint32m4_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfncvt_x_f_w_i32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_rm_tu(vuint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_rm_tu(vuint32mf2_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfncvt_xu_f_w_u32m1_rm_tu(vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_xu_f_w_u32m1_rm_tu(vuint32m1_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfncvt_xu_f_w_u32m2_rm_tu(vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_xu_f_w_u32m2_rm_tu(vuint32m2_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfncvt_xu_f_w_u32m4_rm_tu(vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_xu_f_w_u32m4_rm_tu(vuint32m4_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfncvt_xu_f_w_u32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_rm_tu(vfloat32mf2_t vd, vint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_rm_tu(vfloat32mf2_t vd, vint64m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfncvt_f_x_w_f32m1_rm_tu(vfloat32m1_t vd, vint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_x_w_f32m1_rm_tu(vfloat32m1_t vd, vint64m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_x_w_f32m2_rm_tu(vfloat32m2_t vd, vint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_x_w_f32m2_rm_tu(vfloat32m2_t vd, vint64m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_x_w_f32m4_rm_tu(vfloat32m4_t vd, vint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_x_w_f32m4_rm_tu(vfloat32m4_t vd, vint64m8_t vs2, + size_t vl) { return __riscv_vfncvt_f_x_w_f32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_rm_tu(vfloat32mf2_t vd, vuint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_rm_tu(vfloat32mf2_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfncvt_f_xu_w_f32m1_rm_tu(vfloat32m1_t vd, vuint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_xu_w_f32m1_rm_tu(vfloat32m1_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_xu_w_f32m2_rm_tu(vfloat32m2_t vd, vuint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_xu_w_f32m2_rm_tu(vfloat32m2_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_xu_w_f32m4_rm_tu(vfloat32m4_t vd, vuint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_xu_w_f32m4_rm_tu(vfloat32m4_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vfncvt_f_xu_w_f32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t 
test_vfncvt_f_f_w_f32m1_rm_tu(vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_f_w_f32m1_rm_tu(vfloat32m1_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_f_w_f32m2_rm_tu(vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_f_w_f32m2_rm_tu(vfloat32m2_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_f_w_f32m4_rm_tu(vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_f_w_f32m4_rm_tu(vfloat32m4_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfncvt_f_f_w_f32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint8mf8_t test_vfncvt_x_f_w_i8mf8_rm_tum(vbool64_t vm, vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_x_f_w_i8mf8_rm_tum(vbool64_t vm, vint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8mf4_t test_vfncvt_x_f_w_i8mf4_rm_tum(vbool32_t vm, vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_x_f_w_i8mf4_rm_tum(vbool32_t vm, vint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8mf2_t test_vfncvt_x_f_w_i8mf2_rm_tum(vbool16_t vm, vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_x_f_w_i8mf2_rm_tum(vbool16_t vm, vint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m1_t test_vfncvt_x_f_w_i8m1_rm_tum(vbool8_t vm, vint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vint8m1_t test_vfncvt_x_f_w_i8m1_rm_tum(vbool8_t vm, vint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m2_t test_vfncvt_x_f_w_i8m2_rm_tum(vbool4_t vm, vint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vint8m2_t test_vfncvt_x_f_w_i8m2_rm_tum(vbool4_t vm, vint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m4_t test_vfncvt_x_f_w_i8m4_rm_tum(vbool2_t vm, vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vint8m4_t test_vfncvt_x_f_w_i8m4_rm_tum(vbool2_t vm, vint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_rm_tum(vbool64_t vm, vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_rm_tum(vbool64_t vm, vuint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_rm_tum(vbool32_t vm, vuint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_rm_tum(vbool32_t vm, vuint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_rm_tum(vbool16_t vm, vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_rm_tum(vbool16_t vm, vuint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m1_t test_vfncvt_xu_f_w_u8m1_rm_tum(vbool8_t vm, vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t test_vfncvt_xu_f_w_u8m1_rm_tum(vbool8_t vm, vuint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return 
__riscv_vfncvt_xu_f_w_u8m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m2_t test_vfncvt_xu_f_w_u8m2_rm_tum(vbool4_t vm, vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_xu_f_w_u8m2_rm_tum(vbool4_t vm, vuint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m4_t test_vfncvt_xu_f_w_u8m4_rm_tum(vbool2_t vm, vuint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vuint8m4_t test_vfncvt_xu_f_w_u8m4_rm_tum(vbool2_t vm, vuint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf4_t test_vfncvt_x_f_w_i16mf4_rm_tum(vbool64_t vm, vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_x_f_w_i16mf4_rm_tum(vbool64_t vm, vint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf2_t test_vfncvt_x_f_w_i16mf2_rm_tum(vbool32_t vm, vint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vint16mf2_t test_vfncvt_x_f_w_i16mf2_rm_tum(vbool32_t vm, vint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m1_t test_vfncvt_x_f_w_i16m1_rm_tum(vbool16_t vm, vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_x_f_w_i16m1_rm_tum(vbool16_t vm, vint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m2_t test_vfncvt_x_f_w_i16m2_rm_tum(vbool8_t vm, vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_x_f_w_i16m2_rm_tum(vbool8_t vm, vint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m4_t test_vfncvt_x_f_w_i16m4_rm_tum(vbool4_t vm, vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_x_f_w_i16m4_rm_tum(vbool4_t vm, vint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_rm_tum(vbool64_t vm, vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_rm_tum(vbool64_t vm, vuint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_rm_tum(vbool32_t vm, vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_rm_tum(vbool32_t vm, vuint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m1_t test_vfncvt_xu_f_w_u16m1_rm_tum(vbool16_t vm, vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_xu_f_w_u16m1_rm_tum(vbool16_t vm, vuint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m2_t test_vfncvt_xu_f_w_u16m2_rm_tum(vbool8_t vm, vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_xu_f_w_u16m2_rm_tum(vbool8_t vm, vuint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m4_t test_vfncvt_xu_f_w_u16m4_rm_tum(vbool4_t vm, vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_xu_f_w_u16m4_rm_tum(vbool4_t vm, vuint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, 
vl); } -vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfncvt_f_x_w_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_x_w_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_x_w_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_x_w_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_x_w_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_x_w_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfncvt_f_xu_w_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vuint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_xu_w_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_xu_w_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vuint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_xu_w_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_xu_w_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vuint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_xu_w_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, 
vl); } -vfloat16m1_t test_vfncvt_f_f_w_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_f_w_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_f_w_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_f_w_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_f_w_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_f_w_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32mf2_t test_vfncvt_x_f_w_i32mf2_rm_tum(vbool64_t vm, vint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_x_f_w_i32mf2_rm_tum(vbool64_t vm, vint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfncvt_x_f_w_i32m1_rm_tum(vbool32_t vm, vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_x_f_w_i32m1_rm_tum(vbool32_t vm, vint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfncvt_x_f_w_i32m2_rm_tum(vbool16_t vm, vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_x_f_w_i32m2_rm_tum(vbool16_t vm, vint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfncvt_x_f_w_i32m4_rm_tum(vbool8_t vm, vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t test_vfncvt_x_f_w_i32m4_rm_tum(vbool8_t vm, vint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_rm_tum(vbool64_t vm, vuint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_rm_tum(vbool64_t vm, vuint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfncvt_xu_f_w_u32m1_rm_tum(vbool32_t vm, vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_xu_f_w_u32m1_rm_tum(vbool32_t vm, vuint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfncvt_xu_f_w_u32m2_rm_tum(vbool16_t vm, vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_xu_f_w_u32m2_rm_tum(vbool16_t vm, vuint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfncvt_xu_f_w_u32m4_rm_tum(vbool8_t vm, vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_xu_f_w_u32m4_rm_tum(vbool8_t vm, vuint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t 
test_vfncvt_f_x_w_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_x_w_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_x_w_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_x_w_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_x_w_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_x_w_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vuint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfncvt_f_xu_w_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vuint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_xu_w_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_xu_w_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vuint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_xu_w_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_xu_w_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vuint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_xu_w_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfncvt_f_f_w_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_f_w_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_f_w_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_f_w_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_f_w_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_f_w_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8mf8_t test_vfncvt_x_f_w_i8mf8_rm_tumu(vbool64_t vm, vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_x_f_w_i8mf8_rm_tumu(vbool64_t vm, vint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8mf4_t 
test_vfncvt_x_f_w_i8mf4_rm_tumu(vbool32_t vm, vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_x_f_w_i8mf4_rm_tumu(vbool32_t vm, vint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8mf2_t test_vfncvt_x_f_w_i8mf2_rm_tumu(vbool16_t vm, vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_x_f_w_i8mf2_rm_tumu(vbool16_t vm, vint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m1_t test_vfncvt_x_f_w_i8m1_rm_tumu(vbool8_t vm, vint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vint8m1_t test_vfncvt_x_f_w_i8m1_rm_tumu(vbool8_t vm, vint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m2_t test_vfncvt_x_f_w_i8m2_rm_tumu(vbool4_t vm, vint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vint8m2_t test_vfncvt_x_f_w_i8m2_rm_tumu(vbool4_t vm, vint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m4_t test_vfncvt_x_f_w_i8m4_rm_tumu(vbool2_t vm, vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vint8m4_t test_vfncvt_x_f_w_i8m4_rm_tumu(vbool2_t vm, vint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_rm_tumu(vbool64_t vm, vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_rm_tumu(vbool64_t vm, vuint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_rm_tumu(vbool32_t vm, vuint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_rm_tumu(vbool32_t vm, vuint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_rm_tumu(vbool16_t vm, vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_rm_tumu(vbool16_t vm, vuint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m1_t test_vfncvt_xu_f_w_u8m1_rm_tumu(vbool8_t vm, vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t test_vfncvt_xu_f_w_u8m1_rm_tumu(vbool8_t vm, vuint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m2_t test_vfncvt_xu_f_w_u8m2_rm_tumu(vbool4_t vm, vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_xu_f_w_u8m2_rm_tumu(vbool4_t vm, vuint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m4_t test_vfncvt_xu_f_w_u8m4_rm_tumu(vbool2_t vm, vuint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vuint8m4_t test_vfncvt_xu_f_w_u8m4_rm_tumu(vbool2_t vm, vuint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf4_t test_vfncvt_x_f_w_i16mf4_rm_tumu(vbool64_t vm, vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_x_f_w_i16mf4_rm_tumu(vbool64_t vm, vint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf2_t test_vfncvt_x_f_w_i16mf2_rm_tumu(vbool32_t vm, vint16mf2_t vd, vfloat32m1_t vs2, 
size_t vl) { +vint16mf2_t test_vfncvt_x_f_w_i16mf2_rm_tumu(vbool32_t vm, vint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m1_t test_vfncvt_x_f_w_i16m1_rm_tumu(vbool16_t vm, vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_x_f_w_i16m1_rm_tumu(vbool16_t vm, vint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m2_t test_vfncvt_x_f_w_i16m2_rm_tumu(vbool8_t vm, vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_x_f_w_i16m2_rm_tumu(vbool8_t vm, vint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m4_t test_vfncvt_x_f_w_i16m4_rm_tumu(vbool4_t vm, vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_x_f_w_i16m4_rm_tumu(vbool4_t vm, vint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_rm_tumu(vbool64_t vm, vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_rm_tumu(vbool64_t vm, vuint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_rm_tumu(vbool32_t vm, vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_rm_tumu(vbool32_t vm, vuint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m1_t test_vfncvt_xu_f_w_u16m1_rm_tumu(vbool16_t vm, vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_xu_f_w_u16m1_rm_tumu(vbool16_t vm, vuint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m2_t test_vfncvt_xu_f_w_u16m2_rm_tumu(vbool8_t vm, vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_xu_f_w_u16m2_rm_tumu(vbool8_t vm, vuint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m4_t test_vfncvt_xu_f_w_u16m4_rm_tumu(vbool4_t vm, vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_xu_f_w_u16m4_rm_tumu(vbool4_t vm, vuint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfncvt_f_x_w_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_x_w_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_x_w_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vint32m4_t vs2, size_t vl) 
{ +vfloat16m2_t test_vfncvt_f_x_w_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_x_w_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_x_w_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfncvt_f_xu_w_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vuint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_xu_w_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_xu_w_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vuint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_xu_w_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_xu_w_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vuint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_xu_w_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfncvt_f_f_w_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_f_w_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_f_w_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_f_w_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_f_w_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_f_w_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32mf2_t test_vfncvt_x_f_w_i32mf2_rm_tumu(vbool64_t vm, vint32mf2_t vd, 
vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_x_f_w_i32mf2_rm_tumu(vbool64_t vm, vint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfncvt_x_f_w_i32m1_rm_tumu(vbool32_t vm, vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_x_f_w_i32m1_rm_tumu(vbool32_t vm, vint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfncvt_x_f_w_i32m2_rm_tumu(vbool16_t vm, vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_x_f_w_i32m2_rm_tumu(vbool16_t vm, vint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfncvt_x_f_w_i32m4_rm_tumu(vbool8_t vm, vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t test_vfncvt_x_f_w_i32m4_rm_tumu(vbool8_t vm, vint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_rm_tumu(vbool64_t vm, vuint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_rm_tumu(vbool64_t vm, vuint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfncvt_xu_f_w_u32m1_rm_tumu(vbool32_t vm, vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_xu_f_w_u32m1_rm_tumu(vbool32_t vm, vuint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfncvt_xu_f_w_u32m2_rm_tumu(vbool16_t vm, vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_xu_f_w_u32m2_rm_tumu(vbool16_t vm, vuint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfncvt_xu_f_w_u32m4_rm_tumu(vbool8_t vm, vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_xu_f_w_u32m4_rm_tumu(vbool8_t vm, vuint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfncvt_f_x_w_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_x_w_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_x_w_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_x_w_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_x_w_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_x_w_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vuint64m1_t vs2, 
size_t vl) { +vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfncvt_f_xu_w_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vuint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_xu_w_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_xu_w_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vuint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_xu_w_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_xu_w_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vuint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_xu_w_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfncvt_f_f_w_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_f_w_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_f_w_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_f_w_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_f_w_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_f_w_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8mf8_t test_vfncvt_x_f_w_i8mf8_rm_mu(vbool64_t vm, vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_x_f_w_i8mf8_rm_mu(vbool64_t vm, vint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8mf4_t test_vfncvt_x_f_w_i8mf4_rm_mu(vbool32_t vm, vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_x_f_w_i8mf4_rm_mu(vbool32_t vm, vint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8mf2_t test_vfncvt_x_f_w_i8mf2_rm_mu(vbool16_t vm, vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_x_f_w_i8mf2_rm_mu(vbool16_t vm, vint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m1_t test_vfncvt_x_f_w_i8m1_rm_mu(vbool8_t vm, vint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vint8m1_t test_vfncvt_x_f_w_i8m1_rm_mu(vbool8_t vm, vint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m2_t test_vfncvt_x_f_w_i8m2_rm_mu(vbool4_t vm, vint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vint8m2_t 
test_vfncvt_x_f_w_i8m2_rm_mu(vbool4_t vm, vint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint8m4_t test_vfncvt_x_f_w_i8m4_rm_mu(vbool2_t vm, vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vint8m4_t test_vfncvt_x_f_w_i8m4_rm_mu(vbool2_t vm, vint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i8m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_rm_mu(vbool64_t vm, vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vfncvt_xu_f_w_u8mf8_rm_mu(vbool64_t vm, vuint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_rm_mu(vbool32_t vm, vuint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_xu_f_w_u8mf4_rm_mu(vbool32_t vm, vuint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_rm_mu(vbool16_t vm, vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_xu_f_w_u8mf2_rm_mu(vbool16_t vm, vuint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m1_t test_vfncvt_xu_f_w_u8m1_rm_mu(vbool8_t vm, vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t test_vfncvt_xu_f_w_u8m1_rm_mu(vbool8_t vm, vuint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m2_t test_vfncvt_xu_f_w_u8m2_rm_mu(vbool4_t vm, vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_xu_f_w_u8m2_rm_mu(vbool4_t vm, vuint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint8m4_t test_vfncvt_xu_f_w_u8m4_rm_mu(vbool2_t vm, vuint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vuint8m4_t test_vfncvt_xu_f_w_u8m4_rm_mu(vbool2_t vm, vuint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u8m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf4_t test_vfncvt_x_f_w_i16mf4_rm_mu(vbool64_t vm, vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_x_f_w_i16mf4_rm_mu(vbool64_t vm, vint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16mf2_t test_vfncvt_x_f_w_i16mf2_rm_mu(vbool32_t vm, vint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vint16mf2_t test_vfncvt_x_f_w_i16mf2_rm_mu(vbool32_t vm, vint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m1_t test_vfncvt_x_f_w_i16m1_rm_mu(vbool16_t vm, vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_x_f_w_i16m1_rm_mu(vbool16_t vm, vint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m2_t test_vfncvt_x_f_w_i16m2_rm_mu(vbool8_t vm, vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_x_f_w_i16m2_rm_mu(vbool8_t vm, vint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint16m4_t test_vfncvt_x_f_w_i16m4_rm_mu(vbool4_t vm, vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_x_f_w_i16m4_rm_mu(vbool4_t vm, vint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i16m4_rm_mu(vm, vd, 
vs2, __RISCV_FRM_RNE, vl); } -vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_rm_mu(vbool64_t vm, vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_xu_f_w_u16mf4_rm_mu(vbool64_t vm, vuint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_rm_mu(vbool32_t vm, vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_xu_f_w_u16mf2_rm_mu(vbool32_t vm, vuint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m1_t test_vfncvt_xu_f_w_u16m1_rm_mu(vbool16_t vm, vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_xu_f_w_u16m1_rm_mu(vbool16_t vm, vuint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m2_t test_vfncvt_xu_f_w_u16m2_rm_mu(vbool8_t vm, vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_xu_f_w_u16m2_rm_mu(vbool8_t vm, vuint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint16m4_t test_vfncvt_xu_f_w_u16m4_rm_mu(vbool4_t vm, vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_xu_f_w_u16m4_rm_mu(vbool4_t vm, vuint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_x_w_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_x_w_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfncvt_f_x_w_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_x_w_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_x_w_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_x_w_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_x_w_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_x_w_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_xu_w_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_xu_w_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t 
test_vfncvt_f_xu_w_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vuint32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_xu_w_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_xu_w_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vuint32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_xu_w_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_xu_w_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vuint32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_xu_w_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_f_f_w_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_f_f_w_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfncvt_f_f_w_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_f_f_w_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfncvt_f_f_w_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_f_f_w_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfncvt_f_f_w_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_f_f_w_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32mf2_t test_vfncvt_x_f_w_i32mf2_rm_mu(vbool64_t vm, vint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_x_f_w_i32mf2_rm_mu(vbool64_t vm, vint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfncvt_x_f_w_i32m1_rm_mu(vbool32_t vm, vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_x_f_w_i32m1_rm_mu(vbool32_t vm, vint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfncvt_x_f_w_i32m2_rm_mu(vbool16_t vm, vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_x_f_w_i32m2_rm_mu(vbool16_t vm, vint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfncvt_x_f_w_i32m4_rm_mu(vbool8_t vm, vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t test_vfncvt_x_f_w_i32m4_rm_mu(vbool8_t vm, vint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_x_f_w_i32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_rm_mu(vbool64_t vm, vuint32mf2_t vd, vfloat64m1_t 
vs2, size_t vl) { +vuint32mf2_t test_vfncvt_xu_f_w_u32mf2_rm_mu(vbool64_t vm, vuint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfncvt_xu_f_w_u32m1_rm_mu(vbool32_t vm, vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_xu_f_w_u32m1_rm_mu(vbool32_t vm, vuint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfncvt_xu_f_w_u32m2_rm_mu(vbool16_t vm, vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_xu_f_w_u32m2_rm_mu(vbool16_t vm, vuint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfncvt_xu_f_w_u32m4_rm_mu(vbool8_t vm, vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_xu_f_w_u32m4_rm_mu(vbool8_t vm, vuint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_xu_f_w_u32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_x_w_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfncvt_f_x_w_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_x_w_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_x_w_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_x_w_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_x_w_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_x_w_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_x_w_f32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vuint64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_f_xu_w_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfncvt_f_xu_w_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vuint64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_xu_w_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_xu_w_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vuint64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_xu_w_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_xu_w_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vuint64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_xu_w_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_xu_w_f32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfncvt_f_f_w_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t 
test_vfncvt_f_f_w_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfncvt_f_f_w_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_f_f_w_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfncvt_f_f_w_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_f_f_w_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfncvt_f_f_w_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_f_f_w_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_f_f_w_f32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rod.c b/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rod.c index 2425f766c..d0a4d264a 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rod.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rod.c @@ -6,146 +6,182 @@ #include -vfloat16mf4_t test_vfncvt_rod_f_f_w_f16mf4_tu(vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_rod_f_f_w_f16mf4_tu(vfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16mf4_tu(vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_rod_f_f_w_f16mf2_tu(vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_rod_f_f_w_f16mf2_tu(vfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16mf2_tu(vd, vs2, vl); } -vfloat16m1_t test_vfncvt_rod_f_f_w_f16m1_tu(vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_rod_f_f_w_f16m1_tu(vfloat16m1_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m1_tu(vd, vs2, vl); } -vfloat16m2_t test_vfncvt_rod_f_f_w_f16m2_tu(vfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_rod_f_f_w_f16m2_tu(vfloat16m2_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m2_tu(vd, vs2, vl); } -vfloat16m4_t test_vfncvt_rod_f_f_w_f16m4_tu(vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_rod_f_f_w_f16m4_tu(vfloat16m4_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m4_tu(vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_rod_f_f_w_f32mf2_tu(vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_rod_f_f_w_f32mf2_tu(vfloat32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32mf2_tu(vd, vs2, vl); } -vfloat32m1_t test_vfncvt_rod_f_f_w_f32m1_tu(vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_rod_f_f_w_f32m1_tu(vfloat32m1_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32m1_tu(vd, vs2, vl); } -vfloat32m2_t test_vfncvt_rod_f_f_w_f32m2_tu(vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_rod_f_f_w_f32m2_tu(vfloat32m2_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32m2_tu(vd, vs2, vl); } -vfloat32m4_t test_vfncvt_rod_f_f_w_f32m4_tu(vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_rod_f_f_w_f32m4_tu(vfloat32m4_t vd, vfloat64m8_t vs2, + size_t vl) { return 
__riscv_vfncvt_rod_f_f_w_f32m4_tu(vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_rod_f_f_w_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_rod_f_f_w_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16mf4_tum(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_rod_f_f_w_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_rod_f_f_w_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16mf2_tum(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_rod_f_f_w_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_rod_f_f_w_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m1_tum(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_rod_f_f_w_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_rod_f_f_w_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m2_tum(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_rod_f_f_w_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_rod_f_f_w_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m4_tum(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_rod_f_f_w_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_rod_f_f_w_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32mf2_tum(vm, vd, vs2, vl); } -vfloat32m1_t test_vfncvt_rod_f_f_w_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_rod_f_f_w_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32m1_tum(vm, vd, vs2, vl); } -vfloat32m2_t test_vfncvt_rod_f_f_w_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_rod_f_f_w_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32m2_tum(vm, vd, vs2, vl); } -vfloat32m4_t test_vfncvt_rod_f_f_w_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_rod_f_f_w_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32m4_tum(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_rod_f_f_w_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_rod_f_f_w_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16mf4_tumu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_rod_f_f_w_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_rod_f_f_w_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16mf2_tumu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_rod_f_f_w_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_rod_f_f_w_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m1_tumu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_rod_f_f_w_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat32m4_t vs2, size_t 
vl) { +vfloat16m2_t test_vfncvt_rod_f_f_w_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m2_tumu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_rod_f_f_w_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_rod_f_f_w_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m4_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_rod_f_f_w_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_rod_f_f_w_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfncvt_rod_f_f_w_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_rod_f_f_w_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfncvt_rod_f_f_w_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_rod_f_f_w_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfncvt_rod_f_f_w_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_rod_f_f_w_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32m4_tumu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfncvt_rod_f_f_w_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat16mf4_t test_vfncvt_rod_f_f_w_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfncvt_rod_f_f_w_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat16mf2_t test_vfncvt_rod_f_f_w_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfncvt_rod_f_f_w_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat16m1_t test_vfncvt_rod_f_f_w_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfncvt_rod_f_f_w_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat16m2_t test_vfncvt_rod_f_f_w_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfncvt_rod_f_f_w_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat16m4_t test_vfncvt_rod_f_f_w_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f16m4_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfncvt_rod_f_f_w_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat32mf2_t test_vfncvt_rod_f_f_w_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfncvt_rod_f_f_w_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat32m1_t test_vfncvt_rod_f_f_w_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32m1_mu(vm, vd, vs2, vl); } 
-vfloat32m2_t test_vfncvt_rod_f_f_w_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat32m2_t test_vfncvt_rod_f_f_w_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfncvt_rod_f_f_w_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat32m4_t test_vfncvt_rod_f_f_w_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_rod_f_f_w_f32m4_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rtz.c b/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rtz.c index 1b2ab7538..274f13d60 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rtz.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rtz.c @@ -6,482 +6,602 @@ #include -vint8mf8_t test_vfncvt_rtz_x_f_w_i8mf8_tu(vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_rtz_x_f_w_i8mf8_tu(vint8mf8_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf8_tu(vd, vs2, vl); } -vint8mf4_t test_vfncvt_rtz_x_f_w_i8mf4_tu(vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_rtz_x_f_w_i8mf4_tu(vint8mf4_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf4_tu(vd, vs2, vl); } -vint8mf2_t test_vfncvt_rtz_x_f_w_i8mf2_tu(vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_rtz_x_f_w_i8mf2_tu(vint8mf2_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf2_tu(vd, vs2, vl); } -vint8m1_t test_vfncvt_rtz_x_f_w_i8m1_tu(vint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vint8m1_t test_vfncvt_rtz_x_f_w_i8m1_tu(vint8m1_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m1_tu(vd, vs2, vl); } -vint8m2_t test_vfncvt_rtz_x_f_w_i8m2_tu(vint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vint8m2_t test_vfncvt_rtz_x_f_w_i8m2_tu(vint8m2_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m2_tu(vd, vs2, vl); } -vint8m4_t test_vfncvt_rtz_x_f_w_i8m4_tu(vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vint8m4_t test_vfncvt_rtz_x_f_w_i8m4_tu(vint8m4_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m4_tu(vd, vs2, vl); } -vuint8mf8_t test_vfncvt_rtz_xu_f_w_u8mf8_tu(vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vfncvt_rtz_xu_f_w_u8mf8_tu(vuint8mf8_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf8_tu(vd, vs2, vl); } -vuint8mf4_t test_vfncvt_rtz_xu_f_w_u8mf4_tu(vuint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_rtz_xu_f_w_u8mf4_tu(vuint8mf4_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf4_tu(vd, vs2, vl); } -vuint8mf2_t test_vfncvt_rtz_xu_f_w_u8mf2_tu(vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_rtz_xu_f_w_u8mf2_tu(vuint8mf2_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf2_tu(vd, vs2, vl); } -vuint8m1_t test_vfncvt_rtz_xu_f_w_u8m1_tu(vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t test_vfncvt_rtz_xu_f_w_u8m1_tu(vuint8m1_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m1_tu(vd, vs2, vl); } -vuint8m2_t test_vfncvt_rtz_xu_f_w_u8m2_tu(vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_rtz_xu_f_w_u8m2_tu(vuint8m2_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m2_tu(vd, vs2, vl); } -vuint8m4_t test_vfncvt_rtz_xu_f_w_u8m4_tu(vuint8m4_t vd, vfloat16m8_t 
vs2, size_t vl) { +vuint8m4_t test_vfncvt_rtz_xu_f_w_u8m4_tu(vuint8m4_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m4_tu(vd, vs2, vl); } -vint16mf4_t test_vfncvt_rtz_x_f_w_i16mf4_tu(vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_rtz_x_f_w_i16mf4_tu(vint16mf4_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16mf4_tu(vd, vs2, vl); } -vint16mf2_t test_vfncvt_rtz_x_f_w_i16mf2_tu(vint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vint16mf2_t test_vfncvt_rtz_x_f_w_i16mf2_tu(vint16mf2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16mf2_tu(vd, vs2, vl); } -vint16m1_t test_vfncvt_rtz_x_f_w_i16m1_tu(vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_rtz_x_f_w_i16m1_tu(vint16m1_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m1_tu(vd, vs2, vl); } -vint16m2_t test_vfncvt_rtz_x_f_w_i16m2_tu(vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_rtz_x_f_w_i16m2_tu(vint16m2_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m2_tu(vd, vs2, vl); } -vint16m4_t test_vfncvt_rtz_x_f_w_i16m4_tu(vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_rtz_x_f_w_i16m4_tu(vint16m4_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m4_tu(vd, vs2, vl); } -vuint16mf4_t test_vfncvt_rtz_xu_f_w_u16mf4_tu(vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_rtz_xu_f_w_u16mf4_tu(vuint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vfncvt_rtz_xu_f_w_u16mf2_tu(vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_rtz_xu_f_w_u16mf2_tu(vuint16mf2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16mf2_tu(vd, vs2, vl); } -vuint16m1_t test_vfncvt_rtz_xu_f_w_u16m1_tu(vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_rtz_xu_f_w_u16m1_tu(vuint16m1_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m1_tu(vd, vs2, vl); } -vuint16m2_t test_vfncvt_rtz_xu_f_w_u16m2_tu(vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_rtz_xu_f_w_u16m2_tu(vuint16m2_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m2_tu(vd, vs2, vl); } -vuint16m4_t test_vfncvt_rtz_xu_f_w_u16m4_tu(vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_rtz_xu_f_w_u16m4_tu(vuint16m4_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m4_tu(vd, vs2, vl); } -vint32mf2_t test_vfncvt_rtz_x_f_w_i32mf2_tu(vint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_rtz_x_f_w_i32mf2_tu(vint32mf2_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32mf2_tu(vd, vs2, vl); } -vint32m1_t test_vfncvt_rtz_x_f_w_i32m1_tu(vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_rtz_x_f_w_i32m1_tu(vint32m1_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m1_tu(vd, vs2, vl); } -vint32m2_t test_vfncvt_rtz_x_f_w_i32m2_tu(vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_rtz_x_f_w_i32m2_tu(vint32m2_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m2_tu(vd, vs2, vl); } -vint32m4_t test_vfncvt_rtz_x_f_w_i32m4_tu(vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t test_vfncvt_rtz_x_f_w_i32m4_tu(vint32m4_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m4_tu(vd, vs2, 
vl); } -vuint32mf2_t test_vfncvt_rtz_xu_f_w_u32mf2_tu(vuint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vuint32mf2_t test_vfncvt_rtz_xu_f_w_u32mf2_tu(vuint32mf2_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vfncvt_rtz_xu_f_w_u32m1_tu(vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_rtz_xu_f_w_u32m1_tu(vuint32m1_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vfncvt_rtz_xu_f_w_u32m2_tu(vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_rtz_xu_f_w_u32m2_tu(vuint32m2_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vfncvt_rtz_xu_f_w_u32m4_tu(vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_rtz_xu_f_w_u32m4_tu(vuint32m4_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m4_tu(vd, vs2, vl); } -vint8mf8_t test_vfncvt_rtz_x_f_w_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_rtz_x_f_w_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf8_tum(vm, vd, vs2, vl); } -vint8mf4_t test_vfncvt_rtz_x_f_w_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_rtz_x_f_w_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf4_tum(vm, vd, vs2, vl); } -vint8mf2_t test_vfncvt_rtz_x_f_w_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_rtz_x_f_w_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf2_tum(vm, vd, vs2, vl); } -vint8m1_t test_vfncvt_rtz_x_f_w_i8m1_tum(vbool8_t vm, vint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vint8m1_t test_vfncvt_rtz_x_f_w_i8m1_tum(vbool8_t vm, vint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m1_tum(vm, vd, vs2, vl); } -vint8m2_t test_vfncvt_rtz_x_f_w_i8m2_tum(vbool4_t vm, vint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vint8m2_t test_vfncvt_rtz_x_f_w_i8m2_tum(vbool4_t vm, vint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m2_tum(vm, vd, vs2, vl); } -vint8m4_t test_vfncvt_rtz_x_f_w_i8m4_tum(vbool2_t vm, vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vint8m4_t test_vfncvt_rtz_x_f_w_i8m4_tum(vbool2_t vm, vint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m4_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vfncvt_rtz_xu_f_w_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vfncvt_rtz_xu_f_w_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vfncvt_rtz_xu_f_w_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_rtz_xu_f_w_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vfncvt_rtz_xu_f_w_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_rtz_xu_f_w_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vfncvt_rtz_xu_f_w_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t 
test_vfncvt_rtz_xu_f_w_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vfncvt_rtz_xu_f_w_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_rtz_xu_f_w_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vfncvt_rtz_xu_f_w_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vuint8m4_t test_vfncvt_rtz_xu_f_w_u8m4_tum(vbool2_t vm, vuint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m4_tum(vm, vd, vs2, vl); } -vint16mf4_t test_vfncvt_rtz_x_f_w_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_rtz_x_f_w_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16mf4_tum(vm, vd, vs2, vl); } -vint16mf2_t test_vfncvt_rtz_x_f_w_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vint16mf2_t test_vfncvt_rtz_x_f_w_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16mf2_tum(vm, vd, vs2, vl); } -vint16m1_t test_vfncvt_rtz_x_f_w_i16m1_tum(vbool16_t vm, vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_rtz_x_f_w_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m1_tum(vm, vd, vs2, vl); } -vint16m2_t test_vfncvt_rtz_x_f_w_i16m2_tum(vbool8_t vm, vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_rtz_x_f_w_i16m2_tum(vbool8_t vm, vint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m2_tum(vm, vd, vs2, vl); } -vint16m4_t test_vfncvt_rtz_x_f_w_i16m4_tum(vbool4_t vm, vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_rtz_x_f_w_i16m4_tum(vbool4_t vm, vint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m4_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vfncvt_rtz_xu_f_w_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_rtz_xu_f_w_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vfncvt_rtz_xu_f_w_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_rtz_xu_f_w_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vfncvt_rtz_xu_f_w_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_rtz_xu_f_w_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vfncvt_rtz_xu_f_w_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_rtz_xu_f_w_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vfncvt_rtz_xu_f_w_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_rtz_xu_f_w_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m4_tum(vm, vd, vs2, vl); } -vint32mf2_t test_vfncvt_rtz_x_f_w_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, 
vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_rtz_x_f_w_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32mf2_tum(vm, vd, vs2, vl); } -vint32m1_t test_vfncvt_rtz_x_f_w_i32m1_tum(vbool32_t vm, vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_rtz_x_f_w_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m1_tum(vm, vd, vs2, vl); } -vint32m2_t test_vfncvt_rtz_x_f_w_i32m2_tum(vbool16_t vm, vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_rtz_x_f_w_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m2_tum(vm, vd, vs2, vl); } -vint32m4_t test_vfncvt_rtz_x_f_w_i32m4_tum(vbool8_t vm, vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t test_vfncvt_rtz_x_f_w_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m4_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vfncvt_rtz_xu_f_w_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vuint32mf2_t test_vfncvt_rtz_xu_f_w_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vfncvt_rtz_xu_f_w_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_rtz_xu_f_w_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vfncvt_rtz_xu_f_w_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_rtz_xu_f_w_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vfncvt_rtz_xu_f_w_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_rtz_xu_f_w_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m4_tum(vm, vd, vs2, vl); } -vint8mf8_t test_vfncvt_rtz_x_f_w_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_rtz_x_f_w_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf8_tumu(vm, vd, vs2, vl); } -vint8mf4_t test_vfncvt_rtz_x_f_w_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_rtz_x_f_w_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf4_tumu(vm, vd, vs2, vl); } -vint8mf2_t test_vfncvt_rtz_x_f_w_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_rtz_x_f_w_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf2_tumu(vm, vd, vs2, vl); } -vint8m1_t test_vfncvt_rtz_x_f_w_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vint8m1_t test_vfncvt_rtz_x_f_w_i8m1_tumu(vbool8_t vm, vint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m1_tumu(vm, vd, vs2, vl); } -vint8m2_t test_vfncvt_rtz_x_f_w_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vint8m2_t test_vfncvt_rtz_x_f_w_i8m2_tumu(vbool4_t vm, vint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m2_tumu(vm, vd, vs2, vl); } -vint8m4_t test_vfncvt_rtz_x_f_w_i8m4_tumu(vbool2_t vm, 
vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vint8m4_t test_vfncvt_rtz_x_f_w_i8m4_tumu(vbool2_t vm, vint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m4_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vfncvt_rtz_xu_f_w_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vfncvt_rtz_xu_f_w_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vfncvt_rtz_xu_f_w_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_rtz_xu_f_w_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vfncvt_rtz_xu_f_w_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_rtz_xu_f_w_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vfncvt_rtz_xu_f_w_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t test_vfncvt_rtz_xu_f_w_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vfncvt_rtz_xu_f_w_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_rtz_xu_f_w_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vfncvt_rtz_xu_f_w_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vuint8m4_t test_vfncvt_rtz_xu_f_w_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m4_tumu(vm, vd, vs2, vl); } -vint16mf4_t test_vfncvt_rtz_x_f_w_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_rtz_x_f_w_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16mf4_tumu(vm, vd, vs2, vl); } -vint16mf2_t test_vfncvt_rtz_x_f_w_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vint16mf2_t test_vfncvt_rtz_x_f_w_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16mf2_tumu(vm, vd, vs2, vl); } -vint16m1_t test_vfncvt_rtz_x_f_w_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_rtz_x_f_w_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m1_tumu(vm, vd, vs2, vl); } -vint16m2_t test_vfncvt_rtz_x_f_w_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_rtz_x_f_w_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m2_tumu(vm, vd, vs2, vl); } -vint16m4_t test_vfncvt_rtz_x_f_w_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_rtz_x_f_w_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m4_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vfncvt_rtz_xu_f_w_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_rtz_xu_f_w_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16mf4_tumu(vm, vd, vs2, vl); 
} -vuint16mf2_t test_vfncvt_rtz_xu_f_w_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_rtz_xu_f_w_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vfncvt_rtz_xu_f_w_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_rtz_xu_f_w_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vfncvt_rtz_xu_f_w_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_rtz_xu_f_w_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vfncvt_rtz_xu_f_w_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_rtz_xu_f_w_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m4_tumu(vm, vd, vs2, vl); } -vint32mf2_t test_vfncvt_rtz_x_f_w_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_rtz_x_f_w_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32mf2_tumu(vm, vd, vs2, vl); } -vint32m1_t test_vfncvt_rtz_x_f_w_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_rtz_x_f_w_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m1_tumu(vm, vd, vs2, vl); } -vint32m2_t test_vfncvt_rtz_x_f_w_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_rtz_x_f_w_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m2_tumu(vm, vd, vs2, vl); } -vint32m4_t test_vfncvt_rtz_x_f_w_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t test_vfncvt_rtz_x_f_w_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m4_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfncvt_rtz_xu_f_w_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vuint32mf2_t test_vfncvt_rtz_xu_f_w_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vfncvt_rtz_xu_f_w_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_rtz_xu_f_w_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vfncvt_rtz_xu_f_w_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_rtz_xu_f_w_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vfncvt_rtz_xu_f_w_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_rtz_xu_f_w_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m4_tumu(vm, vd, vs2, vl); } -vint8mf8_t test_vfncvt_rtz_x_f_w_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vint8mf8_t test_vfncvt_rtz_x_f_w_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + 
vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf8_mu(vm, vd, vs2, vl); } -vint8mf4_t test_vfncvt_rtz_x_f_w_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vint8mf4_t test_vfncvt_rtz_x_f_w_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf4_mu(vm, vd, vs2, vl); } -vint8mf2_t test_vfncvt_rtz_x_f_w_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vint8mf2_t test_vfncvt_rtz_x_f_w_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8mf2_mu(vm, vd, vs2, vl); } -vint8m1_t test_vfncvt_rtz_x_f_w_i8m1_mu(vbool8_t vm, vint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vint8m1_t test_vfncvt_rtz_x_f_w_i8m1_mu(vbool8_t vm, vint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m1_mu(vm, vd, vs2, vl); } -vint8m2_t test_vfncvt_rtz_x_f_w_i8m2_mu(vbool4_t vm, vint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vint8m2_t test_vfncvt_rtz_x_f_w_i8m2_mu(vbool4_t vm, vint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m2_mu(vm, vd, vs2, vl); } -vint8m4_t test_vfncvt_rtz_x_f_w_i8m4_mu(vbool2_t vm, vint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vint8m4_t test_vfncvt_rtz_x_f_w_i8m4_mu(vbool2_t vm, vint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i8m4_mu(vm, vd, vs2, vl); } -vuint8mf8_t test_vfncvt_rtz_xu_f_w_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vfncvt_rtz_xu_f_w_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vfncvt_rtz_xu_f_w_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vfncvt_rtz_xu_f_w_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vfncvt_rtz_xu_f_w_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint8mf2_t test_vfncvt_rtz_xu_f_w_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vfncvt_rtz_xu_f_w_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vfloat16m2_t vs2, size_t vl) { +vuint8m1_t test_vfncvt_rtz_xu_f_w_u8m1_mu(vbool8_t vm, vuint8m1_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vfncvt_rtz_xu_f_w_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vfloat16m4_t vs2, size_t vl) { +vuint8m2_t test_vfncvt_rtz_xu_f_w_u8m2_mu(vbool4_t vm, vuint8m2_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vfncvt_rtz_xu_f_w_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vfloat16m8_t vs2, size_t vl) { +vuint8m4_t test_vfncvt_rtz_xu_f_w_u8m4_mu(vbool2_t vm, vuint8m4_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u8m4_mu(vm, vd, vs2, vl); } -vint16mf4_t test_vfncvt_rtz_x_f_w_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vint16mf4_t test_vfncvt_rtz_x_f_w_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16mf4_mu(vm, vd, vs2, vl); } -vint16mf2_t test_vfncvt_rtz_x_f_w_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vint16mf2_t test_vfncvt_rtz_x_f_w_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return 
__riscv_vfncvt_rtz_x_f_w_i16mf2_mu(vm, vd, vs2, vl); } -vint16m1_t test_vfncvt_rtz_x_f_w_i16m1_mu(vbool16_t vm, vint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vint16m1_t test_vfncvt_rtz_x_f_w_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m1_mu(vm, vd, vs2, vl); } -vint16m2_t test_vfncvt_rtz_x_f_w_i16m2_mu(vbool8_t vm, vint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vint16m2_t test_vfncvt_rtz_x_f_w_i16m2_mu(vbool8_t vm, vint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m2_mu(vm, vd, vs2, vl); } -vint16m4_t test_vfncvt_rtz_x_f_w_i16m4_mu(vbool4_t vm, vint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vint16m4_t test_vfncvt_rtz_x_f_w_i16m4_mu(vbool4_t vm, vint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i16m4_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vfncvt_rtz_xu_f_w_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vfncvt_rtz_xu_f_w_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vfncvt_rtz_xu_f_w_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint16mf2_t test_vfncvt_rtz_xu_f_w_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vfncvt_rtz_xu_f_w_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vuint16m1_t test_vfncvt_rtz_xu_f_w_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vfncvt_rtz_xu_f_w_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vuint16m2_t test_vfncvt_rtz_xu_f_w_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vfncvt_rtz_xu_f_w_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vuint16m4_t test_vfncvt_rtz_xu_f_w_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u16m4_mu(vm, vd, vs2, vl); } -vint32mf2_t test_vfncvt_rtz_x_f_w_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vint32mf2_t test_vfncvt_rtz_x_f_w_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32mf2_mu(vm, vd, vs2, vl); } -vint32m1_t test_vfncvt_rtz_x_f_w_i32m1_mu(vbool32_t vm, vint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vint32m1_t test_vfncvt_rtz_x_f_w_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m1_mu(vm, vd, vs2, vl); } -vint32m2_t test_vfncvt_rtz_x_f_w_i32m2_mu(vbool16_t vm, vint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vint32m2_t test_vfncvt_rtz_x_f_w_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m2_mu(vm, vd, vs2, vl); } -vint32m4_t test_vfncvt_rtz_x_f_w_i32m4_mu(vbool8_t vm, vint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vint32m4_t test_vfncvt_rtz_x_f_w_i32m4_mu(vbool8_t vm, vint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_x_f_w_i32m4_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfncvt_rtz_xu_f_w_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vfloat64m1_t vs2, size_t vl) { +vuint32mf2_t test_vfncvt_rtz_xu_f_w_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vfloat64m1_t vs2, size_t vl) { 
return __riscv_vfncvt_rtz_xu_f_w_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vfncvt_rtz_xu_f_w_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vfloat64m2_t vs2, size_t vl) { +vuint32m1_t test_vfncvt_rtz_xu_f_w_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vfncvt_rtz_xu_f_w_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vfloat64m4_t vs2, size_t vl) { +vuint32m2_t test_vfncvt_rtz_xu_f_w_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vfncvt_rtz_xu_f_w_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vfloat64m8_t vs2, size_t vl) { +vuint32m4_t test_vfncvt_rtz_xu_f_w_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfncvt_rtz_xu_f_w_u32m4_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfneg.c b/auto-generated/policy_funcs/llvm-api-tests/vfneg.c index e31e9145e..806b679c2 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfneg.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfneg.c @@ -6,242 +6,302 @@ #include -vfloat16mf4_t test_vfneg_v_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs, size_t vl) { +vfloat16mf4_t test_vfneg_v_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs, + size_t vl) { return __riscv_vfneg_v_f16mf4_tu(vd, vs, vl); } -vfloat16mf2_t test_vfneg_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs, size_t vl) { +vfloat16mf2_t test_vfneg_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs, + size_t vl) { return __riscv_vfneg_v_f16mf2_tu(vd, vs, vl); } -vfloat16m1_t test_vfneg_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs, size_t vl) { +vfloat16m1_t test_vfneg_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs, + size_t vl) { return __riscv_vfneg_v_f16m1_tu(vd, vs, vl); } -vfloat16m2_t test_vfneg_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs, size_t vl) { +vfloat16m2_t test_vfneg_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs, + size_t vl) { return __riscv_vfneg_v_f16m2_tu(vd, vs, vl); } -vfloat16m4_t test_vfneg_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs, size_t vl) { +vfloat16m4_t test_vfneg_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs, + size_t vl) { return __riscv_vfneg_v_f16m4_tu(vd, vs, vl); } -vfloat16m8_t test_vfneg_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs, size_t vl) { +vfloat16m8_t test_vfneg_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs, + size_t vl) { return __riscv_vfneg_v_f16m8_tu(vd, vs, vl); } -vfloat32mf2_t test_vfneg_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs, size_t vl) { +vfloat32mf2_t test_vfneg_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs, + size_t vl) { return __riscv_vfneg_v_f32mf2_tu(vd, vs, vl); } -vfloat32m1_t test_vfneg_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs, size_t vl) { +vfloat32m1_t test_vfneg_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs, + size_t vl) { return __riscv_vfneg_v_f32m1_tu(vd, vs, vl); } -vfloat32m2_t test_vfneg_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs, size_t vl) { +vfloat32m2_t test_vfneg_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs, + size_t vl) { return __riscv_vfneg_v_f32m2_tu(vd, vs, vl); } -vfloat32m4_t test_vfneg_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs, size_t vl) { +vfloat32m4_t test_vfneg_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs, + size_t vl) { return __riscv_vfneg_v_f32m4_tu(vd, vs, vl); } -vfloat32m8_t test_vfneg_v_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs, size_t vl) { +vfloat32m8_t test_vfneg_v_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs, + size_t vl) { return 
__riscv_vfneg_v_f32m8_tu(vd, vs, vl); } -vfloat64m1_t test_vfneg_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs, size_t vl) { +vfloat64m1_t test_vfneg_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs, + size_t vl) { return __riscv_vfneg_v_f64m1_tu(vd, vs, vl); } -vfloat64m2_t test_vfneg_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs, size_t vl) { +vfloat64m2_t test_vfneg_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs, + size_t vl) { return __riscv_vfneg_v_f64m2_tu(vd, vs, vl); } -vfloat64m4_t test_vfneg_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs, size_t vl) { +vfloat64m4_t test_vfneg_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs, + size_t vl) { return __riscv_vfneg_v_f64m4_tu(vd, vs, vl); } -vfloat64m8_t test_vfneg_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs, size_t vl) { +vfloat64m8_t test_vfneg_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs, + size_t vl) { return __riscv_vfneg_v_f64m8_tu(vd, vs, vl); } -vfloat16mf4_t test_vfneg_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs, size_t vl) { +vfloat16mf4_t test_vfneg_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs, size_t vl) { return __riscv_vfneg_v_f16mf4_tum(vm, vd, vs, vl); } -vfloat16mf2_t test_vfneg_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs, size_t vl) { +vfloat16mf2_t test_vfneg_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs, size_t vl) { return __riscv_vfneg_v_f16mf2_tum(vm, vd, vs, vl); } -vfloat16m1_t test_vfneg_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs, size_t vl) { +vfloat16m1_t test_vfneg_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs, size_t vl) { return __riscv_vfneg_v_f16m1_tum(vm, vd, vs, vl); } -vfloat16m2_t test_vfneg_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs, size_t vl) { +vfloat16m2_t test_vfneg_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs, size_t vl) { return __riscv_vfneg_v_f16m2_tum(vm, vd, vs, vl); } -vfloat16m4_t test_vfneg_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs, size_t vl) { +vfloat16m4_t test_vfneg_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs, size_t vl) { return __riscv_vfneg_v_f16m4_tum(vm, vd, vs, vl); } -vfloat16m8_t test_vfneg_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs, size_t vl) { +vfloat16m8_t test_vfneg_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs, size_t vl) { return __riscv_vfneg_v_f16m8_tum(vm, vd, vs, vl); } -vfloat32mf2_t test_vfneg_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs, size_t vl) { +vfloat32mf2_t test_vfneg_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs, size_t vl) { return __riscv_vfneg_v_f32mf2_tum(vm, vd, vs, vl); } -vfloat32m1_t test_vfneg_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs, size_t vl) { +vfloat32m1_t test_vfneg_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs, size_t vl) { return __riscv_vfneg_v_f32m1_tum(vm, vd, vs, vl); } -vfloat32m2_t test_vfneg_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs, size_t vl) { +vfloat32m2_t test_vfneg_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs, size_t vl) { return __riscv_vfneg_v_f32m2_tum(vm, vd, vs, vl); } -vfloat32m4_t test_vfneg_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs, size_t vl) { +vfloat32m4_t test_vfneg_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs, size_t vl) { return __riscv_vfneg_v_f32m4_tum(vm, vd, vs, vl); } -vfloat32m8_t test_vfneg_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs, size_t vl) { +vfloat32m8_t 
test_vfneg_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs, size_t vl) { return __riscv_vfneg_v_f32m8_tum(vm, vd, vs, vl); } -vfloat64m1_t test_vfneg_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs, size_t vl) { +vfloat64m1_t test_vfneg_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs, size_t vl) { return __riscv_vfneg_v_f64m1_tum(vm, vd, vs, vl); } -vfloat64m2_t test_vfneg_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs, size_t vl) { +vfloat64m2_t test_vfneg_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs, size_t vl) { return __riscv_vfneg_v_f64m2_tum(vm, vd, vs, vl); } -vfloat64m4_t test_vfneg_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs, size_t vl) { +vfloat64m4_t test_vfneg_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs, size_t vl) { return __riscv_vfneg_v_f64m4_tum(vm, vd, vs, vl); } -vfloat64m8_t test_vfneg_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs, size_t vl) { +vfloat64m8_t test_vfneg_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs, size_t vl) { return __riscv_vfneg_v_f64m8_tum(vm, vd, vs, vl); } -vfloat16mf4_t test_vfneg_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs, size_t vl) { +vfloat16mf4_t test_vfneg_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs, size_t vl) { return __riscv_vfneg_v_f16mf4_tumu(vm, vd, vs, vl); } -vfloat16mf2_t test_vfneg_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs, size_t vl) { +vfloat16mf2_t test_vfneg_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs, size_t vl) { return __riscv_vfneg_v_f16mf2_tumu(vm, vd, vs, vl); } -vfloat16m1_t test_vfneg_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs, size_t vl) { +vfloat16m1_t test_vfneg_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs, size_t vl) { return __riscv_vfneg_v_f16m1_tumu(vm, vd, vs, vl); } -vfloat16m2_t test_vfneg_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs, size_t vl) { +vfloat16m2_t test_vfneg_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs, size_t vl) { return __riscv_vfneg_v_f16m2_tumu(vm, vd, vs, vl); } -vfloat16m4_t test_vfneg_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs, size_t vl) { +vfloat16m4_t test_vfneg_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs, size_t vl) { return __riscv_vfneg_v_f16m4_tumu(vm, vd, vs, vl); } -vfloat16m8_t test_vfneg_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs, size_t vl) { +vfloat16m8_t test_vfneg_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs, size_t vl) { return __riscv_vfneg_v_f16m8_tumu(vm, vd, vs, vl); } -vfloat32mf2_t test_vfneg_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs, size_t vl) { +vfloat32mf2_t test_vfneg_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs, size_t vl) { return __riscv_vfneg_v_f32mf2_tumu(vm, vd, vs, vl); } -vfloat32m1_t test_vfneg_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs, size_t vl) { +vfloat32m1_t test_vfneg_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs, size_t vl) { return __riscv_vfneg_v_f32m1_tumu(vm, vd, vs, vl); } -vfloat32m2_t test_vfneg_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs, size_t vl) { +vfloat32m2_t test_vfneg_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs, size_t vl) { return __riscv_vfneg_v_f32m2_tumu(vm, vd, vs, vl); } -vfloat32m4_t test_vfneg_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs, size_t vl) { +vfloat32m4_t 
test_vfneg_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs, size_t vl) { return __riscv_vfneg_v_f32m4_tumu(vm, vd, vs, vl); } -vfloat32m8_t test_vfneg_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs, size_t vl) { +vfloat32m8_t test_vfneg_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs, size_t vl) { return __riscv_vfneg_v_f32m8_tumu(vm, vd, vs, vl); } -vfloat64m1_t test_vfneg_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs, size_t vl) { +vfloat64m1_t test_vfneg_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs, size_t vl) { return __riscv_vfneg_v_f64m1_tumu(vm, vd, vs, vl); } -vfloat64m2_t test_vfneg_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs, size_t vl) { +vfloat64m2_t test_vfneg_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs, size_t vl) { return __riscv_vfneg_v_f64m2_tumu(vm, vd, vs, vl); } -vfloat64m4_t test_vfneg_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs, size_t vl) { +vfloat64m4_t test_vfneg_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs, size_t vl) { return __riscv_vfneg_v_f64m4_tumu(vm, vd, vs, vl); } -vfloat64m8_t test_vfneg_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs, size_t vl) { +vfloat64m8_t test_vfneg_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs, size_t vl) { return __riscv_vfneg_v_f64m8_tumu(vm, vd, vs, vl); } -vfloat16mf4_t test_vfneg_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs, size_t vl) { +vfloat16mf4_t test_vfneg_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs, size_t vl) { return __riscv_vfneg_v_f16mf4_mu(vm, vd, vs, vl); } -vfloat16mf2_t test_vfneg_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs, size_t vl) { +vfloat16mf2_t test_vfneg_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs, size_t vl) { return __riscv_vfneg_v_f16mf2_mu(vm, vd, vs, vl); } -vfloat16m1_t test_vfneg_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs, size_t vl) { +vfloat16m1_t test_vfneg_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs, size_t vl) { return __riscv_vfneg_v_f16m1_mu(vm, vd, vs, vl); } -vfloat16m2_t test_vfneg_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs, size_t vl) { +vfloat16m2_t test_vfneg_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs, size_t vl) { return __riscv_vfneg_v_f16m2_mu(vm, vd, vs, vl); } -vfloat16m4_t test_vfneg_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs, size_t vl) { +vfloat16m4_t test_vfneg_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs, size_t vl) { return __riscv_vfneg_v_f16m4_mu(vm, vd, vs, vl); } -vfloat16m8_t test_vfneg_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs, size_t vl) { +vfloat16m8_t test_vfneg_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs, size_t vl) { return __riscv_vfneg_v_f16m8_mu(vm, vd, vs, vl); } -vfloat32mf2_t test_vfneg_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs, size_t vl) { +vfloat32mf2_t test_vfneg_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs, size_t vl) { return __riscv_vfneg_v_f32mf2_mu(vm, vd, vs, vl); } -vfloat32m1_t test_vfneg_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs, size_t vl) { +vfloat32m1_t test_vfneg_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs, size_t vl) { return __riscv_vfneg_v_f32m1_mu(vm, vd, vs, vl); } -vfloat32m2_t test_vfneg_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs, size_t vl) { +vfloat32m2_t test_vfneg_v_f32m2_mu(vbool16_t vm, 
vfloat32m2_t vd, + vfloat32m2_t vs, size_t vl) { return __riscv_vfneg_v_f32m2_mu(vm, vd, vs, vl); } -vfloat32m4_t test_vfneg_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs, size_t vl) { +vfloat32m4_t test_vfneg_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs, size_t vl) { return __riscv_vfneg_v_f32m4_mu(vm, vd, vs, vl); } -vfloat32m8_t test_vfneg_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs, size_t vl) { +vfloat32m8_t test_vfneg_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs, size_t vl) { return __riscv_vfneg_v_f32m8_mu(vm, vd, vs, vl); } -vfloat64m1_t test_vfneg_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs, size_t vl) { +vfloat64m1_t test_vfneg_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs, size_t vl) { return __riscv_vfneg_v_f64m1_mu(vm, vd, vs, vl); } -vfloat64m2_t test_vfneg_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs, size_t vl) { +vfloat64m2_t test_vfneg_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs, size_t vl) { return __riscv_vfneg_v_f64m2_mu(vm, vd, vs, vl); } -vfloat64m4_t test_vfneg_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs, size_t vl) { +vfloat64m4_t test_vfneg_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs, size_t vl) { return __riscv_vfneg_v_f64m4_mu(vm, vd, vs, vl); } -vfloat64m8_t test_vfneg_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs, size_t vl) { +vfloat64m8_t test_vfneg_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs, size_t vl) { return __riscv_vfneg_v_f64m8_mu(vm, vd, vs, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfnmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vfnmacc.c index 23fde2e4c..b0320d93d 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfnmacc.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfnmacc.c @@ -6,962 +6,1403 @@ #include -vfloat16mf4_t test_vfnmacc_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16mf4_tu(vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmacc_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16mf4_tu(vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmacc_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16mf2_tu(vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmacc_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16mf2_tu(vd, rs1, vs2, vl); } -vfloat16m1_t test_vfnmacc_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16m1_tu(vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmacc_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16m1_tu(vd, rs1, vs2, vl); } -vfloat16m2_t test_vfnmacc_vv_f16m2_tu(vfloat16m2_t 
vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16m2_tu(vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmacc_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16m2_tu(vd, rs1, vs2, vl); } -vfloat16m4_t test_vfnmacc_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16m4_tu(vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmacc_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16m4_tu(vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmacc_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16m8_tu(vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmacc_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16m8_tu(vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmacc_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmacc_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmacc_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmacc_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32mf2_tu(vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmacc_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmacc_vf_f32m1_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vf_f32m1_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m1_tu(vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmacc_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmacc_vf_f32m2_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vf_f32m2_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m2_tu(vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmacc_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmacc_vf_f32m4_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vf_f32m4_tu(vfloat32m4_t vd, float rs1, + 
vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m4_tu(vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmacc_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f32m8_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmacc_vf_f32m8_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vf_f32m8_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m8_tu(vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmacc_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmacc_vf_f64m1_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vf_f64m1_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m1_tu(vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmacc_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmacc_vf_f64m2_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vf_f64m2_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m2_tu(vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmacc_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmacc_vf_f64m4_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vf_f64m4_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m4_tu(vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmacc_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f64m8_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmacc_vf_f64m8_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vf_f64m8_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m8_tu(vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmacc_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16mf4_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmacc_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16mf4_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmacc_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, 
vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmacc_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfnmacc_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m1_tum(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmacc_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m1_tum(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfnmacc_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m2_tum(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmacc_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfnmacc_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m4_tum(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmacc_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m4_tum(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmacc_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m8_tum(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmacc_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m8_tum(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmacc_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmacc_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmacc_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmacc_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmacc_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t 
test_vfnmacc_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmacc_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m1_tum(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmacc_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmacc_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmacc_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmacc_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m4_tum(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmacc_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmacc_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m8_tum(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmacc_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmacc_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m1_tum(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmacc_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmacc_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m2_tum(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmacc_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { 
+vfloat64m4_t test_vfnmacc_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmacc_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m4_tum(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmacc_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmacc_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m8_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmacc_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16mf4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmacc_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16mf4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmacc_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmacc_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfnmacc_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmacc_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfnmacc_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmacc_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t 
test_vfnmacc_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmacc_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmacc_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmacc_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmacc_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmacc_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmacc_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmacc_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmacc_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmacc_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmacc_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmacc_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmacc_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmacc_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { 
return __riscv_vfnmacc_vf_f32m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmacc_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmacc_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmacc_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmacc_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmacc_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmacc_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmacc_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmacc_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmacc_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmacc_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmacc_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16mf4_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmacc_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vf_f16mf4_mu(vbool64_t 
vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16mf4_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmacc_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmacc_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfnmacc_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m1_mu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmacc_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m1_mu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfnmacc_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m2_mu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmacc_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfnmacc_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m4_mu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmacc_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m4_mu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmacc_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m8_mu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmacc_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m8_mu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmacc_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmacc_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmacc_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { 
+vfloat32mf2_t test_vfnmacc_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmacc_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmacc_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m1_mu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmacc_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmacc_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmacc_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmacc_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m4_mu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmacc_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmacc_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m8_mu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmacc_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmacc_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m1_mu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmacc_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmacc_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t 
test_vfnmacc_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m2_mu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmacc_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmacc_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m4_mu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmacc_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmacc_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m8_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmacc_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16mf4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmacc_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16mf4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmacc_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmacc_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmacc_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmacc_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmacc_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmacc_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t 
test_vfnmacc_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmacc_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmacc_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmacc_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f16m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmacc_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f16m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmacc_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmacc_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmacc_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmacc_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmacc_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmacc_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmacc_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmacc_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmacc_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmacc_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, 
size_t vl) { return __riscv_vfnmacc_vf_f32m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmacc_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmacc_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f32m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmacc_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmacc_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmacc_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmacc_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmacc_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmacc_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmacc_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmacc_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmacc_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmacc_vf_f64m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmacc_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f16mf4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmacc_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfnmacc_vv_f16mf4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfnmacc_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { - return 
__riscv_vfnmacc_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmacc_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfnmacc_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmacc_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfnmacc_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfnmacc_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f16mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmacc_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f16mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfnmacc_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmacc_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmacc_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmacc_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmacc_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmacc_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmacc_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmacc_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t 
test_vfnmacc_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmacc_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmacc_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfnmacc_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfnmacc_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmacc_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfnmacc_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmacc_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmacc_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmacc_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmacc_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmacc_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmacc_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmacc_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return 
__riscv_vfnmacc_vf_f32m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmacc_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmacc_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmacc_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmacc_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmacc_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmacc_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmacc_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmacc_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmacc_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmacc_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfnmacc_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfnmacc_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmacc_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t 
test_vfnmacc_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmacc_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfnmacc_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfnmacc_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmacc_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfnmacc_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m1_t test_vfnmacc_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfnmacc_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m1_t test_vfnmacc_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m2_t test_vfnmacc_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m2_t test_vfnmacc_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m2_t test_vfnmacc_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m2_t test_vfnmacc_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m4_t test_vfnmacc_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m4_t test_vfnmacc_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m4_t test_vfnmacc_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m4_t test_vfnmacc_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m8_t test_vfnmacc_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m8_t 
test_vfnmacc_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m8_t test_vfnmacc_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m8_t test_vfnmacc_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfnmacc_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmacc_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfnmacc_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfnmacc_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmacc_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfnmacc_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfnmacc_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfnmacc_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfnmacc_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfnmacc_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfnmacc_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfnmacc_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfnmacc_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfnmacc_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfnmacc_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t 
test_vfnmacc_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfnmacc_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfnmacc_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfnmacc_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfnmacc_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfnmacc_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfnmacc_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfnmacc_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfnmacc_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f64m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfnmacc_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f64m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfnmacc_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfnmacc_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfnmacc_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfnmacc_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfnmacc_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfnmacc_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfnmacc_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f64m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfnmacc_vf_f64m4_rm_tumu(vbool16_t vm, 
vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f64m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfnmacc_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { - return __riscv_vfnmacc_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfnmacc_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { + return __riscv_vfnmacc_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfnmacc_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { - return __riscv_vfnmacc_vf_f64m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfnmacc_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { + return __riscv_vfnmacc_vf_f64m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfnmacc_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16mf4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmacc_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmacc_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16mf4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmacc_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmacc_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmacc_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmacc_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmacc_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmacc_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmacc_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmacc_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmacc_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return 
__riscv_vfnmacc_vf_f16m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmacc_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmacc_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmacc_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmacc_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f16m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmacc_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmacc_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f16m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmacc_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmacc_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmacc_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmacc_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmacc_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmacc_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmacc_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmacc_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmacc_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmacc_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmacc_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { 
return __riscv_vfnmacc_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmacc_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmacc_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmacc_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmacc_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmacc_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f32m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmacc_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmacc_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmacc_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmacc_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmacc_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmacc_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmacc_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmacc_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmacc_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmacc_vf_f64m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmacc_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmacc_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmacc_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmacc_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return 
__riscv_vfnmacc_vf_f64m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfnmadd.c b/auto-generated/policy_funcs/llvm-api-tests/vfnmadd.c index 06bc673a9..e3760a218 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfnmadd.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfnmadd.c @@ -6,962 +6,1403 @@ #include -vfloat16mf4_t test_vfnmadd_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmadd_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16mf4_tu(vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmadd_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmadd_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f16mf4_tu(vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmadd_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16mf2_tu(vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmadd_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f16mf2_tu(vd, rs1, vs2, vl); } -vfloat16m1_t test_vfnmadd_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16m1_tu(vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmadd_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f16m1_tu(vd, rs1, vs2, vl); } -vfloat16m2_t test_vfnmadd_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16m2_tu(vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmadd_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f16m2_tu(vd, rs1, vs2, vl); } -vfloat16m4_t test_vfnmadd_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16m4_tu(vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmadd_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f16m4_tu(vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmadd_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16m8_tu(vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmadd_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return 
__riscv_vfnmadd_vf_f16m8_tu(vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmadd_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmadd_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmadd_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmadd_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32mf2_tu(vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmadd_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmadd_vf_f32m1_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vf_f32m1_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m1_tu(vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmadd_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmadd_vf_f32m2_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vf_f32m2_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m2_tu(vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmadd_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmadd_vf_f32m4_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vf_f32m4_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m4_tu(vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmadd_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f32m8_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmadd_vf_f32m8_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vf_f32m8_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m8_tu(vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmadd_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmadd_vf_f64m1_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vf_f64m1_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m1_tu(vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmadd_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmadd_vf_f64m2_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t 
vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vf_f64m2_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m2_tu(vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmadd_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmadd_vf_f64m4_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vf_f64m4_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m4_tu(vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmadd_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f64m8_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmadd_vf_f64m8_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vf_f64m8_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m8_tu(vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmadd_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmadd_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16mf4_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmadd_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmadd_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16mf4_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmadd_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmadd_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfnmadd_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m1_tum(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmadd_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m1_tum(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfnmadd_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m2_tum(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmadd_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t 
vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfnmadd_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m4_tum(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmadd_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m4_tum(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmadd_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m8_tum(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmadd_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m8_tum(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmadd_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmadd_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmadd_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmadd_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmadd_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmadd_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m1_tum(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmadd_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmadd_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmadd_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmadd_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t 
test_vfnmadd_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m4_tum(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmadd_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmadd_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m8_tum(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmadd_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmadd_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m1_tum(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmadd_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmadd_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m2_tum(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmadd_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmadd_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m4_tum(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmadd_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmadd_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m8_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmadd_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmadd_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16mf4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmadd_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, 
vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmadd_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16mf4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmadd_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmadd_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfnmadd_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmadd_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfnmadd_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmadd_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfnmadd_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmadd_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmadd_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmadd_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmadd_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmadd_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return 
__riscv_vfnmadd_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmadd_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmadd_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmadd_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmadd_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmadd_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmadd_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmadd_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmadd_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmadd_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmadd_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmadd_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmadd_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmadd_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + 
vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmadd_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmadd_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmadd_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmadd_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmadd_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmadd_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmadd_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16mf4_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmadd_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmadd_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16mf4_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmadd_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmadd_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfnmadd_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m1_mu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmadd_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m1_mu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfnmadd_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t 
vl) { +vfloat16m2_t test_vfnmadd_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m2_mu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmadd_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfnmadd_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m4_mu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmadd_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m4_mu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmadd_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m8_mu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmadd_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m8_mu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmadd_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmadd_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmadd_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmadd_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmadd_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmadd_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m1_mu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmadd_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmadd_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmadd_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, 
size_t vl) { +vfloat32m4_t test_vfnmadd_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmadd_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m4_mu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmadd_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmadd_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m8_mu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmadd_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmadd_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m1_mu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmadd_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmadd_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m2_mu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmadd_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmadd_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m4_mu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmadd_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmadd_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m8_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmadd_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t 
test_vfnmadd_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16mf4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmadd_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmadd_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f16mf4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmadd_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmadd_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f16mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmadd_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmadd_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f16m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmadd_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmadd_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f16m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmadd_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmadd_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f16m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmadd_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f16m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmadd_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f16m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmadd_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t 
test_vfnmadd_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmadd_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmadd_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmadd_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmadd_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmadd_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmadd_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmadd_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmadd_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmadd_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmadd_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f32m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmadd_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmadd_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmadd_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) 
{ return __riscv_vfnmadd_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmadd_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmadd_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmadd_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmadd_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmadd_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmadd_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmadd_vf_f64m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmadd_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f16mf4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmadd_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfnmadd_vv_f16mf4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfnmadd_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmadd_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfnmadd_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmadd_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfnmadd_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfnmadd_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f16mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmadd_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f16mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfnmadd_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t 
test_vfnmadd_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmadd_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmadd_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmadd_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmadd_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmadd_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmadd_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmadd_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmadd_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfnmadd_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfnmadd_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmadd_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfnmadd_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmadd_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { 
+vfloat32m1_t test_vfnmadd_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmadd_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmadd_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmadd_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmadd_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmadd_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmadd_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmadd_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmadd_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmadd_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmadd_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmadd_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t 
vl) { +vfloat64m4_t test_vfnmadd_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmadd_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmadd_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmadd_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmadd_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmadd_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfnmadd_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfnmadd_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmadd_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfnmadd_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmadd_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfnmadd_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfnmadd_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmadd_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfnmadd_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m1_t test_vfnmadd_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfnmadd_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m1_t 
test_vfnmadd_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m2_t test_vfnmadd_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m2_t test_vfnmadd_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m2_t test_vfnmadd_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m2_t test_vfnmadd_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m4_t test_vfnmadd_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m4_t test_vfnmadd_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m4_t test_vfnmadd_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m4_t test_vfnmadd_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m8_t test_vfnmadd_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m8_t test_vfnmadd_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m8_t test_vfnmadd_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m8_t test_vfnmadd_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfnmadd_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmadd_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfnmadd_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfnmadd_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmadd_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t 
test_vfnmadd_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfnmadd_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfnmadd_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfnmadd_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfnmadd_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfnmadd_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfnmadd_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfnmadd_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfnmadd_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfnmadd_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfnmadd_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfnmadd_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfnmadd_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfnmadd_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfnmadd_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfnmadd_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfnmadd_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfnmadd_vv_f64m1_rm_tumu(vbool64_t vm, 
vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfnmadd_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f64m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfnmadd_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f64m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfnmadd_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfnmadd_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfnmadd_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfnmadd_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfnmadd_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfnmadd_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfnmadd_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f64m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfnmadd_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f64m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfnmadd_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { - return __riscv_vfnmadd_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfnmadd_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { + return __riscv_vfnmadd_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfnmadd_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { - return __riscv_vfnmadd_vf_f64m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfnmadd_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { + return __riscv_vfnmadd_vf_f64m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfnmadd_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmadd_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16mf4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmadd_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t 
test_vfnmadd_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16mf4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmadd_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmadd_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmadd_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmadd_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmadd_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmadd_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmadd_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmadd_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmadd_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmadd_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmadd_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmadd_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmadd_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f16m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmadd_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmadd_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f16m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmadd_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, 
size_t vl) { +vfloat32mf2_t test_vfnmadd_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmadd_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmadd_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmadd_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmadd_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmadd_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmadd_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmadd_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmadd_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmadd_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmadd_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmadd_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmadd_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmadd_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmadd_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f32m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmadd_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmadd_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmadd_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { 
+vfloat64m1_t test_vfnmadd_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmadd_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmadd_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmadd_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmadd_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmadd_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmadd_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmadd_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmadd_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmadd_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmadd_vf_f64m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfnmsac.c b/auto-generated/policy_funcs/llvm-api-tests/vfnmsac.c index b8fa9d237..7f5661068 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfnmsac.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfnmsac.c @@ -6,962 +6,1403 @@ #include -vfloat16mf4_t test_vfnmsac_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsac_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f16mf4_tu(vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmsac_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsac_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f16mf4_tu(vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmsac_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsac_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f16mf2_tu(vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmsac_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsac_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f16mf2_tu(vd, rs1, vs2, vl); } -vfloat16m1_t 
test_vfnmsac_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsac_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f16m1_tu(vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmsac_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsac_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f16m1_tu(vd, rs1, vs2, vl); } -vfloat16m2_t test_vfnmsac_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsac_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f16m2_tu(vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmsac_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsac_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f16m2_tu(vd, rs1, vs2, vl); } -vfloat16m4_t test_vfnmsac_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsac_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f16m4_tu(vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmsac_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsac_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f16m4_tu(vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmsac_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsac_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f16m8_tu(vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmsac_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsac_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f16m8_tu(vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmsac_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsac_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmsac_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsac_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f32mf2_tu(vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmsac_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsac_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmsac_vf_f32m1_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsac_vf_f32m1_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f32m1_tu(vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmsac_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsac_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmsac_vf_f32m2_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t 
test_vfnmsac_vf_f32m2_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f32m2_tu(vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmsac_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsac_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmsac_vf_f32m4_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsac_vf_f32m4_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f32m4_tu(vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmsac_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsac_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f32m8_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmsac_vf_f32m8_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsac_vf_f32m8_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f32m8_tu(vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmsac_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsac_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmsac_vf_f64m1_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsac_vf_f64m1_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f64m1_tu(vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmsac_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsac_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmsac_vf_f64m2_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsac_vf_f64m2_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f64m2_tu(vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmsac_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsac_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmsac_vf_f64m4_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsac_vf_f64m4_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f64m4_tu(vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmsac_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsac_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmsac_vv_f64m8_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmsac_vf_f64m8_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsac_vf_f64m8_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f64m8_tu(vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmsac_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsac_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return 
__riscv_vfnmsac_vv_f16mf4_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmsac_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsac_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16mf4_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmsac_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsac_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f16mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmsac_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsac_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfnmsac_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsac_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f16m1_tum(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmsac_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsac_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16m1_tum(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfnmsac_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsac_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f16m2_tum(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmsac_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsac_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16m2_tum(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfnmsac_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsac_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f16m4_tum(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmsac_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsac_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16m4_tum(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmsac_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsac_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f16m8_tum(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmsac_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsac_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16m8_tum(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmsac_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t 
test_vfnmsac_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmsac_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsac_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f32mf2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmsac_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsac_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmsac_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsac_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f32m1_tum(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmsac_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsac_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmsac_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsac_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f32m2_tum(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmsac_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsac_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmsac_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsac_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f32m4_tum(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmsac_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsac_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmsac_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsac_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f32m8_tum(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmsac_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsac_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmsac_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsac_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f64m1_tum(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmsac_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, 
size_t vl) { +vfloat64m2_t test_vfnmsac_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmsac_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsac_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f64m2_tum(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmsac_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsac_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmsac_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsac_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f64m4_tum(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmsac_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsac_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmsac_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsac_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f64m8_tum(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmsac_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsac_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f16mf4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmsac_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsac_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16mf4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmsac_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsac_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f16mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmsac_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsac_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfnmsac_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsac_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f16m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmsac_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsac_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16m1_tumu(vm, vd, rs1, vs2, vl); } 
-vfloat16m2_t test_vfnmsac_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsac_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f16m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmsac_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsac_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfnmsac_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsac_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f16m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmsac_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsac_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmsac_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsac_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f16m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmsac_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsac_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f16m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmsac_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsac_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmsac_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsac_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f32mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmsac_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsac_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmsac_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsac_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f32m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmsac_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsac_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmsac_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsac_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + float rs1, 
vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f32m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmsac_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsac_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmsac_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsac_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f32m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmsac_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsac_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmsac_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsac_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmsac_vf_f32m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmsac_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsac_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmsac_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsac_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f64m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmsac_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsac_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmsac_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsac_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f64m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmsac_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsac_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmsac_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsac_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmsac_vf_f64m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmsac_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsac_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmsac_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmsac_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t 
test_vfnmsac_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                        double rs1, vfloat64m8_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsac_vf_f64m8_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16mf4_t test_vfnmsac_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsac_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd,
+                                        vfloat16mf4_t vs1, vfloat16mf4_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsac_vv_f16mf4_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16mf4_t test_vfnmsac_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsac_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd,
+                                        _Float16 rs1, vfloat16mf4_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsac_vf_f16mf4_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16mf2_t test_vfnmsac_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsac_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd,
+                                        vfloat16mf2_t vs1, vfloat16mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsac_vv_f16mf2_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16mf2_t test_vfnmsac_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsac_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd,
+                                        _Float16 rs1, vfloat16mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsac_vf_f16mf2_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m1_t test_vfnmsac_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsac_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd,
+                                      vfloat16m1_t vs1, vfloat16m1_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f16m1_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m1_t test_vfnmsac_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsac_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd,
+                                      _Float16 rs1, vfloat16m1_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vf_f16m1_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m2_t test_vfnmsac_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsac_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd,
+                                      vfloat16m2_t vs1, vfloat16m2_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f16m2_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m2_t test_vfnmsac_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsac_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd,
+                                      _Float16 rs1, vfloat16m2_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vf_f16m2_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m4_t test_vfnmsac_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsac_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd,
+                                      vfloat16m4_t vs1, vfloat16m4_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f16m4_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m4_t test_vfnmsac_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsac_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd,
+                                      _Float16 rs1, vfloat16m4_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vf_f16m4_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m8_t test_vfnmsac_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsac_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd,
+                                      vfloat16m8_t vs1, vfloat16m8_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f16m8_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m8_t test_vfnmsac_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsac_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd,
+                                      _Float16 rs1, vfloat16m8_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vf_f16m8_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat32mf2_t test_vfnmsac_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfnmsac_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                        vfloat32mf2_t vs1, vfloat32mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsac_vv_f32mf2_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat32mf2_t test_vfnmsac_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfnmsac_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                        float rs1, vfloat32mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsac_vf_f32mf2_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat32m1_t test_vfnmsac_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsac_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd,
+                                      vfloat32m1_t vs1, vfloat32m1_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f32m1_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat32m1_t test_vfnmsac_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsac_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1,
+                                      vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f32m1_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat32m2_t test_vfnmsac_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsac_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd,
+                                      vfloat32m2_t vs1, vfloat32m2_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f32m2_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat32m2_t test_vfnmsac_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsac_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1,
+                                      vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f32m2_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat32m4_t test_vfnmsac_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsac_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd,
+                                      vfloat32m4_t vs1, vfloat32m4_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f32m4_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat32m4_t test_vfnmsac_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsac_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1,
+                                      vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f32m4_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat32m8_t test_vfnmsac_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsac_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+                                      vfloat32m8_t vs1, vfloat32m8_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f32m8_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat32m8_t test_vfnmsac_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsac_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1,
+                                      vfloat32m8_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f32m8_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat64m1_t test_vfnmsac_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsac_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+                                      vfloat64m1_t vs1, vfloat64m1_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f64m1_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat64m1_t test_vfnmsac_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsac_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1,
+                                      vfloat64m1_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f64m1_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat64m2_t test_vfnmsac_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsac_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+                                      vfloat64m2_t vs1, vfloat64m2_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f64m2_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat64m2_t test_vfnmsac_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsac_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1,
+                                      vfloat64m2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f64m2_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat64m4_t test_vfnmsac_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsac_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                      vfloat64m4_t vs1, vfloat64m4_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f64m4_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat64m4_t test_vfnmsac_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsac_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1,
+                                      vfloat64m4_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f64m4_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat64m8_t test_vfnmsac_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsac_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                      vfloat64m8_t vs1, vfloat64m8_t vs2,
+                                      size_t vl) {
   return __riscv_vfnmsac_vv_f64m8_mu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat64m8_t test_vfnmsac_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsac_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1,
+                                      vfloat64m8_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f64m8_mu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16mf4_t test_vfnmsac_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsac_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1,
+                                           vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f16mf4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16mf4_t test_vfnmsac_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsac_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1,
+                                           vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f16mf4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16mf2_t test_vfnmsac_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsac_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1,
+                                           vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f16mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16mf2_t test_vfnmsac_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsac_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1,
+                                           vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f16mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfnmsac_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsac_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1,
+                                         vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f16m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfnmsac_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsac_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1,
+                                         vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f16m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m2_t test_vfnmsac_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsac_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1,
+                                         vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f16m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m2_t test_vfnmsac_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsac_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1,
+                                         vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f16m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m4_t test_vfnmsac_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsac_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1,
+                                         vfloat16m4_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f16m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m4_t test_vfnmsac_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsac_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1,
+                                         vfloat16m4_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f16m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m8_t test_vfnmsac_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsac_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1,
+                                         vfloat16m8_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f16m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m8_t test_vfnmsac_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsac_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1,
+                                         vfloat16m8_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f16m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32mf2_t test_vfnmsac_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfnmsac_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1,
+                                           vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32mf2_t test_vfnmsac_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfnmsac_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1,
+                                           vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f32mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfnmsac_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsac_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1,
+                                         vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfnmsac_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsac_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1,
+                                         vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f32m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m2_t test_vfnmsac_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsac_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1,
+                                         vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m2_t test_vfnmsac_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsac_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1,
+                                         vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f32m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m4_t test_vfnmsac_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsac_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1,
+                                         vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m4_t test_vfnmsac_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsac_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1,
+                                         vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f32m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m8_t test_vfnmsac_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsac_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1,
+                                         vfloat32m8_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m8_t test_vfnmsac_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsac_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1,
+                                         vfloat32m8_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f32m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfnmsac_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsac_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1,
+                                         vfloat64m1_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfnmsac_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsac_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1,
+                                         vfloat64m1_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f64m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m2_t test_vfnmsac_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsac_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1,
+                                         vfloat64m2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m2_t test_vfnmsac_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsac_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1,
+                                         vfloat64m2_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f64m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m4_t test_vfnmsac_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsac_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1,
+                                         vfloat64m4_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m4_t test_vfnmsac_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsac_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1,
+                                         vfloat64m4_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f64m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m8_t test_vfnmsac_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsac_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1,
+                                         vfloat64m8_t vs2, size_t vl) {
   return __riscv_vfnmsac_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m8_t test_vfnmsac_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsac_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1,
+                                         vfloat64m8_t vs2, size_t vl) {
   return __riscv_vfnmsac_vf_f64m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16mf4_t test_vfnmsac_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f16mf4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16mf4_t test_vfnmsac_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd,
+                                            vfloat16mf4_t vs1,
+                                            vfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfnmsac_vv_f16mf4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat16mf4_t test_vfnmsac_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16mf4_t test_vfnmsac_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd,
+                                            _Float16 rs1, vfloat16mf4_t vs2,
+                                            size_t vl) {
+  return __riscv_vfnmsac_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat16mf2_t test_vfnmsac_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16mf2_t test_vfnmsac_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd,
+                                            vfloat16mf2_t vs1,
+                                            vfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfnmsac_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat16mf2_t test_vfnmsac_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f16mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16mf2_t test_vfnmsac_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd,
+                                            _Float16 rs1, vfloat16mf2_t vs2,
+                                            size_t vl) {
+  return __riscv_vfnmsac_vf_f16mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat16m1_t test_vfnmsac_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsac_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd,
+                                          vfloat16m1_t vs1, vfloat16m1_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f16m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfnmsac_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsac_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd,
+                                          _Float16 rs1, vfloat16m1_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f16m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m2_t test_vfnmsac_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsac_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd,
+                                          vfloat16m2_t vs1, vfloat16m2_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f16m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m2_t test_vfnmsac_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsac_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd,
+                                          _Float16 rs1, vfloat16m2_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f16m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m4_t test_vfnmsac_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsac_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd,
+                                          vfloat16m4_t vs1, vfloat16m4_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f16m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m4_t test_vfnmsac_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsac_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd,
+                                          _Float16 rs1, vfloat16m4_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f16m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m8_t test_vfnmsac_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsac_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd,
+                                          vfloat16m8_t vs1, vfloat16m8_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f16m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m8_t test_vfnmsac_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsac_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd,
+                                          _Float16 rs1, vfloat16m8_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f16m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32mf2_t test_vfnmsac_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfnmsac_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd,
+                                            vfloat32mf2_t vs1,
+                                            vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfnmsac_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat32mf2_t test_vfnmsac_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfnmsac_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd,
+                                            float rs1, vfloat32mf2_t vs2,
+                                            size_t vl) {
+  return __riscv_vfnmsac_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat32m1_t test_vfnmsac_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsac_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd,
+                                          vfloat32m1_t vs1, vfloat32m1_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfnmsac_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsac_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd,
+                                          float rs1, vfloat32m1_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f32m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m2_t test_vfnmsac_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsac_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd,
+                                          vfloat32m2_t vs1, vfloat32m2_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m2_t test_vfnmsac_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsac_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd,
+                                          float rs1, vfloat32m2_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f32m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m4_t test_vfnmsac_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsac_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd,
+                                          vfloat32m4_t vs1, vfloat32m4_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m4_t test_vfnmsac_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsac_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd,
+                                          float rs1, vfloat32m4_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f32m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m8_t test_vfnmsac_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsac_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd,
+                                          vfloat32m8_t vs1, vfloat32m8_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m8_t test_vfnmsac_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsac_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd,
+                                          float rs1, vfloat32m8_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f32m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfnmsac_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsac_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd,
+                                          vfloat64m1_t vs1, vfloat64m1_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfnmsac_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsac_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd,
+                                          double rs1, vfloat64m1_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f64m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m2_t test_vfnmsac_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsac_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd,
+                                          vfloat64m2_t vs1, vfloat64m2_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m2_t test_vfnmsac_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsac_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd,
+                                          double rs1, vfloat64m2_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f64m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m4_t test_vfnmsac_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsac_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd,
+                                          vfloat64m4_t vs1, vfloat64m4_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m4_t test_vfnmsac_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsac_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd,
+                                          double rs1, vfloat64m4_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f64m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m8_t test_vfnmsac_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsac_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd,
+                                          vfloat64m8_t vs1, vfloat64m8_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m8_t test_vfnmsac_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsac_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd,
+                                          double rs1, vfloat64m8_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsac_vf_f64m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16mf4_t test_vfnmsac_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16mf4_t test_vfnmsac_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd,
+                                             vfloat16mf4_t vs1,
+                                             vfloat16mf4_t vs2, size_t vl) {
+  return __riscv_vfnmsac_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                           vl);
 }
 
-vfloat16mf4_t test_vfnmsac_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16mf4_t test_vfnmsac_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd,
+                                             _Float16 rs1, vfloat16mf4_t vs2,
+                                             size_t vl) {
+  return __riscv_vfnmsac_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                           vl);
 }
 
-vfloat16mf2_t test_vfnmsac_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16mf2_t test_vfnmsac_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd,
+                                             vfloat16mf2_t vs1,
+                                             vfloat16mf2_t vs2, size_t vl) {
+  return __riscv_vfnmsac_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                           vl);
 }
 
-vfloat16mf2_t test_vfnmsac_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16mf2_t test_vfnmsac_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd,
+                                             _Float16 rs1, vfloat16mf2_t vs2,
+                                             size_t vl) {
+  return __riscv_vfnmsac_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                           vl);
 }
 
-vfloat16m1_t test_vfnmsac_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfnmsac_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd,
+                                           vfloat16m1_t vs1, vfloat16m1_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat16m1_t test_vfnmsac_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfnmsac_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd,
+                                           _Float16 rs1, vfloat16m1_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
}
 
-vfloat16m2_t test_vfnmsac_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16m2_t test_vfnmsac_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd,
+                                           vfloat16m2_t vs1, vfloat16m2_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat16m2_t test_vfnmsac_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16m2_t test_vfnmsac_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd,
+                                           _Float16 rs1, vfloat16m2_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat16m4_t test_vfnmsac_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16m4_t test_vfnmsac_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd,
+                                           vfloat16m4_t vs1, vfloat16m4_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat16m4_t test_vfnmsac_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16m4_t test_vfnmsac_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd,
+                                           _Float16 rs1, vfloat16m4_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat16m8_t test_vfnmsac_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16m8_t test_vfnmsac_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd,
+                                           vfloat16m8_t vs1, vfloat16m8_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat16m8_t test_vfnmsac_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat16m8_t test_vfnmsac_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd,
+                                           _Float16 rs1, vfloat16m8_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat32mf2_t test_vfnmsac_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfnmsac_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd,
+                                             vfloat32mf2_t vs1,
+                                             vfloat32mf2_t vs2, size_t vl) {
+  return __riscv_vfnmsac_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                           vl);
 }
 
-vfloat32mf2_t test_vfnmsac_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfnmsac_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd,
+                                             float rs1, vfloat32mf2_t vs2,
+                                             size_t vl) {
+  return __riscv_vfnmsac_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                           vl);
 }
 
-vfloat32m1_t test_vfnmsac_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfnmsac_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd,
+                                           vfloat32m1_t vs1, vfloat32m1_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat32m1_t test_vfnmsac_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfnmsac_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd,
+                                           float rs1, vfloat32m1_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat32m2_t test_vfnmsac_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32m2_t test_vfnmsac_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd,
+                                           vfloat32m2_t vs1, vfloat32m2_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat32m2_t test_vfnmsac_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32m2_t test_vfnmsac_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd,
+                                           float rs1, vfloat32m2_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat32m4_t test_vfnmsac_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32m4_t test_vfnmsac_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd,
+                                           vfloat32m4_t vs1, vfloat32m4_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat32m4_t test_vfnmsac_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32m4_t test_vfnmsac_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd,
+                                           float rs1, vfloat32m4_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat32m8_t test_vfnmsac_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32m8_t test_vfnmsac_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                           vfloat32m8_t vs1, vfloat32m8_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat32m8_t test_vfnmsac_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat32m8_t test_vfnmsac_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                           float rs1, vfloat32m8_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat64m1_t test_vfnmsac_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfnmsac_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd,
+                                           vfloat64m1_t vs1, vfloat64m1_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat64m1_t test_vfnmsac_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f64m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfnmsac_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd,
+                                           double rs1, vfloat64m1_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f64m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat64m2_t test_vfnmsac_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat64m2_t test_vfnmsac_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd,
+                                           vfloat64m2_t vs1, vfloat64m2_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat64m2_t test_vfnmsac_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat64m2_t test_vfnmsac_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd,
+                                           double rs1, vfloat64m2_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat64m4_t test_vfnmsac_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat64m4_t test_vfnmsac_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd,
+                                           vfloat64m4_t vs1, vfloat64m4_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat64m4_t test_vfnmsac_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f64m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat64m4_t test_vfnmsac_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd,
+                                           double rs1, vfloat64m4_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f64m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat64m8_t test_vfnmsac_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat64m8_t test_vfnmsac_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                           vfloat64m8_t vs1, vfloat64m8_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat64m8_t test_vfnmsac_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) {
-  return __riscv_vfnmsac_vf_f64m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
+vfloat64m8_t test_vfnmsac_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                           double rs1, vfloat64m8_t vs2,
+                                           size_t vl) {
+  return __riscv_vfnmsac_vf_f64m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE,
+                                          vl);
 }
 
-vfloat16mf4_t test_vfnmsac_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsac_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd,
+                                           vfloat16mf4_t vs1, vfloat16mf4_t vs2,
+                                           size_t vl) {
   return __riscv_vfnmsac_vv_f16mf4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16mf4_t test_vfnmsac_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsac_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd,
+                                           _Float16 rs1, vfloat16mf4_t vs2,
+                                           size_t vl) {
   return __riscv_vfnmsac_vf_f16mf4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16mf2_t test_vfnmsac_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsac_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd,
+                                           vfloat16mf2_t vs1, vfloat16mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vfnmsac_vv_f16mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16mf2_t test_vfnmsac_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsac_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd,
+                                           _Float16 rs1, vfloat16mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vfnmsac_vf_f16mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfnmsac_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsac_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd,
+                                         vfloat16m1_t vs1, vfloat16m1_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vv_f16m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfnmsac_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsac_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd,
+                                         _Float16 rs1, vfloat16m1_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f16m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m2_t test_vfnmsac_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsac_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd,
+                                         vfloat16m2_t vs1, vfloat16m2_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vv_f16m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m2_t test_vfnmsac_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsac_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd,
+                                         _Float16 rs1, vfloat16m2_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f16m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m4_t test_vfnmsac_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsac_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd,
+                                         vfloat16m4_t vs1, vfloat16m4_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vv_f16m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m4_t test_vfnmsac_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsac_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd,
+                                         _Float16 rs1, vfloat16m4_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f16m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m8_t test_vfnmsac_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsac_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd,
+                                         vfloat16m8_t vs1, vfloat16m8_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vv_f16m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m8_t test_vfnmsac_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsac_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd,
+                                         _Float16 rs1, vfloat16m8_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f16m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32mf2_t test_vfnmsac_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfnmsac_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                           vfloat32mf2_t vs1, vfloat32mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vfnmsac_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32mf2_t test_vfnmsac_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfnmsac_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                           float rs1, vfloat32mf2_t vs2,
+                                           size_t vl) {
   return __riscv_vfnmsac_vf_f32mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfnmsac_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsac_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+                                         vfloat32m1_t vs1, vfloat32m1_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfnmsac_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsac_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+                                         float rs1, vfloat32m1_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f32m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m2_t test_vfnmsac_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsac_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+                                         vfloat32m2_t vs1, vfloat32m2_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m2_t test_vfnmsac_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsac_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+                                         float rs1, vfloat32m2_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f32m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m4_t test_vfnmsac_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsac_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+                                         vfloat32m4_t vs1, vfloat32m4_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m4_t test_vfnmsac_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsac_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+                                         float rs1, vfloat32m4_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f32m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m8_t test_vfnmsac_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsac_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                         vfloat32m8_t vs1, vfloat32m8_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m8_t test_vfnmsac_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsac_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                         float rs1, vfloat32m8_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f32m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfnmsac_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsac_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+                                         vfloat64m1_t vs1, vfloat64m1_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfnmsac_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsac_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+                                         double rs1, vfloat64m1_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f64m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m2_t test_vfnmsac_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsac_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+                                         vfloat64m2_t vs1, vfloat64m2_t vs2,
+                                         size_t vl) {
  return __riscv_vfnmsac_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m2_t test_vfnmsac_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsac_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+                                         double rs1, vfloat64m2_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f64m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m4_t test_vfnmsac_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsac_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                         vfloat64m4_t vs1, vfloat64m4_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vv_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m4_t test_vfnmsac_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsac_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                         double rs1, vfloat64m4_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f64m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m8_t test_vfnmsac_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsac_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                         vfloat64m8_t vs1, vfloat64m8_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m8_t test_vfnmsac_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsac_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                         double rs1, vfloat64m8_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsac_vf_f64m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfnmsub.c b/auto-generated/policy_funcs/llvm-api-tests/vfnmsub.c
index acaf81233..529eed440 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfnmsub.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfnmsub.c
@@ -6,962 +6,1403 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4_t test_vfnmsub_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsub_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1,
+                                        vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f16mf4_tu(vd, vs1, vs2, vl);
 }
 
-vfloat16mf4_t test_vfnmsub_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsub_vf_f16mf4_tu(vfloat16mf4_t vd, _Float16 rs1,
+                                        vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f16mf4_tu(vd, rs1, vs2, vl);
 }
 
-vfloat16mf2_t test_vfnmsub_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsub_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1,
+                                        vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f16mf2_tu(vd, vs1, vs2, vl);
 }
 
-vfloat16mf2_t test_vfnmsub_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsub_vf_f16mf2_tu(vfloat16mf2_t vd, _Float16 rs1,
+                                        vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f16mf2_tu(vd, rs1, vs2, vl);
 }
 
-vfloat16m1_t test_vfnmsub_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsub_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1,
+                                      vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f16m1_tu(vd, vs1, vs2, vl);
 }
 
-vfloat16m1_t test_vfnmsub_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsub_vf_f16m1_tu(vfloat16m1_t vd, _Float16 rs1,
+                                      vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f16m1_tu(vd, rs1, vs2, vl);
 }
 
-vfloat16m2_t test_vfnmsub_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsub_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1,
+                                      vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f16m2_tu(vd, vs1, vs2, vl);
 }
 
-vfloat16m2_t test_vfnmsub_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsub_vf_f16m2_tu(vfloat16m2_t vd, _Float16 rs1,
+                                      vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f16m2_tu(vd, rs1, vs2, vl);
 }
 
-vfloat16m4_t test_vfnmsub_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsub_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1,
+                                      vfloat16m4_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f16m4_tu(vd, vs1, vs2, vl);
 }
 
-vfloat16m4_t test_vfnmsub_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsub_vf_f16m4_tu(vfloat16m4_t vd, _Float16 rs1,
+                                      vfloat16m4_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f16m4_tu(vd, rs1, vs2, vl);
 }
 
-vfloat16m8_t test_vfnmsub_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsub_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1,
+                                      vfloat16m8_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f16m8_tu(vd, vs1, vs2, vl);
 }
 
-vfloat16m8_t test_vfnmsub_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsub_vf_f16m8_tu(vfloat16m8_t vd, _Float16 rs1,
+                                      vfloat16m8_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f16m8_tu(vd, rs1, vs2, vl);
 }
 
-vfloat32mf2_t test_vfnmsub_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfnmsub_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1,
+                                        vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f32mf2_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32mf2_t test_vfnmsub_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfnmsub_vf_f32mf2_tu(vfloat32mf2_t vd, float rs1,
+                                        vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f32mf2_tu(vd, rs1, vs2, vl);
 }
 
-vfloat32m1_t test_vfnmsub_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsub_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1,
+                                      vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f32m1_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m1_t test_vfnmsub_vf_f32m1_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsub_vf_f32m1_tu(vfloat32m1_t vd, float rs1,
+                                      vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f32m1_tu(vd, rs1, vs2, vl);
 }
 
-vfloat32m2_t test_vfnmsub_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsub_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1,
+                                      vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f32m2_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m2_t test_vfnmsub_vf_f32m2_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsub_vf_f32m2_tu(vfloat32m2_t vd, float rs1,
+                                      vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f32m2_tu(vd, rs1, vs2, vl);
 }
 
-vfloat32m4_t test_vfnmsub_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsub_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1,
+                                      vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f32m4_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m4_t test_vfnmsub_vf_f32m4_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsub_vf_f32m4_tu(vfloat32m4_t vd, float rs1,
+                                      vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f32m4_tu(vd, rs1, vs2, vl);
 }
 
-vfloat32m8_t test_vfnmsub_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsub_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1,
+                                      vfloat32m8_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f32m8_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m8_t test_vfnmsub_vf_f32m8_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsub_vf_f32m8_tu(vfloat32m8_t vd, float rs1,
+                                      vfloat32m8_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f32m8_tu(vd, rs1, vs2, vl);
 }
 
-vfloat64m1_t test_vfnmsub_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsub_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1,
+                                      vfloat64m1_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f64m1_tu(vd, vs1, vs2, vl);
 }
 
-vfloat64m1_t test_vfnmsub_vf_f64m1_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsub_vf_f64m1_tu(vfloat64m1_t vd, double rs1,
+                                      vfloat64m1_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f64m1_tu(vd, rs1, vs2, vl);
 }
 
-vfloat64m2_t test_vfnmsub_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsub_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1,
+                                      vfloat64m2_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f64m2_tu(vd, vs1, vs2, vl);
 }
 
-vfloat64m2_t test_vfnmsub_vf_f64m2_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsub_vf_f64m2_tu(vfloat64m2_t vd, double rs1,
+                                      vfloat64m2_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f64m2_tu(vd, rs1, vs2, vl);
 }
 
-vfloat64m4_t test_vfnmsub_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsub_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1,
+                                      vfloat64m4_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f64m4_tu(vd, vs1, vs2, vl);
 }
 
-vfloat64m4_t test_vfnmsub_vf_f64m4_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsub_vf_f64m4_tu(vfloat64m4_t vd, double rs1,
+                                      vfloat64m4_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f64m4_tu(vd, rs1, vs2, vl);
 }
 
-vfloat64m8_t test_vfnmsub_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsub_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1,
+                                      vfloat64m8_t vs2, size_t vl) {
   return __riscv_vfnmsub_vv_f64m8_tu(vd, vs1, vs2, vl);
 }
 
-vfloat64m8_t test_vfnmsub_vf_f64m8_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsub_vf_f64m8_tu(vfloat64m8_t vd, double rs1,
+                                      vfloat64m8_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f64m8_tu(vd, rs1, vs2, vl);
 }
 
-vfloat16mf4_t test_vfnmsub_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsub_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd,
+                                         vfloat16mf4_t vs1, vfloat16mf4_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsub_vv_f16mf4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16mf4_t test_vfnmsub_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsub_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd,
+                                         _Float16 rs1, vfloat16mf4_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsub_vf_f16mf4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16mf2_t test_vfnmsub_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsub_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd,
+                                         vfloat16mf2_t vs1, vfloat16mf2_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsub_vv_f16mf2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16mf2_t test_vfnmsub_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsub_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd,
+                                         _Float16 rs1, vfloat16mf2_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsub_vf_f16mf2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m1_t test_vfnmsub_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsub_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd,
+                                       vfloat16m1_t vs1, vfloat16m1_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f16m1_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m1_t test_vfnmsub_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsub_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd,
+                                       _Float16 rs1, vfloat16m1_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vf_f16m1_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m2_t test_vfnmsub_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsub_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd,
+                                       vfloat16m2_t vs1, vfloat16m2_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f16m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m2_t test_vfnmsub_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsub_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd,
+                                       _Float16 rs1, vfloat16m2_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vf_f16m2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m4_t test_vfnmsub_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsub_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd,
+                                       vfloat16m4_t vs1, vfloat16m4_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f16m4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m4_t test_vfnmsub_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsub_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd,
+                                       _Float16 rs1, vfloat16m4_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vf_f16m4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m8_t test_vfnmsub_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsub_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd,
+                                       vfloat16m8_t vs1, vfloat16m8_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f16m8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m8_t test_vfnmsub_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsub_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd,
+                                       _Float16 rs1, vfloat16m8_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vf_f16m8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat32mf2_t test_vfnmsub_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfnmsub_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd,
+                                         vfloat32mf2_t vs1, vfloat32mf2_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsub_vv_f32mf2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat32mf2_t test_vfnmsub_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfnmsub_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd,
+                                         float rs1, vfloat32mf2_t vs2,
+                                         size_t vl) {
   return __riscv_vfnmsub_vf_f32mf2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat32m1_t test_vfnmsub_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsub_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+                                       vfloat32m1_t vs1, vfloat32m1_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f32m1_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat32m1_t test_vfnmsub_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfnmsub_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, float rs1,
+                                       vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f32m1_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat32m2_t test_vfnmsub_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsub_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd,
+                                       vfloat32m2_t vs1, vfloat32m2_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f32m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat32m2_t test_vfnmsub_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfnmsub_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, float rs1,
+                                       vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f32m2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat32m4_t test_vfnmsub_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsub_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd,
+                                       vfloat32m4_t vs1, vfloat32m4_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f32m4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat32m4_t test_vfnmsub_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfnmsub_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, float rs1,
+                                       vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f32m4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat32m8_t test_vfnmsub_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsub_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd,
+                                       vfloat32m8_t vs1, vfloat32m8_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f32m8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat32m8_t test_vfnmsub_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfnmsub_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, float rs1,
+                                       vfloat32m8_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f32m8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat64m1_t test_vfnmsub_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsub_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+                                       vfloat64m1_t vs1, vfloat64m1_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f64m1_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat64m1_t test_vfnmsub_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfnmsub_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+                                       double rs1, vfloat64m1_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vf_f64m1_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat64m2_t test_vfnmsub_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsub_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd,
+                                       vfloat64m2_t vs1, vfloat64m2_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f64m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat64m2_t test_vfnmsub_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfnmsub_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd,
+                                       double rs1, vfloat64m2_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vf_f64m2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat64m4_t test_vfnmsub_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsub_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd,
+                                       vfloat64m4_t vs1, vfloat64m4_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f64m4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat64m4_t test_vfnmsub_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfnmsub_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd,
+                                       double rs1, vfloat64m4_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vf_f64m4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat64m8_t test_vfnmsub_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsub_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd,
+                                       vfloat64m8_t vs1, vfloat64m8_t vs2,
+                                       size_t vl) {
   return __riscv_vfnmsub_vv_f64m8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vfloat64m8_t test_vfnmsub_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfnmsub_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, double rs1,
+                                       vfloat64m8_t vs2, size_t vl) {
   return __riscv_vfnmsub_vf_f64m8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16mf4_t test_vfnmsub_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsub_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd,
+                                          vfloat16mf4_t vs1, vfloat16mf4_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsub_vv_f16mf4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16mf4_t test_vfnmsub_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfnmsub_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd,
+                                          _Float16 rs1, vfloat16mf4_t vs2,
+                                          size_t vl) {
  return __riscv_vfnmsub_vf_f16mf4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16mf2_t test_vfnmsub_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsub_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd,
+                                          vfloat16mf2_t vs1, vfloat16mf2_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsub_vv_f16mf2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16mf2_t test_vfnmsub_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfnmsub_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd,
+                                          _Float16 rs1, vfloat16mf2_t vs2,
+                                          size_t vl) {
   return __riscv_vfnmsub_vf_f16mf2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m1_t test_vfnmsub_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsub_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd,
+                                        vfloat16m1_t vs1, vfloat16m1_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsub_vv_f16m1_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m1_t test_vfnmsub_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfnmsub_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd,
+                                        _Float16 rs1, vfloat16m1_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsub_vf_f16m1_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m2_t test_vfnmsub_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsub_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd,
+                                        vfloat16m2_t vs1, vfloat16m2_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsub_vv_f16m2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m2_t test_vfnmsub_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfnmsub_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd,
+                                        _Float16 rs1, vfloat16m2_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsub_vf_f16m2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m4_t test_vfnmsub_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsub_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd,
+                                        vfloat16m4_t vs1, vfloat16m4_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsub_vv_f16m4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m4_t test_vfnmsub_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfnmsub_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd,
+                                        _Float16 rs1, vfloat16m4_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsub_vf_f16m4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vfloat16m8_t test_vfnmsub_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfnmsub_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd,
+                                        vfloat16m8_t vs1, vfloat16m8_t vs2,
+                                        size_t vl) {
   return __riscv_vfnmsub_vv_f16m8_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vfloat16m8_t
test_vfnmsub_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsub_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmsub_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsub_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmsub_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsub_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32mf2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmsub_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsub_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmsub_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsub_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmsub_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsub_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmsub_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsub_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmsub_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsub_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmsub_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsub_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f32m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmsub_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsub_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmsub_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsub_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f32m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmsub_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsub_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return 
__riscv_vfnmsub_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmsub_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsub_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m1_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmsub_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsub_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmsub_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsub_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m2_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmsub_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsub_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmsub_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsub_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m4_tumu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmsub_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsub_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmsub_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsub_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m8_tumu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmsub_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsub_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16mf4_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf4_t test_vfnmsub_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsub_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16mf4_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf2_t test_vfnmsub_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsub_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat16mf2_t test_vfnmsub_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsub_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m1_t test_vfnmsub_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t 
test_vfnmsub_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16m1_mu(vm, vd, vs1, vs2, vl); } -vfloat16m1_t test_vfnmsub_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsub_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m1_mu(vm, vd, rs1, vs2, vl); } -vfloat16m2_t test_vfnmsub_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsub_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16m2_mu(vm, vd, vs1, vs2, vl); } -vfloat16m2_t test_vfnmsub_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsub_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m2_mu(vm, vd, rs1, vs2, vl); } -vfloat16m4_t test_vfnmsub_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsub_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16m4_mu(vm, vd, vs1, vs2, vl); } -vfloat16m4_t test_vfnmsub_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsub_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m4_mu(vm, vd, rs1, vs2, vl); } -vfloat16m8_t test_vfnmsub_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsub_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16m8_mu(vm, vd, vs1, vs2, vl); } -vfloat16m8_t test_vfnmsub_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsub_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m8_mu(vm, vd, rs1, vs2, vl); } -vfloat32mf2_t test_vfnmsub_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsub_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfnmsub_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsub_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32mf2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m1_t test_vfnmsub_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsub_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfnmsub_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsub_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f32m1_mu(vm, vd, rs1, vs2, vl); } -vfloat32m2_t test_vfnmsub_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) 
{ +vfloat32m2_t test_vfnmsub_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfnmsub_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsub_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f32m2_mu(vm, vd, rs1, vs2, vl); } -vfloat32m4_t test_vfnmsub_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsub_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfnmsub_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsub_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f32m4_mu(vm, vd, rs1, vs2, vl); } -vfloat32m8_t test_vfnmsub_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsub_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfnmsub_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsub_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f32m8_mu(vm, vd, rs1, vs2, vl); } -vfloat64m1_t test_vfnmsub_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsub_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfnmsub_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsub_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f64m1_mu(vm, vd, rs1, vs2, vl); } -vfloat64m2_t test_vfnmsub_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsub_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfnmsub_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsub_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f64m2_mu(vm, vd, rs1, vs2, vl); } -vfloat64m4_t test_vfnmsub_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsub_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfnmsub_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsub_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f64m4_mu(vm, vd, rs1, vs2, vl); } -vfloat64m8_t test_vfnmsub_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t 
test_vfnmsub_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfnmsub_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsub_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f64m8_mu(vm, vd, rs1, vs2, vl); } -vfloat16mf4_t test_vfnmsub_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsub_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f16mf4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmsub_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsub_vf_f16mf4_rm_tu(vfloat16mf4_t vd, _Float16 rs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f16mf4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmsub_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsub_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f16mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmsub_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsub_vf_f16mf2_rm_tu(vfloat16mf2_t vd, _Float16 rs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f16mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmsub_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsub_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f16m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmsub_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsub_vf_f16m1_rm_tu(vfloat16m1_t vd, _Float16 rs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f16m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmsub_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsub_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f16m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmsub_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsub_vf_f16m2_rm_tu(vfloat16m2_t vd, _Float16 rs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f16m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmsub_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsub_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f16m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmsub_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsub_vf_f16m4_rm_tu(vfloat16m4_t vd, _Float16 rs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f16m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmsub_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t 
test_vfnmsub_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f16m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmsub_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsub_vf_f16m8_rm_tu(vfloat16m8_t vd, _Float16 rs1, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f16m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmsub_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsub_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmsub_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsub_vf_f32mf2_rm_tu(vfloat32mf2_t vd, float rs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f32mf2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmsub_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsub_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmsub_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsub_vf_f32m1_rm_tu(vfloat32m1_t vd, float rs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f32m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmsub_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsub_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmsub_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsub_vf_f32m2_rm_tu(vfloat32m2_t vd, float rs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f32m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmsub_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsub_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmsub_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsub_vf_f32m4_rm_tu(vfloat32m4_t vd, float rs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f32m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmsub_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsub_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmsub_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsub_vf_f32m8_rm_tu(vfloat32m8_t vd, float rs1, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f32m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmsub_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsub_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + vfloat64m1_t vs2, 
size_t vl) { return __riscv_vfnmsub_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmsub_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsub_vf_f64m1_rm_tu(vfloat64m1_t vd, double rs1, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f64m1_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmsub_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsub_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmsub_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsub_vf_f64m2_rm_tu(vfloat64m2_t vd, double rs1, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f64m2_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmsub_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsub_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmsub_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsub_vf_f64m4_rm_tu(vfloat64m4_t vd, double rs1, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f64m4_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmsub_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsub_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmsub_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmsub_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsub_vf_f64m8_rm_tu(vfloat64m8_t vd, double rs1, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfnmsub_vf_f64m8_rm_tu(vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmsub_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f16mf4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmsub_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfnmsub_vv_f16mf4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfnmsub_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmsub_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f16mf4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfnmsub_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmsub_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfnmsub_vv_f16mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfnmsub_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f16mf2_rm_tum(vm, vd, rs1, 
vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmsub_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f16mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfnmsub_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsub_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmsub_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsub_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmsub_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsub_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmsub_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsub_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmsub_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsub_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmsub_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsub_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmsub_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsub_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmsub_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsub_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmsub_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmsub_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfnmsub_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfnmsub_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmsub_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + 
float rs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f32mf2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfnmsub_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsub_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmsub_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsub_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmsub_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsub_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmsub_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsub_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmsub_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsub_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmsub_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsub_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmsub_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsub_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmsub_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsub_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmsub_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsub_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmsub_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsub_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m1_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmsub_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsub_vv_f64m2_rm_tum(vbool32_t vm, 
vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmsub_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsub_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m2_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmsub_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsub_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmsub_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsub_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m4_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmsub_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsub_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmsub_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsub_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m8_rm_tum(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmsub_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmsub_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfnmsub_vv_f16mf4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfnmsub_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfnmsub_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f16mf4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfnmsub_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmsub_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfnmsub_vv_f16mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfnmsub_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfnmsub_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f16mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfnmsub_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, 
vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m1_t test_vfnmsub_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f16m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfnmsub_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m1_t test_vfnmsub_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f16m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m2_t test_vfnmsub_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m2_t test_vfnmsub_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f16m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m2_t test_vfnmsub_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m2_t test_vfnmsub_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f16m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m4_t test_vfnmsub_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m4_t test_vfnmsub_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f16m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m4_t test_vfnmsub_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m4_t test_vfnmsub_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f16m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m8_t test_vfnmsub_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m8_t test_vfnmsub_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f16m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16m8_t test_vfnmsub_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat16m8_t test_vfnmsub_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f16m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfnmsub_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmsub_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, + 
vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfnmsub_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfnmsub_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfnmsub_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f32mf2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfnmsub_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfnmsub_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfnmsub_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfnmsub_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f32m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfnmsub_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfnmsub_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfnmsub_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfnmsub_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f32m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfnmsub_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfnmsub_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfnmsub_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfnmsub_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f32m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfnmsub_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfnmsub_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfnmsub_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { - return 
__riscv_vfnmsub_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfnmsub_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f32m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfnmsub_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfnmsub_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfnmsub_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f64m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfnmsub_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f64m1_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfnmsub_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfnmsub_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfnmsub_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfnmsub_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f64m2_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfnmsub_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfnmsub_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfnmsub_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f64m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfnmsub_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f64m4_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfnmsub_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { - return __riscv_vfnmsub_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfnmsub_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { + return __riscv_vfnmsub_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfnmsub_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { - return __riscv_vfnmsub_vf_f64m8_rm_tumu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfnmsub_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { + return __riscv_vfnmsub_vf_f64m8_rm_tumu(vm, vd, 
rs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat16mf4_t test_vfnmsub_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsub_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16mf4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfnmsub_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, _Float16 rs1, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfnmsub_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + _Float16 rs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16mf4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmsub_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsub_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfnmsub_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, _Float16 rs1, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfnmsub_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + _Float16 rs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmsub_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsub_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfnmsub_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, _Float16 rs1, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfnmsub_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + _Float16 rs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmsub_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsub_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfnmsub_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, _Float16 rs1, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfnmsub_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + _Float16 rs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmsub_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsub_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f16m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfnmsub_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, _Float16 rs1, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfnmsub_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + _Float16 rs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmsub_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsub_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs1, vfloat16m8_t vs2, + size_t vl) { 
return __riscv_vfnmsub_vv_f16m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfnmsub_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, _Float16 rs1, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfnmsub_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + _Float16 rs1, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f16m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmsub_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsub_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfnmsub_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, float rs1, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfnmsub_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + float rs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32mf2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmsub_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsub_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfnmsub_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, float rs1, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfnmsub_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + float rs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmsub_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsub_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfnmsub_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, float rs1, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfnmsub_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + float rs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmsub_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsub_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfnmsub_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, float rs1, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfnmsub_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + float rs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f32m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmsub_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsub_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs1, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfnmsub_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, float rs1, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfnmsub_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + float rs1, vfloat32m8_t vs2, + size_t vl) { return 
__riscv_vfnmsub_vf_f32m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmsub_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsub_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfnmsub_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, double rs1, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfnmsub_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + double rs1, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m1_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmsub_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsub_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfnmsub_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, double rs1, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfnmsub_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + double rs1, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m2_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmsub_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsub_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfnmsub_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, double rs1, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfnmsub_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + double rs1, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m4_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmsub_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsub_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfnmsub_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, double rs1, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfnmsub_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + double rs1, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfnmsub_vf_f64m8_rm_mu(vm, vd, rs1, vs2, __RISCV_FRM_RNE, vl); }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfrdiv.c b/auto-generated/policy_funcs/llvm-api-tests/vfrdiv.c
index 2a601b0f5..e4dad4b5a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfrdiv.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfrdiv.c
@@ -6,482 +6,675 @@
#include <riscv_vector.h>
-vfloat16mf4_t test_vfrdiv_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrdiv_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16mf4_tu(vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfrdiv_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrdiv_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16mf2_tu(vd, vs2, rs1, vl); } -vfloat16m1_t test_vfrdiv_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
_Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrdiv_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16m1_tu(vd, vs2, rs1, vl); } -vfloat16m2_t test_vfrdiv_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrdiv_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16m2_tu(vd, vs2, rs1, vl); } -vfloat16m4_t test_vfrdiv_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrdiv_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16m4_tu(vd, vs2, rs1, vl); } -vfloat16m8_t test_vfrdiv_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrdiv_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfrdiv_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrdiv_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfrdiv_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrdiv_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfrdiv_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrdiv_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfrdiv_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrdiv_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfrdiv_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrdiv_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfrdiv_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrdiv_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfrdiv_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrdiv_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfrdiv_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrdiv_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfrdiv_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrdiv_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfrdiv_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrdiv_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t 
test_vfrdiv_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrdiv_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfrdiv_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrdiv_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfrdiv_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrdiv_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfrdiv_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrdiv_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfrdiv_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrdiv_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfrdiv_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrdiv_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfrdiv_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrdiv_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfrdiv_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrdiv_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfrdiv_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrdiv_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfrdiv_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrdiv_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfrdiv_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrdiv_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfrdiv_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrdiv_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfrdiv_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t 
vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrdiv_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfrdiv_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrdiv_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfrdiv_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrdiv_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfrdiv_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrdiv_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfrdiv_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrdiv_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfrdiv_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrdiv_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfrdiv_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrdiv_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfrdiv_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrdiv_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfrdiv_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrdiv_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfrdiv_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrdiv_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfrdiv_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrdiv_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfrdiv_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrdiv_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfrdiv_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, 
size_t vl) { +vfloat32m8_t test_vfrdiv_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfrdiv_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrdiv_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfrdiv_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrdiv_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfrdiv_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrdiv_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfrdiv_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrdiv_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfrdiv_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrdiv_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfrdiv_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrdiv_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfrdiv_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrdiv_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfrdiv_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrdiv_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfrdiv_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrdiv_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfrdiv_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrdiv_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfrdiv_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrdiv_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfrdiv_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t 
test_vfrdiv_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfrdiv_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrdiv_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfrdiv_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrdiv_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfrdiv_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrdiv_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfrdiv_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrdiv_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfrdiv_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrdiv_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfrdiv_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrdiv_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m4_mu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfrdiv_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrdiv_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m8_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfrdiv_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrdiv_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16mf4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfrdiv_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrdiv_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfrdiv_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrdiv_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfrdiv_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrdiv_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfrdiv_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrdiv_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16m4_rm_tu(vd, vs2, rs1, 
__RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfrdiv_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrdiv_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrdiv_vf_f16m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfrdiv_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrdiv_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfrdiv_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrdiv_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfrdiv_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrdiv_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfrdiv_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrdiv_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfrdiv_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrdiv_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfrdiv_vf_f32m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfrdiv_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrdiv_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfrdiv_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrdiv_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfrdiv_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrdiv_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfrdiv_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrdiv_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfrdiv_vf_f64m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfrdiv_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrdiv_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16mf4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfrdiv_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrdiv_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfrdiv_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, 
vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrdiv_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfrdiv_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrdiv_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfrdiv_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrdiv_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfrdiv_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrdiv_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfrdiv_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrdiv_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfrdiv_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrdiv_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfrdiv_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrdiv_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfrdiv_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrdiv_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfrdiv_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrdiv_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfrdiv_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrdiv_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfrdiv_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrdiv_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfrdiv_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t 
test_vfrdiv_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfrdiv_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrdiv_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfrdiv_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { - return __riscv_vfrdiv_vf_f16mf4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfrdiv_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { + return __riscv_vfrdiv_vf_f16mf4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfrdiv_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { - return __riscv_vfrdiv_vf_f16mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfrdiv_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { + return __riscv_vfrdiv_vf_f16mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfrdiv_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrdiv_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfrdiv_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrdiv_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfrdiv_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrdiv_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfrdiv_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrdiv_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfrdiv_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { - return __riscv_vfrdiv_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfrdiv_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { + return __riscv_vfrdiv_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfrdiv_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrdiv_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfrdiv_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrdiv_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + 
vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfrdiv_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrdiv_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfrdiv_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrdiv_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfrdiv_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrdiv_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfrdiv_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrdiv_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfrdiv_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrdiv_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfrdiv_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrdiv_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfrdiv_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrdiv_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16mf4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfrdiv_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrdiv_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfrdiv_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrdiv_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfrdiv_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrdiv_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfrdiv_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrdiv_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { 
return __riscv_vfrdiv_vf_f16m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfrdiv_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrdiv_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrdiv_vf_f16m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfrdiv_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrdiv_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfrdiv_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrdiv_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfrdiv_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrdiv_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfrdiv_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrdiv_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfrdiv_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrdiv_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfrdiv_vf_f32m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfrdiv_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrdiv_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfrdiv_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrdiv_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfrdiv_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrdiv_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfrdiv_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrdiv_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfrdiv_vf_f64m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfrec7.c b/auto-generated/policy_funcs/llvm-api-tests/vfrec7.c index fd69e548a..6c62463d5 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfrec7.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfrec7.c @@ -6,482 +6,602 @@ #include -vfloat16mf4_t test_vfrec7_v_f16mf4_tu(vfloat16mf4_t 
vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfrec7_v_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16mf4_tu(vd, vs2, vl); } -vfloat16mf2_t test_vfrec7_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfrec7_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16mf2_tu(vd, vs2, vl); } -vfloat16m1_t test_vfrec7_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfrec7_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16m1_tu(vd, vs2, vl); } -vfloat16m2_t test_vfrec7_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfrec7_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16m2_tu(vd, vs2, vl); } -vfloat16m4_t test_vfrec7_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfrec7_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16m4_tu(vd, vs2, vl); } -vfloat16m8_t test_vfrec7_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfrec7_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16m8_tu(vd, vs2, vl); } -vfloat32mf2_t test_vfrec7_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfrec7_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfrec7_v_f32mf2_tu(vd, vs2, vl); } -vfloat32m1_t test_vfrec7_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfrec7_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfrec7_v_f32m1_tu(vd, vs2, vl); } -vfloat32m2_t test_vfrec7_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfrec7_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfrec7_v_f32m2_tu(vd, vs2, vl); } -vfloat32m4_t test_vfrec7_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfrec7_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfrec7_v_f32m4_tu(vd, vs2, vl); } -vfloat32m8_t test_vfrec7_v_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfrec7_v_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfrec7_v_f32m8_tu(vd, vs2, vl); } -vfloat64m1_t test_vfrec7_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfrec7_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfrec7_v_f64m1_tu(vd, vs2, vl); } -vfloat64m2_t test_vfrec7_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfrec7_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfrec7_v_f64m2_tu(vd, vs2, vl); } -vfloat64m4_t test_vfrec7_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfrec7_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfrec7_v_f64m4_tu(vd, vs2, vl); } -vfloat64m8_t test_vfrec7_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfrec7_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfrec7_v_f64m8_tu(vd, vs2, vl); } -vfloat16mf4_t test_vfrec7_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfrec7_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfrec7_v_f16mf4_tum(vm, vd, vs2, vl); } 
-vfloat16mf2_t test_vfrec7_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfrec7_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfrec7_v_f16mf2_tum(vm, vd, vs2, vl); } -vfloat16m1_t test_vfrec7_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfrec7_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m1_tum(vm, vd, vs2, vl); } -vfloat16m2_t test_vfrec7_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfrec7_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m2_tum(vm, vd, vs2, vl); } -vfloat16m4_t test_vfrec7_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfrec7_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m4_tum(vm, vd, vs2, vl); } -vfloat16m8_t test_vfrec7_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfrec7_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m8_tum(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfrec7_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfrec7_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfrec7_v_f32mf2_tum(vm, vd, vs2, vl); } -vfloat32m1_t test_vfrec7_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfrec7_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m1_tum(vm, vd, vs2, vl); } -vfloat32m2_t test_vfrec7_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfrec7_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m2_tum(vm, vd, vs2, vl); } -vfloat32m4_t test_vfrec7_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfrec7_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m4_tum(vm, vd, vs2, vl); } -vfloat32m8_t test_vfrec7_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfrec7_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m8_tum(vm, vd, vs2, vl); } -vfloat64m1_t test_vfrec7_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfrec7_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m1_tum(vm, vd, vs2, vl); } -vfloat64m2_t test_vfrec7_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfrec7_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m2_tum(vm, vd, vs2, vl); } -vfloat64m4_t test_vfrec7_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfrec7_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m4_tum(vm, vd, vs2, vl); } -vfloat64m8_t test_vfrec7_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfrec7_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return 
__riscv_vfrec7_v_f64m8_tum(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfrec7_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfrec7_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfrec7_v_f16mf4_tumu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfrec7_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfrec7_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfrec7_v_f16mf2_tumu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfrec7_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfrec7_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m1_tumu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfrec7_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfrec7_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m2_tumu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfrec7_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfrec7_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m4_tumu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfrec7_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfrec7_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m8_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfrec7_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfrec7_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfrec7_v_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfrec7_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfrec7_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfrec7_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfrec7_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfrec7_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfrec7_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfrec7_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfrec7_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m8_tumu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfrec7_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfrec7_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m1_tumu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfrec7_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfrec7_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m2_tumu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfrec7_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t 
test_vfrec7_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m4_tumu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfrec7_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfrec7_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m8_tumu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfrec7_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfrec7_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfrec7_v_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfrec7_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfrec7_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfrec7_v_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfrec7_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfrec7_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfrec7_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfrec7_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfrec7_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfrec7_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m4_mu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfrec7_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfrec7_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m8_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfrec7_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfrec7_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfrec7_v_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfrec7_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfrec7_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfrec7_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfrec7_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfrec7_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfrec7_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m4_mu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfrec7_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfrec7_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m8_mu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfrec7_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfrec7_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m1_mu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfrec7_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { 
+vfloat64m2_t test_vfrec7_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m2_mu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfrec7_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfrec7_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m4_mu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfrec7_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfrec7_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m8_mu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfrec7_v_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfrec7_v_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfrec7_v_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfrec7_v_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfrec7_v_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfrec7_v_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfrec7_v_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfrec7_v_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfrec7_v_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfrec7_v_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfrec7_v_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfrec7_v_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfrec7_v_f16m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfrec7_v_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfrec7_v_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfrec7_v_f32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfrec7_v_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfrec7_v_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfrec7_v_f32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfrec7_v_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfrec7_v_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfrec7_v_f32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfrec7_v_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfrec7_v_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfrec7_v_f32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfrec7_v_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfrec7_v_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfrec7_v_f32m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfrec7_v_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfrec7_v_f64m1_rm_tu(vfloat64m1_t vd, 
vfloat64m1_t vs2, + size_t vl) { return __riscv_vfrec7_v_f64m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfrec7_v_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfrec7_v_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfrec7_v_f64m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfrec7_v_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfrec7_v_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfrec7_v_f64m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfrec7_v_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfrec7_v_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfrec7_v_f64m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfrec7_v_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfrec7_v_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfrec7_v_f16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfrec7_v_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfrec7_v_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfrec7_v_f16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfrec7_v_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfrec7_v_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfrec7_v_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfrec7_v_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfrec7_v_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfrec7_v_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfrec7_v_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfrec7_v_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfrec7_v_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfrec7_v_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfrec7_v_f32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfrec7_v_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfrec7_v_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfrec7_v_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfrec7_v_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfrec7_v_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfrec7_v_f32m4_rm_tum(vbool8_t 
vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfrec7_v_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfrec7_v_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f32m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfrec7_v_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfrec7_v_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfrec7_v_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfrec7_v_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfrec7_v_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfrec7_v_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfrec7_v_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfrec7_v_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f64m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfrec7_v_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfrec7_v_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfrec7_v_f16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfrec7_v_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfrec7_v_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfrec7_v_f16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfrec7_v_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfrec7_v_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfrec7_v_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfrec7_v_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfrec7_v_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfrec7_v_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfrec7_v_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfrec7_v_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfrec7_v_f16m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfrec7_v_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfrec7_v_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfrec7_v_f32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } 
-vfloat32m1_t test_vfrec7_v_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfrec7_v_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd,
+                                         vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m2_t test_vfrec7_v_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfrec7_v_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd,
+                                         vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m4_t test_vfrec7_v_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfrec7_v_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd,
+                                         vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m8_t test_vfrec7_v_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfrec7_v_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                         vfloat32m8_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f32m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfrec7_v_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfrec7_v_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd,
+                                         vfloat64m1_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f64m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m2_t test_vfrec7_v_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfrec7_v_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd,
+                                         vfloat64m2_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f64m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m4_t test_vfrec7_v_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfrec7_v_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd,
+                                         vfloat64m4_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f64m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m8_t test_vfrec7_v_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfrec7_v_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                         vfloat64m8_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f64m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16mf4_t test_vfrec7_v_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfrec7_v_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd,
+                                         vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16mf2_t test_vfrec7_v_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfrec7_v_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd,
+                                         vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfrec7_v_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfrec7_v_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd,
+                                       vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m2_t test_vfrec7_v_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfrec7_v_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd,
+                                       vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m4_t test_vfrec7_v_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfrec7_v_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd,
+                                       vfloat16m4_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m8_t test_vfrec7_v_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfrec7_v_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd,
+                                       vfloat16m8_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f16m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32mf2_t test_vfrec7_v_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t test_vfrec7_v_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                         vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfrec7_v_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) {
+vfloat32m1_t test_vfrec7_v_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+                                       vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m2_t test_vfrec7_v_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) {
+vfloat32m2_t test_vfrec7_v_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+                                       vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m4_t test_vfrec7_v_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) {
+vfloat32m4_t test_vfrec7_v_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+                                       vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m8_t test_vfrec7_v_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfrec7_v_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                       vfloat32m8_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f32m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfrec7_v_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfrec7_v_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+                                       vfloat64m1_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f64m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m2_t test_vfrec7_v_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfrec7_v_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+                                       vfloat64m2_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f64m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m4_t test_vfrec7_v_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfrec7_v_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                       vfloat64m4_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f64m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m8_t test_vfrec7_v_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfrec7_v_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                       vfloat64m8_t vs2, size_t vl) {
   return __riscv_vfrec7_v_f64m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfredmax.c b/auto-generated/policy_funcs/llvm-api-tests/vfredmax.c
index 6faade2eb..96e44055c 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfredmax.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfredmax.c
@@ -6,122 +6,170 @@
 
 #include <riscv_vector.h>
 
-vfloat16m1_t test_vfredmax_vs_f16mf4_f16m1_tu(vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16mf4_f16m1_tu(vfloat16m1_t vd,
+                                              vfloat16mf4_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16mf4_f16m1_tu(vd, vs2, vs1, vl);
 }
-vfloat16m1_t test_vfredmax_vs_f16mf2_f16m1_tu(vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16mf2_f16m1_tu(vfloat16m1_t vd,
+                                              vfloat16mf2_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16mf2_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmax_vs_f16m1_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16m1_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+                                             vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16m1_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmax_vs_f16m2_f16m1_tu(vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16m2_f16m1_tu(vfloat16m1_t vd, vfloat16m2_t vs2,
+                                             vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16m2_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmax_vs_f16m4_f16m1_tu(vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16m4_f16m1_tu(vfloat16m1_t vd, vfloat16m4_t vs2,
+                                             vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16m4_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmax_vs_f16m8_f16m1_tu(vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16m8_f16m1_tu(vfloat16m1_t vd, vfloat16m8_t vs2,
+                                             vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16m8_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmax_vs_f32mf2_f32m1_tu(vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmax_vs_f32mf2_f32m1_tu(vfloat32m1_t vd,
+                                              vfloat32mf2_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f32mf2_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmax_vs_f32m1_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmax_vs_f32m1_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2,
+                                             vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f32m1_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmax_vs_f32m2_f32m1_tu(vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmax_vs_f32m2_f32m1_tu(vfloat32m1_t vd, vfloat32m2_t vs2,
+                                             vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f32m2_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmax_vs_f32m4_f32m1_tu(vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmax_vs_f32m4_f32m1_tu(vfloat32m1_t vd, vfloat32m4_t vs2,
+                                             vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f32m4_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmax_vs_f32m8_f32m1_tu(vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmax_vs_f32m8_f32m1_tu(vfloat32m1_t vd, vfloat32m8_t vs2,
+                                             vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f32m8_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmax_vs_f64m1_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmax_vs_f64m1_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2,
+                                             vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f64m1_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmax_vs_f64m2_f64m1_tu(vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmax_vs_f64m2_f64m1_tu(vfloat64m1_t vd, vfloat64m2_t vs2,
+                                             vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f64m2_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmax_vs_f64m4_f64m1_tu(vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmax_vs_f64m4_f64m1_tu(vfloat64m1_t vd, vfloat64m4_t vs2,
+                                             vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f64m4_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmax_vs_f64m8_f64m1_tu(vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmax_vs_f64m8_f64m1_tu(vfloat64m1_t vd, vfloat64m8_t vs2,
+                                             vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f64m8_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmax_vs_f16mf4_f16m1_tum(vbool64_t vm, vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16mf4_f16m1_tum(vbool64_t vm, vfloat16m1_t vd,
+                                               vfloat16mf4_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16mf4_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmax_vs_f16mf2_f16m1_tum(vbool32_t vm, vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16mf2_f16m1_tum(vbool32_t vm, vfloat16m1_t vd,
+                                               vfloat16mf2_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16mf2_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmax_vs_f16m1_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16m1_f16m1_tum(vbool16_t vm, vfloat16m1_t vd,
+                                              vfloat16m1_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16m1_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmax_vs_f16m2_f16m1_tum(vbool8_t vm, vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16m2_f16m1_tum(vbool8_t vm, vfloat16m1_t vd,
+                                              vfloat16m2_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16m2_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmax_vs_f16m4_f16m1_tum(vbool4_t vm, vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16m4_f16m1_tum(vbool4_t vm, vfloat16m1_t vd,
+                                              vfloat16m4_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16m4_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmax_vs_f16m8_f16m1_tum(vbool2_t vm, vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmax_vs_f16m8_f16m1_tum(vbool2_t vm, vfloat16m1_t vd,
+                                              vfloat16m8_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f16m8_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmax_vs_f32mf2_f32m1_tum(vbool64_t vm, vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmax_vs_f32mf2_f32m1_tum(vbool64_t vm, vfloat32m1_t vd,
+                                               vfloat32mf2_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f32mf2_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmax_vs_f32m1_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmax_vs_f32m1_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+                                              vfloat32m1_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f32m1_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmax_vs_f32m2_f32m1_tum(vbool16_t vm, vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmax_vs_f32m2_f32m1_tum(vbool16_t vm, vfloat32m1_t vd,
+                                              vfloat32m2_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f32m2_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmax_vs_f32m4_f32m1_tum(vbool8_t vm, vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmax_vs_f32m4_f32m1_tum(vbool8_t vm, vfloat32m1_t vd,
+                                              vfloat32m4_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f32m4_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmax_vs_f32m8_f32m1_tum(vbool4_t vm, vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmax_vs_f32m8_f32m1_tum(vbool4_t vm, vfloat32m1_t vd,
+                                              vfloat32m8_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f32m8_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmax_vs_f64m1_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmax_vs_f64m1_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+                                              vfloat64m1_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f64m1_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmax_vs_f64m2_f64m1_tum(vbool32_t vm, vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmax_vs_f64m2_f64m1_tum(vbool32_t vm, vfloat64m1_t vd,
+                                              vfloat64m2_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f64m2_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmax_vs_f64m4_f64m1_tum(vbool16_t vm, vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmax_vs_f64m4_f64m1_tum(vbool16_t vm, vfloat64m1_t vd,
+                                              vfloat64m4_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f64m4_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmax_vs_f64m8_f64m1_tum(vbool8_t vm, vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmax_vs_f64m8_f64m1_tum(vbool8_t vm, vfloat64m1_t vd,
+                                              vfloat64m8_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmax_vs_f64m8_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfredmin.c b/auto-generated/policy_funcs/llvm-api-tests/vfredmin.c
index 1339dc706..8d68be880 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfredmin.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfredmin.c
@@ -6,122 +6,170 @@
 
 #include <riscv_vector.h>
 
-vfloat16m1_t test_vfredmin_vs_f16mf4_f16m1_tu(vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16mf4_f16m1_tu(vfloat16m1_t vd,
+                                              vfloat16mf4_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f16mf4_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmin_vs_f16mf2_f16m1_tu(vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16mf2_f16m1_tu(vfloat16m1_t vd,
+                                              vfloat16mf2_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f16mf2_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmin_vs_f16m1_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16m1_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+                                             vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f16m1_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmin_vs_f16m2_f16m1_tu(vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16m2_f16m1_tu(vfloat16m1_t vd, vfloat16m2_t vs2,
+                                             vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f16m2_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmin_vs_f16m4_f16m1_tu(vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16m4_f16m1_tu(vfloat16m1_t vd, vfloat16m4_t vs2,
+                                             vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f16m4_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmin_vs_f16m8_f16m1_tu(vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16m8_f16m1_tu(vfloat16m1_t vd, vfloat16m8_t vs2,
+                                             vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f16m8_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmin_vs_f32mf2_f32m1_tu(vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmin_vs_f32mf2_f32m1_tu(vfloat32m1_t vd,
+                                              vfloat32mf2_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f32mf2_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmin_vs_f32m1_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmin_vs_f32m1_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2,
+                                             vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f32m1_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmin_vs_f32m2_f32m1_tu(vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmin_vs_f32m2_f32m1_tu(vfloat32m1_t vd, vfloat32m2_t vs2,
+                                             vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f32m2_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmin_vs_f32m4_f32m1_tu(vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmin_vs_f32m4_f32m1_tu(vfloat32m1_t vd, vfloat32m4_t vs2,
+                                             vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f32m4_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmin_vs_f32m8_f32m1_tu(vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmin_vs_f32m8_f32m1_tu(vfloat32m1_t vd, vfloat32m8_t vs2,
+                                             vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f32m8_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmin_vs_f64m1_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmin_vs_f64m1_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2,
+                                             vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f64m1_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmin_vs_f64m2_f64m1_tu(vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmin_vs_f64m2_f64m1_tu(vfloat64m1_t vd, vfloat64m2_t vs2,
+                                             vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f64m2_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmin_vs_f64m4_f64m1_tu(vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmin_vs_f64m4_f64m1_tu(vfloat64m1_t vd, vfloat64m4_t vs2,
+                                             vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f64m4_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmin_vs_f64m8_f64m1_tu(vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmin_vs_f64m8_f64m1_tu(vfloat64m1_t vd, vfloat64m8_t vs2,
+                                             vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f64m8_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmin_vs_f16mf4_f16m1_tum(vbool64_t vm, vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16mf4_f16m1_tum(vbool64_t vm, vfloat16m1_t vd,
+                                               vfloat16mf4_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f16mf4_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmin_vs_f16mf2_f16m1_tum(vbool32_t vm, vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16mf2_f16m1_tum(vbool32_t vm, vfloat16m1_t vd,
+                                               vfloat16mf2_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f16mf2_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmin_vs_f16m1_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16m1_f16m1_tum(vbool16_t vm, vfloat16m1_t vd,
+                                              vfloat16m1_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f16m1_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmin_vs_f16m2_f16m1_tum(vbool8_t vm, vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16m2_f16m1_tum(vbool8_t vm, vfloat16m1_t vd,
+                                              vfloat16m2_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f16m2_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmin_vs_f16m4_f16m1_tum(vbool4_t vm, vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16m4_f16m1_tum(vbool4_t vm, vfloat16m1_t vd,
+                                              vfloat16m4_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f16m4_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredmin_vs_f16m8_f16m1_tum(vbool2_t vm, vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredmin_vs_f16m8_f16m1_tum(vbool2_t vm, vfloat16m1_t vd,
+                                              vfloat16m8_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
  return __riscv_vfredmin_vs_f16m8_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmin_vs_f32mf2_f32m1_tum(vbool64_t vm, vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmin_vs_f32mf2_f32m1_tum(vbool64_t vm, vfloat32m1_t vd,
+                                               vfloat32mf2_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f32mf2_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmin_vs_f32m1_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmin_vs_f32m1_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+                                              vfloat32m1_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f32m1_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmin_vs_f32m2_f32m1_tum(vbool16_t vm, vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmin_vs_f32m2_f32m1_tum(vbool16_t vm, vfloat32m1_t vd,
+                                              vfloat32m2_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f32m2_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmin_vs_f32m4_f32m1_tum(vbool8_t vm, vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmin_vs_f32m4_f32m1_tum(vbool8_t vm, vfloat32m1_t vd,
+                                              vfloat32m4_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f32m4_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredmin_vs_f32m8_f32m1_tum(vbool4_t vm, vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredmin_vs_f32m8_f32m1_tum(vbool4_t vm, vfloat32m1_t vd,
+                                              vfloat32m8_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f32m8_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmin_vs_f64m1_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmin_vs_f64m1_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+                                              vfloat64m1_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f64m1_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmin_vs_f64m2_f64m1_tum(vbool32_t vm, vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmin_vs_f64m2_f64m1_tum(vbool32_t vm, vfloat64m1_t vd,
+                                              vfloat64m2_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f64m2_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmin_vs_f64m4_f64m1_tum(vbool16_t vm, vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmin_vs_f64m4_f64m1_tum(vbool16_t vm, vfloat64m1_t vd,
+                                              vfloat64m4_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f64m4_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredmin_vs_f64m8_f64m1_tum(vbool8_t vm, vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredmin_vs_f64m8_f64m1_tum(vbool8_t vm, vfloat64m1_t vd,
+                                              vfloat64m8_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredmin_vs_f64m8_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfredosum.c b/auto-generated/policy_funcs/llvm-api-tests/vfredosum.c
index ea5d417fe..436e01f24 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfredosum.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfredosum.c
@@ -6,242 +6,386 @@
 
 #include <riscv_vector.h>
 
-vfloat16m1_t test_vfredosum_vs_f16mf4_f16m1_tu(vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16mf4_f16m1_tu(vfloat16m1_t vd,
+                                               vfloat16mf4_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16mf4_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16mf2_f16m1_tu(vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16mf2_f16m1_tu(vfloat16m1_t vd,
+                                               vfloat16mf2_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16mf2_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m1_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16m1_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16m1_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m2_f16m1_tu(vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16m2_f16m1_tu(vfloat16m1_t vd, vfloat16m2_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16m2_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m4_f16m1_tu(vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16m4_f16m1_tu(vfloat16m1_t vd, vfloat16m4_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16m4_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m8_f16m1_tu(vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16m8_f16m1_tu(vfloat16m1_t vd, vfloat16m8_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16m8_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32mf2_f32m1_tu(vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredosum_vs_f32mf2_f32m1_tu(vfloat32m1_t vd,
+                                               vfloat32mf2_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f32mf2_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m1_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredosum_vs_f32m1_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f32m1_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m2_f32m1_tu(vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredosum_vs_f32m2_f32m1_tu(vfloat32m1_t vd, vfloat32m2_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f32m2_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m4_f32m1_tu(vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredosum_vs_f32m4_f32m1_tu(vfloat32m1_t vd, vfloat32m4_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f32m4_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m8_f32m1_tu(vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredosum_vs_f32m8_f32m1_tu(vfloat32m1_t vd, vfloat32m8_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f32m8_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m1_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredosum_vs_f64m1_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f64m1_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m2_f64m1_tu(vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredosum_vs_f64m2_f64m1_tu(vfloat64m1_t vd, vfloat64m2_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f64m2_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m4_f64m1_tu(vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredosum_vs_f64m4_f64m1_tu(vfloat64m1_t vd, vfloat64m4_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f64m4_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m8_f64m1_tu(vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredosum_vs_f64m8_f64m1_tu(vfloat64m1_t vd, vfloat64m8_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f64m8_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16mf4_f16m1_tum(vbool64_t vm, vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16mf4_f16m1_tum(vbool64_t vm, vfloat16m1_t vd,
+                                                vfloat16mf4_t vs2,
+                                                vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16mf4_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16mf2_f16m1_tum(vbool32_t vm, vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16mf2_f16m1_tum(vbool32_t vm, vfloat16m1_t vd,
+                                                vfloat16mf2_t vs2,
+                                                vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16mf2_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m1_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16m1_f16m1_tum(vbool16_t vm, vfloat16m1_t vd,
+                                               vfloat16m1_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16m1_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m2_f16m1_tum(vbool8_t vm, vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16m2_f16m1_tum(vbool8_t vm, vfloat16m1_t vd,
+                                               vfloat16m2_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16m2_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m4_f16m1_tum(vbool4_t vm, vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16m4_f16m1_tum(vbool4_t vm, vfloat16m1_t vd,
+                                               vfloat16m4_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16m4_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m8_f16m1_tum(vbool2_t vm, vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredosum_vs_f16m8_f16m1_tum(vbool2_t vm, vfloat16m1_t vd,
+                                               vfloat16m8_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f16m8_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32mf2_f32m1_tum(vbool64_t vm, vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredosum_vs_f32mf2_f32m1_tum(vbool64_t vm, vfloat32m1_t vd,
+                                                vfloat32mf2_t vs2,
+                                                vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f32mf2_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m1_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredosum_vs_f32m1_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+                                               vfloat32m1_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f32m1_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m2_f32m1_tum(vbool16_t vm, vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredosum_vs_f32m2_f32m1_tum(vbool16_t vm, vfloat32m1_t vd,
+                                               vfloat32m2_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f32m2_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m4_f32m1_tum(vbool8_t vm, vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredosum_vs_f32m4_f32m1_tum(vbool8_t vm, vfloat32m1_t vd,
+                                               vfloat32m4_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f32m4_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m8_f32m1_tum(vbool4_t vm, vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredosum_vs_f32m8_f32m1_tum(vbool4_t vm, vfloat32m1_t vd,
+                                               vfloat32m8_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f32m8_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m1_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredosum_vs_f64m1_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+                                               vfloat64m1_t vs2,
+                                               vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f64m1_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m2_f64m1_tum(vbool32_t vm, vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredosum_vs_f64m2_f64m1_tum(vbool32_t vm, vfloat64m1_t vd,
+                                               vfloat64m2_t vs2,
+                                               vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f64m2_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m4_f64m1_tum(vbool16_t vm, vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredosum_vs_f64m4_f64m1_tum(vbool16_t vm, vfloat64m1_t vd,
+                                               vfloat64m4_t vs2,
+                                               vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f64m4_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m8_f64m1_tum(vbool8_t vm, vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredosum_vs_f64m8_f64m1_tum(vbool8_t vm, vfloat64m1_t vd,
+                                               vfloat64m8_t vs2,
+                                               vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredosum_vs_f64m8_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16mf4_f16m1_rm_tu(vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16mf4_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16mf4_f16m1_rm_tu(vfloat16m1_t vd,
+                                                  vfloat16mf4_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f16mf4_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                 vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16mf2_f16m1_rm_tu(vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16mf2_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16mf2_f16m1_rm_tu(vfloat16m1_t vd,
+                                                  vfloat16mf2_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f16mf2_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                 vl);
}
 
-vfloat16m1_t test_vfredosum_vs_f16m1_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16m1_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16m1_f16m1_rm_tu(vfloat16m1_t vd,
+                                                 vfloat16m1_t vs2,
+                                                 vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f16m1_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m2_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16m2_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16m2_f16m1_rm_tu(vfloat16m1_t vd,
+                                                 vfloat16m2_t vs2,
+                                                 vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f16m2_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m4_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16m4_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16m4_f16m1_rm_tu(vfloat16m1_t vd,
+                                                 vfloat16m4_t vs2,
+                                                 vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f16m4_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m8_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16m8_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16m8_f16m1_rm_tu(vfloat16m1_t vd,
+                                                 vfloat16m8_t vs2,
+                                                 vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f16m8_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32mf2_f32m1_rm_tu(vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f32mf2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredosum_vs_f32mf2_f32m1_rm_tu(vfloat32m1_t vd,
+                                                  vfloat32mf2_t vs2,
+                                                  vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f32mf2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                 vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m1_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f32m1_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredosum_vs_f32m1_f32m1_rm_tu(vfloat32m1_t vd,
+                                                 vfloat32m1_t vs2,
+                                                 vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f32m1_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m2_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f32m2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredosum_vs_f32m2_f32m1_rm_tu(vfloat32m1_t vd,
+                                                 vfloat32m2_t vs2,
+                                                 vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f32m2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m4_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f32m4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredosum_vs_f32m4_f32m1_rm_tu(vfloat32m1_t vd,
+                                                 vfloat32m4_t vs2,
+                                                 vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f32m4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m8_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f32m8_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredosum_vs_f32m8_f32m1_rm_tu(vfloat32m1_t vd,
+                                                 vfloat32m8_t vs2,
+                                                 vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f32m8_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m1_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f64m1_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredosum_vs_f64m1_f64m1_rm_tu(vfloat64m1_t vd,
+                                                 vfloat64m1_t vs2,
+                                                 vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f64m1_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m2_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f64m2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredosum_vs_f64m2_f64m1_rm_tu(vfloat64m1_t vd,
+                                                 vfloat64m2_t vs2,
+                                                 vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f64m2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m4_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f64m4_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredosum_vs_f64m4_f64m1_rm_tu(vfloat64m1_t vd,
+                                                 vfloat64m4_t vs2,
+                                                 vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f64m4_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m8_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f64m8_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredosum_vs_f64m8_f64m1_rm_tu(vfloat64m1_t vd,
+                                                 vfloat64m8_t vs2,
+                                                 vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f64m8_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16mf4_f16m1_rm_tum(vbool64_t vm, vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16mf4_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16mf4_f16m1_rm_tum(vbool64_t vm,
+                                                   vfloat16m1_t vd,
+                                                   vfloat16mf4_t vs2,
+                                                   vfloat16m1_t vs1,
+                                                   size_t vl) {
+  return __riscv_vfredosum_vs_f16mf4_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                  __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16mf2_f16m1_rm_tum(vbool32_t vm, vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16mf2_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16mf2_f16m1_rm_tum(vbool32_t vm,
+                                                   vfloat16m1_t vd,
+                                                   vfloat16mf2_t vs2,
+                                                   vfloat16m1_t vs1,
+                                                   size_t vl) {
+  return __riscv_vfredosum_vs_f16mf2_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                  __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m1_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16m1_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16m1_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd,
+                                                  vfloat16m1_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
  return __riscv_vfredosum_vs_f16m1_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m2_f16m1_rm_tum(vbool8_t vm, vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16m2_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16m2_f16m1_rm_tum(vbool8_t vm, vfloat16m1_t vd,
+                                                  vfloat16m2_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f16m2_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m4_f16m1_rm_tum(vbool4_t vm, vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16m4_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16m4_f16m1_rm_tum(vbool4_t vm, vfloat16m1_t vd,
+                                                  vfloat16m4_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f16m4_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfredosum_vs_f16m8_f16m1_rm_tum(vbool2_t vm, vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f16m8_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredosum_vs_f16m8_f16m1_rm_tum(vbool2_t vm, vfloat16m1_t vd,
+                                                  vfloat16m8_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f16m8_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32mf2_f32m1_rm_tum(vbool64_t vm, vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f32mf2_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredosum_vs_f32mf2_f32m1_rm_tum(vbool64_t vm,
+                                                   vfloat32m1_t vd,
+                                                   vfloat32mf2_t vs2,
+                                                   vfloat32m1_t vs1,
+                                                   size_t vl) {
+  return __riscv_vfredosum_vs_f32mf2_f32m1_rm_tum(vm, vd, vs2, vs1,
+                                                  __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m1_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f32m1_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredosum_vs_f32m1_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd,
+                                                  vfloat32m1_t vs2,
+                                                  vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f32m1_f32m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m2_f32m1_rm_tum(vbool16_t vm, vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f32m2_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredosum_vs_f32m2_f32m1_rm_tum(vbool16_t vm, vfloat32m1_t vd,
+                                                  vfloat32m2_t vs2,
+                                                  vfloat32m1_t vs1, size_t vl) {
  return __riscv_vfredosum_vs_f32m2_f32m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m4_f32m1_rm_tum(vbool8_t vm, vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f32m4_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredosum_vs_f32m4_f32m1_rm_tum(vbool8_t vm, vfloat32m1_t vd,
+                                                  vfloat32m4_t vs2,
+                                                  vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f32m4_f32m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfredosum_vs_f32m8_f32m1_rm_tum(vbool4_t vm, vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f32m8_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredosum_vs_f32m8_f32m1_rm_tum(vbool4_t vm, vfloat32m1_t vd,
+                                                  vfloat32m8_t vs2,
+                                                  vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f32m8_f32m1_rm_tum(vm, vd, vs2, vs1,
+                                                  __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m1_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f64m1_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredosum_vs_f64m1_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd,
+                                                  vfloat64m1_t vs2,
+                                                  vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f64m1_f64m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m2_f64m1_rm_tum(vbool32_t vm, vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f64m2_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredosum_vs_f64m2_f64m1_rm_tum(vbool32_t vm, vfloat64m1_t vd,
+                                                  vfloat64m2_t vs2,
+                                                  vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f64m2_f64m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m4_f64m1_rm_tum(vbool16_t vm, vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f64m4_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredosum_vs_f64m4_f64m1_rm_tum(vbool16_t vm, vfloat64m1_t vd,
+                                                  vfloat64m4_t vs2,
+                                                  vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f64m4_f64m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m1_t test_vfredosum_vs_f64m8_f64m1_rm_tum(vbool8_t vm, vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredosum_vs_f64m8_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredosum_vs_f64m8_f64m1_rm_tum(vbool8_t vm, vfloat64m1_t vd,
+                                                  vfloat64m8_t vs2,
+                                                  vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredosum_vs_f64m8_f64m1_rm_tum(vm, vd, vs2, vs1,
+                                                  __RISCV_FRM_RNE, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfredusum.c b/auto-generated/policy_funcs/llvm-api-tests/vfredusum.c
index a06a976d6..43f54df5f 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfredusum.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfredusum.c
@@ -6,242 +6,386 @@
 
 #include <riscv_vector.h>
 
-vfloat16m1_t test_vfredusum_vs_f16mf4_f16m1_tu(vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16mf4_f16m1_tu(vfloat16m1_t vd,
+                                               vfloat16mf4_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16mf4_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16mf2_f16m1_tu(vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16mf2_f16m1_tu(vfloat16m1_t vd,
+                                               vfloat16mf2_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16mf2_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m1_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16m1_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16m1_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m2_f16m1_tu(vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16m2_f16m1_tu(vfloat16m1_t vd, vfloat16m2_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16m2_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m4_f16m1_tu(vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16m4_f16m1_tu(vfloat16m1_t vd, vfloat16m4_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16m4_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m8_f16m1_tu(vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16m8_f16m1_tu(vfloat16m1_t vd, vfloat16m8_t vs2,
+                                              vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16m8_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32mf2_f32m1_tu(vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredusum_vs_f32mf2_f32m1_tu(vfloat32m1_t vd,
+                                               vfloat32mf2_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f32mf2_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m1_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredusum_vs_f32m1_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f32m1_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m2_f32m1_tu(vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredusum_vs_f32m2_f32m1_tu(vfloat32m1_t vd, vfloat32m2_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f32m2_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m4_f32m1_tu(vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredusum_vs_f32m4_f32m1_tu(vfloat32m1_t vd, vfloat32m4_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f32m4_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m8_f32m1_tu(vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredusum_vs_f32m8_f32m1_tu(vfloat32m1_t vd, vfloat32m8_t vs2,
+                                              vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f32m8_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m1_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredusum_vs_f64m1_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f64m1_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m2_f64m1_tu(vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredusum_vs_f64m2_f64m1_tu(vfloat64m1_t vd, vfloat64m2_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f64m2_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m4_f64m1_tu(vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredusum_vs_f64m4_f64m1_tu(vfloat64m1_t vd, vfloat64m4_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f64m4_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m8_f64m1_tu(vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredusum_vs_f64m8_f64m1_tu(vfloat64m1_t vd, vfloat64m8_t vs2,
+                                              vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f64m8_f64m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16mf4_f16m1_tum(vbool64_t vm, vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16mf4_f16m1_tum(vbool64_t vm, vfloat16m1_t vd,
+                                                vfloat16mf4_t vs2,
+                                                vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16mf4_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16mf2_f16m1_tum(vbool32_t vm, vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16mf2_f16m1_tum(vbool32_t vm, vfloat16m1_t vd,
+                                                vfloat16mf2_t vs2,
+                                                vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16mf2_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m1_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16m1_f16m1_tum(vbool16_t vm, vfloat16m1_t vd,
+                                               vfloat16m1_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16m1_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m2_f16m1_tum(vbool8_t vm, vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16m2_f16m1_tum(vbool8_t vm, vfloat16m1_t vd,
+                                               vfloat16m2_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16m2_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m4_f16m1_tum(vbool4_t vm, vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16m4_f16m1_tum(vbool4_t vm, vfloat16m1_t vd,
+                                               vfloat16m4_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16m4_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m8_f16m1_tum(vbool2_t vm, vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfredusum_vs_f16m8_f16m1_tum(vbool2_t vm, vfloat16m1_t vd,
+                                               vfloat16m8_t vs2,
+                                               vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f16m8_f16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32mf2_f32m1_tum(vbool64_t vm, vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredusum_vs_f32mf2_f32m1_tum(vbool64_t vm, vfloat32m1_t vd,
+                                                vfloat32mf2_t vs2,
+                                                vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f32mf2_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m1_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredusum_vs_f32m1_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+                                               vfloat32m1_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f32m1_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m2_f32m1_tum(vbool16_t vm, vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredusum_vs_f32m2_f32m1_tum(vbool16_t vm, vfloat32m1_t vd,
+                                               vfloat32m2_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f32m2_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m4_f32m1_tum(vbool8_t vm, vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredusum_vs_f32m4_f32m1_tum(vbool8_t vm, vfloat32m1_t vd,
+                                               vfloat32m4_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f32m4_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m8_f32m1_tum(vbool4_t vm, vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat32m1_t test_vfredusum_vs_f32m8_f32m1_tum(vbool4_t vm, vfloat32m1_t vd,
+                                               vfloat32m8_t vs2,
+                                               vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f32m8_f32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m1_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredusum_vs_f64m1_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+                                               vfloat64m1_t vs2,
+                                               vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f64m1_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m2_f64m1_tum(vbool32_t vm, vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredusum_vs_f64m2_f64m1_tum(vbool32_t vm, vfloat64m1_t vd,
+                                               vfloat64m2_t vs2,
+                                               vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f64m2_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m4_f64m1_tum(vbool16_t vm, vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredusum_vs_f64m4_f64m1_tum(vbool16_t vm, vfloat64m1_t vd,
+                                               vfloat64m4_t vs2,
+                                               vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f64m4_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m8_f64m1_tum(vbool8_t vm, vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) {
+vfloat64m1_t test_vfredusum_vs_f64m8_f64m1_tum(vbool8_t vm, vfloat64m1_t vd,
+                                               vfloat64m8_t vs2,
+                                               vfloat64m1_t vs1, size_t vl) {
   return __riscv_vfredusum_vs_f64m8_f64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16mf4_f16m1_rm_tu(vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16mf4_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16mf4_f16m1_rm_tu(vfloat16m1_t vd,
+                                                  vfloat16mf4_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f16mf4_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                 vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16mf2_f16m1_rm_tu(vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16mf2_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16mf2_f16m1_rm_tu(vfloat16m1_t vd,
+                                                  vfloat16mf2_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f16mf2_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                 vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m1_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16m1_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16m1_f16m1_rm_tu(vfloat16m1_t vd,
+                                                 vfloat16m1_t vs2,
+                                                 vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f16m1_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m2_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16m2_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16m2_f16m1_rm_tu(vfloat16m1_t vd,
+                                                 vfloat16m2_t vs2,
+                                                 vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f16m2_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m4_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16m4_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16m4_f16m1_rm_tu(vfloat16m1_t vd,
+                                                 vfloat16m4_t vs2,
+                                                 vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f16m4_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m8_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16m8_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16m8_f16m1_rm_tu(vfloat16m1_t vd,
+                                                 vfloat16m8_t vs2,
+                                                 vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f16m8_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32mf2_f32m1_rm_tu(vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f32mf2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredusum_vs_f32mf2_f32m1_rm_tu(vfloat32m1_t vd,
+                                                  vfloat32mf2_t vs2,
+                                                  vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f32mf2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                 vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m1_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f32m1_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredusum_vs_f32m1_f32m1_rm_tu(vfloat32m1_t vd,
+                                                 vfloat32m1_t vs2,
+                                                 vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f32m1_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m2_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f32m2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredusum_vs_f32m2_f32m1_rm_tu(vfloat32m1_t vd,
+                                                 vfloat32m2_t vs2,
+                                                 vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f32m2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m4_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f32m4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredusum_vs_f32m4_f32m1_rm_tu(vfloat32m1_t vd,
+                                                 vfloat32m4_t vs2,
+                                                 vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f32m4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m8_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f32m8_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredusum_vs_f32m8_f32m1_rm_tu(vfloat32m1_t vd,
+                                                 vfloat32m8_t vs2,
+                                                 vfloat32m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f32m8_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m1_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f64m1_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredusum_vs_f64m1_f64m1_rm_tu(vfloat64m1_t vd,
+                                                 vfloat64m1_t vs2,
+                                                 vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f64m1_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m2_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f64m2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredusum_vs_f64m2_f64m1_rm_tu(vfloat64m1_t vd,
+                                                 vfloat64m2_t vs2,
+                                                 vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f64m2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m4_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f64m4_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredusum_vs_f64m4_f64m1_rm_tu(vfloat64m1_t vd,
+                                                 vfloat64m4_t vs2,
+                                                 vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f64m4_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat64m1_t test_vfredusum_vs_f64m8_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f64m8_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat64m1_t test_vfredusum_vs_f64m8_f64m1_rm_tu(vfloat64m1_t vd,
+                                                 vfloat64m8_t vs2,
+                                                 vfloat64m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f64m8_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE,
+                                                vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16mf4_f16m1_rm_tum(vbool64_t vm, vfloat16m1_t vd, vfloat16mf4_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16mf4_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16mf4_f16m1_rm_tum(vbool64_t vm,
+                                                   vfloat16m1_t vd,
+                                                   vfloat16mf4_t vs2,
+                                                   vfloat16m1_t vs1,
+                                                   size_t vl) {
+  return __riscv_vfredusum_vs_f16mf4_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                  __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16mf2_f16m1_rm_tum(vbool32_t vm, vfloat16m1_t vd, vfloat16mf2_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16mf2_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16mf2_f16m1_rm_tum(vbool32_t vm,
+                                                   vfloat16m1_t vd,
+                                                   vfloat16mf2_t vs2,
+                                                   vfloat16m1_t vs1,
+                                                   size_t vl) {
+  return __riscv_vfredusum_vs_f16mf2_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                  __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m1_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16m1_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16m1_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd,
+                                                  vfloat16m1_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f16m1_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m2_f16m1_rm_tum(vbool8_t vm, vfloat16m1_t vd, vfloat16m2_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16m2_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16m2_f16m1_rm_tum(vbool8_t vm, vfloat16m1_t vd,
+                                                  vfloat16m2_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f16m2_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m4_f16m1_rm_tum(vbool4_t vm, vfloat16m1_t vd, vfloat16m4_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16m4_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16m4_f16m1_rm_tum(vbool4_t vm, vfloat16m1_t vd,
+                                                  vfloat16m4_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f16m4_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat16m1_t test_vfredusum_vs_f16m8_f16m1_rm_tum(vbool2_t vm, vfloat16m1_t vd, vfloat16m8_t vs2, vfloat16m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f16m8_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat16m1_t test_vfredusum_vs_f16m8_f16m1_rm_tum(vbool2_t vm, vfloat16m1_t vd,
+                                                  vfloat16m8_t vs2,
+                                                  vfloat16m1_t vs1, size_t vl) {
+  return __riscv_vfredusum_vs_f16m8_f16m1_rm_tum(vm, vd, vs2, vs1,
+                                                 __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32mf2_f32m1_rm_tum(vbool64_t vm, vfloat32m1_t vd, vfloat32mf2_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f32mf2_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t test_vfredusum_vs_f32mf2_f32m1_rm_tum(vbool64_t vm,
+                                                   vfloat32m1_t vd,
+                                                   vfloat32mf2_t vs2,
+                                                   vfloat32m1_t vs1,
+                                                   size_t vl) {
+  return __riscv_vfredusum_vs_f32mf2_f32m1_rm_tum(vm, vd, vs2, vs1,
+                                                  __RISCV_FRM_RNE, vl);
 }
 
-vfloat32m1_t test_vfredusum_vs_f32m1_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
-  return __riscv_vfredusum_vs_f32m1_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32m1_t
test_vfredusum_vs_f32m1_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { + return __riscv_vfredusum_vs_f32m1_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfredusum_vs_f32m2_f32m1_rm_tum(vbool16_t vm, vfloat32m1_t vd, vfloat32m2_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfredusum_vs_f32m2_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfredusum_vs_f32m2_f32m1_rm_tum(vbool16_t vm, vfloat32m1_t vd, + vfloat32m2_t vs2, + vfloat32m1_t vs1, size_t vl) { + return __riscv_vfredusum_vs_f32m2_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfredusum_vs_f32m4_f32m1_rm_tum(vbool8_t vm, vfloat32m1_t vd, vfloat32m4_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfredusum_vs_f32m4_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfredusum_vs_f32m4_f32m1_rm_tum(vbool8_t vm, vfloat32m1_t vd, + vfloat32m4_t vs2, + vfloat32m1_t vs1, size_t vl) { + return __riscv_vfredusum_vs_f32m4_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfredusum_vs_f32m8_f32m1_rm_tum(vbool4_t vm, vfloat32m1_t vd, vfloat32m8_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfredusum_vs_f32m8_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfredusum_vs_f32m8_f32m1_rm_tum(vbool4_t vm, vfloat32m1_t vd, + vfloat32m8_t vs2, + vfloat32m1_t vs1, size_t vl) { + return __riscv_vfredusum_vs_f32m8_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfredusum_vs_f64m1_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfredusum_vs_f64m1_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfredusum_vs_f64m1_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfredusum_vs_f64m1_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfredusum_vs_f64m2_f64m1_rm_tum(vbool32_t vm, vfloat64m1_t vd, vfloat64m2_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfredusum_vs_f64m2_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfredusum_vs_f64m2_f64m1_rm_tum(vbool32_t vm, vfloat64m1_t vd, + vfloat64m2_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfredusum_vs_f64m2_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfredusum_vs_f64m4_f64m1_rm_tum(vbool16_t vm, vfloat64m1_t vd, vfloat64m4_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfredusum_vs_f64m4_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfredusum_vs_f64m4_f64m1_rm_tum(vbool16_t vm, vfloat64m1_t vd, + vfloat64m4_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfredusum_vs_f64m4_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfredusum_vs_f64m8_f64m1_rm_tum(vbool8_t vm, vfloat64m1_t vd, vfloat64m8_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfredusum_vs_f64m8_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfredusum_vs_f64m8_f64m1_rm_tum(vbool8_t vm, vfloat64m1_t vd, + vfloat64m8_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfredusum_vs_f64m8_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfrsqrt7.c b/auto-generated/policy_funcs/llvm-api-tests/vfrsqrt7.c index d3cf59cb6..2f74a970c 100644 --- 
a/auto-generated/policy_funcs/llvm-api-tests/vfrsqrt7.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfrsqrt7.c @@ -6,242 +6,302 @@ #include -vfloat16mf4_t test_vfrsqrt7_v_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfrsqrt7_v_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f16mf4_tu(vd, vs2, vl); } -vfloat16mf2_t test_vfrsqrt7_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfrsqrt7_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f16mf2_tu(vd, vs2, vl); } -vfloat16m1_t test_vfrsqrt7_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfrsqrt7_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f16m1_tu(vd, vs2, vl); } -vfloat16m2_t test_vfrsqrt7_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfrsqrt7_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f16m2_tu(vd, vs2, vl); } -vfloat16m4_t test_vfrsqrt7_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfrsqrt7_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f16m4_tu(vd, vs2, vl); } -vfloat16m8_t test_vfrsqrt7_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfrsqrt7_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f16m8_tu(vd, vs2, vl); } -vfloat32mf2_t test_vfrsqrt7_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfrsqrt7_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f32mf2_tu(vd, vs2, vl); } -vfloat32m1_t test_vfrsqrt7_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfrsqrt7_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f32m1_tu(vd, vs2, vl); } -vfloat32m2_t test_vfrsqrt7_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfrsqrt7_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f32m2_tu(vd, vs2, vl); } -vfloat32m4_t test_vfrsqrt7_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfrsqrt7_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f32m4_tu(vd, vs2, vl); } -vfloat32m8_t test_vfrsqrt7_v_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfrsqrt7_v_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f32m8_tu(vd, vs2, vl); } -vfloat64m1_t test_vfrsqrt7_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfrsqrt7_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f64m1_tu(vd, vs2, vl); } -vfloat64m2_t test_vfrsqrt7_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfrsqrt7_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f64m2_tu(vd, vs2, vl); } -vfloat64m4_t test_vfrsqrt7_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfrsqrt7_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfrsqrt7_v_f64m4_tu(vd, vs2, vl); } -vfloat64m8_t test_vfrsqrt7_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfrsqrt7_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return 
__riscv_vfrsqrt7_v_f64m8_tu(vd, vs2, vl); } -vfloat16mf4_t test_vfrsqrt7_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfrsqrt7_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16mf4_tum(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfrsqrt7_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfrsqrt7_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16mf2_tum(vm, vd, vs2, vl); } -vfloat16m1_t test_vfrsqrt7_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfrsqrt7_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m1_tum(vm, vd, vs2, vl); } -vfloat16m2_t test_vfrsqrt7_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfrsqrt7_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m2_tum(vm, vd, vs2, vl); } -vfloat16m4_t test_vfrsqrt7_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfrsqrt7_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m4_tum(vm, vd, vs2, vl); } -vfloat16m8_t test_vfrsqrt7_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfrsqrt7_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m8_tum(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfrsqrt7_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfrsqrt7_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32mf2_tum(vm, vd, vs2, vl); } -vfloat32m1_t test_vfrsqrt7_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfrsqrt7_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m1_tum(vm, vd, vs2, vl); } -vfloat32m2_t test_vfrsqrt7_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfrsqrt7_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m2_tum(vm, vd, vs2, vl); } -vfloat32m4_t test_vfrsqrt7_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfrsqrt7_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m4_tum(vm, vd, vs2, vl); } -vfloat32m8_t test_vfrsqrt7_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfrsqrt7_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m8_tum(vm, vd, vs2, vl); } -vfloat64m1_t test_vfrsqrt7_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfrsqrt7_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m1_tum(vm, vd, vs2, vl); } -vfloat64m2_t test_vfrsqrt7_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfrsqrt7_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m2_tum(vm, vd, vs2, vl); } -vfloat64m4_t test_vfrsqrt7_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t 
vs2, size_t vl) { +vfloat64m4_t test_vfrsqrt7_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m4_tum(vm, vd, vs2, vl); } -vfloat64m8_t test_vfrsqrt7_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfrsqrt7_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m8_tum(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfrsqrt7_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfrsqrt7_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16mf4_tumu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfrsqrt7_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfrsqrt7_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16mf2_tumu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfrsqrt7_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfrsqrt7_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m1_tumu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfrsqrt7_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfrsqrt7_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m2_tumu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfrsqrt7_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfrsqrt7_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m4_tumu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfrsqrt7_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfrsqrt7_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m8_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfrsqrt7_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfrsqrt7_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfrsqrt7_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfrsqrt7_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfrsqrt7_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfrsqrt7_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfrsqrt7_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfrsqrt7_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfrsqrt7_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfrsqrt7_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m8_tumu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfrsqrt7_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfrsqrt7_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + 
vfloat64m1_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m1_tumu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfrsqrt7_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfrsqrt7_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m2_tumu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfrsqrt7_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfrsqrt7_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m4_tumu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfrsqrt7_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfrsqrt7_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m8_tumu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfrsqrt7_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfrsqrt7_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfrsqrt7_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfrsqrt7_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfrsqrt7_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfrsqrt7_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfrsqrt7_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfrsqrt7_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfrsqrt7_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfrsqrt7_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m4_mu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfrsqrt7_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfrsqrt7_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f16m8_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfrsqrt7_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfrsqrt7_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfrsqrt7_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfrsqrt7_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfrsqrt7_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfrsqrt7_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfrsqrt7_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfrsqrt7_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m4_mu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfrsqrt7_v_f32m8_mu(vbool4_t vm, vfloat32m8_t 
vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfrsqrt7_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f32m8_mu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfrsqrt7_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfrsqrt7_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m1_mu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfrsqrt7_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfrsqrt7_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m2_mu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfrsqrt7_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfrsqrt7_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m4_mu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfrsqrt7_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfrsqrt7_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfrsqrt7_v_f64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfrsub.c b/auto-generated/policy_funcs/llvm-api-tests/vfrsub.c index 4d4f7412e..5f1c1ded8 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfrsub.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfrsub.c @@ -6,482 +6,675 @@ #include -vfloat16mf4_t test_vfrsub_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrsub_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16mf4_tu(vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfrsub_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrsub_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16mf2_tu(vd, vs2, rs1, vl); } -vfloat16m1_t test_vfrsub_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrsub_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16m1_tu(vd, vs2, rs1, vl); } -vfloat16m2_t test_vfrsub_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrsub_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16m2_tu(vd, vs2, rs1, vl); } -vfloat16m4_t test_vfrsub_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrsub_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16m4_tu(vd, vs2, rs1, vl); } -vfloat16m8_t test_vfrsub_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrsub_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfrsub_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrsub_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfrsub_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfrsub_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrsub_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return 
__riscv_vfrsub_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfrsub_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrsub_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfrsub_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrsub_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfrsub_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrsub_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfrsub_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrsub_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfrsub_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrsub_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfrsub_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrsub_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfrsub_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrsub_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfrsub_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrsub_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfrsub_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrsub_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfrsub_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrsub_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfrsub_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrsub_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfrsub_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrsub_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfrsub_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrsub_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return 
__riscv_vfrsub_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfrsub_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrsub_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfrsub_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrsub_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfrsub_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrsub_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfrsub_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrsub_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfrsub_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrsub_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfrsub_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrsub_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfrsub_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrsub_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfrsub_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrsub_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfrsub_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrsub_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfrsub_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrsub_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfrsub_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrsub_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfrsub_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrsub_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } 
-vfloat16m2_t test_vfrsub_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrsub_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfrsub_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrsub_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfrsub_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrsub_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfrsub_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrsub_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfrsub_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrsub_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfrsub_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrsub_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfrsub_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrsub_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfrsub_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrsub_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfrsub_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrsub_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfrsub_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrsub_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfrsub_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrsub_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfrsub_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrsub_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t 
test_vfrsub_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrsub_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfrsub_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrsub_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfrsub_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrsub_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfrsub_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrsub_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfrsub_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrsub_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfrsub_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrsub_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfrsub_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrsub_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfrsub_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrsub_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfrsub_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrsub_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfrsub_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrsub_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfrsub_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrsub_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfrsub_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrsub_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfrsub_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, 
size_t vl) { +vfloat64m2_t test_vfrsub_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfrsub_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrsub_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m4_mu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfrsub_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrsub_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m8_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfrsub_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrsub_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16mf4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfrsub_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrsub_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfrsub_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrsub_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfrsub_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrsub_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfrsub_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrsub_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfrsub_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrsub_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfrsub_vf_f16m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfrsub_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrsub_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfrsub_vf_f32mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfrsub_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrsub_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfrsub_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrsub_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfrsub_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrsub_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m4_rm_tu(vd, 
vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfrsub_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrsub_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfrsub_vf_f32m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfrsub_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrsub_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfrsub_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrsub_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfrsub_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrsub_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfrsub_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrsub_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfrsub_vf_f64m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfrsub_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrsub_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16mf4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfrsub_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrsub_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfrsub_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrsub_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfrsub_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrsub_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfrsub_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrsub_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfrsub_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrsub_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfrsub_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrsub_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, 
+ vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfrsub_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrsub_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfrsub_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrsub_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfrsub_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrsub_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfrsub_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrsub_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfrsub_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrsub_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfrsub_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrsub_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfrsub_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrsub_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfrsub_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrsub_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfrsub_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { - return __riscv_vfrsub_vf_f16mf4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); +vfloat16mf4_t test_vfrsub_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { + return __riscv_vfrsub_vf_f16mf4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, + vl); } -vfloat16mf2_t test_vfrsub_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { - return __riscv_vfrsub_vf_f16mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); +vfloat16mf2_t test_vfrsub_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { + return __riscv_vfrsub_vf_f16mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, + vl); } -vfloat16m1_t test_vfrsub_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, 
vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrsub_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfrsub_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrsub_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfrsub_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrsub_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfrsub_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrsub_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfrsub_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { - return __riscv_vfrsub_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfrsub_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { + return __riscv_vfrsub_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfrsub_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrsub_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfrsub_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrsub_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfrsub_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrsub_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfrsub_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrsub_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfrsub_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrsub_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfrsub_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrsub_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t 
test_vfrsub_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrsub_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfrsub_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrsub_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfrsub_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfrsub_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16mf4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfrsub_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfrsub_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfrsub_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfrsub_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfrsub_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfrsub_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfrsub_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfrsub_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfrsub_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfrsub_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfrsub_vf_f16m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfrsub_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfrsub_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfrsub_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfrsub_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfrsub_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfrsub_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfrsub_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, 
vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfrsub_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfrsub_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfrsub_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfrsub_vf_f32m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfrsub_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfrsub_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfrsub_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfrsub_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfrsub_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfrsub_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfrsub_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfrsub_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfrsub_vf_f64m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfsgnj.c b/auto-generated/policy_funcs/llvm-api-tests/vfsgnj.c index ad234465b..926e144d0 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfsgnj.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfsgnj.c @@ -6,482 +6,672 @@ #include <riscv_vector.h> -vfloat16mf4_t test_vfsgnj_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsgnj_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f16mf4_tu(vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsgnj_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsgnj_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnj_vf_f16mf4_tu(vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsgnj_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsgnj_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f16mf2_tu(vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsgnj_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsgnj_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnj_vf_f16mf2_tu(vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsgnj_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsgnj_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f16m1_tu(vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsgnj_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m1_t test_vfsgnj_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnj_vf_f16m1_tu(vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsgnj_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsgnj_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f16m2_tu(vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsgnj_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsgnj_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnj_vf_f16m2_tu(vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsgnj_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnj_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsgnj_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnj_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnj_vf_f16m4_tu(vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnj_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnj_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f16m8_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnj_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnj_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnj_vf_f16m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnj_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnj_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnj_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnj_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnj_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnj_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsgnj_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsgnj_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsgnj_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnj_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnj_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnj_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnj_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnj_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f32m4_tu(vd, vs2, vs1, vl); 
} -vfloat32m4_t test_vfsgnj_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnj_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnj_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnj_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnj_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsgnj_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnj_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnj_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnj_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnj_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnj_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnj_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnj_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnj_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnj_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnj_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnj_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsgnj_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsgnj_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsgnj_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnj_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsgnj_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsgnj_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfsgnj_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsgnj_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsgnj_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnj_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsgnj_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsgnj_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16mf4_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsgnj_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsgnj_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsgnj_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, 
vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsgnj_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsgnj_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsgnj_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsgnj_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsgnj_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsgnj_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsgnj_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsgnj_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsgnj_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsgnj_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsgnj_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsgnj_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnj_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsgnj_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnj_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnj_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnj_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnj_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnj_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnj_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnj_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnj_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnj_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnj_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnj_vv_f32m1_tum(vbool32_t 
vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnj_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsgnj_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsgnj_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsgnj_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnj_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnj_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnj_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnj_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnj_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsgnj_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnj_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnj_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnj_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnj_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsgnj_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnj_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnj_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnj_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnj_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfsgnj_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnj_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnj_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnj_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnj_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfsgnj_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnj_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, 
vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsgnj_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsgnj_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsgnj_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfsgnj_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsgnj_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsgnj_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsgnj_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsgnj_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfsgnj_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsgnj_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsgnj_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsgnj_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsgnj_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsgnj_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsgnj_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsgnj_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsgnj_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsgnj_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsgnj_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsgnj_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsgnj_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsgnj_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsgnj_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsgnj_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsgnj_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } 
-vfloat16m4_t test_vfsgnj_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnj_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsgnj_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnj_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnj_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnj_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnj_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnj_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnj_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnj_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnj_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnj_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnj_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnj_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnj_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsgnj_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsgnj_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsgnj_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnj_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnj_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnj_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnj_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnj_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsgnj_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnj_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return 
__riscv_vfsgnj_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnj_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnj_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnj_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsgnj_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnj_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnj_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnj_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnj_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnj_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnj_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnj_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnj_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnj_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnj_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnj_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsgnj_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsgnj_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsgnj_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnj_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsgnj_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsgnj_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsgnj_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsgnj_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnj_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsgnj_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsgnj_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsgnj_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsgnj_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, 
_Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsgnj_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsgnj_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsgnj_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsgnj_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsgnj_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsgnj_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsgnj_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsgnj_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsgnj_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsgnj_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsgnj_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsgnj_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsgnj_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnj_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsgnj_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnj_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnj_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnj_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnj_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnj_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnj_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnj_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnj_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnj_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnj_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + 
vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnj_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnj_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnj_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsgnj_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsgnj_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsgnj_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnj_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnj_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnj_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnj_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnj_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsgnj_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnj_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnj_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnj_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnj_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsgnj_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfsgnj_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnj_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnj_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnj_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnj_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfsgnj_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnj_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnj_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnj_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnj_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return 
__riscv_vfsgnj_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnj_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsgnj_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsgnj_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsgnj_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfsgnj_vf_f64m4_mu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsgnj_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsgnj_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsgnj_vv_f64m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsgnj_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsgnj_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfsgnj_vf_f64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfsgnjn.c b/auto-generated/policy_funcs/llvm-api-tests/vfsgnjn.c index 2bfd70a74..d1cb37e31 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfsgnjn.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfsgnjn.c @@ -6,482 +6,680 @@ #include <riscv_vector.h> -vfloat16mf4_t test_vfsgnjn_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsgnjn_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f16mf4_tu(vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsgnjn_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsgnjn_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnjn_vf_f16mf4_tu(vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsgnjn_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsgnjn_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f16mf2_tu(vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsgnjn_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsgnjn_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnjn_vf_f16mf2_tu(vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsgnjn_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsgnjn_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f16m1_tu(vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsgnjn_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsgnjn_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnjn_vf_f16m1_tu(vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsgnjn_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsgnjn_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f16m2_tu(vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsgnjn_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t
test_vfsgnjn_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnjn_vf_f16m2_tu(vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsgnjn_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnjn_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsgnjn_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnjn_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnjn_vf_f16m4_tu(vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnjn_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnjn_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f16m8_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnjn_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnjn_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnjn_vf_f16m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnjn_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnjn_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnjn_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnjn_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnjn_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnjn_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsgnjn_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsgnjn_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsgnjn_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnjn_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnjn_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnjn_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnjn_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnjn_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsgnjn_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnjn_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnjn_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnjn_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return 
__riscv_vfsgnjn_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnjn_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsgnjn_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnjn_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnjn_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnjn_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnjn_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnjn_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnjn_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnjn_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnjn_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnjn_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnjn_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnjn_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsgnjn_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsgnjn_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsgnjn_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnjn_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsgnjn_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsgnjn_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfsgnjn_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsgnjn_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsgnjn_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnjn_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsgnjn_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsgnjn_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16mf4_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsgnjn_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsgnjn_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsgnjn_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsgnjn_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsgnjn_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsgnjn_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, 
_Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsgnjn_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsgnjn_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsgnjn_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsgnjn_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsgnjn_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsgnjn_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsgnjn_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsgnjn_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsgnjn_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnjn_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsgnjn_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnjn_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnjn_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnjn_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnjn_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnjn_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnjn_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnjn_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnjn_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnjn_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnjn_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnjn_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsgnjn_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t 
test_vfsgnjn_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsgnjn_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnjn_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnjn_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnjn_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnjn_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnjn_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsgnjn_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnjn_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnjn_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnjn_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnjn_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsgnjn_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnjn_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnjn_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnjn_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnjn_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnjn_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnjn_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnjn_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnjn_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnjn_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsgnjn_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsgnjn_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { 
+vfloat64m4_t test_vfsgnjn_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsgnjn_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsgnjn_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsgnjn_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsgnjn_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsgnjn_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsgnjn_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsgnjn_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsgnjn_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsgnjn_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsgnjn_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsgnjn_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsgnjn_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsgnjn_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsgnjn_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsgnjn_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsgnjn_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsgnjn_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsgnjn_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsgnjn_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsgnjn_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsgnjn_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnjn_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m4_tumu(vm, vd, vs2, vs1, vl); } 
-vfloat16m4_t test_vfsgnjn_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnjn_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnjn_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnjn_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnjn_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnjn_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnjn_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnjn_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnjn_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnjn_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnjn_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnjn_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsgnjn_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsgnjn_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsgnjn_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnjn_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnjn_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnjn_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnjn_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnjn_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsgnjn_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnjn_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnjn_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnjn_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, 
+ size_t vl) { return __riscv_vfsgnjn_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnjn_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsgnjn_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnjn_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnjn_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnjn_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnjn_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnjn_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnjn_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnjn_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnjn_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnjn_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsgnjn_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsgnjn_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsgnjn_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsgnjn_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsgnjn_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsgnjn_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsgnjn_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsgnjn_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsgnjn_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsgnjn_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsgnjn_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsgnjn_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t 
test_vfsgnjn_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsgnjn_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsgnjn_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsgnjn_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsgnjn_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsgnjn_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsgnjn_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsgnjn_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsgnjn_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsgnjn_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsgnjn_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsgnjn_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnjn_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsgnjn_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnjn_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnjn_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnjn_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnjn_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnjn_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnjn_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnjn_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnjn_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnjn_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjn_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnjn_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, 
vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnjn_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsgnjn_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsgnjn_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsgnjn_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnjn_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnjn_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnjn_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnjn_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnjn_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsgnjn_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnjn_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnjn_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnjn_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnjn_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsgnjn_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjn_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnjn_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnjn_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnjn_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnjn_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfsgnjn_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnjn_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnjn_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsgnjn_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnjn_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnjn_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfsgnjn_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnjn_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { 
+vfloat64m4_t test_vfsgnjn_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                      vfloat64m4_t vs2, vfloat64m4_t vs1,
+                                      size_t vl) {
   return __riscv_vfsgnjn_vv_f64m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m4_t test_vfsgnjn_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) {
+vfloat64m4_t test_vfsgnjn_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                      vfloat64m4_t vs2, double rs1, size_t vl) {
   return __riscv_vfsgnjn_vf_f64m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vfloat64m8_t test_vfsgnjn_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) {
+vfloat64m8_t test_vfsgnjn_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                      vfloat64m8_t vs2, vfloat64m8_t vs1,
+                                      size_t vl) {
   return __riscv_vfsgnjn_vv_f64m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m8_t test_vfsgnjn_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) {
+vfloat64m8_t test_vfsgnjn_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                      vfloat64m8_t vs2, double rs1, size_t vl) {
   return __riscv_vfsgnjn_vf_f64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfsgnjx.c b/auto-generated/policy_funcs/llvm-api-tests/vfsgnjx.c
index 581eb128b..63e290da2 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfsgnjx.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfsgnjx.c
@@ -6,482 +6,680 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4_t test_vfsgnjx_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat16mf4_t test_vfsgnjx_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2,
+                                        vfloat16mf4_t vs1, size_t vl) {
   return __riscv_vfsgnjx_vv_f16mf4_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16mf4_t test_vfsgnjx_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat16mf4_t test_vfsgnjx_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2,
+                                        _Float16 rs1, size_t vl) {
   return __riscv_vfsgnjx_vf_f16mf4_tu(vd, vs2, rs1, vl);
 }
 
-vfloat16mf2_t test_vfsgnjx_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat16mf2_t test_vfsgnjx_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2,
+                                        vfloat16mf2_t vs1, size_t vl) {
   return __riscv_vfsgnjx_vv_f16mf2_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16mf2_t test_vfsgnjx_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat16mf2_t test_vfsgnjx_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2,
+                                        _Float16 rs1, size_t vl) {
   return __riscv_vfsgnjx_vf_f16mf2_tu(vd, vs2, rs1, vl);
 }
 
-vfloat16m1_t test_vfsgnjx_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfsgnjx_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+                                      vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfsgnjx_vv_f16m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m1_t test_vfsgnjx_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m1_t test_vfsgnjx_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+                                      _Float16 rs1, size_t vl) {
   return __riscv_vfsgnjx_vf_f16m1_tu(vd, vs2, rs1, vl);
 }
 
-vfloat16m2_t test_vfsgnjx_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat16m2_t test_vfsgnjx_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2,
+                                      vfloat16m2_t vs1, size_t vl) {
   return __riscv_vfsgnjx_vv_f16m2_tu(vd, vs2, vs1, vl);
 }
 
-vfloat16m2_t test_vfsgnjx_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m2_t test_vfsgnjx_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2,
+                                      _Float16 rs1, size_t vl) {
   return __riscv_vfsgnjx_vf_f16m2_tu(vd, vs2, rs1, vl);
 }
 
-vfloat16m4_t
test_vfsgnjx_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnjx_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfsgnjx_vv_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsgnjx_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnjx_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnjx_vf_f16m4_tu(vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnjx_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnjx_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfsgnjx_vv_f16m8_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnjx_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnjx_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsgnjx_vf_f16m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnjx_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnjx_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfsgnjx_vv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnjx_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnjx_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnjx_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnjx_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfsgnjx_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsgnjx_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsgnjx_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsgnjx_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnjx_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfsgnjx_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnjx_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnjx_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnjx_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnjx_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfsgnjx_vv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsgnjx_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnjx_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnjx_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnjx_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vfsgnjx_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnjx_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t 
test_vfsgnjx_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnjx_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnjx_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfsgnjx_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnjx_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnjx_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnjx_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnjx_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnjx_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfsgnjx_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnjx_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnjx_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnjx_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnjx_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsgnjx_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return __riscv_vfsgnjx_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsgnjx_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsgnjx_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnjx_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsgnjx_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsgnjx_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfsgnjx_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsgnjx_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsgnjx_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfsgnjx_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsgnjx_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsgnjx_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16mf4_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsgnjx_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsgnjx_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsgnjx_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsgnjx_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsgnjx_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsgnjx_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsgnjx_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, 
vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsgnjx_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsgnjx_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsgnjx_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsgnjx_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsgnjx_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsgnjx_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsgnjx_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsgnjx_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnjx_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsgnjx_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnjx_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnjx_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnjx_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnjx_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnjx_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnjx_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnjx_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnjx_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnjx_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnjx_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnjx_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsgnjx_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsgnjx_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t 
test_vfsgnjx_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnjx_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnjx_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnjx_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnjx_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnjx_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsgnjx_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnjx_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnjx_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnjx_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnjx_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsgnjx_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnjx_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnjx_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnjx_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnjx_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnjx_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnjx_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnjx_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnjx_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnjx_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsgnjx_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsgnjx_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsgnjx_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } 
-vfloat64m8_t test_vfsgnjx_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsgnjx_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsgnjx_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsgnjx_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsgnjx_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsgnjx_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsgnjx_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsgnjx_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsgnjx_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsgnjx_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsgnjx_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsgnjx_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsgnjx_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsgnjx_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsgnjx_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsgnjx_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsgnjx_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsgnjx_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsgnjx_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsgnjx_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsgnjx_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnjx_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsgnjx_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnjx_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, 
+ vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnjx_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnjx_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnjx_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnjx_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnjx_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnjx_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnjx_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnjx_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnjx_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnjx_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsgnjx_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsgnjx_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsgnjx_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnjx_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnjx_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnjx_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnjx_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnjx_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsgnjx_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnjx_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnjx_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnjx_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnjx_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t 
vl) { +vfloat32m8_t test_vfsgnjx_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnjx_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnjx_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnjx_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnjx_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnjx_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnjx_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnjx_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnjx_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnjx_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsgnjx_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsgnjx_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsgnjx_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsgnjx_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsgnjx_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsgnjx_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsgnjx_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsgnjx_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsgnjx_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsgnjx_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsgnjx_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsgnjx_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsgnjx_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t 
test_vfsgnjx_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsgnjx_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsgnjx_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsgnjx_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsgnjx_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsgnjx_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsgnjx_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsgnjx_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsgnjx_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsgnjx_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsgnjx_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsgnjx_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsgnjx_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsgnjx_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsgnjx_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsgnjx_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsgnjx_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsgnjx_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsgnjx_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsgnjx_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsgnjx_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsgnjx_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsgnjx_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsgnjx_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsgnjx_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m1_mu(vm, vd, 
vs2, vs1, vl); } -vfloat32m1_t test_vfsgnjx_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsgnjx_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsgnjx_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsgnjx_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsgnjx_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsgnjx_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsgnjx_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsgnjx_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsgnjx_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsgnjx_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsgnjx_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsgnjx_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsgnjx_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsgnjx_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfsgnjx_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsgnjx_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsgnjx_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsgnjx_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsgnjx_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfsgnjx_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsgnjx_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsgnjx_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsgnjx_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsgnjx_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfsgnjx_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsgnjx_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsgnjx_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsgnjx_vv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t 
test_vfsgnjx_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) {
+vfloat64m4_t test_vfsgnjx_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                      vfloat64m4_t vs2, double rs1, size_t vl) {
   return __riscv_vfsgnjx_vf_f64m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vfloat64m8_t test_vfsgnjx_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) {
+vfloat64m8_t test_vfsgnjx_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                      vfloat64m8_t vs2, vfloat64m8_t vs1,
+                                      size_t vl) {
   return __riscv_vfsgnjx_vv_f64m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vfloat64m8_t test_vfsgnjx_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) {
+vfloat64m8_t test_vfsgnjx_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                      vfloat64m8_t vs2, double rs1, size_t vl) {
   return __riscv_vfsgnjx_vf_f64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfslide1down.c b/auto-generated/policy_funcs/llvm-api-tests/vfslide1down.c
index efddd128a..c074c1c6f 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfslide1down.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfslide1down.c
@@ -6,242 +6,350 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4_t test_vfslide1down_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat16mf4_t test_vfslide1down_vf_f16mf4_tu(vfloat16mf4_t vd,
+                                             vfloat16mf4_t vs2, _Float16 rs1,
+                                             size_t vl) {
   return __riscv_vfslide1down_vf_f16mf4_tu(vd, vs2, rs1, vl);
 }
 
-vfloat16mf2_t test_vfslide1down_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat16mf2_t test_vfslide1down_vf_f16mf2_tu(vfloat16mf2_t vd,
+                                             vfloat16mf2_t vs2, _Float16 rs1,
+                                             size_t vl) {
   return __riscv_vfslide1down_vf_f16mf2_tu(vd, vs2, rs1, vl);
 }
 
-vfloat16m1_t test_vfslide1down_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m1_t test_vfslide1down_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+                                           _Float16 rs1, size_t vl) {
   return __riscv_vfslide1down_vf_f16m1_tu(vd, vs2, rs1, vl);
 }
 
-vfloat16m2_t test_vfslide1down_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m2_t test_vfslide1down_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2,
+                                           _Float16 rs1, size_t vl) {
   return __riscv_vfslide1down_vf_f16m2_tu(vd, vs2, rs1, vl);
 }
 
-vfloat16m4_t test_vfslide1down_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m4_t test_vfslide1down_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2,
+                                           _Float16 rs1, size_t vl) {
   return __riscv_vfslide1down_vf_f16m4_tu(vd, vs2, rs1, vl);
 }
 
-vfloat16m8_t test_vfslide1down_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m8_t test_vfslide1down_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2,
+                                           _Float16 rs1, size_t vl) {
   return __riscv_vfslide1down_vf_f16m8_tu(vd, vs2, rs1, vl);
 }
 
-vfloat32mf2_t test_vfslide1down_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat32mf2_t test_vfslide1down_vf_f32mf2_tu(vfloat32mf2_t vd,
+                                             vfloat32mf2_t vs2, float rs1,
+                                             size_t vl) {
   return __riscv_vfslide1down_vf_f32mf2_tu(vd, vs2, rs1, vl);
 }
 
-vfloat32m1_t test_vfslide1down_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat32m1_t test_vfslide1down_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2,
+                                           float rs1, size_t vl) {
   return __riscv_vfslide1down_vf_f32m1_tu(vd, vs2, rs1, vl);
 }
 
-vfloat32m2_t test_vfslide1down_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat32m2_t
test_vfslide1down_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfslide1down_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfslide1down_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfslide1down_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfslide1down_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfslide1down_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfslide1down_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfslide1down_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfslide1down_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfslide1down_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfslide1down_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfslide1down_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfslide1down_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfslide1down_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfslide1down_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfslide1down_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfslide1down_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfslide1down_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfslide1down_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfslide1down_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfslide1down_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfslide1down_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfslide1down_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfslide1down_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfslide1down_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfslide1down_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfslide1down_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfslide1down_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfslide1down_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfslide1down_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfslide1down_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfslide1down_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + 
vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfslide1down_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfslide1down_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfslide1down_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfslide1down_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfslide1down_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfslide1down_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfslide1down_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfslide1down_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfslide1down_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfslide1down_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfslide1down_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfslide1down_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1down_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfslide1down_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfslide1down_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1down_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfslide1down_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfslide1down_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1down_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfslide1down_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfslide1down_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1down_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfslide1down_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfslide1down_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfslide1down_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfslide1down_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t 
test_vfslide1down_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfslide1down_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfslide1down_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfslide1down_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfslide1down_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfslide1down_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfslide1down_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfslide1down_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfslide1down_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfslide1down_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfslide1down_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfslide1down_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfslide1down_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfslide1down_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfslide1down_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfslide1down_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfslide1down_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfslide1down_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfslide1down_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfslide1down_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1down_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfslide1down_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfslide1down_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1down_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfslide1down_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t 
test_vfslide1down_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1down_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfslide1down_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfslide1down_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1down_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfslide1down_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfslide1down_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfslide1down_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfslide1down_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfslide1down_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfslide1down_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfslide1down_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfslide1down_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfslide1down_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfslide1down_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfslide1down_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfslide1down_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1down_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfslide1down_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfslide1down_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfslide1down_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfslide1down_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfslide1down_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfslide1down_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfslide1down_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfslide1down_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1down_vf_f32m4_mu(vm, vd, 
vs2, rs1, vl);
 }

-vfloat32m8_t test_vfslide1down_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) {
+vfloat32m8_t test_vfslide1down_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+                                           vfloat32m8_t vs2, float rs1,
+                                           size_t vl) {
   return __riscv_vfslide1down_vf_f32m8_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m1_t test_vfslide1down_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) {
+vfloat64m1_t test_vfslide1down_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+                                           vfloat64m1_t vs2, double rs1,
+                                           size_t vl) {
   return __riscv_vfslide1down_vf_f64m1_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m2_t test_vfslide1down_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) {
+vfloat64m2_t test_vfslide1down_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+                                           vfloat64m2_t vs2, double rs1,
+                                           size_t vl) {
   return __riscv_vfslide1down_vf_f64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m4_t test_vfslide1down_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) {
+vfloat64m4_t test_vfslide1down_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                           vfloat64m4_t vs2, double rs1,
+                                           size_t vl) {
   return __riscv_vfslide1down_vf_f64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m8_t test_vfslide1down_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) {
+vfloat64m8_t test_vfslide1down_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                           vfloat64m8_t vs2, double rs1,
+                                           size_t vl) {
   return __riscv_vfslide1down_vf_f64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfslide1up.c b/auto-generated/policy_funcs/llvm-api-tests/vfslide1up.c
index b4b45e6fd..bc2af8201 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfslide1up.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfslide1up.c
@@ -6,242 +6,347 @@
 #include <riscv_vector.h>

-vfloat16mf4_t test_vfslide1up_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat16mf4_t test_vfslide1up_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2,
+                                           _Float16 rs1, size_t vl) {
   return __riscv_vfslide1up_vf_f16mf4_tu(vd, vs2, rs1, vl);
 }

-vfloat16mf2_t test_vfslide1up_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat16mf2_t test_vfslide1up_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2,
+                                           _Float16 rs1, size_t vl) {
   return __riscv_vfslide1up_vf_f16mf2_tu(vd, vs2, rs1, vl);
 }

-vfloat16m1_t test_vfslide1up_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m1_t test_vfslide1up_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+                                         _Float16 rs1, size_t vl) {
   return __riscv_vfslide1up_vf_f16m1_tu(vd, vs2, rs1, vl);
 }

-vfloat16m2_t test_vfslide1up_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m2_t test_vfslide1up_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2,
+                                         _Float16 rs1, size_t vl) {
   return __riscv_vfslide1up_vf_f16m2_tu(vd, vs2, rs1, vl);
 }

-vfloat16m4_t test_vfslide1up_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m4_t test_vfslide1up_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2,
+                                         _Float16 rs1, size_t vl) {
   return __riscv_vfslide1up_vf_f16m4_tu(vd, vs2, rs1, vl);
 }

-vfloat16m8_t test_vfslide1up_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m8_t test_vfslide1up_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2,
+                                         _Float16 rs1, size_t vl) {
   return __riscv_vfslide1up_vf_f16m8_tu(vd, vs2, rs1, vl);
 }

-vfloat32mf2_t test_vfslide1up_vf_f32mf2_tu(vfloat32mf2_t vd,
vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfslide1up_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfslide1up_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfslide1up_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfslide1up_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfslide1up_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfslide1up_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfslide1up_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfslide1up_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfslide1up_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfslide1up_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfslide1up_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfslide1up_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfslide1up_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfslide1up_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfslide1up_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfslide1up_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfslide1up_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfslide1up_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfslide1up_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfslide1up_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfslide1up_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfslide1up_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfslide1up_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfslide1up_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfslide1up_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfslide1up_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfslide1up_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfslide1up_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfslide1up_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfslide1up_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfslide1up_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfslide1up_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfslide1up_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfslide1up_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m2_tum(vm, vd, vs2, rs1, 
vl); } -vfloat16m4_t test_vfslide1up_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfslide1up_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfslide1up_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfslide1up_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfslide1up_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfslide1up_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfslide1up_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfslide1up_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfslide1up_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfslide1up_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfslide1up_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfslide1up_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfslide1up_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfslide1up_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfslide1up_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfslide1up_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1up_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfslide1up_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfslide1up_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1up_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfslide1up_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfslide1up_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1up_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfslide1up_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfslide1up_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1up_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfslide1up_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfslide1up_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, 
_Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfslide1up_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfslide1up_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfslide1up_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfslide1up_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfslide1up_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfslide1up_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfslide1up_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfslide1up_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfslide1up_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfslide1up_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfslide1up_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfslide1up_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfslide1up_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfslide1up_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfslide1up_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfslide1up_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfslide1up_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfslide1up_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfslide1up_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfslide1up_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfslide1up_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfslide1up_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1up_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfslide1up_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, 
vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfslide1up_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1up_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfslide1up_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfslide1up_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1up_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfslide1up_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfslide1up_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfslide1up_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfslide1up_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfslide1up_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfslide1up_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfslide1up_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfslide1up_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfslide1up_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfslide1up_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfslide1up_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfslide1up_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfslide1up_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfslide1up_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfslide1up_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfslide1up_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfslide1up_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfslide1up_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfslide1up_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfslide1up_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfslide1up_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfslide1up_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfslide1up_vf_f32m2_mu(vm, vd, 
vs2, rs1, vl);
 }

-vfloat32m4_t test_vfslide1up_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat32m4_t test_vfslide1up_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd,
+                                         vfloat32m4_t vs2, float rs1,
+                                         size_t vl) {
   return __riscv_vfslide1up_vf_f32m4_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m8_t test_vfslide1up_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) {
+vfloat32m8_t test_vfslide1up_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+                                         vfloat32m8_t vs2, float rs1,
+                                         size_t vl) {
   return __riscv_vfslide1up_vf_f32m8_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m1_t test_vfslide1up_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) {
+vfloat64m1_t test_vfslide1up_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+                                         vfloat64m1_t vs2, double rs1,
+                                         size_t vl) {
   return __riscv_vfslide1up_vf_f64m1_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m2_t test_vfslide1up_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) {
+vfloat64m2_t test_vfslide1up_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+                                         vfloat64m2_t vs2, double rs1,
+                                         size_t vl) {
   return __riscv_vfslide1up_vf_f64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m4_t test_vfslide1up_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) {
+vfloat64m4_t test_vfslide1up_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                         vfloat64m4_t vs2, double rs1,
+                                         size_t vl) {
   return __riscv_vfslide1up_vf_f64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m8_t test_vfslide1up_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) {
+vfloat64m8_t test_vfslide1up_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                         vfloat64m8_t vs2, double rs1,
+                                         size_t vl) {
   return __riscv_vfslide1up_vf_f64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfsqrt.c b/auto-generated/policy_funcs/llvm-api-tests/vfsqrt.c
index a9c28b989..88b1d0760 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfsqrt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfsqrt.c
@@ -6,482 +6,602 @@
 #include <riscv_vector.h>

-vfloat16mf4_t test_vfsqrt_v_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) {
+vfloat16mf4_t test_vfsqrt_v_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2,
+                                      size_t vl) {
   return __riscv_vfsqrt_v_f16mf4_tu(vd, vs2, vl);
 }

-vfloat16mf2_t test_vfsqrt_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) {
+vfloat16mf2_t test_vfsqrt_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2,
+                                      size_t vl) {
   return __riscv_vfsqrt_v_f16mf2_tu(vd, vs2, vl);
 }

-vfloat16m1_t test_vfsqrt_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) {
+vfloat16m1_t test_vfsqrt_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+                                    size_t vl) {
   return __riscv_vfsqrt_v_f16m1_tu(vd, vs2, vl);
 }

-vfloat16m2_t test_vfsqrt_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) {
+vfloat16m2_t test_vfsqrt_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2,
+                                    size_t vl) {
   return __riscv_vfsqrt_v_f16m2_tu(vd, vs2, vl);
 }

-vfloat16m4_t test_vfsqrt_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) {
+vfloat16m4_t test_vfsqrt_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2,
+                                    size_t vl) {
   return __riscv_vfsqrt_v_f16m4_tu(vd, vs2, vl);
 }

-vfloat16m8_t test_vfsqrt_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) {
+vfloat16m8_t test_vfsqrt_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2,
+                                    size_t vl) {
   return __riscv_vfsqrt_v_f16m8_tu(vd, vs2, vl);
 }

-vfloat32mf2_t test_vfsqrt_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) {
+vfloat32mf2_t
test_vfsqrt_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f32mf2_tu(vd, vs2, vl); } -vfloat32m1_t test_vfsqrt_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfsqrt_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f32m1_tu(vd, vs2, vl); } -vfloat32m2_t test_vfsqrt_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfsqrt_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f32m2_tu(vd, vs2, vl); } -vfloat32m4_t test_vfsqrt_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfsqrt_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f32m4_tu(vd, vs2, vl); } -vfloat32m8_t test_vfsqrt_v_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfsqrt_v_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f32m8_tu(vd, vs2, vl); } -vfloat64m1_t test_vfsqrt_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfsqrt_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f64m1_tu(vd, vs2, vl); } -vfloat64m2_t test_vfsqrt_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfsqrt_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f64m2_tu(vd, vs2, vl); } -vfloat64m4_t test_vfsqrt_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfsqrt_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f64m4_tu(vd, vs2, vl); } -vfloat64m8_t test_vfsqrt_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfsqrt_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f64m8_tu(vd, vs2, vl); } -vfloat16mf4_t test_vfsqrt_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfsqrt_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16mf4_tum(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfsqrt_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfsqrt_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16mf2_tum(vm, vd, vs2, vl); } -vfloat16m1_t test_vfsqrt_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfsqrt_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m1_tum(vm, vd, vs2, vl); } -vfloat16m2_t test_vfsqrt_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfsqrt_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m2_tum(vm, vd, vs2, vl); } -vfloat16m4_t test_vfsqrt_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfsqrt_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m4_tum(vm, vd, vs2, vl); } -vfloat16m8_t test_vfsqrt_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfsqrt_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m8_tum(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfsqrt_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { 
+vfloat32mf2_t test_vfsqrt_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32mf2_tum(vm, vd, vs2, vl); } -vfloat32m1_t test_vfsqrt_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfsqrt_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m1_tum(vm, vd, vs2, vl); } -vfloat32m2_t test_vfsqrt_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfsqrt_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m2_tum(vm, vd, vs2, vl); } -vfloat32m4_t test_vfsqrt_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfsqrt_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m4_tum(vm, vd, vs2, vl); } -vfloat32m8_t test_vfsqrt_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfsqrt_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m8_tum(vm, vd, vs2, vl); } -vfloat64m1_t test_vfsqrt_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfsqrt_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m1_tum(vm, vd, vs2, vl); } -vfloat64m2_t test_vfsqrt_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfsqrt_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m2_tum(vm, vd, vs2, vl); } -vfloat64m4_t test_vfsqrt_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfsqrt_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m4_tum(vm, vd, vs2, vl); } -vfloat64m8_t test_vfsqrt_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfsqrt_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m8_tum(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfsqrt_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfsqrt_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16mf4_tumu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfsqrt_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfsqrt_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16mf2_tumu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfsqrt_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfsqrt_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m1_tumu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfsqrt_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfsqrt_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m2_tumu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfsqrt_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfsqrt_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m4_tumu(vm, vd, vs2, vl); } -vfloat16m8_t 
test_vfsqrt_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfsqrt_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m8_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfsqrt_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfsqrt_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfsqrt_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfsqrt_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfsqrt_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfsqrt_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfsqrt_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfsqrt_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfsqrt_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfsqrt_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m8_tumu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfsqrt_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfsqrt_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m1_tumu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfsqrt_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfsqrt_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m2_tumu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfsqrt_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfsqrt_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m4_tumu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfsqrt_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfsqrt_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m8_tumu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfsqrt_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfsqrt_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfsqrt_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfsqrt_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfsqrt_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfsqrt_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfsqrt_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfsqrt_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { 
return __riscv_vfsqrt_v_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfsqrt_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfsqrt_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m4_mu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfsqrt_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfsqrt_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m8_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfsqrt_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfsqrt_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfsqrt_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfsqrt_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfsqrt_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfsqrt_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfsqrt_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfsqrt_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m4_mu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfsqrt_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfsqrt_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m8_mu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfsqrt_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfsqrt_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m1_mu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfsqrt_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfsqrt_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m2_mu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfsqrt_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfsqrt_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m4_mu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfsqrt_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfsqrt_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m8_mu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfsqrt_v_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfsqrt_v_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsqrt_v_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfsqrt_v_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsqrt_v_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfsqrt_v_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + size_t vl) { return 
__riscv_vfsqrt_v_f16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsqrt_v_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfsqrt_v_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsqrt_v_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfsqrt_v_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsqrt_v_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfsqrt_v_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f16m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsqrt_v_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfsqrt_v_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsqrt_v_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfsqrt_v_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsqrt_v_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfsqrt_v_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsqrt_v_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfsqrt_v_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfsqrt_v_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfsqrt_v_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f32m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfsqrt_v_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfsqrt_v_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f64m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfsqrt_v_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfsqrt_v_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f64m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfsqrt_v_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfsqrt_v_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f64m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfsqrt_v_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfsqrt_v_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + size_t vl) { return __riscv_vfsqrt_v_f64m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfsqrt_v_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfsqrt_v_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsqrt_v_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfsqrt_v_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return 
__riscv_vfsqrt_v_f16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsqrt_v_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfsqrt_v_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsqrt_v_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfsqrt_v_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsqrt_v_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfsqrt_v_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsqrt_v_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfsqrt_v_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsqrt_v_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfsqrt_v_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsqrt_v_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfsqrt_v_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsqrt_v_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfsqrt_v_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsqrt_v_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfsqrt_v_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfsqrt_v_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfsqrt_v_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfsqrt_v_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfsqrt_v_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfsqrt_v_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfsqrt_v_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfsqrt_v_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfsqrt_v_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfsqrt_v_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { 
+vfloat64m8_t test_vfsqrt_v_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfsqrt_v_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfsqrt_v_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsqrt_v_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfsqrt_v_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsqrt_v_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfsqrt_v_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsqrt_v_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfsqrt_v_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsqrt_v_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfsqrt_v_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsqrt_v_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfsqrt_v_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsqrt_v_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfsqrt_v_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsqrt_v_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfsqrt_v_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsqrt_v_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfsqrt_v_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsqrt_v_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat32m4_t test_vfsqrt_v_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfsqrt_v_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) { +vfloat32m8_t test_vfsqrt_v_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfsqrt_v_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) { +vfloat64m1_t test_vfsqrt_v_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vl) { return 
__riscv_vfsqrt_v_f64m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfsqrt_v_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) { +vfloat64m2_t test_vfsqrt_v_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfsqrt_v_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) { +vfloat64m4_t test_vfsqrt_v_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfsqrt_v_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) { +vfloat64m8_t test_vfsqrt_v_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f64m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfsqrt_v_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat16mf4_t test_vfsqrt_v_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsqrt_v_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat16mf2_t test_vfsqrt_v_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsqrt_v_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat16m1_t test_vfsqrt_v_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsqrt_v_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat16m2_t test_vfsqrt_v_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsqrt_v_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat16m4_t test_vfsqrt_v_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsqrt_v_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vl) { +vfloat16m8_t test_vfsqrt_v_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vl) { return __riscv_vfsqrt_v_f16m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsqrt_v_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat32mf2_t test_vfsqrt_v_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsqrt_v_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat32m1_t test_vfsqrt_v_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsqrt_v_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat32m2_t test_vfsqrt_v_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfsqrt_v_f32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsqrt_v_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vl) { 
+vfloat32m4_t test_vfsqrt_v_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+                                       vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfsqrt_v_f32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfsqrt_v_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vl) {
+vfloat32m8_t test_vfsqrt_v_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                       vfloat32m8_t vs2, size_t vl) {
   return __riscv_vfsqrt_v_f32m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfsqrt_v_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vl) {
+vfloat64m1_t test_vfsqrt_v_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+                                       vfloat64m1_t vs2, size_t vl) {
   return __riscv_vfsqrt_v_f64m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfsqrt_v_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vl) {
+vfloat64m2_t test_vfsqrt_v_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+                                       vfloat64m2_t vs2, size_t vl) {
   return __riscv_vfsqrt_v_f64m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfsqrt_v_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vl) {
+vfloat64m4_t test_vfsqrt_v_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                       vfloat64m4_t vs2, size_t vl) {
   return __riscv_vfsqrt_v_f64m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfsqrt_v_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vl) {
+vfloat64m8_t test_vfsqrt_v_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                       vfloat64m8_t vs2, size_t vl) {
   return __riscv_vfsqrt_v_f64m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfsub.c b/auto-generated/policy_funcs/llvm-api-tests/vfsub.c
index c5434f133..c8b8985f2 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfsub.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfsub.c
@@ -6,962 +6,1349 @@
 #include <riscv_vector.h>

-vfloat16mf4_t test_vfsub_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat16mf4_t test_vfsub_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2,
+                                      vfloat16mf4_t vs1, size_t vl) {
   return __riscv_vfsub_vv_f16mf4_tu(vd, vs2, vs1, vl);
 }

-vfloat16mf4_t test_vfsub_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat16mf4_t test_vfsub_vf_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2,
+                                      _Float16 rs1, size_t vl) {
   return __riscv_vfsub_vf_f16mf4_tu(vd, vs2, rs1, vl);
 }

-vfloat16mf2_t test_vfsub_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat16mf2_t test_vfsub_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2,
+                                      vfloat16mf2_t vs1, size_t vl) {
   return __riscv_vfsub_vv_f16mf2_tu(vd, vs2, vs1, vl);
 }

-vfloat16mf2_t test_vfsub_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat16mf2_t test_vfsub_vf_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2,
+                                      _Float16 rs1, size_t vl) {
   return __riscv_vfsub_vf_f16mf2_tu(vd, vs2, rs1, vl);
 }

-vfloat16m1_t test_vfsub_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat16m1_t test_vfsub_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+                                    vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfsub_vv_f16m1_tu(vd, vs2, vs1, vl);
 }

-vfloat16m1_t test_vfsub_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat16m1_t test_vfsub_vf_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+                                    _Float16 rs1, size_t vl) {
   return __riscv_vfsub_vf_f16m1_tu(vd, vs2, rs1, vl);
 }

-vfloat16m2_t test_vfsub_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat16m2_t test_vfsub_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfsub_vv_f16m2_tu(vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsub_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsub_vf_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16m2_tu(vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsub_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsub_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfsub_vv_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsub_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsub_vf_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16m4_tu(vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsub_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsub_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfsub_vv_f16m8_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsub_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsub_vf_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsub_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsub_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfsub_vv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsub_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsub_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfsub_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsub_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsub_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfsub_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsub_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsub_vf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfsub_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsub_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsub_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfsub_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsub_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsub_vf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfsub_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsub_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsub_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfsub_vv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsub_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsub_vf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfsub_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsub_vv_f32m8_tu(vfloat32m8_t 
vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsub_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vfsub_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsub_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsub_vf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfsub_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsub_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsub_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfsub_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsub_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsub_vf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfsub_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsub_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsub_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfsub_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsub_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsub_vf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfsub_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsub_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsub_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return __riscv_vfsub_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsub_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsub_vf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfsub_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsub_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsub_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfsub_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsub_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsub_vf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfsub_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsub_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsub_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf4_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsub_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsub_vf_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsub_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsub_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsub_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { 
+vfloat16mf2_t test_vfsub_vf_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsub_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsub_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsub_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsub_vf_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsub_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsub_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsub_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsub_vf_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsub_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsub_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsub_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsub_vf_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsub_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsub_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsub_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsub_vf_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsub_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsub_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsub_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsub_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsub_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsub_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsub_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { 
+vfloat32m1_t test_vfsub_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsub_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsub_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsub_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsub_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsub_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsub_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsub_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsub_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsub_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsub_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsub_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsub_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsub_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsub_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsub_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsub_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsub_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsub_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsub_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsub_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsub_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsub_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsub_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsub_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t 
vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsub_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsub_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsub_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsub_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsub_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsub_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsub_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsub_vf_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsub_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsub_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsub_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsub_vf_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsub_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsub_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsub_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsub_vf_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsub_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsub_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsub_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsub_vf_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsub_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsub_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsub_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t 
test_vfsub_vf_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsub_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsub_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsub_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsub_vf_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsub_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsub_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsub_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsub_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsub_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsub_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsub_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsub_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsub_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsub_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsub_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsub_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsub_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsub_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsub_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsub_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsub_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsub_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsub_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t 
test_vfsub_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsub_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsub_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsub_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsub_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsub_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsub_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsub_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsub_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsub_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsub_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsub_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsub_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsub_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsub_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsub_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsub_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsub_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsub_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vfsub_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsub_vf_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vfsub_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsub_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vfsub_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { 
+vfloat16mf2_t test_vfsub_vf_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vfsub_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsub_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vfsub_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsub_vf_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vfsub_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsub_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vfsub_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsub_vf_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vfsub_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsub_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vfsub_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsub_vf_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vfsub_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsub_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vfsub_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsub_vf_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfsub_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsub_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfsub_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsub_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfsub_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsub_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfsub_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsub_vf_f32m1_mu(vbool32_t vm, 
vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfsub_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsub_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfsub_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsub_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfsub_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsub_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfsub_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsub_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfsub_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsub_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfsub_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsub_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfsub_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsub_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfsub_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsub_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfsub_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsub_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfsub_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsub_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfsub_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsub_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfsub_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsub_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m4_mu(vm, 
vd, vs2, rs1, vl); } -vfloat64m8_t test_vfsub_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsub_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfsub_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsub_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, size_t vl) { return __riscv_vfsub_vf_f64m8_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vfsub_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsub_vv_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfsub_vv_f16mf4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfsub_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsub_vf_f16mf4_rm_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16mf4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsub_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsub_vv_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfsub_vv_f16mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsub_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsub_vf_f16mf2_rm_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsub_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsub_vv_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfsub_vv_f16m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsub_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsub_vf_f16m1_rm_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsub_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsub_vv_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfsub_vv_f16m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsub_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsub_vf_f16m2_rm_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsub_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsub_vv_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfsub_vv_f16m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsub_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsub_vf_f16m4_rm_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsub_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, 
vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsub_vv_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vfsub_vv_f16m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsub_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsub_vf_f16m8_rm_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfsub_vf_f16m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsub_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsub_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfsub_vv_f32mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsub_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsub_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfsub_vf_f32mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsub_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsub_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfsub_vv_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsub_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsub_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfsub_vf_f32m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsub_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsub_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfsub_vv_f32m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsub_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsub_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfsub_vf_f32m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsub_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsub_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfsub_vv_f32m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsub_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsub_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfsub_vf_f32m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfsub_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsub_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vfsub_vv_f32m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfsub_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsub_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vfsub_vf_f32m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfsub_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsub_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return 
__riscv_vfsub_vv_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfsub_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsub_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + double rs1, size_t vl) { return __riscv_vfsub_vf_f64m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfsub_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsub_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t vs1, size_t vl) { return __riscv_vfsub_vv_f64m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfsub_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsub_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + double rs1, size_t vl) { return __riscv_vfsub_vf_f64m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfsub_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsub_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, size_t vl) { return __riscv_vfsub_vv_f64m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfsub_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsub_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + double rs1, size_t vl) { return __riscv_vfsub_vf_f64m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfsub_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsub_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vfsub_vv_f64m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfsub_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsub_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vfsub_vf_f64m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfsub_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsub_vv_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfsub_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsub_vf_f16mf4_rm_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsub_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsub_vv_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsub_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsub_vf_f16mf2_rm_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsub_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsub_vv_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + 
vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsub_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsub_vf_f16m1_rm_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsub_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsub_vv_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsub_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsub_vf_f16m2_rm_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsub_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsub_vv_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsub_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsub_vf_f16m4_rm_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsub_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsub_vv_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsub_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsub_vf_f16m8_rm_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsub_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsub_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsub_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsub_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsub_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsub_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsub_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsub_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float 
rs1, + size_t vl) { return __riscv_vfsub_vf_f32m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsub_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsub_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsub_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsub_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsub_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsub_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsub_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsub_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfsub_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsub_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfsub_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsub_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfsub_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsub_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfsub_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsub_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfsub_vf_f64m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfsub_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsub_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfsub_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsub_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfsub_vf_f64m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfsub_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsub_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return 
__riscv_vfsub_vv_f64m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfsub_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsub_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfsub_vf_f64m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfsub_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsub_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfsub_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsub_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfsub_vf_f64m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfsub_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsub_vv_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfsub_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsub_vf_f16mf4_rm_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsub_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsub_vv_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsub_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsub_vf_f16mf2_rm_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsub_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsub_vv_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsub_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsub_vf_f16m1_rm_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsub_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsub_vv_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsub_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsub_vf_f16m2_rm_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + 
size_t vl) { return __riscv_vfsub_vf_f16m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsub_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsub_vv_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsub_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsub_vf_f16m4_rm_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsub_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsub_vv_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsub_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsub_vf_f16m8_rm_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsub_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsub_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsub_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsub_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsub_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsub_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsub_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsub_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsub_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsub_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsub_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsub_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsub_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsub_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t 
vs1, + size_t vl) { return __riscv_vfsub_vv_f32m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsub_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsub_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfsub_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsub_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfsub_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsub_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfsub_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsub_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfsub_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsub_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfsub_vf_f64m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfsub_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsub_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfsub_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsub_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfsub_vf_f64m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfsub_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsub_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfsub_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsub_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfsub_vf_f64m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfsub_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsub_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfsub_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsub_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return 
__riscv_vfsub_vf_f64m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfsub_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vfsub_vv_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf4_t test_vfsub_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf4_t test_vfsub_vf_f16mf4_rm_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsub_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vfsub_vv_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16mf2_t test_vfsub_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat16mf2_t test_vfsub_vf_f16mf2_rm_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsub_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vfsub_vv_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m1_t test_vfsub_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat16m1_t test_vfsub_vf_f16m1_rm_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsub_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vfsub_vv_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m2_t test_vfsub_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat16m2_t test_vfsub_vf_f16m2_rm_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsub_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vfsub_vv_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f16m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m4_t test_vfsub_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat16m4_t test_vfsub_vf_f16m4_rm_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsub_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vfsub_vv_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vfloat16m8_t vs1, + size_t vl) { return 
__riscv_vfsub_vv_f16m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat16m8_t test_vfsub_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vfloat16m8_t test_vfsub_vf_f16m8_rm_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfsub_vf_f16m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsub_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vfsub_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfsub_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat32mf2_t test_vfsub_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfsub_vf_f32mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsub_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfsub_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfsub_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat32m1_t test_vfsub_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsub_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vfsub_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfsub_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat32m2_t test_vfsub_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsub_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vfsub_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfsub_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat32m4_t test_vfsub_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfsub_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vfsub_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat32m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f32m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfsub_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vfloat32m8_t test_vfsub_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, float rs1, size_t vl) { return __riscv_vfsub_vf_f32m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t 
test_vfsub_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfsub_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfsub_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vfloat64m1_t test_vfsub_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, double rs1, + size_t vl) { return __riscv_vfsub_vf_f64m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfsub_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vfsub_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfsub_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vfloat64m2_t test_vfsub_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, double rs1, + size_t vl) { return __riscv_vfsub_vf_f64m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfsub_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vfsub_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfsub_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vfloat64m4_t test_vfsub_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, double rs1, + size_t vl) { return __riscv_vfsub_vf_f64m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfsub_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vfsub_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat64m8_t vs1, + size_t vl) { return __riscv_vfsub_vv_f64m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfsub_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vfloat64m8_t test_vfsub_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, double rs1, + size_t vl) { return __riscv_vfsub_vf_f64m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwadd.c b/auto-generated/policy_funcs/llvm-api-tests/vfwadd.c index 31be8619f..716dd3d8d 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfwadd.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfwadd.c @@ -6,1154 +6,1639 @@ #include -vfloat32mf2_t test_vfwadd_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwadd_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfwadd_vv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwadd_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwadd_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwadd_wv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwadd_wv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat16mf4_t 
vs1, size_t vl) { return __riscv_vfwadd_wv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwadd_wf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwadd_wf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_wf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwadd_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwadd_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfwadd_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwadd_vf_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwadd_vf_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwadd_wv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwadd_wv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfwadd_wv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwadd_wf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwadd_wf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_wf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwadd_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwadd_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfwadd_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwadd_vf_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwadd_vf_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwadd_wv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwadd_wv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfwadd_wv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwadd_wf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwadd_wf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_wf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwadd_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwadd_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfwadd_vv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwadd_vf_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwadd_vf_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwadd_wv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwadd_wv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfwadd_wv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwadd_wf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwadd_wf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_wf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwadd_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2, 
vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwadd_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfwadd_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwadd_vf_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwadd_vf_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwadd_wv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwadd_wv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfwadd_wv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwadd_wf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwadd_wf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_wf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwadd_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwadd_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfwadd_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwadd_vf_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwadd_vf_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwadd_wv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwadd_wv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfwadd_wv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwadd_wf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwadd_wf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwadd_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwadd_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwadd_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwadd_vf_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwadd_vf_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwadd_wv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwadd_wv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwadd_wv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwadd_wf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwadd_wf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwadd_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwadd_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfwadd_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwadd_vf_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwadd_vf_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m4_tu(vd, vs2, rs1, 
vl); } -vfloat64m4_t test_vfwadd_wv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwadd_wv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfwadd_wv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwadd_wf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwadd_wf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwadd_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwadd_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfwadd_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwadd_vf_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwadd_vf_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwadd_wv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwadd_wv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfwadd_wv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwadd_wf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwadd_wf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwadd_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwadd_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwadd_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwadd_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwadd_wv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwadd_wv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwadd_wf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwadd_wf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwadd_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwadd_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwadd_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwadd_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwadd_wv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t 
test_vfwadd_wv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwadd_wf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwadd_wf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwadd_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwadd_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwadd_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwadd_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwadd_wv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwadd_wv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwadd_wf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwadd_wf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwadd_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwadd_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwadd_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwadd_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwadd_wv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwadd_wv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwadd_wf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwadd_wf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwadd_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwadd_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwadd_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwadd_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwadd_wv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) { 
+vfloat32m8_t test_vfwadd_wv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwadd_wf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwadd_wf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwadd_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwadd_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwadd_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwadd_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwadd_wv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwadd_wv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwadd_wf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwadd_wf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwadd_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwadd_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwadd_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwadd_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwadd_wv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwadd_wv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwadd_wf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwadd_wf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwadd_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwadd_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwadd_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwadd_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwadd_wv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t 
test_vfwadd_wv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwadd_wf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwadd_wf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwadd_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwadd_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwadd_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwadd_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwadd_wv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwadd_wv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwadd_wf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwadd_wf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwadd_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwadd_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwadd_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwadd_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwadd_wv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwadd_wv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwadd_wf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwadd_wf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwadd_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwadd_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwadd_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwadd_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwadd_wv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, 
vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwadd_wv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwadd_wf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwadd_wf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwadd_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwadd_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwadd_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwadd_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwadd_wv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwadd_wv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwadd_wf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwadd_wf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwadd_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwadd_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwadd_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwadd_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwadd_wv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwadd_wv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwadd_wf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwadd_wf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwadd_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwadd_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwadd_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwadd_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t 
test_vfwadd_wv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwadd_wv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwadd_wf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwadd_wf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwadd_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwadd_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwadd_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwadd_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfwadd_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwadd_wv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwadd_wv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwadd_wf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwadd_wf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwadd_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwadd_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwadd_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwadd_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwadd_wv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwadd_wv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwadd_wf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwadd_wf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwadd_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwadd_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwadd_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwadd_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m4_tumu(vm, vd, vs2, 
rs1, vl); } -vfloat64m4_t test_vfwadd_wv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwadd_wv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwadd_wf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwadd_wf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwadd_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwadd_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwadd_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwadd_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwadd_wv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwadd_wv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwadd_wf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwadd_wf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwadd_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwadd_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwadd_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwadd_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwadd_wv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwadd_wv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwadd_wf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwadd_wf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwadd_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwadd_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwadd_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwadd_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return 
__riscv_vfwadd_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwadd_wv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwadd_wv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwadd_wf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwadd_wf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwadd_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwadd_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwadd_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwadd_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwadd_wv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwadd_wv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwadd_wf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwadd_wf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwadd_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwadd_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwadd_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwadd_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwadd_wv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwadd_wv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwadd_wf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwadd_wf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwadd_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwadd_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwadd_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwadd_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return 
__riscv_vfwadd_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwadd_wv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwadd_wv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwadd_wf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwadd_wf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwadd_wf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwadd_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwadd_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwadd_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwadd_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwadd_wv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwadd_wv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwadd_wf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwadd_wf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwadd_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwadd_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwadd_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwadd_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwadd_wv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwadd_wv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwadd_wf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwadd_wf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwadd_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwadd_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwadd_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwadd_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m4_mu(vm, vd, vs2, rs1, vl); 
} -vfloat64m4_t test_vfwadd_wv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwadd_wv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwadd_wf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwadd_wf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m4_mu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwadd_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwadd_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwadd_vv_f64m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwadd_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwadd_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_vf_f64m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwadd_wv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwadd_wv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwadd_wv_f64m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwadd_wf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwadd_wf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, float rs1, size_t vl) { return __riscv_vfwadd_wf_f64m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwadd_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwadd_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfwadd_vv_f32mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwadd_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwadd_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_vf_f32mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwadd_wv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwadd_wv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfwadd_wv_f32mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwadd_wf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwadd_wf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_wf_f32mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwadd_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwadd_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfwadd_vv_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwadd_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwadd_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_vf_f32m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t 
test_vfwadd_wv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwadd_wv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfwadd_wv_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwadd_wf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwadd_wf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_wf_f32m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwadd_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwadd_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfwadd_vv_f32m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwadd_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwadd_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_vf_f32m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwadd_wv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwadd_wv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfwadd_wv_f32m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwadd_wf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwadd_wf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_wf_f32m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwadd_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwadd_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfwadd_vv_f32m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwadd_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwadd_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_vf_f32m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwadd_wv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwadd_wv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfwadd_wv_f32m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwadd_wf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwadd_wf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_wf_f32m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwadd_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwadd_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfwadd_vv_f32m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwadd_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwadd_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwadd_vf_f32m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwadd_wv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t 
test_vfwadd_wv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2,
+    vfloat16m4_t vs1, size_t vl) {
   return __riscv_vfwadd_wv_f32m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_wf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwadd_wf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2,
+    _Float16 rs1, size_t vl) {
   return __riscv_vfwadd_wf_f32m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwadd_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2,
+    vfloat32mf2_t vs1, size_t vl) {
   return __riscv_vfwadd_vv_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwadd_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2,
+    float rs1, size_t vl) {
   return __riscv_vfwadd_vf_f64m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_wv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwadd_wv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2,
+    vfloat32mf2_t vs1, size_t vl) {
   return __riscv_vfwadd_wv_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_wf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwadd_wf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2,
+    float rs1, size_t vl) {
   return __riscv_vfwadd_wf_f64m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwadd_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2,
+    vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfwadd_vv_f64m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwadd_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2,
+    float rs1, size_t vl) {
   return __riscv_vfwadd_vf_f64m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_wv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwadd_wv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2,
+    vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfwadd_wv_f64m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_wf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwadd_wf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2,
+    float rs1, size_t vl) {
   return __riscv_vfwadd_wf_f64m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwadd_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2,
+    vfloat32m2_t vs1, size_t vl) {
   return __riscv_vfwadd_vv_f64m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwadd_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2,
+    float rs1, size_t vl) {
   return __riscv_vfwadd_vf_f64m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_wv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwadd_wv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2,
+    vfloat32m2_t vs1, size_t vl) {
   return __riscv_vfwadd_wv_f64m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_wf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwadd_wf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2,
+    float rs1, size_t vl) {
   return __riscv_vfwadd_wf_f64m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwadd_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2,
+    vfloat32m4_t vs1, size_t vl) {
   return __riscv_vfwadd_vv_f64m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwadd_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2,
+    float rs1, size_t vl) {
   return __riscv_vfwadd_vf_f64m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_wv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwadd_wv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2,
+    vfloat32m4_t vs1, size_t vl) {
   return __riscv_vfwadd_wv_f64m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_wf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwadd_wf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2,
+    float rs1, size_t vl) {
   return __riscv_vfwadd_wf_f64m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32mf2_t test_vfwadd_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwadd_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat16mf4_t vs2, vfloat16mf4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32mf2_t test_vfwadd_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwadd_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat16mf4_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32mf2_t test_vfwadd_wv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwadd_wv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat32mf2_t vs2, vfloat16mf4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32mf2_t test_vfwadd_wf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwadd_wf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat32mf2_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m1_t test_vfwadd_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwadd_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd,
+    vfloat16mf2_t vs2, vfloat16mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m1_t test_vfwadd_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwadd_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd,
+    vfloat16mf2_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m1_t test_vfwadd_wv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwadd_wv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd,
+    vfloat32m1_t vs2, vfloat16mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m1_t test_vfwadd_wf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwadd_wf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd,
+    vfloat32m1_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwadd_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd,
+    vfloat16m1_t vs2, vfloat16m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwadd_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd,
+    vfloat16m1_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_wv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwadd_wv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd,
+    vfloat32m2_t vs2, vfloat16m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_wf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwadd_wf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd,
+    vfloat32m2_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwadd_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd,
+    vfloat16m2_t vs2, vfloat16m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwadd_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd,
+    vfloat16m2_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_wv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwadd_wv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd,
+    vfloat32m4_t vs2, vfloat16m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_wf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwadd_wf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd,
+    vfloat32m4_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwadd_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd,
+    vfloat16m4_t vs2, vfloat16m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwadd_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd,
+    vfloat16m4_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_wv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwadd_wv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd,
+    vfloat32m8_t vs2, vfloat16m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_wf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwadd_wf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd,
+    vfloat32m8_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwadd_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd,
+    vfloat32mf2_t vs2, vfloat32mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwadd_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd,
+    vfloat32mf2_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_wv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwadd_wv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd,
+    vfloat64m1_t vs2, vfloat32mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_wf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwadd_wf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd,
+    vfloat64m1_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwadd_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd,
+    vfloat32m1_t vs2, vfloat32m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwadd_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd,
+    vfloat32m1_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_wv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwadd_wv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd,
+    vfloat64m2_t vs2, vfloat32m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_wf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwadd_wf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd,
+    vfloat64m2_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwadd_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd,
+    vfloat32m2_t vs2, vfloat32m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwadd_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd,
+    vfloat32m2_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_wv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwadd_wv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd,
+    vfloat64m4_t vs2, vfloat32m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_wf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwadd_wf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd,
+    vfloat64m4_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwadd_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd,
+    vfloat32m4_t vs2, vfloat32m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwadd_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd,
+    vfloat32m4_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_wv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwadd_wv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd,
+    vfloat64m8_t vs2, vfloat32m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_wf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwadd_wf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd,
+    vfloat64m8_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32mf2_t test_vfwadd_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
-  return __riscv_vfwadd_vv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfwadd_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat16mf4_t vs2,
+    vfloat16mf4_t vs1, size_t vl) {
+  return __riscv_vfwadd_vv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE,
+      vl);
 }
-vfloat32mf2_t test_vfwadd_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
-  return __riscv_vfwadd_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfwadd_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat16mf4_t vs2, _Float16 rs1,
+    size_t vl) {
+  return __riscv_vfwadd_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE,
+      vl);
 }
-vfloat32mf2_t test_vfwadd_wv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) {
-  return __riscv_vfwadd_wv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfwadd_wv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat32mf2_t vs2,
+    vfloat16mf4_t vs1, size_t vl) {
+  return __riscv_vfwadd_wv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE,
+      vl);
 }
-vfloat32mf2_t test_vfwadd_wf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) {
-  return __riscv_vfwadd_wf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfwadd_wf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat32mf2_t vs2, _Float16 rs1,
+    size_t vl) {
+  return __riscv_vfwadd_wf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE,
+      vl);
 }
-vfloat32m1_t test_vfwadd_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwadd_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd,
+    vfloat16mf2_t vs2, vfloat16mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m1_t test_vfwadd_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwadd_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd,
+    vfloat16mf2_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m1_t test_vfwadd_wv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwadd_wv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd,
+    vfloat32m1_t vs2, vfloat16mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m1_t test_vfwadd_wf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwadd_wf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd,
+    vfloat32m1_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwadd_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd,
+    vfloat16m1_t vs2, vfloat16m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwadd_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd,
+    vfloat16m1_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_wv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwadd_wv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd,
+    vfloat32m2_t vs2, vfloat16m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_wf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwadd_wf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd,
+    vfloat32m2_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwadd_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd,
+    vfloat16m2_t vs2, vfloat16m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwadd_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd,
+    vfloat16m2_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_wv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwadd_wv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd,
+    vfloat32m4_t vs2, vfloat16m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_wf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwadd_wf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd,
+    vfloat32m4_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwadd_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd,
+    vfloat16m4_t vs2, vfloat16m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwadd_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd,
+    vfloat16m4_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_wv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwadd_wv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd,
+    vfloat32m8_t vs2, vfloat16m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_wf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwadd_wf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd,
+    vfloat32m8_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwadd_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd,
+    vfloat32mf2_t vs2, vfloat32mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwadd_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd,
+    vfloat32mf2_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_wv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwadd_wv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd,
+    vfloat64m1_t vs2, vfloat32mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_wf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwadd_wf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd,
+    vfloat64m1_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwadd_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd,
+    vfloat32m1_t vs2, vfloat32m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwadd_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd,
+    vfloat32m1_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_wv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwadd_wv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd,
+    vfloat64m2_t vs2, vfloat32m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_wf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwadd_wf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd,
+    vfloat64m2_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwadd_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd,
+    vfloat32m2_t vs2, vfloat32m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwadd_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd,
+    vfloat32m2_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_wv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwadd_wv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd,
+    vfloat64m4_t vs2, vfloat32m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_wf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwadd_wf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd,
+    vfloat64m4_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwadd_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd,
+    vfloat32m4_t vs2, vfloat32m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwadd_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd,
+    vfloat32m4_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_wv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwadd_wv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd,
+    vfloat64m8_t vs2, vfloat32m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_wf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwadd_wf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd,
+    vfloat64m8_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32mf2_t test_vfwadd_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwadd_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat16mf4_t vs2, vfloat16mf4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32mf2_t test_vfwadd_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwadd_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat16mf4_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32mf2_t test_vfwadd_wv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwadd_wv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat32mf2_t vs2, vfloat16mf4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32mf2_t test_vfwadd_wf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwadd_wf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat32mf2_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m1_t test_vfwadd_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwadd_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+    vfloat16mf2_t vs2, vfloat16mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m1_t test_vfwadd_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwadd_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+    vfloat16mf2_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m1_t test_vfwadd_wv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwadd_wv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+    vfloat32m1_t vs2, vfloat16mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m1_t test_vfwadd_wf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwadd_wf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+    vfloat32m1_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwadd_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+    vfloat16m1_t vs2, vfloat16m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwadd_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+    vfloat16m1_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_wv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwadd_wv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+    vfloat32m2_t vs2, vfloat16m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m2_t test_vfwadd_wf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwadd_wf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+    vfloat32m2_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwadd_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+    vfloat16m2_t vs2, vfloat16m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwadd_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+    vfloat16m2_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_wv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwadd_wv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+    vfloat32m4_t vs2, vfloat16m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwadd_wf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwadd_wf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+    vfloat32m4_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwadd_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+    vfloat16m4_t vs2, vfloat16m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f32m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwadd_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+    vfloat16m4_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f32m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_wv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwadd_wv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+    vfloat32m8_t vs2, vfloat16m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f32m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwadd_wf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwadd_wf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+    vfloat32m8_t vs2, _Float16 rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f32m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwadd_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+    vfloat32mf2_t vs2, vfloat32mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwadd_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+    vfloat32mf2_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_wv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwadd_wv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+    vfloat64m1_t vs2, vfloat32mf2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m1_t test_vfwadd_wf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwadd_wf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+    vfloat64m1_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwadd_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+    vfloat32m1_t vs2, vfloat32m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwadd_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+    vfloat32m1_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_wv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwadd_wv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+    vfloat64m2_t vs2, vfloat32m1_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m2_t test_vfwadd_wf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwadd_wf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+    vfloat64m2_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwadd_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+    vfloat32m2_t vs2, vfloat32m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwadd_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+    vfloat32m2_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_wv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwadd_wv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+    vfloat64m4_t vs2, vfloat32m2_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwadd_wf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwadd_wf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+    vfloat64m4_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwadd_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+    vfloat32m4_t vs2, vfloat32m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_vv_f64m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwadd_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+    vfloat32m4_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_vf_f64m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_wv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwadd_wv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+    vfloat64m8_t vs2, vfloat32m4_t vs1,
+    size_t vl) {
   return __riscv_vfwadd_wv_f64m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m8_t test_vfwadd_wf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwadd_wf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+    vfloat64m8_t vs2, float rs1,
+    size_t vl) {
   return __riscv_vfwadd_wf_f64m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwcvt.c b/auto-generated/policy_funcs/llvm-api-tests/vfwcvt.c
index 1f40b1ef0..54b67176b 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwcvt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwcvt.c
@@ -6,1202 +6,1502 @@
 #include <riscv_vector.h>
 
-vfloat16mf4_t test_vfwcvt_f_x_v_f16mf4_tu(vfloat16mf4_t vd, vint8mf8_t vs2, size_t vl) {
+vfloat16mf4_t test_vfwcvt_f_x_v_f16mf4_tu(vfloat16mf4_t vd, vint8mf8_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16mf4_tu(vd, vs2, vl);
 }
-vfloat16mf2_t test_vfwcvt_f_x_v_f16mf2_tu(vfloat16mf2_t vd, vint8mf4_t vs2, size_t vl) {
+vfloat16mf2_t test_vfwcvt_f_x_v_f16mf2_tu(vfloat16mf2_t vd, vint8mf4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16mf2_tu(vd, vs2, vl);
 }
-vfloat16m1_t test_vfwcvt_f_x_v_f16m1_tu(vfloat16m1_t vd, vint8mf2_t vs2, size_t vl) {
+vfloat16m1_t test_vfwcvt_f_x_v_f16m1_tu(vfloat16m1_t vd, vint8mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m1_tu(vd, vs2, vl);
 }
-vfloat16m2_t test_vfwcvt_f_x_v_f16m2_tu(vfloat16m2_t vd, vint8m1_t vs2, size_t vl) {
+vfloat16m2_t test_vfwcvt_f_x_v_f16m2_tu(vfloat16m2_t vd, vint8m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m2_tu(vd, vs2, vl);
 }
-vfloat16m4_t test_vfwcvt_f_x_v_f16m4_tu(vfloat16m4_t vd, vint8m2_t vs2, size_t vl) {
+vfloat16m4_t test_vfwcvt_f_x_v_f16m4_tu(vfloat16m4_t vd, vint8m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m4_tu(vd, vs2, vl);
 }
-vfloat16m8_t test_vfwcvt_f_x_v_f16m8_tu(vfloat16m8_t vd, vint8m4_t vs2, size_t vl) {
+vfloat16m8_t test_vfwcvt_f_x_v_f16m8_tu(vfloat16m8_t vd, vint8m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m8_tu(vd, vs2, vl);
 }
-vfloat16mf4_t test_vfwcvt_f_xu_v_f16mf4_tu(vfloat16mf4_t vd, vuint8mf8_t vs2, size_t vl) {
+vfloat16mf4_t test_vfwcvt_f_xu_v_f16mf4_tu(vfloat16mf4_t vd, vuint8mf8_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16mf4_tu(vd, vs2, vl);
 }
-vfloat16mf2_t test_vfwcvt_f_xu_v_f16mf2_tu(vfloat16mf2_t vd, vuint8mf4_t vs2, size_t vl) {
+vfloat16mf2_t test_vfwcvt_f_xu_v_f16mf2_tu(vfloat16mf2_t vd, vuint8mf4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16mf2_tu(vd, vs2, vl);
 }
-vfloat16m1_t test_vfwcvt_f_xu_v_f16m1_tu(vfloat16m1_t vd, vuint8mf2_t vs2, size_t vl) {
+vfloat16m1_t test_vfwcvt_f_xu_v_f16m1_tu(vfloat16m1_t vd, vuint8mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16m1_tu(vd, vs2, vl);
 }
-vfloat16m2_t test_vfwcvt_f_xu_v_f16m2_tu(vfloat16m2_t vd, vuint8m1_t vs2, size_t vl) {
+vfloat16m2_t test_vfwcvt_f_xu_v_f16m2_tu(vfloat16m2_t vd, vuint8m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16m2_tu(vd, vs2, vl);
 }
-vfloat16m4_t test_vfwcvt_f_xu_v_f16m4_tu(vfloat16m4_t vd, vuint8m2_t vs2, size_t vl) {
+vfloat16m4_t test_vfwcvt_f_xu_v_f16m4_tu(vfloat16m4_t vd, vuint8m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16m4_tu(vd, vs2, vl);
 }
-vfloat16m8_t test_vfwcvt_f_xu_v_f16m8_tu(vfloat16m8_t vd, vuint8m4_t vs2, size_t vl) {
+vfloat16m8_t test_vfwcvt_f_xu_v_f16m8_tu(vfloat16m8_t vd, vuint8m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16m8_tu(vd, vs2, vl);
 }
-vint32mf2_t test_vfwcvt_x_f_v_i32mf2_tu(vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) {
+vint32mf2_t test_vfwcvt_x_f_v_i32mf2_tu(vint32mf2_t vd, vfloat16mf4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_x_f_v_i32mf2_tu(vd, vs2, vl);
 }
-vint32m1_t test_vfwcvt_x_f_v_i32m1_tu(vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) {
+vint32m1_t test_vfwcvt_x_f_v_i32m1_tu(vint32m1_t vd, vfloat16mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_x_f_v_i32m1_tu(vd, vs2, vl);
 }
-vint32m2_t test_vfwcvt_x_f_v_i32m2_tu(vint32m2_t vd, vfloat16m1_t vs2, size_t vl) {
+vint32m2_t test_vfwcvt_x_f_v_i32m2_tu(vint32m2_t vd, vfloat16m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_x_f_v_i32m2_tu(vd, vs2, vl);
 }
-vint32m4_t test_vfwcvt_x_f_v_i32m4_tu(vint32m4_t vd, vfloat16m2_t vs2, size_t vl) {
+vint32m4_t test_vfwcvt_x_f_v_i32m4_tu(vint32m4_t vd, vfloat16m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_x_f_v_i32m4_tu(vd, vs2, vl);
 }
-vint32m8_t test_vfwcvt_x_f_v_i32m8_tu(vint32m8_t vd, vfloat16m4_t vs2, size_t vl) {
+vint32m8_t test_vfwcvt_x_f_v_i32m8_tu(vint32m8_t vd, vfloat16m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_x_f_v_i32m8_tu(vd, vs2, vl);
 }
-vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_tu(vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) {
+vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_tu(vuint32mf2_t vd, vfloat16mf4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u32mf2_tu(vd, vs2, vl);
 }
-vuint32m1_t test_vfwcvt_xu_f_v_u32m1_tu(vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) {
+vuint32m1_t test_vfwcvt_xu_f_v_u32m1_tu(vuint32m1_t vd, vfloat16mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u32m1_tu(vd, vs2, vl);
 }
-vuint32m2_t test_vfwcvt_xu_f_v_u32m2_tu(vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) {
+vuint32m2_t test_vfwcvt_xu_f_v_u32m2_tu(vuint32m2_t vd, vfloat16m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u32m2_tu(vd, vs2, vl);
 }
-vuint32m4_t test_vfwcvt_xu_f_v_u32m4_tu(vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) {
+vuint32m4_t test_vfwcvt_xu_f_v_u32m4_tu(vuint32m4_t vd, vfloat16m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u32m4_tu(vd, vs2, vl);
 }
-vuint32m8_t test_vfwcvt_xu_f_v_u32m8_tu(vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) {
+vuint32m8_t test_vfwcvt_xu_f_v_u32m8_tu(vuint32m8_t vd, vfloat16m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u32m8_tu(vd, vs2, vl);
 }
-vfloat32mf2_t test_vfwcvt_f_x_v_f32mf2_tu(vfloat32mf2_t vd, vint16mf4_t vs2, size_t vl) {
+vfloat32mf2_t test_vfwcvt_f_x_v_f32mf2_tu(vfloat32mf2_t vd, vint16mf4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f32mf2_tu(vd, vs2, vl);
 }
-vfloat32m1_t test_vfwcvt_f_x_v_f32m1_tu(vfloat32m1_t vd, vint16mf2_t vs2, size_t vl) {
+vfloat32m1_t test_vfwcvt_f_x_v_f32m1_tu(vfloat32m1_t vd, vint16mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f32m1_tu(vd, vs2, vl);
 }
-vfloat32m2_t test_vfwcvt_f_x_v_f32m2_tu(vfloat32m2_t vd, vint16m1_t vs2, size_t vl) {
+vfloat32m2_t test_vfwcvt_f_x_v_f32m2_tu(vfloat32m2_t vd, vint16m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f32m2_tu(vd, vs2, vl);
 }
-vfloat32m4_t test_vfwcvt_f_x_v_f32m4_tu(vfloat32m4_t vd, vint16m2_t vs2, size_t vl) {
+vfloat32m4_t test_vfwcvt_f_x_v_f32m4_tu(vfloat32m4_t vd, vint16m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f32m4_tu(vd, vs2, vl);
 }
-vfloat32m8_t test_vfwcvt_f_x_v_f32m8_tu(vfloat32m8_t vd, vint16m4_t vs2, size_t vl) {
+vfloat32m8_t test_vfwcvt_f_x_v_f32m8_tu(vfloat32m8_t vd, vint16m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f32m8_tu(vd, vs2, vl);
 }
-vfloat32mf2_t test_vfwcvt_f_xu_v_f32mf2_tu(vfloat32mf2_t vd, vuint16mf4_t vs2, size_t vl) {
+vfloat32mf2_t test_vfwcvt_f_xu_v_f32mf2_tu(vfloat32mf2_t vd, vuint16mf4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f32mf2_tu(vd, vs2, vl);
 }
-vfloat32m1_t test_vfwcvt_f_xu_v_f32m1_tu(vfloat32m1_t vd, vuint16mf2_t vs2, size_t vl) {
+vfloat32m1_t test_vfwcvt_f_xu_v_f32m1_tu(vfloat32m1_t vd, vuint16mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f32m1_tu(vd, vs2, vl);
 }
-vfloat32m2_t test_vfwcvt_f_xu_v_f32m2_tu(vfloat32m2_t vd, vuint16m1_t vs2, size_t vl) {
+vfloat32m2_t test_vfwcvt_f_xu_v_f32m2_tu(vfloat32m2_t vd, vuint16m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f32m2_tu(vd, vs2, vl);
 }
-vfloat32m4_t test_vfwcvt_f_xu_v_f32m4_tu(vfloat32m4_t vd, vuint16m2_t vs2, size_t vl) {
+vfloat32m4_t test_vfwcvt_f_xu_v_f32m4_tu(vfloat32m4_t vd, vuint16m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f32m4_tu(vd, vs2, vl);
 }
-vfloat32m8_t test_vfwcvt_f_xu_v_f32m8_tu(vfloat32m8_t vd, vuint16m4_t vs2, size_t vl) {
+vfloat32m8_t test_vfwcvt_f_xu_v_f32m8_tu(vfloat32m8_t vd, vuint16m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f32m8_tu(vd, vs2, vl);
 }
-vfloat32mf2_t test_vfwcvt_f_f_v_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, size_t vl) {
+vfloat32mf2_t test_vfwcvt_f_f_v_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_f_v_f32mf2_tu(vd, vs2, vl);
 }
-vfloat32m1_t test_vfwcvt_f_f_v_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, size_t vl) {
+vfloat32m1_t test_vfwcvt_f_f_v_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_f_v_f32m1_tu(vd, vs2, vl);
 }
-vfloat32m2_t test_vfwcvt_f_f_v_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2, size_t vl) {
+vfloat32m2_t test_vfwcvt_f_f_v_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_f_v_f32m2_tu(vd, vs2, vl);
 }
-vfloat32m4_t test_vfwcvt_f_f_v_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2, size_t vl) {
+vfloat32m4_t test_vfwcvt_f_f_v_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_f_v_f32m4_tu(vd, vs2, vl);
 }
-vfloat32m8_t test_vfwcvt_f_f_v_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2, size_t vl) {
+vfloat32m8_t test_vfwcvt_f_f_v_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_f_v_f32m8_tu(vd, vs2, vl);
 }
-vint64m1_t test_vfwcvt_x_f_v_i64m1_tu(vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) {
+vint64m1_t test_vfwcvt_x_f_v_i64m1_tu(vint64m1_t vd, vfloat32mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m1_tu(vd, vs2, vl);
 }
-vint64m2_t test_vfwcvt_x_f_v_i64m2_tu(vint64m2_t vd, vfloat32m1_t vs2, size_t vl) {
+vint64m2_t test_vfwcvt_x_f_v_i64m2_tu(vint64m2_t vd, vfloat32m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m2_tu(vd, vs2, vl);
 }
-vint64m4_t test_vfwcvt_x_f_v_i64m4_tu(vint64m4_t vd, vfloat32m2_t vs2, size_t vl) {
+vint64m4_t test_vfwcvt_x_f_v_i64m4_tu(vint64m4_t vd, vfloat32m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m4_tu(vd, vs2, vl);
 }
-vint64m8_t test_vfwcvt_x_f_v_i64m8_tu(vint64m8_t vd, vfloat32m4_t vs2, size_t vl) {
+vint64m8_t test_vfwcvt_x_f_v_i64m8_tu(vint64m8_t vd, vfloat32m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m8_tu(vd, vs2, vl);
 }
-vuint64m1_t test_vfwcvt_xu_f_v_u64m1_tu(vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) {
+vuint64m1_t test_vfwcvt_xu_f_v_u64m1_tu(vuint64m1_t vd, vfloat32mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m1_tu(vd, vs2, vl);
 }
-vuint64m2_t test_vfwcvt_xu_f_v_u64m2_tu(vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) {
+vuint64m2_t test_vfwcvt_xu_f_v_u64m2_tu(vuint64m2_t vd, vfloat32m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m2_tu(vd, vs2, vl);
 }
-vuint64m4_t test_vfwcvt_xu_f_v_u64m4_tu(vuint64m4_t vd, vfloat32m2_t vs2, size_t vl) {
+vuint64m4_t test_vfwcvt_xu_f_v_u64m4_tu(vuint64m4_t vd, vfloat32m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m4_tu(vd, vs2, vl);
 }
-vuint64m8_t test_vfwcvt_xu_f_v_u64m8_tu(vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) {
+vuint64m8_t test_vfwcvt_xu_f_v_u64m8_tu(vuint64m8_t vd, vfloat32m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m8_tu(vd, vs2, vl);
 }
-vfloat64m1_t test_vfwcvt_f_x_v_f64m1_tu(vfloat64m1_t vd, vint32mf2_t vs2, size_t vl) {
+vfloat64m1_t test_vfwcvt_f_x_v_f64m1_tu(vfloat64m1_t vd, vint32mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f64m1_tu(vd, vs2, vl);
 }
-vfloat64m2_t test_vfwcvt_f_x_v_f64m2_tu(vfloat64m2_t vd, vint32m1_t vs2, size_t vl) {
+vfloat64m2_t test_vfwcvt_f_x_v_f64m2_tu(vfloat64m2_t vd, vint32m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f64m2_tu(vd, vs2, vl);
 }
-vfloat64m4_t test_vfwcvt_f_x_v_f64m4_tu(vfloat64m4_t vd, vint32m2_t vs2, size_t vl) {
+vfloat64m4_t test_vfwcvt_f_x_v_f64m4_tu(vfloat64m4_t vd, vint32m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f64m4_tu(vd, vs2, vl);
 }
-vfloat64m8_t test_vfwcvt_f_x_v_f64m8_tu(vfloat64m8_t vd, vint32m4_t vs2, size_t vl) {
+vfloat64m8_t test_vfwcvt_f_x_v_f64m8_tu(vfloat64m8_t vd, vint32m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_x_v_f64m8_tu(vd, vs2, vl);
 }
-vfloat64m1_t test_vfwcvt_f_xu_v_f64m1_tu(vfloat64m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vfloat64m1_t test_vfwcvt_f_xu_v_f64m1_tu(vfloat64m1_t vd, vuint32mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f64m1_tu(vd, vs2, vl);
 }
-vfloat64m2_t test_vfwcvt_f_xu_v_f64m2_tu(vfloat64m2_t vd, vuint32m1_t vs2, size_t vl) {
+vfloat64m2_t test_vfwcvt_f_xu_v_f64m2_tu(vfloat64m2_t vd, vuint32m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f64m2_tu(vd, vs2, vl);
 }
-vfloat64m4_t test_vfwcvt_f_xu_v_f64m4_tu(vfloat64m4_t vd, vuint32m2_t vs2, size_t vl) {
+vfloat64m4_t test_vfwcvt_f_xu_v_f64m4_tu(vfloat64m4_t vd, vuint32m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f64m4_tu(vd, vs2, vl);
 }
-vfloat64m8_t test_vfwcvt_f_xu_v_f64m8_tu(vfloat64m8_t vd, vuint32m4_t vs2, size_t vl) {
+vfloat64m8_t test_vfwcvt_f_xu_v_f64m8_tu(vfloat64m8_t vd, vuint32m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f64m8_tu(vd, vs2, vl);
 }
-vfloat64m1_t test_vfwcvt_f_f_v_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, size_t vl) {
+vfloat64m1_t test_vfwcvt_f_f_v_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_f_v_f64m1_tu(vd, vs2, vl);
 }
-vfloat64m2_t test_vfwcvt_f_f_v_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, size_t vl) {
+vfloat64m2_t test_vfwcvt_f_f_v_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_f_v_f64m2_tu(vd, vs2, vl);
 }
-vfloat64m4_t test_vfwcvt_f_f_v_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, size_t vl) {
+vfloat64m4_t test_vfwcvt_f_f_v_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_f_v_f64m4_tu(vd, vs2, vl);
 }
-vfloat64m8_t test_vfwcvt_f_f_v_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, size_t vl) {
+vfloat64m8_t test_vfwcvt_f_f_v_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2,
+    size_t vl) {
   return __riscv_vfwcvt_f_f_v_f64m8_tu(vd, vs2, vl);
 }
-vfloat16mf4_t test_vfwcvt_f_x_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vint8mf8_t vs2, size_t vl) {
+vfloat16mf4_t test_vfwcvt_f_x_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd,
+    vint8mf8_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16mf4_tum(vm, vd, vs2, vl);
 }
-vfloat16mf2_t test_vfwcvt_f_x_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vint8mf4_t vs2, size_t vl) {
+vfloat16mf2_t test_vfwcvt_f_x_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd,
+    vint8mf4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16mf2_tum(vm, vd, vs2, vl);
 }
-vfloat16m1_t test_vfwcvt_f_x_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vint8mf2_t vs2, size_t vl) {
+vfloat16m1_t test_vfwcvt_f_x_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd,
+    vint8mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m1_tum(vm, vd, vs2, vl);
 }
-vfloat16m2_t test_vfwcvt_f_x_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vint8m1_t vs2, size_t vl) {
+vfloat16m2_t test_vfwcvt_f_x_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd,
+    vint8m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m2_tum(vm, vd, vs2, vl);
 }
-vfloat16m4_t test_vfwcvt_f_x_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vint8m2_t vs2, size_t vl) {
+vfloat16m4_t test_vfwcvt_f_x_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd,
+    vint8m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m4_tum(vm, vd, vs2, vl);
 }
-vfloat16m8_t test_vfwcvt_f_x_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vint8m4_t vs2, size_t vl) {
+vfloat16m8_t test_vfwcvt_f_x_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd,
+    vint8m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m8_tum(vm, vd, vs2, vl);
 }
-vfloat16mf4_t test_vfwcvt_f_xu_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vuint8mf8_t vs2, size_t vl) {
+vfloat16mf4_t test_vfwcvt_f_xu_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd,
+    vuint8mf8_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16mf4_tum(vm, vd, vs2, vl);
 }
-vfloat16mf2_t test_vfwcvt_f_xu_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vuint8mf4_t vs2, size_t vl) {
+vfloat16mf2_t test_vfwcvt_f_xu_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd,
+    vuint8mf4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16mf2_tum(vm, vd, vs2, vl);
 }
-vfloat16m1_t test_vfwcvt_f_xu_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vuint8mf2_t vs2, size_t vl) {
+vfloat16m1_t test_vfwcvt_f_xu_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd,
+    vuint8mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16m1_tum(vm, vd, vs2, vl);
 }
-vfloat16m2_t test_vfwcvt_f_xu_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vuint8m1_t vs2, size_t vl) {
+vfloat16m2_t test_vfwcvt_f_xu_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd,
+    vuint8m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16m2_tum(vm, vd, vs2, vl);
 }
-vfloat16m4_t test_vfwcvt_f_xu_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vuint8m2_t vs2, size_t vl) {
+vfloat16m4_t test_vfwcvt_f_xu_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd,
+    vuint8m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16m4_tum(vm, vd, vs2, vl);
 }
-vfloat16m8_t test_vfwcvt_f_xu_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vuint8m4_t vs2, size_t vl) {
+vfloat16m8_t test_vfwcvt_f_xu_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd,
+    vuint8m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16m8_tum(vm, vd, vs2, vl);
 }
-vint32mf2_t test_vfwcvt_x_f_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) {
+vint32mf2_t test_vfwcvt_x_f_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd,
+    vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i32mf2_tum(vm, vd, vs2, vl);
 }
-vint32m1_t test_vfwcvt_x_f_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) {
+vint32m1_t test_vfwcvt_x_f_v_i32m1_tum(vbool32_t vm, vint32m1_t vd,
+    vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i32m1_tum(vm, vd, vs2, vl);
 }
-vint32m2_t test_vfwcvt_x_f_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, vfloat16m1_t vs2, size_t vl) {
+vint32m2_t test_vfwcvt_x_f_v_i32m2_tum(vbool16_t vm, vint32m2_t vd,
+    vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i32m2_tum(vm, vd, vs2, vl);
 }
-vint32m4_t test_vfwcvt_x_f_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, vfloat16m2_t vs2, size_t vl) {
+vint32m4_t test_vfwcvt_x_f_v_i32m4_tum(vbool8_t vm, vint32m4_t vd,
+    vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i32m4_tum(vm, vd, vs2, vl);
 }
-vint32m8_t test_vfwcvt_x_f_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, vfloat16m4_t vs2, size_t vl) {
+vint32m8_t test_vfwcvt_x_f_v_i32m8_tum(vbool4_t vm, vint32m8_t vd,
+    vfloat16m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i32m8_tum(vm, vd, vs2, vl);
 }
-vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) {
+vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+    vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u32mf2_tum(vm, vd, vs2, vl);
 }
-vuint32m1_t test_vfwcvt_xu_f_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) {
+vuint32m1_t test_vfwcvt_xu_f_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+    vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u32m1_tum(vm, vd, vs2, vl);
 }
-vuint32m2_t test_vfwcvt_xu_f_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) {
+vuint32m2_t test_vfwcvt_xu_f_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+    vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u32m2_tum(vm, vd, vs2, vl);
 }
-vuint32m4_t test_vfwcvt_xu_f_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) {
+vuint32m4_t test_vfwcvt_xu_f_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd,
+    vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u32m4_tum(vm, vd, vs2, vl);
 }
-vuint32m8_t test_vfwcvt_xu_f_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) {
+vuint32m8_t test_vfwcvt_xu_f_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd,
+    vfloat16m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u32m8_tum(vm, vd, vs2, vl);
 }
-vfloat32mf2_t test_vfwcvt_f_x_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vint16mf4_t vs2, size_t vl) {
+vfloat32mf2_t test_vfwcvt_f_x_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd,
+    vint16mf4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f32mf2_tum(vm, vd, vs2, vl);
 }
-vfloat32m1_t test_vfwcvt_f_x_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vint16mf2_t vs2, size_t vl) {
+vfloat32m1_t test_vfwcvt_f_x_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+    vint16mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f32m1_tum(vm, vd, vs2, vl);
 }
-vfloat32m2_t test_vfwcvt_f_x_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vint16m1_t vs2, size_t vl) {
+vfloat32m2_t test_vfwcvt_f_x_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd,
+    vint16m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f32m2_tum(vm, vd, vs2, vl);
 }
-vfloat32m4_t test_vfwcvt_f_x_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vint16m2_t vs2, size_t vl) {
+vfloat32m4_t test_vfwcvt_f_x_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd,
+    vint16m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f32m4_tum(vm, vd, vs2, vl);
 }
-vfloat32m8_t test_vfwcvt_f_x_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vint16m4_t vs2, size_t vl) {
+vfloat32m8_t test_vfwcvt_f_x_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd,
+    vint16m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f32m8_tum(vm, vd, vs2, vl);
 }
-vfloat32mf2_t test_vfwcvt_f_xu_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vuint16mf4_t vs2, size_t vl) {
+vfloat32mf2_t test_vfwcvt_f_xu_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd,
+    vuint16mf4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f32mf2_tum(vm, vd, vs2, vl);
 }
-vfloat32m1_t test_vfwcvt_f_xu_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vuint16mf2_t vs2, size_t vl) {
+vfloat32m1_t test_vfwcvt_f_xu_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+    vuint16mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f32m1_tum(vm, vd, vs2, vl);
 }
-vfloat32m2_t test_vfwcvt_f_xu_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vuint16m1_t vs2, size_t vl) {
+vfloat32m2_t test_vfwcvt_f_xu_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd,
+    vuint16m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f32m2_tum(vm, vd, vs2, vl);
 }
-vfloat32m4_t test_vfwcvt_f_xu_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vuint16m2_t vs2, size_t vl) {
+vfloat32m4_t test_vfwcvt_f_xu_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd,
+    vuint16m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f32m4_tum(vm, vd, vs2, vl);
 }
-vfloat32m8_t test_vfwcvt_f_xu_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vuint16m4_t vs2, size_t vl) {
+vfloat32m8_t test_vfwcvt_f_xu_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd,
+    vuint16m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f32m8_tum(vm, vd, vs2, vl);
 }
-vfloat32mf2_t test_vfwcvt_f_f_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, size_t vl) {
+vfloat32mf2_t test_vfwcvt_f_f_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_f_v_f32mf2_tum(vm, vd, vs2, vl);
 }
-vfloat32m1_t test_vfwcvt_f_f_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, size_t vl) {
+vfloat32m1_t test_vfwcvt_f_f_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+    vfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_f_v_f32m1_tum(vm, vd, vs2, vl);
 }
-vfloat32m2_t test_vfwcvt_f_f_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, size_t vl) {
+vfloat32m2_t test_vfwcvt_f_f_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd,
+    vfloat16m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_f_v_f32m2_tum(vm, vd, vs2, vl);
 }
-vfloat32m4_t test_vfwcvt_f_f_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, size_t vl) {
+vfloat32m4_t test_vfwcvt_f_f_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd,
+    vfloat16m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_f_v_f32m4_tum(vm, vd, vs2, vl);
 }
-vfloat32m8_t test_vfwcvt_f_f_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, size_t vl) {
+vfloat32m8_t test_vfwcvt_f_f_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd,
+    vfloat16m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_f_v_f32m8_tum(vm, vd, vs2, vl);
 }
-vint64m1_t test_vfwcvt_x_f_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) {
+vint64m1_t test_vfwcvt_x_f_v_i64m1_tum(vbool64_t vm, vint64m1_t vd,
+    vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m1_tum(vm, vd, vs2, vl);
 }
-vint64m2_t test_vfwcvt_x_f_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, vfloat32m1_t vs2, size_t vl) {
+vint64m2_t test_vfwcvt_x_f_v_i64m2_tum(vbool32_t vm, vint64m2_t vd,
+    vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m2_tum(vm, vd, vs2, vl);
 }
-vint64m4_t test_vfwcvt_x_f_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, vfloat32m2_t vs2, size_t vl) {
+vint64m4_t test_vfwcvt_x_f_v_i64m4_tum(vbool16_t vm, vint64m4_t vd,
+    vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m4_tum(vm, vd, vs2, vl);
 }
-vint64m8_t test_vfwcvt_x_f_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, vfloat32m4_t vs2, size_t vl) {
+vint64m8_t test_vfwcvt_x_f_v_i64m8_tum(vbool8_t vm, vint64m8_t vd,
+    vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m8_tum(vm, vd, vs2, vl);
 }
-vuint64m1_t test_vfwcvt_xu_f_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) {
+vuint64m1_t test_vfwcvt_xu_f_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+    vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m1_tum(vm, vd, vs2, vl);
 }
-vuint64m2_t test_vfwcvt_xu_f_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) {
+vuint64m2_t test_vfwcvt_xu_f_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+    vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m2_tum(vm, vd, vs2, vl);
 }
-vuint64m4_t test_vfwcvt_xu_f_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vfloat32m2_t vs2, size_t vl) {
+vuint64m4_t test_vfwcvt_xu_f_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+    vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m4_tum(vm, vd, vs2, vl);
 }
-vuint64m8_t test_vfwcvt_xu_f_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) {
+vuint64m8_t test_vfwcvt_xu_f_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+    vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m8_tum(vm, vd, vs2, vl);
 }
-vfloat64m1_t test_vfwcvt_f_x_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vint32mf2_t vs2, size_t vl) {
+vfloat64m1_t test_vfwcvt_f_x_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+    vint32mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f64m1_tum(vm, vd, vs2, vl);
 }
-vfloat64m2_t test_vfwcvt_f_x_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vint32m1_t vs2, size_t vl) {
+vfloat64m2_t test_vfwcvt_f_x_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd,
+    vint32m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f64m2_tum(vm, vd, vs2, vl);
 }
-vfloat64m4_t test_vfwcvt_f_x_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vint32m2_t vs2, size_t vl) {
+vfloat64m4_t test_vfwcvt_f_x_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd,
+    vint32m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f64m4_tum(vm, vd, vs2, vl);
 }
-vfloat64m8_t test_vfwcvt_f_x_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vint32m4_t vs2, size_t vl) {
+vfloat64m8_t test_vfwcvt_f_x_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd,
+    vint32m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f64m8_tum(vm, vd, vs2, vl);
 }
-vfloat64m1_t test_vfwcvt_f_xu_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vfloat64m1_t test_vfwcvt_f_xu_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+    vuint32mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f64m1_tum(vm, vd, vs2, vl);
 }
-vfloat64m2_t test_vfwcvt_f_xu_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vuint32m1_t vs2, size_t vl) {
+vfloat64m2_t test_vfwcvt_f_xu_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd,
+    vuint32m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f64m2_tum(vm, vd, vs2, vl);
 }
-vfloat64m4_t test_vfwcvt_f_xu_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vuint32m2_t vs2, size_t vl) {
+vfloat64m4_t test_vfwcvt_f_xu_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd,
+    vuint32m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f64m4_tum(vm, vd, vs2, vl);
 }
-vfloat64m8_t test_vfwcvt_f_xu_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vuint32m4_t vs2, size_t vl) {
+vfloat64m8_t test_vfwcvt_f_xu_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd,
+    vuint32m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f64m8_tum(vm, vd, vs2, vl);
 }
-vfloat64m1_t test_vfwcvt_f_f_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, size_t vl) {
+vfloat64m1_t test_vfwcvt_f_f_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+    vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_f_v_f64m1_tum(vm, vd, vs2, vl);
 }
-vfloat64m2_t test_vfwcvt_f_f_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, size_t vl) {
+vfloat64m2_t test_vfwcvt_f_f_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd,
+    vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_f_v_f64m2_tum(vm, vd, vs2, vl);
 }
-vfloat64m4_t test_vfwcvt_f_f_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, size_t vl) {
+vfloat64m4_t test_vfwcvt_f_f_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd,
+    vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_f_v_f64m4_tum(vm, vd, vs2, vl);
 }
-vfloat64m8_t test_vfwcvt_f_f_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, size_t vl) {
+vfloat64m8_t test_vfwcvt_f_f_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd,
+    vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_f_v_f64m8_tum(vm, vd, vs2, vl);
 }
-vfloat16mf4_t test_vfwcvt_f_x_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vint8mf8_t vs2, size_t vl) {
+vfloat16mf4_t test_vfwcvt_f_x_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd,
+    vint8mf8_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16mf4_tumu(vm, vd, vs2, vl);
 }
-vfloat16mf2_t test_vfwcvt_f_x_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vint8mf4_t vs2, size_t vl) {
+vfloat16mf2_t test_vfwcvt_f_x_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd,
+    vint8mf4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16mf2_tumu(vm, vd, vs2, vl);
 }
-vfloat16m1_t test_vfwcvt_f_x_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vint8mf2_t vs2, size_t vl) {
+vfloat16m1_t test_vfwcvt_f_x_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd,
+    vint8mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m1_tumu(vm, vd, vs2, vl);
 }
-vfloat16m2_t test_vfwcvt_f_x_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vint8m1_t vs2, size_t vl) {
+vfloat16m2_t test_vfwcvt_f_x_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd,
+    vint8m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m2_tumu(vm, vd, vs2, vl);
 }
-vfloat16m4_t test_vfwcvt_f_x_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vint8m2_t vs2, size_t vl) {
+vfloat16m4_t test_vfwcvt_f_x_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd,
+    vint8m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m4_tumu(vm, vd, vs2, vl);
 }
-vfloat16m8_t test_vfwcvt_f_x_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vint8m4_t vs2, size_t vl) {
+vfloat16m8_t test_vfwcvt_f_x_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd,
+    vint8m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_x_v_f16m8_tumu(vm, vd, vs2, vl);
 }
-vfloat16mf4_t test_vfwcvt_f_xu_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vuint8mf8_t vs2, size_t vl) {
+vfloat16mf4_t test_vfwcvt_f_xu_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd,
+    vuint8mf8_t vs2, size_t vl) {
   return __riscv_vfwcvt_f_xu_v_f16mf4_tumu(vm, vd, vs2, vl);
 }
-vfloat16mf2_t test_vfwcvt_f_xu_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd,
vuint8mf4_t vs2, size_t vl) { +vfloat16mf2_t test_vfwcvt_f_xu_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f16mf2_tumu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfwcvt_f_xu_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vuint8mf2_t vs2, size_t vl) { +vfloat16m1_t test_vfwcvt_f_xu_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f16m1_tumu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfwcvt_f_xu_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vuint8m1_t vs2, size_t vl) { +vfloat16m2_t test_vfwcvt_f_xu_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vuint8m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f16m2_tumu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfwcvt_f_xu_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vuint8m2_t vs2, size_t vl) { +vfloat16m4_t test_vfwcvt_f_xu_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vuint8m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f16m4_tumu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfwcvt_f_xu_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vuint8m4_t vs2, size_t vl) { +vfloat16m8_t test_vfwcvt_f_xu_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vuint8m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f16m8_tumu(vm, vd, vs2, vl); } -vint32mf2_t test_vfwcvt_x_f_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vint32mf2_t test_vfwcvt_x_f_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32mf2_tumu(vm, vd, vs2, vl); } -vint32m1_t test_vfwcvt_x_f_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vint32m1_t test_vfwcvt_x_f_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m1_tumu(vm, vd, vs2, vl); } -vint32m2_t test_vfwcvt_x_f_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vint32m2_t test_vfwcvt_x_f_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m2_tumu(vm, vd, vs2, vl); } -vint32m4_t test_vfwcvt_x_f_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vint32m4_t test_vfwcvt_x_f_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m4_tumu(vm, vd, vs2, vl); } -vint32m8_t test_vfwcvt_x_f_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vint32m8_t test_vfwcvt_x_f_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vfwcvt_xu_f_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint32m1_t test_vfwcvt_xu_f_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vfwcvt_xu_f_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint32m2_t test_vfwcvt_xu_f_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vfwcvt_xu_f_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vuint32m4_t 
test_vfwcvt_xu_f_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vfwcvt_xu_f_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vuint32m8_t test_vfwcvt_xu_f_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m8_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfwcvt_f_x_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vint16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwcvt_f_x_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfwcvt_f_x_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vint16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwcvt_f_x_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfwcvt_f_x_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vint16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwcvt_f_x_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfwcvt_f_x_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vint16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwcvt_f_x_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vint16m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfwcvt_f_x_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vint16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwcvt_f_x_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vint16m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f32m8_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfwcvt_f_xu_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vuint16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwcvt_f_xu_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfwcvt_f_xu_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vuint16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwcvt_f_xu_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfwcvt_f_xu_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vuint16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwcvt_f_xu_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfwcvt_f_xu_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vuint16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwcvt_f_xu_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfwcvt_f_xu_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vuint16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwcvt_f_xu_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f32m8_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfwcvt_f_f_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwcvt_f_f_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfwcvt_f_f_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t 
test_vfwcvt_f_f_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfwcvt_f_f_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwcvt_f_f_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfwcvt_f_f_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwcvt_f_f_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfwcvt_f_f_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwcvt_f_f_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f32m8_tumu(vm, vd, vs2, vl); } -vint64m1_t test_vfwcvt_x_f_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vint64m1_t test_vfwcvt_x_f_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m1_tumu(vm, vd, vs2, vl); } -vint64m2_t test_vfwcvt_x_f_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vint64m2_t test_vfwcvt_x_f_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m2_tumu(vm, vd, vs2, vl); } -vint64m4_t test_vfwcvt_x_f_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vint64m4_t test_vfwcvt_x_f_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m4_tumu(vm, vd, vs2, vl); } -vint64m8_t test_vfwcvt_x_f_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vint64m8_t test_vfwcvt_x_f_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vfwcvt_xu_f_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint64m1_t test_vfwcvt_xu_f_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vfwcvt_xu_f_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint64m2_t test_vfwcvt_xu_f_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vfwcvt_xu_f_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vuint64m4_t test_vfwcvt_xu_f_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vfwcvt_xu_f_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vuint64m8_t test_vfwcvt_xu_f_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m8_tumu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfwcvt_f_x_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vint32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwcvt_f_x_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f64m1_tumu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfwcvt_f_x_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vint32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwcvt_f_x_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vint32m1_t 
vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f64m2_tumu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfwcvt_f_x_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vint32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwcvt_f_x_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f64m4_tumu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfwcvt_f_x_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vint32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwcvt_f_x_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f64m8_tumu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfwcvt_f_xu_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwcvt_f_xu_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f64m1_tumu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfwcvt_f_xu_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vuint32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwcvt_f_xu_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f64m2_tumu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfwcvt_f_xu_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vuint32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwcvt_f_xu_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f64m4_tumu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfwcvt_f_xu_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vuint32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwcvt_f_xu_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f64m8_tumu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfwcvt_f_f_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwcvt_f_f_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f64m1_tumu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfwcvt_f_f_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwcvt_f_f_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f64m2_tumu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfwcvt_f_f_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwcvt_f_f_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f64m4_tumu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfwcvt_f_f_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwcvt_f_f_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f64m8_tumu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfwcvt_f_x_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vint8mf8_t vs2, size_t vl) { +vfloat16mf4_t test_vfwcvt_f_x_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vint8mf8_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfwcvt_f_x_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vint8mf4_t vs2, size_t vl) { +vfloat16mf2_t test_vfwcvt_f_x_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vint8mf4_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfwcvt_f_x_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vint8mf2_t vs2, size_t vl) { +vfloat16m1_t test_vfwcvt_f_x_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vint8mf2_t vs2, size_t vl) { return 
__riscv_vfwcvt_f_x_v_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfwcvt_f_x_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vint8m1_t vs2, size_t vl) { +vfloat16m2_t test_vfwcvt_f_x_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vint8m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfwcvt_f_x_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vint8m2_t vs2, size_t vl) { +vfloat16m4_t test_vfwcvt_f_x_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vint8m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f16m4_mu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfwcvt_f_x_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vint8m4_t vs2, size_t vl) { +vfloat16m8_t test_vfwcvt_f_x_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vint8m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f16m8_mu(vm, vd, vs2, vl); } -vfloat16mf4_t test_vfwcvt_f_xu_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vuint8mf8_t vs2, size_t vl) { +vfloat16mf4_t test_vfwcvt_f_xu_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f16mf4_mu(vm, vd, vs2, vl); } -vfloat16mf2_t test_vfwcvt_f_xu_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vuint8mf4_t vs2, size_t vl) { +vfloat16mf2_t test_vfwcvt_f_xu_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f16mf2_mu(vm, vd, vs2, vl); } -vfloat16m1_t test_vfwcvt_f_xu_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vuint8mf2_t vs2, size_t vl) { +vfloat16m1_t test_vfwcvt_f_xu_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f16m1_mu(vm, vd, vs2, vl); } -vfloat16m2_t test_vfwcvt_f_xu_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vuint8m1_t vs2, size_t vl) { +vfloat16m2_t test_vfwcvt_f_xu_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vuint8m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f16m2_mu(vm, vd, vs2, vl); } -vfloat16m4_t test_vfwcvt_f_xu_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vuint8m2_t vs2, size_t vl) { +vfloat16m4_t test_vfwcvt_f_xu_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vuint8m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f16m4_mu(vm, vd, vs2, vl); } -vfloat16m8_t test_vfwcvt_f_xu_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vuint8m4_t vs2, size_t vl) { +vfloat16m8_t test_vfwcvt_f_xu_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vuint8m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f16m8_mu(vm, vd, vs2, vl); } -vint32mf2_t test_vfwcvt_x_f_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vint32mf2_t test_vfwcvt_x_f_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32mf2_mu(vm, vd, vs2, vl); } -vint32m1_t test_vfwcvt_x_f_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vint32m1_t test_vfwcvt_x_f_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m1_mu(vm, vd, vs2, vl); } -vint32m2_t test_vfwcvt_x_f_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vint32m2_t test_vfwcvt_x_f_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m2_mu(vm, vd, vs2, vl); } -vint32m4_t test_vfwcvt_x_f_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vint32m4_t test_vfwcvt_x_f_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m4_mu(vm, vd, vs2, vl); } -vint32m8_t test_vfwcvt_x_f_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, vfloat16m4_t vs2, size_t vl) { 
+vint32m8_t test_vfwcvt_x_f_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vfwcvt_xu_f_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint32m1_t test_vfwcvt_xu_f_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vfwcvt_xu_f_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint32m2_t test_vfwcvt_xu_f_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vfwcvt_xu_f_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vuint32m4_t test_vfwcvt_xu_f_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vfwcvt_xu_f_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vuint32m8_t test_vfwcvt_xu_f_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m8_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfwcvt_f_x_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vint16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwcvt_f_x_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfwcvt_f_x_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vint16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwcvt_f_x_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfwcvt_f_x_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vint16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwcvt_f_x_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfwcvt_f_x_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vint16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwcvt_f_x_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vint16m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f32m4_mu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfwcvt_f_x_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vint16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwcvt_f_x_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vint16m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f32m8_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfwcvt_f_xu_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vuint16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwcvt_f_xu_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfwcvt_f_xu_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vuint16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwcvt_f_xu_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfwcvt_f_xu_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vuint16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwcvt_f_xu_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vuint16m1_t vs2, size_t vl) { return 
__riscv_vfwcvt_f_xu_v_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfwcvt_f_xu_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vuint16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwcvt_f_xu_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f32m4_mu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfwcvt_f_xu_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vuint16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwcvt_f_xu_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f32m8_mu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfwcvt_f_f_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwcvt_f_f_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfwcvt_f_f_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwcvt_f_f_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfwcvt_f_f_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwcvt_f_f_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfwcvt_f_f_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwcvt_f_f_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f32m4_mu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfwcvt_f_f_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwcvt_f_f_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f32m8_mu(vm, vd, vs2, vl); } -vint64m1_t test_vfwcvt_x_f_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vint64m1_t test_vfwcvt_x_f_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m1_mu(vm, vd, vs2, vl); } -vint64m2_t test_vfwcvt_x_f_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vint64m2_t test_vfwcvt_x_f_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m2_mu(vm, vd, vs2, vl); } -vint64m4_t test_vfwcvt_x_f_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vint64m4_t test_vfwcvt_x_f_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m4_mu(vm, vd, vs2, vl); } -vint64m8_t test_vfwcvt_x_f_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vint64m8_t test_vfwcvt_x_f_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vfwcvt_xu_f_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint64m1_t test_vfwcvt_xu_f_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vfwcvt_xu_f_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint64m2_t test_vfwcvt_xu_f_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vfwcvt_xu_f_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vfloat32m2_t 
vs2, size_t vl) { +vuint64m4_t test_vfwcvt_xu_f_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vfwcvt_xu_f_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vuint64m8_t test_vfwcvt_xu_f_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m8_mu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfwcvt_f_x_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vint32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwcvt_f_x_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f64m1_mu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfwcvt_f_x_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vint32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwcvt_f_x_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f64m2_mu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfwcvt_f_x_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vint32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwcvt_f_x_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f64m4_mu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfwcvt_f_x_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vint32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwcvt_f_x_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_x_v_f64m8_mu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfwcvt_f_xu_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vuint32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwcvt_f_xu_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f64m1_mu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfwcvt_f_xu_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vuint32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwcvt_f_xu_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f64m2_mu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfwcvt_f_xu_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vuint32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwcvt_f_xu_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f64m4_mu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfwcvt_f_xu_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vuint32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwcvt_f_xu_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vfwcvt_f_xu_v_f64m8_mu(vm, vd, vs2, vl); } -vfloat64m1_t test_vfwcvt_f_f_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwcvt_f_f_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f64m1_mu(vm, vd, vs2, vl); } -vfloat64m2_t test_vfwcvt_f_f_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwcvt_f_f_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f64m2_mu(vm, vd, vs2, vl); } -vfloat64m4_t test_vfwcvt_f_f_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwcvt_f_f_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_f_f_v_f64m4_mu(vm, vd, vs2, vl); } -vfloat64m8_t test_vfwcvt_f_f_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwcvt_f_f_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, size_t vl) { 
return __riscv_vfwcvt_f_f_v_f64m8_mu(vm, vd, vs2, vl); } -vint32mf2_t test_vfwcvt_x_f_v_i32mf2_rm_tu(vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vint32mf2_t test_vfwcvt_x_f_v_i32mf2_rm_tu(vint32mf2_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwcvt_x_f_v_i32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfwcvt_x_f_v_i32m1_rm_tu(vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vint32m1_t test_vfwcvt_x_f_v_i32m1_rm_tu(vint32m1_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwcvt_x_f_v_i32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfwcvt_x_f_v_i32m2_rm_tu(vint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vint32m2_t test_vfwcvt_x_f_v_i32m2_rm_tu(vint32m2_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwcvt_x_f_v_i32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfwcvt_x_f_v_i32m4_rm_tu(vint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vint32m4_t test_vfwcvt_x_f_v_i32m4_rm_tu(vint32m4_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwcvt_x_f_v_i32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m8_t test_vfwcvt_x_f_v_i32m8_rm_tu(vint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vint32m8_t test_vfwcvt_x_f_v_i32m8_rm_tu(vint32m8_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwcvt_x_f_v_i32m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_rm_tu(vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_rm_tu(vuint32mf2_t vd, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwcvt_xu_f_v_u32mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfwcvt_xu_f_v_u32m1_rm_tu(vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint32m1_t test_vfwcvt_xu_f_v_u32m1_rm_tu(vuint32m1_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfwcvt_xu_f_v_u32m2_rm_tu(vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint32m2_t test_vfwcvt_xu_f_v_u32m2_rm_tu(vuint32m2_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfwcvt_xu_f_v_u32m4_rm_tu(vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vuint32m4_t test_vfwcvt_xu_f_v_u32m4_rm_tu(vuint32m4_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m8_t test_vfwcvt_xu_f_v_u32m8_rm_tu(vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vuint32m8_t test_vfwcvt_xu_f_v_u32m8_rm_tu(vuint32m8_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m1_t test_vfwcvt_x_f_v_i64m1_rm_tu(vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vint64m1_t test_vfwcvt_x_f_v_i64m1_rm_tu(vint64m1_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwcvt_x_f_v_i64m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m2_t test_vfwcvt_x_f_v_i64m2_rm_tu(vint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vint64m2_t test_vfwcvt_x_f_v_i64m2_rm_tu(vint64m2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwcvt_x_f_v_i64m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m4_t test_vfwcvt_x_f_v_i64m4_rm_tu(vint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vint64m4_t test_vfwcvt_x_f_v_i64m4_rm_tu(vint64m4_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwcvt_x_f_v_i64m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m8_t test_vfwcvt_x_f_v_i64m8_rm_tu(vint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vint64m8_t test_vfwcvt_x_f_v_i64m8_rm_tu(vint64m8_t vd, vfloat32m4_t vs2, + size_t vl) 
{ return __riscv_vfwcvt_x_f_v_i64m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m1_t test_vfwcvt_xu_f_v_u64m1_rm_tu(vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint64m1_t test_vfwcvt_xu_f_v_u64m1_rm_tu(vuint64m1_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m2_t test_vfwcvt_xu_f_v_u64m2_rm_tu(vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint64m2_t test_vfwcvt_xu_f_v_u64m2_rm_tu(vuint64m2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m4_t test_vfwcvt_xu_f_v_u64m4_rm_tu(vuint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vuint64m4_t test_vfwcvt_xu_f_v_u64m4_rm_tu(vuint64m4_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m8_t test_vfwcvt_xu_f_v_u64m8_rm_tu(vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vuint64m8_t test_vfwcvt_xu_f_v_u64m8_rm_tu(vuint64m8_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m8_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vint32mf2_t test_vfwcvt_x_f_v_i32mf2_rm_tum(vbool64_t vm, vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vint32mf2_t test_vfwcvt_x_f_v_i32mf2_rm_tum(vbool64_t vm, vint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfwcvt_x_f_v_i32m1_rm_tum(vbool32_t vm, vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vint32m1_t test_vfwcvt_x_f_v_i32m1_rm_tum(vbool32_t vm, vint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfwcvt_x_f_v_i32m2_rm_tum(vbool16_t vm, vint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vint32m2_t test_vfwcvt_x_f_v_i32m2_rm_tum(vbool16_t vm, vint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfwcvt_x_f_v_i32m4_rm_tum(vbool8_t vm, vint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vint32m4_t test_vfwcvt_x_f_v_i32m4_rm_tum(vbool8_t vm, vint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m8_t test_vfwcvt_x_f_v_i32m8_rm_tum(vbool4_t vm, vint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vint32m8_t test_vfwcvt_x_f_v_i32m8_rm_tum(vbool4_t vm, vint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_rm_tum(vbool64_t vm, vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_rm_tum(vbool64_t vm, vuint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfwcvt_xu_f_v_u32m1_rm_tum(vbool32_t vm, vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint32m1_t test_vfwcvt_xu_f_v_u32m1_rm_tum(vbool32_t vm, vuint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfwcvt_xu_f_v_u32m2_rm_tum(vbool16_t vm, vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint32m2_t test_vfwcvt_xu_f_v_u32m2_rm_tum(vbool16_t vm, vuint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfwcvt_xu_f_v_u32m4_rm_tum(vbool8_t vm, vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vuint32m4_t 
test_vfwcvt_xu_f_v_u32m4_rm_tum(vbool8_t vm, vuint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m8_t test_vfwcvt_xu_f_v_u32m8_rm_tum(vbool4_t vm, vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vuint32m8_t test_vfwcvt_xu_f_v_u32m8_rm_tum(vbool4_t vm, vuint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m1_t test_vfwcvt_x_f_v_i64m1_rm_tum(vbool64_t vm, vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vint64m1_t test_vfwcvt_x_f_v_i64m1_rm_tum(vbool64_t vm, vint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m2_t test_vfwcvt_x_f_v_i64m2_rm_tum(vbool32_t vm, vint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vint64m2_t test_vfwcvt_x_f_v_i64m2_rm_tum(vbool32_t vm, vint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m4_t test_vfwcvt_x_f_v_i64m4_rm_tum(vbool16_t vm, vint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vint64m4_t test_vfwcvt_x_f_v_i64m4_rm_tum(vbool16_t vm, vint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m8_t test_vfwcvt_x_f_v_i64m8_rm_tum(vbool8_t vm, vint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vint64m8_t test_vfwcvt_x_f_v_i64m8_rm_tum(vbool8_t vm, vint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m1_t test_vfwcvt_xu_f_v_u64m1_rm_tum(vbool64_t vm, vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint64m1_t test_vfwcvt_xu_f_v_u64m1_rm_tum(vbool64_t vm, vuint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m2_t test_vfwcvt_xu_f_v_u64m2_rm_tum(vbool32_t vm, vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint64m2_t test_vfwcvt_xu_f_v_u64m2_rm_tum(vbool32_t vm, vuint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m4_t test_vfwcvt_xu_f_v_u64m4_rm_tum(vbool16_t vm, vuint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vuint64m4_t test_vfwcvt_xu_f_v_u64m4_rm_tum(vbool16_t vm, vuint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m8_t test_vfwcvt_xu_f_v_u64m8_rm_tum(vbool8_t vm, vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vuint64m8_t test_vfwcvt_xu_f_v_u64m8_rm_tum(vbool8_t vm, vuint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m8_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32mf2_t test_vfwcvt_x_f_v_i32mf2_rm_tumu(vbool64_t vm, vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vint32mf2_t test_vfwcvt_x_f_v_i32mf2_rm_tumu(vbool64_t vm, vint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfwcvt_x_f_v_i32m1_rm_tumu(vbool32_t vm, vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vint32m1_t test_vfwcvt_x_f_v_i32m1_rm_tumu(vbool32_t vm, vint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfwcvt_x_f_v_i32m2_rm_tumu(vbool16_t vm, vint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vint32m2_t test_vfwcvt_x_f_v_i32m2_rm_tumu(vbool16_t vm, vint32m2_t vd, + 
vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfwcvt_x_f_v_i32m4_rm_tumu(vbool8_t vm, vint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vint32m4_t test_vfwcvt_x_f_v_i32m4_rm_tumu(vbool8_t vm, vint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m8_t test_vfwcvt_x_f_v_i32m8_rm_tumu(vbool4_t vm, vint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vint32m8_t test_vfwcvt_x_f_v_i32m8_rm_tumu(vbool4_t vm, vint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_rm_tumu(vbool64_t vm, vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_rm_tumu(vbool64_t vm, vuint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfwcvt_xu_f_v_u32m1_rm_tumu(vbool32_t vm, vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint32m1_t test_vfwcvt_xu_f_v_u32m1_rm_tumu(vbool32_t vm, vuint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfwcvt_xu_f_v_u32m2_rm_tumu(vbool16_t vm, vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint32m2_t test_vfwcvt_xu_f_v_u32m2_rm_tumu(vbool16_t vm, vuint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfwcvt_xu_f_v_u32m4_rm_tumu(vbool8_t vm, vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vuint32m4_t test_vfwcvt_xu_f_v_u32m4_rm_tumu(vbool8_t vm, vuint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m8_t test_vfwcvt_xu_f_v_u32m8_rm_tumu(vbool4_t vm, vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vuint32m8_t test_vfwcvt_xu_f_v_u32m8_rm_tumu(vbool4_t vm, vuint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m1_t test_vfwcvt_x_f_v_i64m1_rm_tumu(vbool64_t vm, vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vint64m1_t test_vfwcvt_x_f_v_i64m1_rm_tumu(vbool64_t vm, vint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m2_t test_vfwcvt_x_f_v_i64m2_rm_tumu(vbool32_t vm, vint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vint64m2_t test_vfwcvt_x_f_v_i64m2_rm_tumu(vbool32_t vm, vint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m4_t test_vfwcvt_x_f_v_i64m4_rm_tumu(vbool16_t vm, vint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vint64m4_t test_vfwcvt_x_f_v_i64m4_rm_tumu(vbool16_t vm, vint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint64m8_t test_vfwcvt_x_f_v_i64m8_rm_tumu(vbool8_t vm, vint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vint64m8_t test_vfwcvt_x_f_v_i64m8_rm_tumu(vbool8_t vm, vint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i64m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m1_t test_vfwcvt_xu_f_v_u64m1_rm_tumu(vbool64_t vm, vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint64m1_t test_vfwcvt_xu_f_v_u64m1_rm_tumu(vbool64_t vm, vuint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return 
__riscv_vfwcvt_xu_f_v_u64m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m2_t test_vfwcvt_xu_f_v_u64m2_rm_tumu(vbool32_t vm, vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint64m2_t test_vfwcvt_xu_f_v_u64m2_rm_tumu(vbool32_t vm, vuint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m4_t test_vfwcvt_xu_f_v_u64m4_rm_tumu(vbool16_t vm, vuint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vuint64m4_t test_vfwcvt_xu_f_v_u64m4_rm_tumu(vbool16_t vm, vuint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint64m8_t test_vfwcvt_xu_f_v_u64m8_rm_tumu(vbool8_t vm, vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vuint64m8_t test_vfwcvt_xu_f_v_u64m8_rm_tumu(vbool8_t vm, vuint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u64m8_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32mf2_t test_vfwcvt_x_f_v_i32mf2_rm_mu(vbool64_t vm, vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vint32mf2_t test_vfwcvt_x_f_v_i32mf2_rm_mu(vbool64_t vm, vint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m1_t test_vfwcvt_x_f_v_i32m1_rm_mu(vbool32_t vm, vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vint32m1_t test_vfwcvt_x_f_v_i32m1_rm_mu(vbool32_t vm, vint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m2_t test_vfwcvt_x_f_v_i32m2_rm_mu(vbool16_t vm, vint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vint32m2_t test_vfwcvt_x_f_v_i32m2_rm_mu(vbool16_t vm, vint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m4_t test_vfwcvt_x_f_v_i32m4_rm_mu(vbool8_t vm, vint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vint32m4_t test_vfwcvt_x_f_v_i32m4_rm_mu(vbool8_t vm, vint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vint32m8_t test_vfwcvt_x_f_v_i32m8_rm_mu(vbool4_t vm, vint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vint32m8_t test_vfwcvt_x_f_v_i32m8_rm_mu(vbool4_t vm, vint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_x_f_v_i32m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_rm_mu(vbool64_t vm, vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vfwcvt_xu_f_v_u32mf2_rm_mu(vbool64_t vm, vuint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m1_t test_vfwcvt_xu_f_v_u32m1_rm_mu(vbool32_t vm, vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint32m1_t test_vfwcvt_xu_f_v_u32m1_rm_mu(vbool32_t vm, vuint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m2_t test_vfwcvt_xu_f_v_u32m2_rm_mu(vbool16_t vm, vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint32m2_t test_vfwcvt_xu_f_v_u32m2_rm_mu(vbool16_t vm, vuint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } -vuint32m4_t test_vfwcvt_xu_f_v_u32m4_rm_mu(vbool8_t vm, vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vuint32m4_t test_vfwcvt_xu_f_v_u32m4_rm_mu(vbool8_t vm, vuint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_xu_f_v_u32m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); } 
-vuint32m8_t test_vfwcvt_xu_f_v_u32m8_rm_mu(vbool4_t vm, vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) {
+vuint32m8_t test_vfwcvt_xu_f_v_u32m8_rm_mu(vbool4_t vm, vuint32m8_t vd,
+                                           vfloat16m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u32m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vint64m1_t test_vfwcvt_x_f_v_i64m1_rm_mu(vbool64_t vm, vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) {
+vint64m1_t test_vfwcvt_x_f_v_i64m1_rm_mu(vbool64_t vm, vint64m1_t vd,
+                                         vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vint64m2_t test_vfwcvt_x_f_v_i64m2_rm_mu(vbool32_t vm, vint64m2_t vd, vfloat32m1_t vs2, size_t vl) {
+vint64m2_t test_vfwcvt_x_f_v_i64m2_rm_mu(vbool32_t vm, vint64m2_t vd,
+                                         vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vint64m4_t test_vfwcvt_x_f_v_i64m4_rm_mu(vbool16_t vm, vint64m4_t vd, vfloat32m2_t vs2, size_t vl) {
+vint64m4_t test_vfwcvt_x_f_v_i64m4_rm_mu(vbool16_t vm, vint64m4_t vd,
+                                         vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vint64m8_t test_vfwcvt_x_f_v_i64m8_rm_mu(vbool8_t vm, vint64m8_t vd, vfloat32m4_t vs2, size_t vl) {
+vint64m8_t test_vfwcvt_x_f_v_i64m8_rm_mu(vbool8_t vm, vint64m8_t vd,
+                                         vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_x_f_v_i64m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vuint64m1_t test_vfwcvt_xu_f_v_u64m1_rm_mu(vbool64_t vm, vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) {
+vuint64m1_t test_vfwcvt_xu_f_v_u64m1_rm_mu(vbool64_t vm, vuint64m1_t vd,
+                                           vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vuint64m2_t test_vfwcvt_xu_f_v_u64m2_rm_mu(vbool32_t vm, vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) {
+vuint64m2_t test_vfwcvt_xu_f_v_u64m2_rm_mu(vbool32_t vm, vuint64m2_t vd,
+                                           vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vuint64m4_t test_vfwcvt_xu_f_v_u64m4_rm_mu(vbool16_t vm, vuint64m4_t vd, vfloat32m2_t vs2, size_t vl) {
+vuint64m4_t test_vfwcvt_xu_f_v_u64m4_rm_mu(vbool16_t vm, vuint64m4_t vd,
+                                           vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vuint64m8_t test_vfwcvt_xu_f_v_u64m8_rm_mu(vbool8_t vm, vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) {
+vuint64m8_t test_vfwcvt_xu_f_v_u64m8_rm_mu(vbool8_t vm, vuint64m8_t vd,
+                                           vfloat32m4_t vs2, size_t vl) {
   return __riscv_vfwcvt_xu_f_v_u64m8_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwcvt_rtz.c b/auto-generated/policy_funcs/llvm-api-tests/vfwcvt_rtz.c
index b13afd7c7..08d1ac345 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwcvt_rtz.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwcvt_rtz.c
@@ -6,290 +6,362 @@
 #include <riscv_vector.h>
 
-vint32mf2_t test_vfwcvt_rtz_x_f_v_i32mf2_tu(vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) {
+vint32mf2_t test_vfwcvt_rtz_x_f_v_i32mf2_tu(vint32mf2_t vd, vfloat16mf4_t vs2,
+                                            size_t vl) {
   return __riscv_vfwcvt_rtz_x_f_v_i32mf2_tu(vd, vs2, vl);
 }
 
-vint32m1_t test_vfwcvt_rtz_x_f_v_i32m1_tu(vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) {
+vint32m1_t test_vfwcvt_rtz_x_f_v_i32m1_tu(vint32m1_t vd, vfloat16mf2_t vs2,
+                                          size_t vl) {
   return __riscv_vfwcvt_rtz_x_f_v_i32m1_tu(vd, vs2, vl);
 }
 
-vint32m2_t test_vfwcvt_rtz_x_f_v_i32m2_tu(vint32m2_t vd, vfloat16m1_t vs2, size_t vl) {
+vint32m2_t test_vfwcvt_rtz_x_f_v_i32m2_tu(vint32m2_t vd, vfloat16m1_t vs2,
+                                          size_t
vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m2_tu(vd, vs2, vl); } -vint32m4_t test_vfwcvt_rtz_x_f_v_i32m4_tu(vint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vint32m4_t test_vfwcvt_rtz_x_f_v_i32m4_tu(vint32m4_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m4_tu(vd, vs2, vl); } -vint32m8_t test_vfwcvt_rtz_x_f_v_i32m8_tu(vint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vint32m8_t test_vfwcvt_rtz_x_f_v_i32m8_tu(vint32m8_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vfwcvt_rtz_xu_f_v_u32mf2_tu(vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vfwcvt_rtz_xu_f_v_u32mf2_tu(vuint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vfwcvt_rtz_xu_f_v_u32m1_tu(vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint32m1_t test_vfwcvt_rtz_xu_f_v_u32m1_tu(vuint32m1_t vd, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vfwcvt_rtz_xu_f_v_u32m2_tu(vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint32m2_t test_vfwcvt_rtz_xu_f_v_u32m2_tu(vuint32m2_t vd, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vfwcvt_rtz_xu_f_v_u32m4_tu(vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vuint32m4_t test_vfwcvt_rtz_xu_f_v_u32m4_tu(vuint32m4_t vd, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vfwcvt_rtz_xu_f_v_u32m8_tu(vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vuint32m8_t test_vfwcvt_rtz_xu_f_v_u32m8_tu(vuint32m8_t vd, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m8_tu(vd, vs2, vl); } -vint64m1_t test_vfwcvt_rtz_x_f_v_i64m1_tu(vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vint64m1_t test_vfwcvt_rtz_x_f_v_i64m1_tu(vint64m1_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m1_tu(vd, vs2, vl); } -vint64m2_t test_vfwcvt_rtz_x_f_v_i64m2_tu(vint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vint64m2_t test_vfwcvt_rtz_x_f_v_i64m2_tu(vint64m2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m2_tu(vd, vs2, vl); } -vint64m4_t test_vfwcvt_rtz_x_f_v_i64m4_tu(vint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vint64m4_t test_vfwcvt_rtz_x_f_v_i64m4_tu(vint64m4_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m4_tu(vd, vs2, vl); } -vint64m8_t test_vfwcvt_rtz_x_f_v_i64m8_tu(vint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vint64m8_t test_vfwcvt_rtz_x_f_v_i64m8_tu(vint64m8_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m8_tu(vd, vs2, vl); } -vuint64m1_t test_vfwcvt_rtz_xu_f_v_u64m1_tu(vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint64m1_t test_vfwcvt_rtz_xu_f_v_u64m1_tu(vuint64m1_t vd, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vfwcvt_rtz_xu_f_v_u64m2_tu(vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint64m2_t test_vfwcvt_rtz_xu_f_v_u64m2_tu(vuint64m2_t vd, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vfwcvt_rtz_xu_f_v_u64m4_tu(vuint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vuint64m4_t test_vfwcvt_rtz_xu_f_v_u64m4_tu(vuint64m4_t vd, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m4_tu(vd, vs2, vl); } -vuint64m8_t test_vfwcvt_rtz_xu_f_v_u64m8_tu(vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) { 
+vuint64m8_t test_vfwcvt_rtz_xu_f_v_u64m8_tu(vuint64m8_t vd, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m8_tu(vd, vs2, vl); } -vint32mf2_t test_vfwcvt_rtz_x_f_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vint32mf2_t test_vfwcvt_rtz_x_f_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32mf2_tum(vm, vd, vs2, vl); } -vint32m1_t test_vfwcvt_rtz_x_f_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vint32m1_t test_vfwcvt_rtz_x_f_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m1_tum(vm, vd, vs2, vl); } -vint32m2_t test_vfwcvt_rtz_x_f_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vint32m2_t test_vfwcvt_rtz_x_f_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m2_tum(vm, vd, vs2, vl); } -vint32m4_t test_vfwcvt_rtz_x_f_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vint32m4_t test_vfwcvt_rtz_x_f_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m4_tum(vm, vd, vs2, vl); } -vint32m8_t test_vfwcvt_rtz_x_f_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vint32m8_t test_vfwcvt_rtz_x_f_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vfwcvt_rtz_xu_f_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vfwcvt_rtz_xu_f_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vfwcvt_rtz_xu_f_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint32m1_t test_vfwcvt_rtz_xu_f_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vfwcvt_rtz_xu_f_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint32m2_t test_vfwcvt_rtz_xu_f_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vfwcvt_rtz_xu_f_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vuint32m4_t test_vfwcvt_rtz_xu_f_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vfwcvt_rtz_xu_f_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vuint32m8_t test_vfwcvt_rtz_xu_f_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m8_tum(vm, vd, vs2, vl); } -vint64m1_t test_vfwcvt_rtz_x_f_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vint64m1_t test_vfwcvt_rtz_x_f_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m1_tum(vm, vd, vs2, vl); } -vint64m2_t test_vfwcvt_rtz_x_f_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vint64m2_t test_vfwcvt_rtz_x_f_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m2_tum(vm, vd, vs2, vl); } -vint64m4_t test_vfwcvt_rtz_x_f_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, vfloat32m2_t 
vs2, size_t vl) { +vint64m4_t test_vfwcvt_rtz_x_f_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m4_tum(vm, vd, vs2, vl); } -vint64m8_t test_vfwcvt_rtz_x_f_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vint64m8_t test_vfwcvt_rtz_x_f_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vfwcvt_rtz_xu_f_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint64m1_t test_vfwcvt_rtz_xu_f_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vfwcvt_rtz_xu_f_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint64m2_t test_vfwcvt_rtz_xu_f_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vfwcvt_rtz_xu_f_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vuint64m4_t test_vfwcvt_rtz_xu_f_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vfwcvt_rtz_xu_f_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vuint64m8_t test_vfwcvt_rtz_xu_f_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m8_tum(vm, vd, vs2, vl); } -vint32mf2_t test_vfwcvt_rtz_x_f_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vint32mf2_t test_vfwcvt_rtz_x_f_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32mf2_tumu(vm, vd, vs2, vl); } -vint32m1_t test_vfwcvt_rtz_x_f_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vint32m1_t test_vfwcvt_rtz_x_f_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m1_tumu(vm, vd, vs2, vl); } -vint32m2_t test_vfwcvt_rtz_x_f_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vint32m2_t test_vfwcvt_rtz_x_f_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m2_tumu(vm, vd, vs2, vl); } -vint32m4_t test_vfwcvt_rtz_x_f_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vint32m4_t test_vfwcvt_rtz_x_f_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m4_tumu(vm, vd, vs2, vl); } -vint32m8_t test_vfwcvt_rtz_x_f_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vint32m8_t test_vfwcvt_rtz_x_f_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfwcvt_rtz_xu_f_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vfwcvt_rtz_xu_f_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vfwcvt_rtz_xu_f_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint32m1_t test_vfwcvt_rtz_xu_f_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t 
test_vfwcvt_rtz_xu_f_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint32m2_t test_vfwcvt_rtz_xu_f_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vfwcvt_rtz_xu_f_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vuint32m4_t test_vfwcvt_rtz_xu_f_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vfwcvt_rtz_xu_f_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vuint32m8_t test_vfwcvt_rtz_xu_f_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m8_tumu(vm, vd, vs2, vl); } -vint64m1_t test_vfwcvt_rtz_x_f_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vint64m1_t test_vfwcvt_rtz_x_f_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m1_tumu(vm, vd, vs2, vl); } -vint64m2_t test_vfwcvt_rtz_x_f_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vint64m2_t test_vfwcvt_rtz_x_f_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m2_tumu(vm, vd, vs2, vl); } -vint64m4_t test_vfwcvt_rtz_x_f_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vint64m4_t test_vfwcvt_rtz_x_f_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m4_tumu(vm, vd, vs2, vl); } -vint64m8_t test_vfwcvt_rtz_x_f_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vint64m8_t test_vfwcvt_rtz_x_f_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vfwcvt_rtz_xu_f_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint64m1_t test_vfwcvt_rtz_xu_f_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vfwcvt_rtz_xu_f_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint64m2_t test_vfwcvt_rtz_xu_f_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vfwcvt_rtz_xu_f_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vuint64m4_t test_vfwcvt_rtz_xu_f_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vfwcvt_rtz_xu_f_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vuint64m8_t test_vfwcvt_rtz_xu_f_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m8_tumu(vm, vd, vs2, vl); } -vint32mf2_t test_vfwcvt_rtz_x_f_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vint32mf2_t test_vfwcvt_rtz_x_f_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32mf2_mu(vm, vd, vs2, vl); } -vint32m1_t test_vfwcvt_rtz_x_f_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vint32m1_t test_vfwcvt_rtz_x_f_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return 
__riscv_vfwcvt_rtz_x_f_v_i32m1_mu(vm, vd, vs2, vl); } -vint32m2_t test_vfwcvt_rtz_x_f_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vint32m2_t test_vfwcvt_rtz_x_f_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m2_mu(vm, vd, vs2, vl); } -vint32m4_t test_vfwcvt_rtz_x_f_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vint32m4_t test_vfwcvt_rtz_x_f_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m4_mu(vm, vd, vs2, vl); } -vint32m8_t test_vfwcvt_rtz_x_f_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vint32m8_t test_vfwcvt_rtz_x_f_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i32m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vfwcvt_rtz_xu_f_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vfloat16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vfwcvt_rtz_xu_f_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vfwcvt_rtz_xu_f_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vfloat16mf2_t vs2, size_t vl) { +vuint32m1_t test_vfwcvt_rtz_xu_f_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vfwcvt_rtz_xu_f_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vfloat16m1_t vs2, size_t vl) { +vuint32m2_t test_vfwcvt_rtz_xu_f_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vfwcvt_rtz_xu_f_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vfloat16m2_t vs2, size_t vl) { +vuint32m4_t test_vfwcvt_rtz_xu_f_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vfwcvt_rtz_xu_f_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vfloat16m4_t vs2, size_t vl) { +vuint32m8_t test_vfwcvt_rtz_xu_f_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u32m8_mu(vm, vd, vs2, vl); } -vint64m1_t test_vfwcvt_rtz_x_f_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vint64m1_t test_vfwcvt_rtz_x_f_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m1_mu(vm, vd, vs2, vl); } -vint64m2_t test_vfwcvt_rtz_x_f_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vint64m2_t test_vfwcvt_rtz_x_f_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m2_mu(vm, vd, vs2, vl); } -vint64m4_t test_vfwcvt_rtz_x_f_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vint64m4_t test_vfwcvt_rtz_x_f_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m4_mu(vm, vd, vs2, vl); } -vint64m8_t test_vfwcvt_rtz_x_f_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vint64m8_t test_vfwcvt_rtz_x_f_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_x_f_v_i64m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vfwcvt_rtz_xu_f_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vfloat32mf2_t vs2, size_t vl) { +vuint64m1_t test_vfwcvt_rtz_xu_f_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vfloat32mf2_t vs2, size_t vl) { return 
__riscv_vfwcvt_rtz_xu_f_v_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vfwcvt_rtz_xu_f_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vfloat32m1_t vs2, size_t vl) { +vuint64m2_t test_vfwcvt_rtz_xu_f_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vfwcvt_rtz_xu_f_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vfloat32m2_t vs2, size_t vl) { +vuint64m4_t test_vfwcvt_rtz_xu_f_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vfwcvt_rtz_xu_f_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vfloat32m4_t vs2, size_t vl) { +vuint64m8_t test_vfwcvt_rtz_xu_f_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwcvt_rtz_xu_f_v_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vfwmacc.c index 4be411c6e..11a7a3969 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfwmacc.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfwmacc.c @@ -6,578 +6,841 @@ #include <riscv_vector.h> -vfloat32mf2_t test_vfwmacc_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmacc_vf_f32mf2_tu(vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vf_f32mf2_tu(vfloat32mf2_t vd, _Float16 vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmacc_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmacc_vf_f32m1_tu(vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vf_f32m1_tu(vfloat32m1_t vd, _Float16 vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmacc_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmacc_vf_f32m2_tu(vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vf_f32m2_tu(vfloat32m2_t vd, _Float16 vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmacc_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmacc_vf_f32m4_tu(vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vf_f32m4_tu(vfloat32m4_t vd, _Float16 vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmacc_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return
__riscv_vfwmacc_vv_f32m8_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmacc_vf_f32m8_tu(vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vf_f32m8_tu(vfloat32m8_t vd, _Float16 vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f32m8_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmacc_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmacc_vf_f64m1_tu(vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vf_f64m1_tu(vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmacc_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmacc_vf_f64m2_tu(vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vf_f64m2_tu(vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmacc_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmacc_vf_f64m4_tu(vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vf_f64m4_tu(vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmacc_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f64m8_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmacc_vf_f64m8_tu(vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vf_f64m8_tu(vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m8_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmacc_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmacc_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmacc_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmacc_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t 
vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmacc_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmacc_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmacc_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmacc_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmacc_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmacc_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmacc_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmacc_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmacc_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmacc_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmacc_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmacc_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vf_f64m4_tum(vbool16_t vm, 
vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmacc_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmacc_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmacc_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmacc_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmacc_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmacc_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmacc_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmacc_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmacc_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmacc_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmacc_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmacc_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, _Float16 
vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmacc_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmacc_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmacc_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmacc_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmacc_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmacc_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmacc_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmacc_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmacc_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmacc_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmacc_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } 
-vfloat32m1_t test_vfwmacc_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmacc_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmacc_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmacc_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmacc_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmacc_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmacc_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmacc_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmacc_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmacc_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmacc_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmacc_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m4_mu(vm, vd, vs1, vs2, 
vl); } -vfloat64m4_t test_vfwmacc_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmacc_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmacc_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmacc_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmacc_vf_f32mf2_rm_tu(vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vf_f32mf2_rm_tu(vfloat32mf2_t vd, _Float16 vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmacc_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmacc_vf_f32m1_rm_tu(vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vf_f32m1_rm_tu(vfloat32m1_t vd, _Float16 vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmacc_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmacc_vf_f32m2_rm_tu(vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vf_f32m2_rm_tu(vfloat32m2_t vd, _Float16 vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmacc_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmacc_vf_f32m4_rm_tu(vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vf_f32m4_rm_tu(vfloat32m4_t vd, _Float16 vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmacc_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t 
test_vfwmacc_vf_f32m8_rm_tu(vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vf_f32m8_rm_tu(vfloat32m8_t vd, _Float16 vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmacc_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmacc_vf_f64m1_rm_tu(vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vf_f64m1_rm_tu(vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmacc_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmacc_vf_f64m2_rm_tu(vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vf_f64m2_rm_tu(vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmacc_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmacc_vf_f64m4_rm_tu(vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vf_f64m4_rm_tu(vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmacc_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmacc_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmacc_vf_f64m8_rm_tu(vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vf_f64m8_rm_tu(vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmacc_vf_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmacc_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmacc_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmacc_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmacc_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwmacc_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmacc_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmacc_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmacc_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmacc_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { 
+vfloat32m1_t test_vfwmacc_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmacc_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmacc_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmacc_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmacc_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmacc_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmacc_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmacc_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmacc_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmacc_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmacc_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmacc_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, float 
vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmacc_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmacc_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmacc_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmacc_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmacc_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmacc_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmacc_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmacc_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwmacc_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmacc_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmacc_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmacc_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmacc_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwmacc_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwmacc_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmacc_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmacc_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwmacc_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwmacc_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmacc_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwmacc_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfwmacc_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwmacc_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { 
+ return __riscv_vfwmacc_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwmacc_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfwmacc_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwmacc_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmacc_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwmacc_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwmacc_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwmacc_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmacc_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwmacc_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwmacc_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwmacc_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmacc_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwmacc_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwmacc_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwmacc_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmacc_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwmacc_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwmacc_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwmacc_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmacc_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwmacc_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwmacc_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwmacc_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfwmacc_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwmacc_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwmacc_vf_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwmacc_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfwmacc_vf_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfwmacc_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfwmacc_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwmacc_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfwmacc_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfwmacc_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { - return 
__riscv_vfwmacc_vf_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwmacc_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfwmacc_vf_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwmacc_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwmacc_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwmacc_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwmacc_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwmacc_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwmacc_vf_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwmacc_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwmacc_vf_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwmacc_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwmacc_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfwmacc_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwmacc_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwmacc_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwmacc_vf_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfwmacc_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwmacc_vf_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwmacc_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmacc_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmacc_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmacc_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmacc_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmacc_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmacc_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m2_rm_mu(vm, vd, vs1, vs2, 
__RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmacc_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmacc_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmacc_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmacc_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmacc_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmacc_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmacc_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmacc_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmacc_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmacc_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmacc_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmacc_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmacc_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmacc_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmacc_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmacc_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmacc_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f64m4_rm_mu(vm, vd, vs1, vs2, 
__RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmacc_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmacc_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmacc_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmacc_vf_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwmsac.c b/auto-generated/policy_funcs/llvm-api-tests/vfwmsac.c index 40e33de36..3907f0d30 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfwmsac.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfwmsac.c @@ -6,578 +6,841 @@ #include <riscv_vector.h> -vfloat32mf2_t test_vfwmsac_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmsac_vf_f32mf2_tu(vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vf_f32mf2_tu(vfloat32mf2_t vd, _Float16 vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmsac_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmsac_vf_f32m1_tu(vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vf_f32m1_tu(vfloat32m1_t vd, _Float16 vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmsac_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmsac_vf_f32m2_tu(vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vf_f32m2_tu(vfloat32m2_t vd, _Float16 vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmsac_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmsac_vf_f32m4_tu(vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vf_f32m4_tu(vfloat32m4_t vd, _Float16 vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmsac_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f32m8_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmsac_vf_f32m8_tu(vfloat32m8_t vd, _Float16 vs1, +
vfloat16m4_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f32m8_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmsac_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmsac_vf_f64m1_tu(vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vf_f64m1_tu(vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmsac_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmsac_vf_f64m2_tu(vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vf_f64m2_tu(vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmsac_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmsac_vf_f64m4_tu(vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vf_f64m4_tu(vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmsac_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f64m8_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmsac_vf_f64m8_tu(vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vf_f64m8_tu(vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m8_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmsac_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmsac_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmsac_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmsac_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmsac_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t 
test_vfwmsac_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmsac_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmsac_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmsac_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmsac_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmsac_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmsac_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmsac_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmsac_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmsac_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmsac_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmsac_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmsac_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t 
vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmsac_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmsac_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmsac_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmsac_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmsac_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmsac_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmsac_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmsac_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmsac_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmsac_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmsac_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m8_tumu(vm, vd, vs1, vs2, vl); } 
-vfloat64m1_t test_vfwmsac_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmsac_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmsac_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmsac_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmsac_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmsac_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmsac_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmsac_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmsac_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmsac_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmsac_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmsac_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + 
size_t vl) { return __riscv_vfwmsac_vf_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmsac_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmsac_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmsac_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmsac_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmsac_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmsac_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmsac_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwmsac_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmsac_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwmsac_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmsac_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwmsac_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, 
size_t vl) { return __riscv_vfwmsac_vf_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmsac_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwmsac_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmsac_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmsac_vf_f32mf2_rm_tu(vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vf_f32mf2_rm_tu(vfloat32mf2_t vd, _Float16 vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmsac_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmsac_vf_f32m1_rm_tu(vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vf_f32m1_rm_tu(vfloat32m1_t vd, _Float16 vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmsac_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmsac_vf_f32m2_rm_tu(vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vf_f32m2_rm_tu(vfloat32m2_t vd, _Float16 vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmsac_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmsac_vf_f32m4_rm_tu(vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vf_f32m4_rm_tu(vfloat32m4_t vd, _Float16 vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmsac_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmsac_vf_f32m8_rm_tu(vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vf_f32m8_rm_tu(vfloat32m8_t vd, _Float16 vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f32m8_rm_tu(vd, vs1, 
vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmsac_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmsac_vf_f64m1_rm_tu(vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vf_f64m1_rm_tu(vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmsac_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmsac_vf_f64m2_rm_tu(vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vf_f64m2_rm_tu(vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmsac_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmsac_vf_f64m4_rm_tu(vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vf_f64m4_rm_tu(vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmsac_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmsac_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmsac_vf_f64m8_rm_tu(vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vf_f64m8_rm_tu(vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwmsac_vf_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmsac_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmsac_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmsac_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmsac_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwmsac_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmsac_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmsac_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmsac_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmsac_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t 
test_vfwmsac_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmsac_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmsac_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmsac_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmsac_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmsac_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmsac_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmsac_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmsac_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmsac_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmsac_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f64m2_rm_tum(vm, vd, vs1, vs2, 
__RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmsac_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmsac_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmsac_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmsac_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmsac_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmsac_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmsac_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmsac_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwmsac_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmsac_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmsac_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmsac_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmsac_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwmsac_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwmsac_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmsac_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmsac_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwmsac_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwmsac_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmsac_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwmsac_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfwmsac_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwmsac_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmsac_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwmsac_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { - 
return __riscv_vfwmsac_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwmsac_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmsac_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwmsac_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwmsac_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwmsac_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmsac_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwmsac_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwmsac_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwmsac_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmsac_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwmsac_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwmsac_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwmsac_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmsac_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwmsac_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwmsac_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwmsac_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmsac_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwmsac_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwmsac_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwmsac_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfwmsac_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwmsac_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwmsac_vf_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwmsac_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfwmsac_vf_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfwmsac_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfwmsac_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwmsac_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfwmsac_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfwmsac_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfwmsac_vf_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwmsac_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { + return 
__riscv_vfwmsac_vf_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwmsac_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwmsac_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwmsac_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwmsac_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwmsac_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwmsac_vf_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwmsac_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwmsac_vf_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwmsac_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwmsac_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfwmsac_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwmsac_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwmsac_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwmsac_vf_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfwmsac_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwmsac_vf_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwmsac_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmsac_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmsac_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmsac_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmsac_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmsac_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmsac_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmsac_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmsac_vf_f32m2_rm_mu(vbool16_t vm, 
vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmsac_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmsac_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmsac_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmsac_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmsac_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmsac_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmsac_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmsac_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwmsac_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmsac_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmsac_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwmsac_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmsac_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vv_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmsac_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwmsac_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwmsac_vf_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmsac_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwmsac_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t 
vd,
+                                         vfloat32m4_t vs1, vfloat32m4_t vs2,
+                                         size_t vl) {
   return __riscv_vfwmsac_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
 
-vfloat64m8_t test_vfwmsac_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) {
+vfloat64m8_t test_vfwmsac_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                         float vs1, vfloat32m4_t vs2,
+                                         size_t vl) {
   return __riscv_vfwmsac_vf_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwmul.c b/auto-generated/policy_funcs/llvm-api-tests/vfwmul.c
index 54141a997..4cde11fc2 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwmul.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwmul.c
@@ -6,578 +6,821 @@
 
 #include <riscv_vector.h>
 
-vfloat32mf2_t test_vfwmul_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwmul_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2,
+                                       vfloat16mf4_t vs1, size_t vl) {
   return __riscv_vfwmul_vv_f32mf2_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32mf2_t test_vfwmul_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwmul_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2,
+                                       _Float16 rs1, size_t vl) {
   return __riscv_vfwmul_vf_f32mf2_tu(vd, vs2, rs1, vl);
 }
 
-vfloat32m1_t test_vfwmul_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwmul_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2,
+                                     vfloat16mf2_t vs1, size_t vl) {
   return __riscv_vfwmul_vv_f32m1_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m1_t test_vfwmul_vf_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwmul_vf_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2,
+                                     _Float16 rs1, size_t vl) {
   return __riscv_vfwmul_vf_f32m1_tu(vd, vs2, rs1, vl);
 }
 
-vfloat32m2_t test_vfwmul_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwmul_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2,
+                                     vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfwmul_vv_f32m2_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m2_t test_vfwmul_vf_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwmul_vf_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2,
+                                     _Float16 rs1, size_t vl) {
   return __riscv_vfwmul_vf_f32m2_tu(vd, vs2, rs1, vl);
 }
 
-vfloat32m4_t test_vfwmul_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwmul_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2,
+                                     vfloat16m2_t vs1, size_t vl) {
   return __riscv_vfwmul_vv_f32m4_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m4_t test_vfwmul_vf_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwmul_vf_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2,
+                                     _Float16 rs1, size_t vl) {
   return __riscv_vfwmul_vf_f32m4_tu(vd, vs2, rs1, vl);
 }
 
-vfloat32m8_t test_vfwmul_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwmul_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2,
+                                     vfloat16m4_t vs1, size_t vl) {
   return __riscv_vfwmul_vv_f32m8_tu(vd, vs2, vs1, vl);
 }
 
-vfloat32m8_t test_vfwmul_vf_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwmul_vf_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2,
+                                     _Float16 rs1, size_t vl) {
   return __riscv_vfwmul_vf_f32m8_tu(vd, vs2, rs1, vl);
 }
 
-vfloat64m1_t test_vfwmul_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwmul_vv_f64m1_tu(vfloat64m1_t vd,
vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfwmul_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwmul_vf_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwmul_vf_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwmul_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwmul_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwmul_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwmul_vf_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwmul_vf_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwmul_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwmul_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfwmul_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwmul_vf_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwmul_vf_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwmul_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwmul_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfwmul_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwmul_vf_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwmul_vf_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwmul_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwmul_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwmul_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwmul_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwmul_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwmul_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwmul_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwmul_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwmul_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { 
+vfloat32m2_t test_vfwmul_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwmul_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwmul_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwmul_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwmul_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwmul_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwmul_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwmul_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwmul_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwmul_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwmul_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwmul_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwmul_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwmul_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwmul_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwmul_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwmul_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwmul_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwmul_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwmul_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwmul_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwmul_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwmul_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwmul_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { 
+vfloat64m8_t test_vfwmul_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwmul_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwmul_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwmul_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwmul_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwmul_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwmul_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwmul_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwmul_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwmul_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwmul_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwmul_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwmul_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwmul_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwmul_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwmul_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwmul_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwmul_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwmul_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwmul_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwmul_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t 
test_vfwmul_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwmul_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwmul_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwmul_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwmul_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwmul_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwmul_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwmul_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwmul_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwmul_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwmul_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwmul_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwmul_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwmul_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwmul_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwmul_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwmul_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwmul_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwmul_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwmul_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwmul_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwmul_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m2_mu(vm, vd, vs2, vs1, 
vl); } -vfloat32m2_t test_vfwmul_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwmul_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwmul_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwmul_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwmul_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwmul_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwmul_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwmul_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwmul_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwmul_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwmul_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwmul_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwmul_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwmul_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwmul_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwmul_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwmul_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwmul_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwmul_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwmul_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwmul_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwmul_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m4_mu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwmul_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwmul_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t 
test_vfwmul_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwmul_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwmul_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfwmul_vv_f32mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmul_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwmul_vf_f32mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmul_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwmul_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfwmul_vv_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmul_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwmul_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwmul_vf_f32m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmul_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwmul_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfwmul_vv_f32m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmul_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwmul_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwmul_vf_f32m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmul_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwmul_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfwmul_vv_f32m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmul_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwmul_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwmul_vf_f32m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmul_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwmul_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfwmul_vv_f32m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmul_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwmul_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwmul_vf_f32m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmul_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwmul_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfwmul_vv_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmul_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t 
vl) { +vfloat64m1_t test_vfwmul_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmul_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwmul_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwmul_vv_f64m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmul_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwmul_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmul_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwmul_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfwmul_vv_f64m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmul_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwmul_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmul_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwmul_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfwmul_vv_f64m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmul_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwmul_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfwmul_vf_f64m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmul_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmul_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmul_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwmul_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmul_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwmul_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmul_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwmul_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t 
test_vfwmul_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwmul_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmul_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwmul_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmul_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwmul_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmul_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwmul_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmul_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwmul_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmul_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwmul_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmul_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwmul_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmul_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwmul_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmul_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwmul_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmul_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwmul_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmul_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwmul_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t 
test_vfwmul_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwmul_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmul_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwmul_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmul_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { - return __riscv_vfwmul_vv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmul_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { + return __riscv_vfwmul_vv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwmul_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { - return __riscv_vfwmul_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmul_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { + return __riscv_vfwmul_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmul_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwmul_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmul_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwmul_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmul_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwmul_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmul_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwmul_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmul_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwmul_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmul_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwmul_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmul_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) 
{ +vfloat32m8_t test_vfwmul_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmul_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwmul_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmul_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwmul_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmul_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwmul_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmul_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwmul_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmul_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwmul_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmul_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwmul_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmul_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwmul_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmul_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwmul_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmul_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwmul_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmul_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmul_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, 
_Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwmul_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmul_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwmul_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmul_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwmul_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmul_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwmul_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmul_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwmul_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmul_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwmul_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmul_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwmul_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmul_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwmul_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f32m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmul_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwmul_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwmul_vf_f32m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmul_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwmul_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwmul_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwmul_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmul_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t 
vl) { +vfloat64m2_t test_vfwmul_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwmul_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwmul_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmul_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwmul_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwmul_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwmul_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmul_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwmul_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwmul_vv_f64m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwmul_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwmul_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, float rs1, + size_t vl) { return __riscv_vfwmul_vf_f64m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwnmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vfwnmacc.c index 8d13aa0ee..77ae1e831 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfwnmacc.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfwnmacc.c @@ -6,578 +6,863 @@ #include -vfloat32mf2_t test_vfwnmacc_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmacc_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmacc_vf_f32mf2_tu(vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmacc_vf_f32mf2_tu(vfloat32mf2_t vd, _Float16 vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmacc_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmacc_vf_f32m1_tu(vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vf_f32m1_tu(vfloat32m1_t vd, _Float16 vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmacc_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmacc_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmacc_vf_f32m2_tu(vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, 
size_t vl) { +vfloat32m2_t test_vfwnmacc_vf_f32m2_tu(vfloat32m2_t vd, _Float16 vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmacc_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmacc_vf_f32m4_tu(vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vf_f32m4_tu(vfloat32m4_t vd, _Float16 vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmacc_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f32m8_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmacc_vf_f32m8_tu(vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vf_f32m8_tu(vfloat32m8_t vd, _Float16 vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f32m8_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmacc_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmacc_vf_f64m1_tu(vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vf_f64m1_tu(vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmacc_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmacc_vf_f64m2_tu(vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vf_f64m2_tu(vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmacc_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmacc_vf_f64m4_tu(vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vf_f64m4_tu(vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmacc_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmacc_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f64m8_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmacc_vf_f64m8_tu(vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmacc_vf_f64m8_tu(vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m8_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmacc_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmacc_vv_f32mf2_tum(vbool64_t vm, 
vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmacc_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmacc_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmacc_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmacc_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmacc_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmacc_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmacc_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmacc_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmacc_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmacc_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmacc_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmacc_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmacc_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmacc_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmacc_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, 
vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmacc_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmacc_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmacc_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmacc_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmacc_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmacc_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmacc_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmacc_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmacc_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmacc_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmacc_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmacc_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmacc_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmacc_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmacc_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmacc_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmacc_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return 
__riscv_vfwnmacc_vf_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmacc_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmacc_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmacc_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmacc_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmacc_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmacc_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmacc_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmacc_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmacc_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmacc_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmacc_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmacc_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmacc_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t 
test_vfwnmacc_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmacc_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmacc_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmacc_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmacc_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmacc_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmacc_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmacc_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmacc_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmacc_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmacc_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmacc_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmacc_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmacc_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmacc_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmacc_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmacc_vf_f64m1_mu(vbool64_t 
vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmacc_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmacc_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmacc_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmacc_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmacc_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmacc_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmacc_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmacc_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmacc_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmacc_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwnmacc_vf_f32mf2_rm_tu(vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmacc_vf_f32mf2_rm_tu(vfloat32mf2_t vd, _Float16 vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwnmacc_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwnmacc_vf_f32m1_rm_tu(vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vf_f32m1_rm_tu(vfloat32m1_t vd, _Float16 vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwnmacc_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmacc_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t 
test_vfwnmacc_vf_f32m2_rm_tu(vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmacc_vf_f32m2_rm_tu(vfloat32m2_t vd, _Float16 vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwnmacc_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwnmacc_vf_f32m4_rm_tu(vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vf_f32m4_rm_tu(vfloat32m4_t vd, _Float16 vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwnmacc_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwnmacc_vf_f32m8_rm_tu(vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vf_f32m8_rm_tu(vfloat32m8_t vd, _Float16 vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwnmacc_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwnmacc_vf_f64m1_rm_tu(vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vf_f64m1_rm_tu(vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwnmacc_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwnmacc_vf_f64m2_rm_tu(vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vf_f64m2_rm_tu(vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwnmacc_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwnmacc_vf_f64m4_rm_tu(vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vf_f64m4_rm_tu(vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwnmacc_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmacc_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmacc_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwnmacc_vf_f64m8_rm_tu(vfloat64m8_t vd, 
float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmacc_vf_f64m8_rm_tu(vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmacc_vf_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwnmacc_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwnmacc_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwnmacc_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwnmacc_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwnmacc_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwnmacc_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwnmacc_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwnmacc_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwnmacc_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwnmacc_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwnmacc_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwnmacc_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwnmacc_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwnmacc_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwnmacc_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwnmacc_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwnmacc_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f32m4_rm_tum(vm, vd, vs1, vs2, 
__RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwnmacc_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwnmacc_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwnmacc_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwnmacc_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwnmacc_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwnmacc_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwnmacc_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwnmacc_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfwnmacc_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwnmacc_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfwnmacc_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwnmacc_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwnmacc_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwnmacc_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwnmacc_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwnmacc_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwnmacc_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); 
+vfloat64m8_t test_vfwnmacc_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwnmacc_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfwnmacc_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwnmacc_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwnmacc_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwnmacc_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwnmacc_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwnmacc_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwnmacc_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwnmacc_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwnmacc_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwnmacc_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwnmacc_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwnmacc_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwnmacc_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwnmacc_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwnmacc_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwnmacc_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwnmacc_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f32m4_rm_tumu(vm, vd, 
vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwnmacc_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwnmacc_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwnmacc_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwnmacc_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwnmacc_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwnmacc_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwnmacc_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwnmacc_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfwnmacc_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwnmacc_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwnmacc_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfwnmacc_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwnmacc_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfwnmacc_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwnmacc_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwnmacc_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwnmacc_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwnmacc_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f64m4_rm_tumu(vm, vd, 
vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwnmacc_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwnmacc_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfwnmacc_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwnmacc_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfwnmacc_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwnmacc_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwnmacc_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwnmacc_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwnmacc_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmacc_vf_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwnmacc_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwnmacc_vf_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwnmacc_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwnmacc_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmacc_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwnmacc_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmacc_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwnmacc_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmacc_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwnmacc_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, 
vl); } -vfloat32m4_t test_vfwnmacc_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmacc_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwnmacc_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwnmacc_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmacc_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwnmacc_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwnmacc_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmacc_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwnmacc_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwnmacc_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmacc_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwnmacc_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwnmacc_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmacc_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwnmacc_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmacc_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwnmacc_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmacc_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmacc_vf_f64m8_rm_mu(vm, vd, vs1, vs2, 
__RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwnmsac.c b/auto-generated/policy_funcs/llvm-api-tests/vfwnmsac.c index 0f9865acb..0b2811533 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfwnmsac.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfwnmsac.c @@ -6,578 +6,863 @@ #include <riscv_vector.h> -vfloat32mf2_t test_vfwnmsac_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmsac_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmsac_vf_f32mf2_tu(vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmsac_vf_f32mf2_tu(vfloat32mf2_t vd, _Float16 vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f32mf2_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmsac_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmsac_vf_f32m1_tu(vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vf_f32m1_tu(vfloat32m1_t vd, _Float16 vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f32m1_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmsac_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmsac_vf_f32m2_tu(vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vf_f32m2_tu(vfloat32m2_t vd, _Float16 vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f32m2_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmsac_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmsac_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmsac_vf_f32m4_tu(vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmsac_vf_f32m4_tu(vfloat32m4_t vd, _Float16 vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f32m4_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmsac_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f32m8_tu(vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmsac_vf_f32m8_tu(vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vf_f32m8_tu(vfloat32m8_t vd, _Float16 vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f32m8_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmsac_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmsac_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f64m1_tu(vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmsac_vf_f64m1_tu(vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmsac_vf_f64m1_tu(vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m1_tu(vd, vs1, vs2,
vl); } -vfloat64m2_t test_vfwnmsac_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmsac_vf_f64m2_tu(vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vf_f64m2_tu(vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m2_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmsac_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmsac_vf_f64m4_tu(vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vf_f64m4_tu(vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m4_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmsac_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f64m8_tu(vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmsac_vf_f64m8_tu(vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vf_f64m8_tu(vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m8_tu(vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmsac_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmsac_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmsac_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmsac_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmsac_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmsac_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmsac_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmsac_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmsac_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t 
vl) { +vfloat32m4_t test_vfwnmsac_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmsac_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmsac_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmsac_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmsac_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmsac_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmsac_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmsac_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmsac_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f64m1_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmsac_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmsac_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f64m2_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmsac_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmsac_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f64m4_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmsac_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmsac_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t 
test_vfwnmsac_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmsac_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmsac_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmsac_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmsac_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmsac_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmsac_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmsac_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmsac_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmsac_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmsac_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmsac_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmsac_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmsac_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmsac_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmsac_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmsac_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t 
test_vfwnmsac_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f64m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmsac_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmsac_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f64m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmsac_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmsac_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f64m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmsac_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmsac_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f64m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmsac_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmsac_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmsac_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmsac_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmsac_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwnmsac_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwnmsac_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t 
test_vfwnmsac_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmsac_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmsac_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwnmsac_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmsac_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmsac_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwnmsac_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmsac_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmsac_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m1_t test_vfwnmsac_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmsac_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m1_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmsac_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m2_t test_vfwnmsac_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m2_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmsac_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m4_t test_vfwnmsac_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m4_mu(vm, vd, vs1, vs2, vl); } -vfloat64m8_t test_vfwnmsac_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m8_mu(vm, vd, 
vs1, vs2, vl); } -vfloat64m8_t test_vfwnmsac_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwnmsac_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmsac_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwnmsac_vf_f32mf2_rm_tu(vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwnmsac_vf_f32mf2_rm_tu(vfloat32mf2_t vd, _Float16 vs1, + vfloat16mf4_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwnmsac_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwnmsac_vf_f32m1_rm_tu(vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vf_f32m1_rm_tu(vfloat32m1_t vd, _Float16 vs1, + vfloat16mf2_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwnmsac_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwnmsac_vf_f32m2_rm_tu(vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vf_f32m2_rm_tu(vfloat32m2_t vd, _Float16 vs1, + vfloat16m1_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwnmsac_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmsac_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwnmsac_vf_f32m4_rm_tu(vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmsac_vf_f32m4_rm_tu(vfloat32m4_t vd, _Float16 vs1, + vfloat16m2_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwnmsac_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwnmsac_vf_f32m8_rm_tu(vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vf_f32m8_rm_tu(vfloat32m8_t vd, _Float16 vs1, + vfloat16m4_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwnmsac_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmsac_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, 
vl); } -vfloat64m1_t test_vfwnmsac_vf_f64m1_rm_tu(vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmsac_vf_f64m1_rm_tu(vfloat64m1_t vd, float vs1, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwnmsac_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwnmsac_vf_f64m2_rm_tu(vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vf_f64m2_rm_tu(vfloat64m2_t vd, float vs1, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwnmsac_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwnmsac_vf_f64m4_rm_tu(vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vf_f64m4_rm_tu(vfloat64m4_t vd, float vs1, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwnmsac_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmsac_vv_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwnmsac_vf_f64m8_rm_tu(vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vf_f64m8_rm_tu(vfloat64m8_t vd, float vs1, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfwnmsac_vf_f64m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwnmsac_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwnmsac_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwnmsac_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwnmsac_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwnmsac_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwnmsac_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwnmsac_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwnmsac_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t 
test_vfwnmsac_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwnmsac_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwnmsac_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwnmsac_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwnmsac_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwnmsac_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwnmsac_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwnmsac_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwnmsac_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwnmsac_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwnmsac_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwnmsac_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwnmsac_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwnmsac_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwnmsac_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwnmsac_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwnmsac_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f64m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t 
test_vfwnmsac_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwnmsac_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfwnmsac_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwnmsac_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f64m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwnmsac_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwnmsac_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwnmsac_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwnmsac_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f64m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwnmsac_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfwnmsac_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwnmsac_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfwnmsac_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f64m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwnmsac_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwnmsac_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwnmsac_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwnmsac_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwnmsac_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwnmsac_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t 
test_vfwnmsac_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, + vfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwnmsac_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwnmsac_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwnmsac_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwnmsac_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwnmsac_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwnmsac_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwnmsac_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwnmsac_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwnmsac_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwnmsac_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwnmsac_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwnmsac_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwnmsac_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwnmsac_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwnmsac_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwnmsac_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwnmsac_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, + vfloat32mf2_t vs2, size_t vl) { + return __riscv_vfwnmsac_vv_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + 
vl); } -vfloat64m1_t test_vfwnmsac_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwnmsac_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f64m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfwnmsac_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwnmsac_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m2_t test_vfwnmsac_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m2_t test_vfwnmsac_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f64m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwnmsac_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwnmsac_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m4_t test_vfwnmsac_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m4_t test_vfwnmsac_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f64m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwnmsac_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfwnmsac_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vv_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat64m8_t test_vfwnmsac_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat64m8_t test_vfwnmsac_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f64m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwnmsac_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwnmsac_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs1, + vfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwnmsac_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwnmsac_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, _Float16 vs1, vfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwnmsac_vf_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); 
+vfloat32mf2_t test_vfwnmsac_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + _Float16 vs1, vfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwnmsac_vf_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwnmsac_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwnmsac_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, _Float16 vs1, vfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwnmsac_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + _Float16 vs1, vfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwnmsac_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwnmsac_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, _Float16 vs1, vfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwnmsac_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + _Float16 vs1, vfloat16m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwnmsac_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmsac_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwnmsac_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, _Float16 vs1, vfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwnmsac_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + _Float16 vs1, vfloat16m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwnmsac_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwnmsac_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, _Float16 vs1, vfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwnmsac_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + _Float16 vs1, vfloat16m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwnmsac_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmsac_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwnmsac_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, float vs1, vfloat32mf2_t vs2, size_t vl) { +vfloat64m1_t test_vfwnmsac_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, + float vs1, vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f64m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwnmsac_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, 
vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m2_t test_vfwnmsac_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, float vs1, vfloat32m1_t vs2, size_t vl) { +vfloat64m2_t test_vfwnmsac_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, + float vs1, vfloat32m1_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f64m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwnmsac_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m4_t test_vfwnmsac_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, float vs1, vfloat32m2_t vs2, size_t vl) { +vfloat64m4_t test_vfwnmsac_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, + float vs1, vfloat32m2_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f64m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwnmsac_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vv_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat64m8_t test_vfwnmsac_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, float vs1, vfloat32m4_t vs2, size_t vl) { +vfloat64m8_t test_vfwnmsac_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, + float vs1, vfloat32m4_t vs2, + size_t vl) { return __riscv_vfwnmsac_vf_f64m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwredosum.c b/auto-generated/policy_funcs/llvm-api-tests/vfwredosum.c index 9b8389dbf..ddcf526c2 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfwredosum.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfwredosum.c @@ -6,178 +6,308 @@ #include -vfloat32m1_t test_vfwredosum_vs_f16mf4_f32m1_tu(vfloat32m1_t vd, vfloat16mf4_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16mf4_f32m1_tu(vfloat32m1_t vd, + vfloat16mf4_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16mf4_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16mf2_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16mf2_f32m1_tu(vfloat32m1_t vd, + vfloat16mf2_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16mf2_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m1_f32m1_tu(vfloat32m1_t vd, vfloat16m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16m1_f32m1_tu(vfloat32m1_t vd, + vfloat16m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16m1_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m2_f32m1_tu(vfloat32m1_t vd, vfloat16m2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16m2_f32m1_tu(vfloat32m1_t vd, + vfloat16m2_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16m2_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m4_f32m1_tu(vfloat32m1_t vd, vfloat16m4_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16m4_f32m1_tu(vfloat32m1_t 
vd, + vfloat16m4_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16m4_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m8_f32m1_tu(vfloat32m1_t vd, vfloat16m8_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16m8_f32m1_tu(vfloat32m1_t vd, + vfloat16m8_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16m8_f32m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredosum_vs_f32mf2_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredosum_vs_f32mf2_f64m1_tu(vfloat64m1_t vd, + vfloat32mf2_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f32mf2_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredosum_vs_f32m1_f64m1_tu(vfloat64m1_t vd, vfloat32m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredosum_vs_f32m1_f64m1_tu(vfloat64m1_t vd, + vfloat32m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f32m1_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredosum_vs_f32m2_f64m1_tu(vfloat64m1_t vd, vfloat32m2_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredosum_vs_f32m2_f64m1_tu(vfloat64m1_t vd, + vfloat32m2_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f32m2_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredosum_vs_f32m4_f64m1_tu(vfloat64m1_t vd, vfloat32m4_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredosum_vs_f32m4_f64m1_tu(vfloat64m1_t vd, + vfloat32m4_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f32m4_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredosum_vs_f32m8_f64m1_tu(vfloat64m1_t vd, vfloat32m8_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredosum_vs_f32m8_f64m1_tu(vfloat64m1_t vd, + vfloat32m8_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f32m8_f64m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16mf4_f32m1_tum(vbool64_t vm, vfloat32m1_t vd, vfloat16mf4_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16mf4_f32m1_tum(vbool64_t vm, vfloat32m1_t vd, + vfloat16mf4_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16mf4_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16mf2_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16mf2_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16mf2_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m1_f32m1_tum(vbool16_t vm, vfloat32m1_t vd, vfloat16m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16m1_f32m1_tum(vbool16_t vm, vfloat32m1_t vd, + vfloat16m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16m1_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m2_f32m1_tum(vbool8_t vm, vfloat32m1_t vd, vfloat16m2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16m2_f32m1_tum(vbool8_t vm, vfloat32m1_t vd, + vfloat16m2_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16m2_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m4_f32m1_tum(vbool4_t vm, vfloat32m1_t vd, vfloat16m4_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16m4_f32m1_tum(vbool4_t vm, vfloat32m1_t vd, + vfloat16m4_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16m4_f32m1_tum(vm, 
vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m8_f32m1_tum(vbool2_t vm, vfloat32m1_t vd, vfloat16m8_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredosum_vs_f16m8_f32m1_tum(vbool2_t vm, vfloat32m1_t vd, + vfloat16m8_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f16m8_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredosum_vs_f32mf2_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredosum_vs_f32mf2_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f32mf2_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredosum_vs_f32m1_f64m1_tum(vbool32_t vm, vfloat64m1_t vd, vfloat32m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredosum_vs_f32m1_f64m1_tum(vbool32_t vm, vfloat64m1_t vd, + vfloat32m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f32m1_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredosum_vs_f32m2_f64m1_tum(vbool16_t vm, vfloat64m1_t vd, vfloat32m2_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredosum_vs_f32m2_f64m1_tum(vbool16_t vm, vfloat64m1_t vd, + vfloat32m2_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f32m2_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredosum_vs_f32m4_f64m1_tum(vbool8_t vm, vfloat64m1_t vd, vfloat32m4_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredosum_vs_f32m4_f64m1_tum(vbool8_t vm, vfloat64m1_t vd, + vfloat32m4_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f32m4_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredosum_vs_f32m8_f64m1_tum(vbool4_t vm, vfloat64m1_t vd, vfloat32m8_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredosum_vs_f32m8_f64m1_tum(vbool4_t vm, vfloat64m1_t vd, + vfloat32m8_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredosum_vs_f32m8_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredosum_vs_f16mf4_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf4_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16mf4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16mf4_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16mf4_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f16mf4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredosum_vs_f16mf2_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16mf2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16mf2_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16mf2_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f16mf2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredosum_vs_f16m1_f32m1_rm_tu(vfloat32m1_t vd, vfloat16m1_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16m1_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16m1_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16m1_t vs2, + vfloat32m1_t vs1, size_t vl) { + return __riscv_vfwredosum_vs_f16m1_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredosum_vs_f16m2_f32m1_rm_tu(vfloat32m1_t vd, vfloat16m2_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16m2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16m2_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16m2_t vs2, 
+ vfloat32m1_t vs1, size_t vl) { + return __riscv_vfwredosum_vs_f16m2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredosum_vs_f16m4_f32m1_rm_tu(vfloat32m1_t vd, vfloat16m4_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16m4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16m4_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16m4_t vs2, + vfloat32m1_t vs1, size_t vl) { + return __riscv_vfwredosum_vs_f16m4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredosum_vs_f16m8_f32m1_rm_tu(vfloat32m1_t vd, vfloat16m8_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16m8_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16m8_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16m8_t vs2, + vfloat32m1_t vs1, size_t vl) { + return __riscv_vfwredosum_vs_f16m8_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwredosum_vs_f32mf2_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f32mf2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredosum_vs_f32mf2_f64m1_rm_tu(vfloat64m1_t vd, + vfloat32mf2_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f32mf2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwredosum_vs_f32m1_f64m1_rm_tu(vfloat64m1_t vd, vfloat32m1_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f32m1_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredosum_vs_f32m1_f64m1_rm_tu(vfloat64m1_t vd, + vfloat32m1_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfwredosum_vs_f32m1_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwredosum_vs_f32m2_f64m1_rm_tu(vfloat64m1_t vd, vfloat32m2_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f32m2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredosum_vs_f32m2_f64m1_rm_tu(vfloat64m1_t vd, + vfloat32m2_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfwredosum_vs_f32m2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwredosum_vs_f32m4_f64m1_rm_tu(vfloat64m1_t vd, vfloat32m4_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f32m4_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredosum_vs_f32m4_f64m1_rm_tu(vfloat64m1_t vd, + vfloat32m4_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfwredosum_vs_f32m4_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwredosum_vs_f32m8_f64m1_rm_tu(vfloat64m1_t vd, vfloat32m8_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f32m8_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredosum_vs_f32m8_f64m1_rm_tu(vfloat64m1_t vd, + vfloat32m8_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfwredosum_vs_f32m8_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredosum_vs_f16mf4_f32m1_rm_tum(vbool64_t vm, vfloat32m1_t vd, vfloat16mf4_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16mf4_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16mf4_f32m1_rm_tum(vbool64_t vm, + vfloat32m1_t vd, + vfloat16mf4_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f16mf4_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwredosum_vs_f16mf2_f32m1_rm_tum(vbool32_t vm, 
vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16mf2_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16mf2_f32m1_rm_tum(vbool32_t vm, + vfloat32m1_t vd, + vfloat16mf2_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f16mf2_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m1_f32m1_rm_tum(vbool16_t vm, vfloat32m1_t vd, vfloat16m1_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16m1_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16m1_f32m1_rm_tum(vbool16_t vm, + vfloat32m1_t vd, + vfloat16m1_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f16m1_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m2_f32m1_rm_tum(vbool8_t vm, vfloat32m1_t vd, vfloat16m2_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16m2_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16m2_f32m1_rm_tum(vbool8_t vm, vfloat32m1_t vd, + vfloat16m2_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f16m2_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m4_f32m1_rm_tum(vbool4_t vm, vfloat32m1_t vd, vfloat16m4_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16m4_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16m4_f32m1_rm_tum(vbool4_t vm, vfloat32m1_t vd, + vfloat16m4_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f16m4_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwredosum_vs_f16m8_f32m1_rm_tum(vbool2_t vm, vfloat32m1_t vd, vfloat16m8_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f16m8_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredosum_vs_f16m8_f32m1_rm_tum(vbool2_t vm, vfloat32m1_t vd, + vfloat16m8_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f16m8_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); +} + +vfloat64m1_t test_vfwredosum_vs_f32mf2_f64m1_rm_tum(vbool64_t vm, + vfloat64m1_t vd, + vfloat32mf2_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f32mf2_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); +} + +vfloat64m1_t test_vfwredosum_vs_f32m1_f64m1_rm_tum(vbool32_t vm, + vfloat64m1_t vd, + vfloat32m1_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f32m1_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); +} + +vfloat64m1_t test_vfwredosum_vs_f32m2_f64m1_rm_tum(vbool16_t vm, + vfloat64m1_t vd, + vfloat32m2_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f32m2_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwredosum_vs_f32mf2_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f32mf2_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); -} - -vfloat64m1_t test_vfwredosum_vs_f32m1_f64m1_rm_tum(vbool32_t vm, vfloat64m1_t vd, vfloat32m1_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f32m1_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); -} - -vfloat64m1_t test_vfwredosum_vs_f32m2_f64m1_rm_tum(vbool16_t vm, vfloat64m1_t vd, vfloat32m2_t vs2, vfloat64m1_t vs1, size_t vl) { - return 
__riscv_vfwredosum_vs_f32m2_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); -} - -vfloat64m1_t test_vfwredosum_vs_f32m4_f64m1_rm_tum(vbool8_t vm, vfloat64m1_t vd, vfloat32m4_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f32m4_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); -} - -vfloat64m1_t test_vfwredosum_vs_f32m8_f64m1_rm_tum(vbool4_t vm, vfloat64m1_t vd, vfloat32m8_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredosum_vs_f32m8_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredosum_vs_f32m4_f64m1_rm_tum(vbool8_t vm, vfloat64m1_t vd, + vfloat32m4_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f32m4_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); +} + +vfloat64m1_t test_vfwredosum_vs_f32m8_f64m1_rm_tum(vbool4_t vm, vfloat64m1_t vd, + vfloat32m8_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredosum_vs_f32m8_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwredusum.c b/auto-generated/policy_funcs/llvm-api-tests/vfwredusum.c index 442506448..e21d165dd 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfwredusum.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfwredusum.c @@ -6,178 +6,308 @@ #include -vfloat32m1_t test_vfwredusum_vs_f16mf4_f32m1_tu(vfloat32m1_t vd, vfloat16mf4_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16mf4_f32m1_tu(vfloat32m1_t vd, + vfloat16mf4_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16mf4_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16mf2_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16mf2_f32m1_tu(vfloat32m1_t vd, + vfloat16mf2_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16mf2_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m1_f32m1_tu(vfloat32m1_t vd, vfloat16m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16m1_f32m1_tu(vfloat32m1_t vd, + vfloat16m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16m1_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m2_f32m1_tu(vfloat32m1_t vd, vfloat16m2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16m2_f32m1_tu(vfloat32m1_t vd, + vfloat16m2_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16m2_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m4_f32m1_tu(vfloat32m1_t vd, vfloat16m4_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16m4_f32m1_tu(vfloat32m1_t vd, + vfloat16m4_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16m4_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m8_f32m1_tu(vfloat32m1_t vd, vfloat16m8_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16m8_f32m1_tu(vfloat32m1_t vd, + vfloat16m8_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16m8_f32m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredusum_vs_f32mf2_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredusum_vs_f32mf2_f64m1_tu(vfloat64m1_t vd, + vfloat32mf2_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f32mf2_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredusum_vs_f32m1_f64m1_tu(vfloat64m1_t vd, vfloat32m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t 
test_vfwredusum_vs_f32m1_f64m1_tu(vfloat64m1_t vd, + vfloat32m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f32m1_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredusum_vs_f32m2_f64m1_tu(vfloat64m1_t vd, vfloat32m2_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredusum_vs_f32m2_f64m1_tu(vfloat64m1_t vd, + vfloat32m2_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f32m2_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredusum_vs_f32m4_f64m1_tu(vfloat64m1_t vd, vfloat32m4_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredusum_vs_f32m4_f64m1_tu(vfloat64m1_t vd, + vfloat32m4_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f32m4_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredusum_vs_f32m8_f64m1_tu(vfloat64m1_t vd, vfloat32m8_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredusum_vs_f32m8_f64m1_tu(vfloat64m1_t vd, + vfloat32m8_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f32m8_f64m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16mf4_f32m1_tum(vbool64_t vm, vfloat32m1_t vd, vfloat16mf4_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16mf4_f32m1_tum(vbool64_t vm, vfloat32m1_t vd, + vfloat16mf4_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16mf4_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16mf2_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16mf2_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16mf2_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m1_f32m1_tum(vbool16_t vm, vfloat32m1_t vd, vfloat16m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16m1_f32m1_tum(vbool16_t vm, vfloat32m1_t vd, + vfloat16m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16m1_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m2_f32m1_tum(vbool8_t vm, vfloat32m1_t vd, vfloat16m2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16m2_f32m1_tum(vbool8_t vm, vfloat32m1_t vd, + vfloat16m2_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16m2_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m4_f32m1_tum(vbool4_t vm, vfloat32m1_t vd, vfloat16m4_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16m4_f32m1_tum(vbool4_t vm, vfloat32m1_t vd, + vfloat16m4_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16m4_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m8_f32m1_tum(vbool2_t vm, vfloat32m1_t vd, vfloat16m8_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vfwredusum_vs_f16m8_f32m1_tum(vbool2_t vm, vfloat32m1_t vd, + vfloat16m8_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f16m8_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredusum_vs_f32mf2_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredusum_vs_f32mf2_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f32mf2_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredusum_vs_f32m1_f64m1_tum(vbool32_t vm, vfloat64m1_t vd, vfloat32m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t 
test_vfwredusum_vs_f32m1_f64m1_tum(vbool32_t vm, vfloat64m1_t vd, + vfloat32m1_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f32m1_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredusum_vs_f32m2_f64m1_tum(vbool16_t vm, vfloat64m1_t vd, vfloat32m2_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredusum_vs_f32m2_f64m1_tum(vbool16_t vm, vfloat64m1_t vd, + vfloat32m2_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f32m2_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredusum_vs_f32m4_f64m1_tum(vbool8_t vm, vfloat64m1_t vd, vfloat32m4_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredusum_vs_f32m4_f64m1_tum(vbool8_t vm, vfloat64m1_t vd, + vfloat32m4_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f32m4_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwredusum_vs_f32m8_f64m1_tum(vbool4_t vm, vfloat64m1_t vd, vfloat32m8_t vs2, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vfwredusum_vs_f32m8_f64m1_tum(vbool4_t vm, vfloat64m1_t vd, + vfloat32m8_t vs2, + vfloat64m1_t vs1, size_t vl) { return __riscv_vfwredusum_vs_f32m8_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwredusum_vs_f16mf4_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf4_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16mf4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16mf4_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16mf4_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f16mf4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredusum_vs_f16mf2_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16mf2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16mf2_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16mf2_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f16mf2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredusum_vs_f16m1_f32m1_rm_tu(vfloat32m1_t vd, vfloat16m1_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16m1_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16m1_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16m1_t vs2, + vfloat32m1_t vs1, size_t vl) { + return __riscv_vfwredusum_vs_f16m1_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredusum_vs_f16m2_f32m1_rm_tu(vfloat32m1_t vd, vfloat16m2_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16m2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16m2_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16m2_t vs2, + vfloat32m1_t vs1, size_t vl) { + return __riscv_vfwredusum_vs_f16m2_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredusum_vs_f16m4_f32m1_rm_tu(vfloat32m1_t vd, vfloat16m4_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16m4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16m4_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16m4_t vs2, + vfloat32m1_t vs1, size_t vl) { + return __riscv_vfwredusum_vs_f16m4_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredusum_vs_f16m8_f32m1_rm_tu(vfloat32m1_t vd, vfloat16m8_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16m8_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16m8_f32m1_rm_tu(vfloat32m1_t vd, + vfloat16m8_t vs2, + 
vfloat32m1_t vs1, size_t vl) { + return __riscv_vfwredusum_vs_f16m8_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwredusum_vs_f32mf2_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f32mf2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredusum_vs_f32mf2_f64m1_rm_tu(vfloat64m1_t vd, + vfloat32mf2_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f32mf2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwredusum_vs_f32m1_f64m1_rm_tu(vfloat64m1_t vd, vfloat32m1_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f32m1_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredusum_vs_f32m1_f64m1_rm_tu(vfloat64m1_t vd, + vfloat32m1_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfwredusum_vs_f32m1_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwredusum_vs_f32m2_f64m1_rm_tu(vfloat64m1_t vd, vfloat32m2_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f32m2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredusum_vs_f32m2_f64m1_rm_tu(vfloat64m1_t vd, + vfloat32m2_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfwredusum_vs_f32m2_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwredusum_vs_f32m4_f64m1_rm_tu(vfloat64m1_t vd, vfloat32m4_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f32m4_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredusum_vs_f32m4_f64m1_rm_tu(vfloat64m1_t vd, + vfloat32m4_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfwredusum_vs_f32m4_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat64m1_t test_vfwredusum_vs_f32m8_f64m1_rm_tu(vfloat64m1_t vd, vfloat32m8_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f32m8_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredusum_vs_f32m8_f64m1_rm_tu(vfloat64m1_t vd, + vfloat32m8_t vs2, + vfloat64m1_t vs1, size_t vl) { + return __riscv_vfwredusum_vs_f32m8_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwredusum_vs_f16mf4_f32m1_rm_tum(vbool64_t vm, vfloat32m1_t vd, vfloat16mf4_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16mf4_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16mf4_f32m1_rm_tum(vbool64_t vm, + vfloat32m1_t vd, + vfloat16mf4_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f16mf4_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwredusum_vs_f16mf2_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16mf2_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16mf2_f32m1_rm_tum(vbool32_t vm, + vfloat32m1_t vd, + vfloat16mf2_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f16mf2_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m1_f32m1_rm_tum(vbool16_t vm, vfloat32m1_t vd, vfloat16m1_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16m1_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16m1_f32m1_rm_tum(vbool16_t vm, + vfloat32m1_t vd, + vfloat16m1_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f16m1_f32m1_rm_tum(vm, vd, vs2, vs1, + 
__RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m2_f32m1_rm_tum(vbool8_t vm, vfloat32m1_t vd, vfloat16m2_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16m2_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16m2_f32m1_rm_tum(vbool8_t vm, vfloat32m1_t vd, + vfloat16m2_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f16m2_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m4_f32m1_rm_tum(vbool4_t vm, vfloat32m1_t vd, vfloat16m4_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16m4_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16m4_f32m1_rm_tum(vbool4_t vm, vfloat32m1_t vd, + vfloat16m4_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f16m4_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwredusum_vs_f16m8_f32m1_rm_tum(vbool2_t vm, vfloat32m1_t vd, vfloat16m8_t vs2, vfloat32m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f16m8_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwredusum_vs_f16m8_f32m1_rm_tum(vbool2_t vm, vfloat32m1_t vd, + vfloat16m8_t vs2, + vfloat32m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f16m8_f32m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); +} + +vfloat64m1_t test_vfwredusum_vs_f32mf2_f64m1_rm_tum(vbool64_t vm, + vfloat64m1_t vd, + vfloat32mf2_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f32mf2_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); +} + +vfloat64m1_t test_vfwredusum_vs_f32m1_f64m1_rm_tum(vbool32_t vm, + vfloat64m1_t vd, + vfloat32m1_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f32m1_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); +} + +vfloat64m1_t test_vfwredusum_vs_f32m2_f64m1_rm_tum(vbool16_t vm, + vfloat64m1_t vd, + vfloat32m2_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f32m2_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } -vfloat64m1_t test_vfwredusum_vs_f32mf2_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f32mf2_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); -} - -vfloat64m1_t test_vfwredusum_vs_f32m1_f64m1_rm_tum(vbool32_t vm, vfloat64m1_t vd, vfloat32m1_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f32m1_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); -} - -vfloat64m1_t test_vfwredusum_vs_f32m2_f64m1_rm_tum(vbool16_t vm, vfloat64m1_t vd, vfloat32m2_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f32m2_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); -} - -vfloat64m1_t test_vfwredusum_vs_f32m4_f64m1_rm_tum(vbool8_t vm, vfloat64m1_t vd, vfloat32m4_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f32m4_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); -} - -vfloat64m1_t test_vfwredusum_vs_f32m8_f64m1_rm_tum(vbool4_t vm, vfloat64m1_t vd, vfloat32m8_t vs2, vfloat64m1_t vs1, size_t vl) { - return __riscv_vfwredusum_vs_f32m8_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl); +vfloat64m1_t test_vfwredusum_vs_f32m4_f64m1_rm_tum(vbool8_t vm, vfloat64m1_t vd, + vfloat32m4_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f32m4_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); +} + +vfloat64m1_t test_vfwredusum_vs_f32m8_f64m1_rm_tum(vbool4_t vm, vfloat64m1_t vd, + 
vfloat32m8_t vs2, + vfloat64m1_t vs1, + size_t vl) { + return __riscv_vfwredusum_vs_f32m8_f64m1_rm_tum(vm, vd, vs2, vs1, + __RISCV_FRM_RNE, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwsub.c b/auto-generated/policy_funcs/llvm-api-tests/vfwsub.c index a192eae08..c02377035 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfwsub.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfwsub.c @@ -6,1154 +6,1639 @@ #include -vfloat32mf2_t test_vfwsub_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwsub_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfwsub_vv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwsub_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwsub_vf_f32mf2_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwsub_vf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwsub_wv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwsub_wv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat16mf4_t vs1, size_t vl) { return __riscv_vfwsub_wv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwsub_wf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwsub_wf_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwsub_wf_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwsub_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwsub_vv_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfwsub_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwsub_vf_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwsub_vf_f32m1_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwsub_vf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwsub_wv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwsub_wv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat16mf2_t vs1, size_t vl) { return __riscv_vfwsub_wv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwsub_wf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwsub_wf_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwsub_wf_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwsub_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwsub_vv_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfwsub_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwsub_vf_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwsub_vf_f32m2_tu(vfloat32m2_t vd, vfloat16m1_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwsub_vf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwsub_wv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwsub_wv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat16m1_t vs1, size_t vl) { return __riscv_vfwsub_wv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwsub_wf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwsub_wf_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + _Float16 
rs1, size_t vl) { return __riscv_vfwsub_wf_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwsub_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwsub_vv_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfwsub_vv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwsub_vf_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwsub_vf_f32m4_tu(vfloat32m4_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwsub_vf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwsub_wv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwsub_wv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vfwsub_wv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwsub_wf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwsub_wf_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwsub_wf_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwsub_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwsub_vv_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfwsub_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwsub_vf_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwsub_vf_f32m8_tu(vfloat32m8_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwsub_vf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwsub_wv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwsub_wv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vfwsub_wv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwsub_wf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwsub_wf_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vfwsub_wf_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwsub_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwsub_vv_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfwsub_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwsub_vf_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwsub_vf_f64m1_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, + float rs1, size_t vl) { return __riscv_vfwsub_vf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwsub_wv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwsub_wv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat32mf2_t vs1, size_t vl) { return __riscv_vfwsub_wv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwsub_wf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwsub_wf_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + float rs1, size_t vl) { return __riscv_vfwsub_wf_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwsub_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwsub_vv_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwsub_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwsub_vf_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, 
size_t vl) { +vfloat64m2_t test_vfwsub_vf_f64m2_tu(vfloat64m2_t vd, vfloat32m1_t vs2, + float rs1, size_t vl) { return __riscv_vfwsub_vf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwsub_wv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwsub_wv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat32m1_t vs1, size_t vl) { return __riscv_vfwsub_wv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwsub_wf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwsub_wf_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + float rs1, size_t vl) { return __riscv_vfwsub_wf_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwsub_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwsub_vv_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfwsub_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwsub_vf_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwsub_vf_f64m4_tu(vfloat64m4_t vd, vfloat32m2_t vs2, + float rs1, size_t vl) { return __riscv_vfwsub_vf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwsub_wv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwsub_wv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat32m2_t vs1, size_t vl) { return __riscv_vfwsub_wv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwsub_wf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwsub_wf_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + float rs1, size_t vl) { return __riscv_vfwsub_wf_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwsub_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwsub_vv_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfwsub_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwsub_vf_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwsub_vf_f64m8_tu(vfloat64m8_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vfwsub_vf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwsub_wv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwsub_wv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vfwsub_wv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwsub_wf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwsub_wf_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + float rs1, size_t vl) { return __riscv_vfwsub_wf_f64m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwsub_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwsub_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwsub_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwsub_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_vf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwsub_wv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwsub_wv_f32mf2_tum(vbool64_t vm, 
vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwsub_wf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwsub_wf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_wf_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwsub_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwsub_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwsub_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwsub_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_vf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwsub_wv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwsub_wv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwsub_wf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwsub_wf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_wf_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwsub_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwsub_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwsub_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwsub_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_vf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwsub_wv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwsub_wv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwsub_wf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwsub_wf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_wf_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwsub_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwsub_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwsub_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwsub_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_vf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwsub_wv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t 
test_vfwsub_wv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwsub_wf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwsub_wf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_wf_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwsub_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwsub_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwsub_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwsub_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat16m4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_vf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vfwsub_wv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) { +vfloat32m8_t test_vfwsub_wv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vfloat16m4_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vfwsub_wf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) { +vfloat32m8_t test_vfwsub_wf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_wf_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwsub_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwsub_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwsub_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwsub_vf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vfwsub_vf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vfwsub_wv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) { +vfloat64m1_t test_vfwsub_wv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vfwsub_wf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) { +vfloat64m1_t test_vfwsub_wf_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, float rs1, size_t vl) { return __riscv_vfwsub_wf_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwsub_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t test_vfwsub_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwsub_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwsub_vf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vfwsub_vf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vfwsub_wv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) { +vfloat64m2_t 
test_vfwsub_wv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vfwsub_wf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) { +vfloat64m2_t test_vfwsub_wf_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, float rs1, size_t vl) { return __riscv_vfwsub_wf_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwsub_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwsub_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwsub_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwsub_vf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vfwsub_vf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vfwsub_wv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) { +vfloat64m4_t test_vfwsub_wv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vfwsub_wf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) { +vfloat64m4_t test_vfwsub_wf_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, float rs1, size_t vl) { return __riscv_vfwsub_wf_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwsub_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwsub_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwsub_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwsub_vf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat32m4_t vs2, float rs1, size_t vl) { return __riscv_vfwsub_vf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vfwsub_wv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) { +vfloat64m8_t test_vfwsub_wv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vfloat32m4_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vfwsub_wf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) { +vfloat64m8_t test_vfwsub_wf_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, float rs1, size_t vl) { return __riscv_vfwsub_wf_f64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwsub_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vfwsub_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwsub_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwsub_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_vf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vfwsub_wv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) { 
+vfloat32mf2_t test_vfwsub_wv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vfwsub_wf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32mf2_t test_vfwsub_wf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_wf_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwsub_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwsub_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwsub_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwsub_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_vf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vfwsub_wv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vfwsub_wv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vfwsub_wf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m1_t test_vfwsub_wf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_wf_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwsub_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwsub_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwsub_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwsub_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_vf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vfwsub_wv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) { +vfloat32m2_t test_vfwsub_wv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vfwsub_wv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vfwsub_wf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m2_t test_vfwsub_wf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_wf_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vfwsub_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vfloat32m4_t test_vfwsub_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, vfloat16m2_t vs1, + size_t vl) { return __riscv_vfwsub_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vfwsub_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vfloat32m4_t test_vfwsub_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat16m2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vfwsub_vf_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t 
-vfloat32m4_t test_vfwsub_wv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd,
+                                       vfloat32m4_t vs2, vfloat16m2_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_wv_f32m4_tumu(vm, vd, vs2, vs1, vl);
 }

-vfloat32m4_t test_vfwsub_wf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd,
+                                       vfloat32m4_t vs2, _Float16 rs1,
+                                       size_t vl) {
   return __riscv_vfwsub_wf_f32m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m8_t test_vfwsub_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                       vfloat16m4_t vs2, vfloat16m4_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_vv_f32m8_tumu(vm, vd, vs2, vs1, vl);
 }

-vfloat32m8_t test_vfwsub_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                       vfloat16m4_t vs2, _Float16 rs1,
+                                       size_t vl) {
   return __riscv_vfwsub_vf_f32m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m8_t test_vfwsub_wv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                       vfloat32m8_t vs2, vfloat16m4_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_wv_f32m8_tumu(vm, vd, vs2, vs1, vl);
 }

-vfloat32m8_t test_vfwsub_wf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                       vfloat32m8_t vs2, _Float16 rs1,
+                                       size_t vl) {
   return __riscv_vfwsub_wf_f32m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m1_t test_vfwsub_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd,
+                                       vfloat32mf2_t vs2, vfloat32mf2_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_vv_f64m1_tumu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m1_t test_vfwsub_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd,
+                                       vfloat32mf2_t vs2, float rs1,
+                                       size_t vl) {
   return __riscv_vfwsub_vf_f64m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m1_t test_vfwsub_wv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd,
+                                       vfloat64m1_t vs2, vfloat32mf2_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_wv_f64m1_tumu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m1_t test_vfwsub_wf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wf_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd,
+                                       vfloat64m1_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m2_t test_vfwsub_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd,
+                                       vfloat32m1_t vs2, vfloat32m1_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_vv_f64m2_tumu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m2_t test_vfwsub_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd,
+                                       vfloat32m1_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_vf_f64m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m2_t test_vfwsub_wv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd,
+                                       vfloat64m2_t vs2, vfloat32m1_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_wv_f64m2_tumu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m2_t test_vfwsub_wf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wf_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd,
+                                       vfloat64m2_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m4_t test_vfwsub_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd,
+                                       vfloat32m2_t vs2, vfloat32m2_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_vv_f64m4_tumu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m4_t test_vfwsub_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd,
+                                       vfloat32m2_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_vf_f64m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m4_t test_vfwsub_wv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd,
+                                       vfloat64m4_t vs2, vfloat32m2_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_wv_f64m4_tumu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m4_t test_vfwsub_wf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wf_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd,
+                                       vfloat64m4_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m8_t test_vfwsub_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                       vfloat32m4_t vs2, vfloat32m4_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_vv_f64m8_tumu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m8_t test_vfwsub_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                       vfloat32m4_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_vf_f64m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m8_t test_vfwsub_wv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                       vfloat64m8_t vs2, vfloat32m4_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_wv_f64m8_tumu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m8_t test_vfwsub_wf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wf_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                       vfloat64m8_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat32mf2_t test_vfwsub_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                       vfloat16mf4_t vs2, vfloat16mf4_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_vv_f32mf2_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat32mf2_t test_vfwsub_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                       vfloat16mf4_t vs2, _Float16 rs1,
+                                       size_t vl) {
   return __riscv_vfwsub_vf_f32mf2_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat32mf2_t test_vfwsub_wv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_wv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                       vfloat32mf2_t vs2, vfloat16mf4_t vs1,
+                                       size_t vl) {
   return __riscv_vfwsub_wv_f32mf2_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat32mf2_t test_vfwsub_wf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_wf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                       vfloat32mf2_t vs2, _Float16 rs1,
+                                       size_t vl) {
   return __riscv_vfwsub_wf_f32mf2_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m1_t test_vfwsub_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwsub_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd,
+                                     vfloat16mf2_t vs2, vfloat16mf2_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vv_f32m1_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat32m1_t test_vfwsub_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwsub_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd,
+                                     vfloat16mf2_t vs2, _Float16 rs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vf_f32m1_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m1_t test_vfwsub_wv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwsub_wv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd,
+                                     vfloat32m1_t vs2, vfloat16mf2_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wv_f32m1_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat32m1_t test_vfwsub_wf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwsub_wf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd,
+                                     vfloat32m1_t vs2, _Float16 rs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wf_f32m1_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m2_t test_vfwsub_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwsub_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd,
+                                     vfloat16m1_t vs2, vfloat16m1_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vv_f32m2_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat32m2_t test_vfwsub_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwsub_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd,
+                                     vfloat16m1_t vs2, _Float16 rs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vf_f32m2_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m2_t test_vfwsub_wv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwsub_wv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd,
+                                     vfloat32m2_t vs2, vfloat16m1_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wv_f32m2_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat32m2_t test_vfwsub_wf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwsub_wf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd,
+                                     vfloat32m2_t vs2, _Float16 rs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wf_f32m2_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m4_t test_vfwsub_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwsub_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd,
+                                     vfloat16m2_t vs2, vfloat16m2_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vv_f32m4_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat32m4_t test_vfwsub_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwsub_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd,
+                                     vfloat16m2_t vs2, _Float16 rs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vf_f32m4_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m4_t test_vfwsub_wv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd,
+                                     vfloat32m4_t vs2, vfloat16m2_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wv_f32m4_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat32m4_t test_vfwsub_wf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd,
+                                     vfloat32m4_t vs2, _Float16 rs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wf_f32m4_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m8_t test_vfwsub_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+                                     vfloat16m4_t vs2, vfloat16m4_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vv_f32m8_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat32m8_t test_vfwsub_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+                                     vfloat16m4_t vs2, _Float16 rs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vf_f32m8_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m8_t test_vfwsub_wv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+                                     vfloat32m8_t vs2, vfloat16m4_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wv_f32m8_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat32m8_t test_vfwsub_wf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+                                     vfloat32m8_t vs2, _Float16 rs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wf_f32m8_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m1_t test_vfwsub_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+                                     vfloat32mf2_t vs2, vfloat32mf2_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vv_f64m1_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m1_t test_vfwsub_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+                                     vfloat32mf2_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_vf_f64m1_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m1_t test_vfwsub_wv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+                                     vfloat64m1_t vs2, vfloat32mf2_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wv_f64m1_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m1_t test_vfwsub_wf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wf_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+                                     vfloat64m1_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m1_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m2_t test_vfwsub_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+                                     vfloat32m1_t vs2, vfloat32m1_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vv_f64m2_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m2_t test_vfwsub_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+                                     vfloat32m1_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_vf_f64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m2_t test_vfwsub_wv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+                                     vfloat64m2_t vs2, vfloat32m1_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wv_f64m2_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m2_t test_vfwsub_wf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wf_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+                                     vfloat64m2_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m4_t test_vfwsub_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                     vfloat32m2_t vs2, vfloat32m2_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vv_f64m4_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m4_t test_vfwsub_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                     vfloat32m2_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_vf_f64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m4_t test_vfwsub_wv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                     vfloat64m4_t vs2, vfloat32m2_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wv_f64m4_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m4_t test_vfwsub_wf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wf_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+                                     vfloat64m4_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m8_t test_vfwsub_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                     vfloat32m4_t vs2, vfloat32m4_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_vv_f64m8_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m8_t test_vfwsub_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                     vfloat32m4_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_vf_f64m8_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m8_t test_vfwsub_wv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                     vfloat64m8_t vs2, vfloat32m4_t vs1,
+                                     size_t vl) {
   return __riscv_vfwsub_wv_f64m8_mu(vm, vd, vs2, vs1, vl);
 }

-vfloat64m8_t test_vfwsub_wf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wf_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+                                     vfloat64m8_t vs2, float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m8_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat32mf2_t test_vfwsub_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2,
+                                          vfloat16mf4_t vs1, size_t vl) {
   return __riscv_vfwsub_vv_f32mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_vf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat16mf4_t vs2,
+                                          _Float16 rs1, size_t vl) {
   return __riscv_vfwsub_vf_f32mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_wv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_wv_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2,
+                                          vfloat16mf4_t vs1, size_t vl) {
   return __riscv_vfwsub_wv_f32mf2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_wf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_wf_f32mf2_rm_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2,
+                                          _Float16 rs1, size_t vl) {
   return __riscv_vfwsub_wf_f32mf2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwsub_vv_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2,
+                                        vfloat16mf2_t vs1, size_t vl) {
   return __riscv_vfwsub_vv_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwsub_vf_f32m1_rm_tu(vfloat32m1_t vd, vfloat16mf2_t vs2,
+                                        _Float16 rs1, size_t vl) {
   return __riscv_vfwsub_vf_f32m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_wv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwsub_wv_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2,
+                                        vfloat16mf2_t vs1, size_t vl) {
   return __riscv_vfwsub_wv_f32m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_wf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwsub_wf_f32m1_rm_tu(vfloat32m1_t vd, vfloat32m1_t vs2,
+                                        _Float16 rs1, size_t vl) {
   return __riscv_vfwsub_wf_f32m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwsub_vv_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2,
+                                        vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfwsub_vv_f32m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwsub_vf_f32m2_rm_tu(vfloat32m2_t vd, vfloat16m1_t vs2,
+                                        _Float16 rs1, size_t vl) {
   return __riscv_vfwsub_vf_f32m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_wv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwsub_wv_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2,
+                                        vfloat16m1_t vs1, size_t vl) {
   return __riscv_vfwsub_wv_f32m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_wf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwsub_wf_f32m2_rm_tu(vfloat32m2_t vd, vfloat32m2_t vs2,
+                                        _Float16 rs1, size_t vl) {
   return __riscv_vfwsub_wf_f32m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwsub_vv_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2,
+                                        vfloat16m2_t vs1, size_t vl) {
   return __riscv_vfwsub_vv_f32m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwsub_vf_f32m4_rm_tu(vfloat32m4_t vd, vfloat16m2_t vs2,
+                                        _Float16 rs1, size_t vl) {
   return __riscv_vfwsub_vf_f32m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m4_t test_vfwsub_wv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wv_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2,
+                                        vfloat16m2_t vs1, size_t vl) {
   return __riscv_vfwsub_wv_f32m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_wf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wf_f32m4_rm_tu(vfloat32m4_t vd, vfloat32m4_t vs2,
+                                        _Float16 rs1, size_t vl) {
   return __riscv_vfwsub_wf_f32m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vv_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2,
+                                        vfloat16m4_t vs1, size_t vl) {
   return __riscv_vfwsub_vv_f32m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vf_f32m8_rm_tu(vfloat32m8_t vd, vfloat16m4_t vs2,
+                                        _Float16 rs1, size_t vl) {
   return __riscv_vfwsub_vf_f32m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_wv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wv_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2,
+                                        vfloat16m4_t vs1, size_t vl) {
   return __riscv_vfwsub_wv_f32m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_wf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wf_f32m8_rm_tu(vfloat32m8_t vd, vfloat32m8_t vs2,
+                                        _Float16 rs1, size_t vl) {
   return __riscv_vfwsub_wf_f32m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vv_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2,
+                                        vfloat32mf2_t vs1, size_t vl) {
   return __riscv_vfwsub_vv_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vf_f64m1_rm_tu(vfloat64m1_t vd, vfloat32mf2_t vs2,
+                                        float rs1, size_t vl) {
   return __riscv_vfwsub_vf_f64m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_wv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wv_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2,
+                                        vfloat32mf2_t vs1, size_t vl) {
   return __riscv_vfwsub_wv_f64m1_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_wf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wf_f64m1_rm_tu(vfloat64m1_t vd, vfloat64m1_t vs2,
+                                        float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m1_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vv_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2,
+                                        vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfwsub_vv_f64m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vf_f64m2_rm_tu(vfloat64m2_t vd, vfloat32m1_t vs2,
+                                        float rs1, size_t vl) {
   return __riscv_vfwsub_vf_f64m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_wv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wv_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2,
+                                        vfloat32m1_t vs1, size_t vl) {
   return __riscv_vfwsub_wv_f64m2_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_wf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wf_f64m2_rm_tu(vfloat64m2_t vd, vfloat64m2_t vs2,
+                                        float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m2_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vv_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2,
+                                        vfloat32m2_t vs1, size_t vl) {
   return __riscv_vfwsub_vv_f64m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vf_f64m4_rm_tu(vfloat64m4_t vd, vfloat32m2_t vs2,
+                                        float rs1, size_t vl) {
   return __riscv_vfwsub_vf_f64m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_wv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wv_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2,
+                                        vfloat32m2_t vs1, size_t vl) {
   return __riscv_vfwsub_wv_f64m4_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_wf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wf_f64m4_rm_tu(vfloat64m4_t vd, vfloat64m4_t vs2,
+                                        float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m4_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vv_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2,
+                                        vfloat32m4_t vs1, size_t vl) {
   return __riscv_vfwsub_vv_f64m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vf_f64m8_rm_tu(vfloat64m8_t vd, vfloat32m4_t vs2,
+                                        float rs1, size_t vl) {
   return __riscv_vfwsub_vf_f64m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_wv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wv_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2,
+                                        vfloat32m4_t vs1, size_t vl) {
   return __riscv_vfwsub_wv_f64m8_rm_tu(vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_wf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wf_f64m8_rm_tu(vfloat64m8_t vd, vfloat64m8_t vs2,
+                                        float rs1, size_t vl) {
   return __riscv_vfwsub_wf_f64m8_rm_tu(vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd,
+                                           vfloat16mf4_t vs2, vfloat16mf4_t vs1,
+                                           size_t vl) {
   return __riscv_vfwsub_vv_f32mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd,
+                                           vfloat16mf4_t vs2, _Float16 rs1,
+                                           size_t vl) {
   return __riscv_vfwsub_vf_f32mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_wv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_wv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd,
+                                           vfloat32mf2_t vs2, vfloat16mf4_t vs1,
+                                           size_t vl) {
   return __riscv_vfwsub_wv_f32mf2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_wf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_wf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd,
+                                           vfloat32mf2_t vs2, _Float16 rs1,
+                                           size_t vl) {
   return __riscv_vfwsub_wf_f32mf2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwsub_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd,
+                                         vfloat16mf2_t vs2, vfloat16mf2_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vv_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwsub_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd,
+                                         vfloat16mf2_t vs2, _Float16 rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vf_f32m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_wv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwsub_wv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd,
+                                         vfloat32m1_t vs2, vfloat16mf2_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wv_f32m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_wf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwsub_wf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd,
+                                         vfloat32m1_t vs2, _Float16 rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wf_f32m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwsub_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd,
+                                         vfloat16m1_t vs2, vfloat16m1_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vv_f32m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwsub_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd,
+                                         vfloat16m1_t vs2, _Float16 rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vf_f32m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_wv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwsub_wv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd,
+                                         vfloat32m2_t vs2, vfloat16m1_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wv_f32m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_wf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwsub_wf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd,
+                                         vfloat32m2_t vs2, _Float16 rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wf_f32m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwsub_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd,
+                                         vfloat16m2_t vs2, vfloat16m2_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vv_f32m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwsub_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd,
+                                         vfloat16m2_t vs2, _Float16 rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vf_f32m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_wv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd,
+                                         vfloat32m4_t vs2, vfloat16m2_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wv_f32m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_wf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd,
+                                         vfloat32m4_t vs2, _Float16 rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wf_f32m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd,
+                                         vfloat16m4_t vs2, vfloat16m4_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vv_f32m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd,
+                                         vfloat16m4_t vs2, _Float16 rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vf_f32m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_wv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd,
+                                         vfloat32m8_t vs2, vfloat16m4_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wv_f32m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_wf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd,
+                                         vfloat32m8_t vs2, _Float16 rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wf_f32m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd,
+                                         vfloat32mf2_t vs2, vfloat32mf2_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vv_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd,
+                                         vfloat32mf2_t vs2, float rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vf_f64m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_wv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wv_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd,
+                                         vfloat64m1_t vs2, vfloat32mf2_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wv_f64m1_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_wf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wf_f64m1_rm_tum(vbool64_t vm, vfloat64m1_t vd,
+                                         vfloat64m1_t vs2, float rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wf_f64m1_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd,
+                                         vfloat32m1_t vs2, vfloat32m1_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vv_f64m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd,
+                                         vfloat32m1_t vs2, float rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vf_f64m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_wv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wv_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd,
+                                         vfloat64m2_t vs2, vfloat32m1_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wv_f64m2_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_wf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wf_f64m2_rm_tum(vbool32_t vm, vfloat64m2_t vd,
+                                         vfloat64m2_t vs2, float rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wf_f64m2_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd,
+                                         vfloat32m2_t vs2, vfloat32m2_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vv_f64m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd,
+                                         vfloat32m2_t vs2, float rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vf_f64m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_wv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wv_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd,
+                                         vfloat64m4_t vs2, vfloat32m2_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wv_f64m4_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_wf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wf_f64m4_rm_tum(vbool16_t vm, vfloat64m4_t vd,
+                                         vfloat64m4_t vs2, float rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wf_f64m4_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd,
+                                         vfloat32m4_t vs2, vfloat32m4_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vv_f64m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd,
+                                         vfloat32m4_t vs2, float rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_vf_f64m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_wv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wv_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd,
+                                         vfloat64m8_t vs2, vfloat32m4_t vs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wv_f64m8_rm_tum(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_wf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wf_f64m8_rm_tum(vbool8_t vm, vfloat64m8_t vd,
+                                         vfloat64m8_t vs2, float rs1,
+                                         size_t vl) {
   return __riscv_vfwsub_wf_f64m8_rm_tum(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
-  return __riscv_vfwsub_vv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfwsub_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd,
+                                            vfloat16mf4_t vs2,
+                                            vfloat16mf4_t vs1, size_t vl) {
+  return __riscv_vfwsub_vv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE,
+                                          vl);
 }

-vfloat32mf2_t test_vfwsub_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
-  return __riscv_vfwsub_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfwsub_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd,
+                                            vfloat16mf4_t vs2, _Float16 rs1,
+                                            size_t vl) {
+  return __riscv_vfwsub_vf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE,
+                                          vl);
 }

-vfloat32mf2_t test_vfwsub_wv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) {
-  return __riscv_vfwsub_wv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfwsub_wv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd,
+                                            vfloat32mf2_t vs2,
+                                            vfloat16mf4_t vs1, size_t vl) {
+  return __riscv_vfwsub_wv_f32mf2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE,
+                                          vl);
 }

-vfloat32mf2_t test_vfwsub_wf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) {
-  return __riscv_vfwsub_wf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
+vfloat32mf2_t test_vfwsub_wf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd,
+                                            vfloat32mf2_t vs2, _Float16 rs1,
+                                            size_t vl) {
+  return __riscv_vfwsub_wf_f32mf2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE,
+                                          vl);
 }

-vfloat32m1_t test_vfwsub_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwsub_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd,
+                                          vfloat16mf2_t vs2, vfloat16mf2_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vv_f32m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwsub_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd,
+                                          vfloat16mf2_t vs2, _Float16 rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vf_f32m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_wv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwsub_wv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd,
+                                          vfloat32m1_t vs2, vfloat16mf2_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wv_f32m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_wf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwsub_wf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd,
+                                          vfloat32m1_t vs2, _Float16 rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wf_f32m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwsub_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd,
+                                          vfloat16m1_t vs2, vfloat16m1_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vv_f32m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwsub_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd,
+                                          vfloat16m1_t vs2, _Float16 rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vf_f32m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_wv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwsub_wv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd,
+                                          vfloat32m2_t vs2, vfloat16m1_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wv_f32m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_wf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwsub_wf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd,
+                                          vfloat32m2_t vs2, _Float16 rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wf_f32m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwsub_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd,
+                                          vfloat16m2_t vs2, vfloat16m2_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vv_f32m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwsub_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd,
+                                          vfloat16m2_t vs2, _Float16 rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vf_f32m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_wv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd,
+                                          vfloat32m4_t vs2, vfloat16m2_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wv_f32m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_wf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd,
+                                          vfloat32m4_t vs2, _Float16 rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wf_f32m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                          vfloat16m4_t vs2, vfloat16m4_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vv_f32m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                          vfloat16m4_t vs2, _Float16 rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vf_f32m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_wv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                          vfloat32m8_t vs2, vfloat16m4_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wv_f32m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_wf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd,
+                                          vfloat32m8_t vs2, _Float16 rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wf_f32m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd,
+                                          vfloat32mf2_t vs2, vfloat32mf2_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vv_f64m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd,
+                                          vfloat32mf2_t vs2, float rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vf_f64m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_wv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wv_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd,
+                                          vfloat64m1_t vs2, vfloat32mf2_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wv_f64m1_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_wf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wf_f64m1_rm_tumu(vbool64_t vm, vfloat64m1_t vd,
+                                          vfloat64m1_t vs2, float rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wf_f64m1_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd,
+                                          vfloat32m1_t vs2, vfloat32m1_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vv_f64m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd,
+                                          vfloat32m1_t vs2, float rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vf_f64m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_wv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wv_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd,
+                                          vfloat64m2_t vs2, vfloat32m1_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wv_f64m2_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_wf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wf_f64m2_rm_tumu(vbool32_t vm, vfloat64m2_t vd,
+                                          vfloat64m2_t vs2, float rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wf_f64m2_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd,
+                                          vfloat32m2_t vs2, vfloat32m2_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vv_f64m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd,
+                                          vfloat32m2_t vs2, float rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vf_f64m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_wv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wv_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd,
+                                          vfloat64m4_t vs2, vfloat32m2_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wv_f64m4_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat64m4_t test_vfwsub_wf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wf_f64m4_rm_tumu(vbool16_t vm, vfloat64m4_t vd,
+                                          vfloat64m4_t vs2, float rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wf_f64m4_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                          vfloat32m4_t vs2, vfloat32m4_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vv_f64m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                          vfloat32m4_t vs2, float rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vf_f64m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_wv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wv_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                          vfloat64m8_t vs2, vfloat32m4_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wv_f64m8_rm_tumu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_wf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wf_f64m8_rm_tumu(vbool8_t vm, vfloat64m8_t vd,
+                                          vfloat64m8_t vs2, float rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wf_f64m8_rm_tumu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                          vfloat16mf4_t vs2, vfloat16mf4_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vv_f32mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                          vfloat16mf4_t vs2, _Float16 rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_vf_f32mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_wv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat16mf4_t vs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_wv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                          vfloat32mf2_t vs2, vfloat16mf4_t vs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wv_f32mf2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32mf2_t test_vfwsub_wf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32mf2_t test_vfwsub_wf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd,
+                                          vfloat32mf2_t vs2, _Float16 rs1,
+                                          size_t vl) {
   return __riscv_vfwsub_wf_f32mf2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwsub_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+                                        vfloat16mf2_t vs2, vfloat16mf2_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vv_f32m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwsub_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+                                        vfloat16mf2_t vs2, _Float16 rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vf_f32m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_wv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vfloat16mf2_t vs1, size_t vl) {
+vfloat32m1_t test_vfwsub_wv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+                                        vfloat32m1_t vs2, vfloat16mf2_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wv_f32m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m1_t test_vfwsub_wf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m1_t test_vfwsub_wf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd,
+                                        vfloat32m1_t vs2, _Float16 rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wf_f32m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwsub_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+                                        vfloat16m1_t vs2, vfloat16m1_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vv_f32m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwsub_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+                                        vfloat16m1_t vs2, _Float16 rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vf_f32m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_wv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vfloat16m1_t vs1, size_t vl) {
+vfloat32m2_t test_vfwsub_wv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+                                        vfloat32m2_t vs2, vfloat16m1_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wv_f32m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m2_t test_vfwsub_wf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m2_t test_vfwsub_wf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd,
+                                        vfloat32m2_t vs2, _Float16 rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wf_f32m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwsub_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+                                        vfloat16m2_t vs2, vfloat16m2_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vv_f32m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwsub_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+                                        vfloat16m2_t vs2, _Float16 rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vf_f32m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_wv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vfloat16m2_t vs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+                                        vfloat32m4_t vs2, vfloat16m2_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wv_f32m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m4_t test_vfwsub_wf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m4_t test_vfwsub_wf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd,
+                                        vfloat32m4_t vs2, _Float16 rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wf_f32m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                        vfloat16m4_t vs2, vfloat16m4_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vv_f32m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }
-vfloat32m8_t test_vfwsub_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                        vfloat16m4_t vs2, _Float16 rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vf_f32m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_wv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vfloat16m4_t vs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                        vfloat32m8_t vs2, vfloat16m4_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wv_f32m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat32m8_t test_vfwsub_wf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, _Float16 rs1, size_t vl) {
+vfloat32m8_t test_vfwsub_wf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
+                                        vfloat32m8_t vs2, _Float16 rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wf_f32m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+                                        vfloat32mf2_t vs2, vfloat32mf2_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vv_f64m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_vf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+                                        vfloat32mf2_t vs2, float rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vf_f64m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_wv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vfloat32mf2_t vs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wv_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+                                        vfloat64m1_t vs2, vfloat32mf2_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wv_f64m1_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m1_t test_vfwsub_wf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, float rs1, size_t vl) {
+vfloat64m1_t test_vfwsub_wf_f64m1_rm_mu(vbool64_t vm, vfloat64m1_t vd,
+                                        vfloat64m1_t vs2, float rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wf_f64m1_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+                                        vfloat32m1_t vs2, vfloat32m1_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vv_f64m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat32m1_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_vf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+                                        vfloat32m1_t vs2, float rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vf_f64m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_wv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vfloat32m1_t vs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wv_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+                                        vfloat64m2_t vs2, vfloat32m1_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wv_f64m2_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m2_t test_vfwsub_wf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, float rs1, size_t vl) {
+vfloat64m2_t test_vfwsub_wf_f64m2_rm_mu(vbool32_t vm, vfloat64m2_t vd,
+                                        vfloat64m2_t vs2, float rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wf_f64m2_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                        vfloat32m2_t vs2, vfloat32m2_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vv_f64m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat32m2_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_vf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                        vfloat32m2_t vs2, float rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vf_f64m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_wv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vfloat32m2_t vs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wv_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                        vfloat64m4_t vs2, vfloat32m2_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wv_f64m4_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m4_t test_vfwsub_wf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, float rs1, size_t vl) {
+vfloat64m4_t test_vfwsub_wf_f64m4_rm_mu(vbool16_t vm, vfloat64m4_t vd,
+                                        vfloat64m4_t vs2, float rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wf_f64m4_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                        vfloat32m4_t vs2, vfloat32m4_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vv_f64m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_vf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                        vfloat32m4_t vs2, float rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_vf_f64m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_wv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vfloat32m4_t vs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wv_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                        vfloat64m8_t vs2, vfloat32m4_t vs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wv_f64m8_rm_mu(vm, vd, vs2, vs1, __RISCV_FRM_RNE, vl);
 }

-vfloat64m8_t test_vfwsub_wf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, float rs1, size_t vl) {
+vfloat64m8_t test_vfwsub_wf_f64m8_rm_mu(vbool8_t vm, vfloat64m8_t vd,
+                                        vfloat64m8_t vs2, float rs1,
+                                        size_t vl) {
   return __riscv_vfwsub_wf_f64m8_rm_mu(vm, vd, vs2, rs1, __RISCV_FRM_RNE, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/viota.c b/auto-generated/policy_funcs/llvm-api-tests/viota.c
index 4d6754430..77b9da2bd 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/viota.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/viota.c
@@ -93,266 +93,332 @@ vuint64m8_t test_viota_m_u64m8_tu(vuint64m8_t vd, vbool8_t vs2, size_t vl) {
   return __riscv_viota_m_u64m8_tu(vd, vs2, vl);
 }

-vuint8mf8_t test_viota_m_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vbool64_t vs2, size_t vl) {
+vuint8mf8_t test_viota_m_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vbool64_t vs2,
+                                   size_t vl) {
   return __riscv_viota_m_u8mf8_tum(vm, vd, vs2, vl);
 }

-vuint8mf4_t test_viota_m_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vbool32_t vs2, size_t vl) {
+vuint8mf4_t test_viota_m_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vbool32_t vs2,
+                                   size_t vl) {
   return __riscv_viota_m_u8mf4_tum(vm, vd, vs2, vl);
 }

-vuint8mf2_t test_viota_m_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vbool16_t vs2, size_t vl) {
+vuint8mf2_t test_viota_m_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vbool16_t vs2,
+ size_t vl) { return __riscv_viota_m_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_viota_m_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vbool8_t vs2, size_t vl) { +vuint8m1_t test_viota_m_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_viota_m_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vbool4_t vs2, size_t vl) { +vuint8m2_t test_viota_m_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vbool4_t vs2, + size_t vl) { return __riscv_viota_m_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_viota_m_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vbool2_t vs2, size_t vl) { +vuint8m4_t test_viota_m_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vbool2_t vs2, + size_t vl) { return __riscv_viota_m_u8m4_tum(vm, vd, vs2, vl); } -vuint8m8_t test_viota_m_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vbool1_t vs2, size_t vl) { +vuint8m8_t test_viota_m_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vbool1_t vs2, + size_t vl) { return __riscv_viota_m_u8m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_viota_m_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vbool64_t vs2, size_t vl) { +vuint16mf4_t test_viota_m_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vbool64_t vs2, size_t vl) { return __riscv_viota_m_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_viota_m_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vbool32_t vs2, size_t vl) { +vuint16mf2_t test_viota_m_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vbool32_t vs2, size_t vl) { return __riscv_viota_m_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_viota_m_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vbool16_t vs2, size_t vl) { +vuint16m1_t test_viota_m_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vbool16_t vs2, + size_t vl) { return __riscv_viota_m_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_viota_m_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vbool8_t vs2, size_t vl) { +vuint16m2_t test_viota_m_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_viota_m_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vbool4_t vs2, size_t vl) { +vuint16m4_t test_viota_m_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vbool4_t vs2, + size_t vl) { return __riscv_viota_m_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_viota_m_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vbool2_t vs2, size_t vl) { +vuint16m8_t test_viota_m_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vbool2_t vs2, + size_t vl) { return __riscv_viota_m_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_viota_m_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vbool64_t vs2, size_t vl) { +vuint32mf2_t test_viota_m_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vbool64_t vs2, size_t vl) { return __riscv_viota_m_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_viota_m_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vbool32_t vs2, size_t vl) { +vuint32m1_t test_viota_m_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vbool32_t vs2, + size_t vl) { return __riscv_viota_m_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_viota_m_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vbool16_t vs2, size_t vl) { +vuint32m2_t test_viota_m_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vbool16_t vs2, + size_t vl) { return __riscv_viota_m_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_viota_m_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vbool8_t vs2, size_t vl) { +vuint32m4_t test_viota_m_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_viota_m_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vbool4_t vs2, size_t vl) { +vuint32m8_t 
test_viota_m_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vbool4_t vs2, + size_t vl) { return __riscv_viota_m_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_viota_m_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vbool64_t vs2, size_t vl) { +vuint64m1_t test_viota_m_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vbool64_t vs2, + size_t vl) { return __riscv_viota_m_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_viota_m_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vbool32_t vs2, size_t vl) { +vuint64m2_t test_viota_m_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vbool32_t vs2, + size_t vl) { return __riscv_viota_m_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_viota_m_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vbool16_t vs2, size_t vl) { +vuint64m4_t test_viota_m_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vbool16_t vs2, + size_t vl) { return __riscv_viota_m_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_viota_m_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vbool8_t vs2, size_t vl) { +vuint64m8_t test_viota_m_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u64m8_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_viota_m_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vbool64_t vs2, size_t vl) { +vuint8mf8_t test_viota_m_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vbool64_t vs2, + size_t vl) { return __riscv_viota_m_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_viota_m_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vbool32_t vs2, size_t vl) { +vuint8mf4_t test_viota_m_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vbool32_t vs2, + size_t vl) { return __riscv_viota_m_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_viota_m_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vbool16_t vs2, size_t vl) { +vuint8mf2_t test_viota_m_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vbool16_t vs2, + size_t vl) { return __riscv_viota_m_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_viota_m_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vbool8_t vs2, size_t vl) { +vuint8m1_t test_viota_m_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_viota_m_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vbool4_t vs2, size_t vl) { +vuint8m2_t test_viota_m_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vbool4_t vs2, + size_t vl) { return __riscv_viota_m_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_viota_m_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vbool2_t vs2, size_t vl) { +vuint8m4_t test_viota_m_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vbool2_t vs2, + size_t vl) { return __riscv_viota_m_u8m4_tumu(vm, vd, vs2, vl); } -vuint8m8_t test_viota_m_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vbool1_t vs2, size_t vl) { +vuint8m8_t test_viota_m_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vbool1_t vs2, + size_t vl) { return __riscv_viota_m_u8m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_viota_m_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vbool64_t vs2, size_t vl) { +vuint16mf4_t test_viota_m_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vbool64_t vs2, size_t vl) { return __riscv_viota_m_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_viota_m_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vbool32_t vs2, size_t vl) { +vuint16mf2_t test_viota_m_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vbool32_t vs2, size_t vl) { return __riscv_viota_m_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_viota_m_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vbool16_t vs2, size_t vl) { +vuint16m1_t test_viota_m_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vbool16_t vs2, + size_t vl) { return __riscv_viota_m_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t 
test_viota_m_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vbool8_t vs2, size_t vl) { +vuint16m2_t test_viota_m_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_viota_m_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vbool4_t vs2, size_t vl) { +vuint16m4_t test_viota_m_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vbool4_t vs2, + size_t vl) { return __riscv_viota_m_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_viota_m_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vbool2_t vs2, size_t vl) { +vuint16m8_t test_viota_m_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vbool2_t vs2, + size_t vl) { return __riscv_viota_m_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_viota_m_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vbool64_t vs2, size_t vl) { +vuint32mf2_t test_viota_m_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vbool64_t vs2, size_t vl) { return __riscv_viota_m_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_viota_m_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vbool32_t vs2, size_t vl) { +vuint32m1_t test_viota_m_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vbool32_t vs2, + size_t vl) { return __riscv_viota_m_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_viota_m_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vbool16_t vs2, size_t vl) { +vuint32m2_t test_viota_m_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vbool16_t vs2, + size_t vl) { return __riscv_viota_m_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_viota_m_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vbool8_t vs2, size_t vl) { +vuint32m4_t test_viota_m_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_viota_m_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vbool4_t vs2, size_t vl) { +vuint32m8_t test_viota_m_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vbool4_t vs2, + size_t vl) { return __riscv_viota_m_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_viota_m_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vbool64_t vs2, size_t vl) { +vuint64m1_t test_viota_m_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vbool64_t vs2, + size_t vl) { return __riscv_viota_m_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_viota_m_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vbool32_t vs2, size_t vl) { +vuint64m2_t test_viota_m_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vbool32_t vs2, + size_t vl) { return __riscv_viota_m_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_viota_m_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vbool16_t vs2, size_t vl) { +vuint64m4_t test_viota_m_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vbool16_t vs2, + size_t vl) { return __riscv_viota_m_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_viota_m_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vbool8_t vs2, size_t vl) { +vuint64m8_t test_viota_m_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u64m8_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_viota_m_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vbool64_t vs2, size_t vl) { +vuint8mf8_t test_viota_m_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vbool64_t vs2, + size_t vl) { return __riscv_viota_m_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_viota_m_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vbool32_t vs2, size_t vl) { +vuint8mf4_t test_viota_m_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vbool32_t vs2, + size_t vl) { return __riscv_viota_m_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_viota_m_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vbool16_t vs2, size_t vl) { +vuint8mf2_t test_viota_m_u8mf2_mu(vbool16_t vm, 
vuint8mf2_t vd, vbool16_t vs2, + size_t vl) { return __riscv_viota_m_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_viota_m_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vbool8_t vs2, size_t vl) { +vuint8m1_t test_viota_m_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_viota_m_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vbool4_t vs2, size_t vl) { +vuint8m2_t test_viota_m_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vbool4_t vs2, + size_t vl) { return __riscv_viota_m_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_viota_m_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vbool2_t vs2, size_t vl) { +vuint8m4_t test_viota_m_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vbool2_t vs2, + size_t vl) { return __riscv_viota_m_u8m4_mu(vm, vd, vs2, vl); } -vuint8m8_t test_viota_m_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vbool1_t vs2, size_t vl) { +vuint8m8_t test_viota_m_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vbool1_t vs2, + size_t vl) { return __riscv_viota_m_u8m8_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_viota_m_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vbool64_t vs2, size_t vl) { +vuint16mf4_t test_viota_m_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vbool64_t vs2, size_t vl) { return __riscv_viota_m_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_viota_m_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vbool32_t vs2, size_t vl) { +vuint16mf2_t test_viota_m_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vbool32_t vs2, size_t vl) { return __riscv_viota_m_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_viota_m_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vbool16_t vs2, size_t vl) { +vuint16m1_t test_viota_m_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vbool16_t vs2, + size_t vl) { return __riscv_viota_m_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_viota_m_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vbool8_t vs2, size_t vl) { +vuint16m2_t test_viota_m_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_viota_m_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vbool4_t vs2, size_t vl) { +vuint16m4_t test_viota_m_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vbool4_t vs2, + size_t vl) { return __riscv_viota_m_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_viota_m_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vbool2_t vs2, size_t vl) { +vuint16m8_t test_viota_m_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vbool2_t vs2, + size_t vl) { return __riscv_viota_m_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_viota_m_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vbool64_t vs2, size_t vl) { +vuint32mf2_t test_viota_m_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vbool64_t vs2, size_t vl) { return __riscv_viota_m_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_viota_m_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vbool32_t vs2, size_t vl) { +vuint32m1_t test_viota_m_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vbool32_t vs2, + size_t vl) { return __riscv_viota_m_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_viota_m_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vbool16_t vs2, size_t vl) { +vuint32m2_t test_viota_m_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vbool16_t vs2, + size_t vl) { return __riscv_viota_m_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_viota_m_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vbool8_t vs2, size_t vl) { +vuint32m4_t test_viota_m_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_viota_m_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vbool4_t vs2, size_t vl) { +vuint32m8_t test_viota_m_u32m8_mu(vbool4_t 
vm, vuint32m8_t vd, vbool4_t vs2, + size_t vl) { return __riscv_viota_m_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_viota_m_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vbool64_t vs2, size_t vl) { +vuint64m1_t test_viota_m_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vbool64_t vs2, + size_t vl) { return __riscv_viota_m_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_viota_m_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vbool32_t vs2, size_t vl) { +vuint64m2_t test_viota_m_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vbool32_t vs2, + size_t vl) { return __riscv_viota_m_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_viota_m_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vbool16_t vs2, size_t vl) { +vuint64m4_t test_viota_m_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vbool16_t vs2, + size_t vl) { return __riscv_viota_m_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_viota_m_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vbool8_t vs2, size_t vl) { +vuint64m8_t test_viota_m_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vbool8_t vs2, + size_t vl) { return __riscv_viota_m_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle16.c b/auto-generated/policy_funcs/llvm-api-tests/vle16.c index 1a4f1b364..753f3077c 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vle16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vle16.c @@ -6,35 +6,43 @@ #include -vfloat16mf4_t test_vle16_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4_t test_vle16_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, + size_t vl) { return __riscv_vle16_v_f16mf4_tu(vd, rs1, vl); } -vfloat16mf2_t test_vle16_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2_t test_vle16_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, + size_t vl) { return __riscv_vle16_v_f16mf2_tu(vd, rs1, vl); } -vfloat16m1_t test_vle16_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1_t test_vle16_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, + size_t vl) { return __riscv_vle16_v_f16m1_tu(vd, rs1, vl); } -vfloat16m2_t test_vle16_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m2_t test_vle16_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, + size_t vl) { return __riscv_vle16_v_f16m2_tu(vd, rs1, vl); } -vfloat16m4_t test_vle16_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m4_t test_vle16_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, + size_t vl) { return __riscv_vle16_v_f16m4_tu(vd, rs1, vl); } -vfloat16m8_t test_vle16_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m8_t test_vle16_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, + size_t vl) { return __riscv_vle16_v_f16m8_tu(vd, rs1, vl); } -vint16mf4_t test_vle16_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, size_t vl) { +vint16mf4_t test_vle16_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vle16_v_i16mf4_tu(vd, rs1, vl); } -vint16mf2_t test_vle16_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, size_t vl) { +vint16mf2_t test_vle16_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vle16_v_i16mf2_tu(vd, rs1, vl); } @@ -54,242 +62,302 @@ vint16m8_t test_vle16_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16m8_tu(vd, rs1, vl); } -vuint16mf4_t test_vle16_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4_t test_vle16_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, + size_t vl) { return __riscv_vle16_v_u16mf4_tu(vd, rs1, vl); } -vuint16mf2_t 
test_vle16_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2_t test_vle16_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, + size_t vl) { return __riscv_vle16_v_u16mf2_tu(vd, rs1, vl); } -vuint16m1_t test_vle16_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1_t test_vle16_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, + size_t vl) { return __riscv_vle16_v_u16m1_tu(vd, rs1, vl); } -vuint16m2_t test_vle16_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, size_t vl) { +vuint16m2_t test_vle16_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, + size_t vl) { return __riscv_vle16_v_u16m2_tu(vd, rs1, vl); } -vuint16m4_t test_vle16_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, size_t vl) { +vuint16m4_t test_vle16_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, + size_t vl) { return __riscv_vle16_v_u16m4_tu(vd, rs1, vl); } -vuint16m8_t test_vle16_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, size_t vl) { +vuint16m8_t test_vle16_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, + size_t vl) { return __riscv_vle16_v_u16m8_tu(vd, rs1, vl); } -vfloat16mf4_t test_vle16_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4_t test_vle16_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16mf4_tum(vm, vd, rs1, vl); } -vfloat16mf2_t test_vle16_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2_t test_vle16_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16mf2_tum(vm, vd, rs1, vl); } -vfloat16m1_t test_vle16_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1_t test_vle16_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m1_tum(vm, vd, rs1, vl); } -vfloat16m2_t test_vle16_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m2_t test_vle16_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m2_tum(vm, vd, rs1, vl); } -vfloat16m4_t test_vle16_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m4_t test_vle16_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m4_tum(vm, vd, rs1, vl); } -vfloat16m8_t test_vle16_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m8_t test_vle16_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m8_tum(vm, vd, rs1, vl); } -vint16mf4_t test_vle16_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, size_t vl) { +vint16mf4_t test_vle16_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16mf4_tum(vm, vd, rs1, vl); } -vint16mf2_t test_vle16_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, size_t vl) { +vint16mf2_t test_vle16_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16mf2_tum(vm, vd, rs1, vl); } -vint16m1_t test_vle16_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, size_t vl) { +vint16m1_t test_vle16_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16m1_tum(vm, vd, rs1, vl); } -vint16m2_t test_vle16_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, size_t vl) { +vint16m2_t 
test_vle16_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16m2_tum(vm, vd, rs1, vl); } -vint16m4_t test_vle16_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, size_t vl) { +vint16m4_t test_vle16_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16m4_tum(vm, vd, rs1, vl); } -vint16m8_t test_vle16_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, size_t vl) { +vint16m8_t test_vle16_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16m8_tum(vm, vd, rs1, vl); } -vuint16mf4_t test_vle16_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4_t test_vle16_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16mf4_tum(vm, vd, rs1, vl); } -vuint16mf2_t test_vle16_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2_t test_vle16_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16mf2_tum(vm, vd, rs1, vl); } -vuint16m1_t test_vle16_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1_t test_vle16_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m1_tum(vm, vd, rs1, vl); } -vuint16m2_t test_vle16_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, size_t vl) { +vuint16m2_t test_vle16_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m2_tum(vm, vd, rs1, vl); } -vuint16m4_t test_vle16_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, size_t vl) { +vuint16m4_t test_vle16_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m4_tum(vm, vd, rs1, vl); } -vuint16m8_t test_vle16_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, size_t vl) { +vuint16m8_t test_vle16_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m8_tum(vm, vd, rs1, vl); } -vfloat16mf4_t test_vle16_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4_t test_vle16_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16mf4_tumu(vm, vd, rs1, vl); } -vfloat16mf2_t test_vle16_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2_t test_vle16_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16mf2_tumu(vm, vd, rs1, vl); } -vfloat16m1_t test_vle16_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1_t test_vle16_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m1_tumu(vm, vd, rs1, vl); } -vfloat16m2_t test_vle16_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m2_t test_vle16_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m2_tumu(vm, vd, rs1, vl); } -vfloat16m4_t test_vle16_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m4_t test_vle16_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m4_tumu(vm, vd, rs1, vl); } -vfloat16m8_t test_vle16_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t 
vd, const _Float16 *rs1, size_t vl) { +vfloat16m8_t test_vle16_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m8_tumu(vm, vd, rs1, vl); } -vint16mf4_t test_vle16_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, size_t vl) { +vint16mf4_t test_vle16_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16mf4_tumu(vm, vd, rs1, vl); } -vint16mf2_t test_vle16_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, size_t vl) { +vint16mf2_t test_vle16_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16mf2_tumu(vm, vd, rs1, vl); } -vint16m1_t test_vle16_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, size_t vl) { +vint16m1_t test_vle16_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16m1_tumu(vm, vd, rs1, vl); } -vint16m2_t test_vle16_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, size_t vl) { +vint16m2_t test_vle16_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16m2_tumu(vm, vd, rs1, vl); } -vint16m4_t test_vle16_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, size_t vl) { +vint16m4_t test_vle16_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16m4_tumu(vm, vd, rs1, vl); } -vint16m8_t test_vle16_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, size_t vl) { +vint16m8_t test_vle16_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16m8_tumu(vm, vd, rs1, vl); } -vuint16mf4_t test_vle16_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4_t test_vle16_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16mf4_tumu(vm, vd, rs1, vl); } -vuint16mf2_t test_vle16_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2_t test_vle16_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16mf2_tumu(vm, vd, rs1, vl); } -vuint16m1_t test_vle16_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1_t test_vle16_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m1_tumu(vm, vd, rs1, vl); } -vuint16m2_t test_vle16_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, size_t vl) { +vuint16m2_t test_vle16_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m2_tumu(vm, vd, rs1, vl); } -vuint16m4_t test_vle16_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, size_t vl) { +vuint16m4_t test_vle16_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m4_tumu(vm, vd, rs1, vl); } -vuint16m8_t test_vle16_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, size_t vl) { +vuint16m8_t test_vle16_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m8_tumu(vm, vd, rs1, vl); } -vfloat16mf4_t test_vle16_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4_t test_vle16_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16mf4_mu(vm, vd, rs1, vl); } -vfloat16mf2_t 
test_vle16_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2_t test_vle16_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16mf2_mu(vm, vd, rs1, vl); } -vfloat16m1_t test_vle16_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1_t test_vle16_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m1_mu(vm, vd, rs1, vl); } -vfloat16m2_t test_vle16_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m2_t test_vle16_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m2_mu(vm, vd, rs1, vl); } -vfloat16m4_t test_vle16_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m4_t test_vle16_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m4_mu(vm, vd, rs1, vl); } -vfloat16m8_t test_vle16_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m8_t test_vle16_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vle16_v_f16m8_mu(vm, vd, rs1, vl); } -vint16mf4_t test_vle16_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, size_t vl) { +vint16mf4_t test_vle16_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16mf4_mu(vm, vd, rs1, vl); } -vint16mf2_t test_vle16_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, size_t vl) { +vint16mf2_t test_vle16_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16mf2_mu(vm, vd, rs1, vl); } -vint16m1_t test_vle16_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, size_t vl) { +vint16m1_t test_vle16_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vle16_v_i16m1_mu(vm, vd, rs1, vl); } -vint16m2_t test_vle16_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, size_t vl) { +vint16m2_t test_vle16_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vle16_v_i16m2_mu(vm, vd, rs1, vl); } -vint16m4_t test_vle16_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, size_t vl) { +vint16m4_t test_vle16_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vle16_v_i16m4_mu(vm, vd, rs1, vl); } -vint16m8_t test_vle16_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, size_t vl) { +vint16m8_t test_vle16_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vle16_v_i16m8_mu(vm, vd, rs1, vl); } -vuint16mf4_t test_vle16_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4_t test_vle16_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16mf4_mu(vm, vd, rs1, vl); } -vuint16mf2_t test_vle16_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2_t test_vle16_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16mf2_mu(vm, vd, rs1, vl); } -vuint16m1_t test_vle16_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1_t test_vle16_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m1_mu(vm, vd, rs1, vl); } -vuint16m2_t 
test_vle16_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, size_t vl) { +vuint16m2_t test_vle16_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m2_mu(vm, vd, rs1, vl); } -vuint16m4_t test_vle16_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, size_t vl) { +vuint16m4_t test_vle16_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m4_mu(vm, vd, rs1, vl); } -vuint16m8_t test_vle16_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, size_t vl) { +vuint16m8_t test_vle16_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vle16_v_u16m8_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vle16ff.c index 062c926aa..ce6044451 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vle16ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vle16ff.c @@ -6,290 +6,416 @@ #include -vfloat16mf4_t test_vle16ff_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4_t test_vle16ff_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_f16mf4_tu(vd, rs1, new_vl, vl); } -vfloat16mf2_t test_vle16ff_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2_t test_vle16ff_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_f16mf2_tu(vd, rs1, new_vl, vl); } -vfloat16m1_t test_vle16ff_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1_t test_vle16ff_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_f16m1_tu(vd, rs1, new_vl, vl); } -vfloat16m2_t test_vle16ff_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m2_t test_vle16ff_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_f16m2_tu(vd, rs1, new_vl, vl); } -vfloat16m4_t test_vle16ff_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m4_t test_vle16ff_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_f16m4_tu(vd, rs1, new_vl, vl); } -vfloat16m8_t test_vle16ff_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m8_t test_vle16ff_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_f16m8_tu(vd, rs1, new_vl, vl); } -vint16mf4_t test_vle16ff_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4_t test_vle16ff_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_i16mf4_tu(vd, rs1, new_vl, vl); } -vint16mf2_t test_vle16ff_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2_t test_vle16ff_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_i16mf2_tu(vd, rs1, new_vl, vl); } -vint16m1_t test_vle16ff_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1_t test_vle16ff_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_i16m1_tu(vd, rs1, new_vl, vl); } -vint16m2_t test_vle16ff_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, size_t *new_vl, 
size_t vl) { +vint16m2_t test_vle16ff_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_i16m2_tu(vd, rs1, new_vl, vl); } -vint16m4_t test_vle16ff_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m4_t test_vle16ff_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_i16m4_tu(vd, rs1, new_vl, vl); } -vint16m8_t test_vle16ff_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m8_t test_vle16ff_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_i16m8_tu(vd, rs1, new_vl, vl); } -vuint16mf4_t test_vle16ff_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4_t test_vle16ff_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_u16mf4_tu(vd, rs1, new_vl, vl); } -vuint16mf2_t test_vle16ff_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2_t test_vle16ff_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_u16mf2_tu(vd, rs1, new_vl, vl); } -vuint16m1_t test_vle16ff_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1_t test_vle16ff_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_u16m1_tu(vd, rs1, new_vl, vl); } -vuint16m2_t test_vle16ff_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m2_t test_vle16ff_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_u16m2_tu(vd, rs1, new_vl, vl); } -vuint16m4_t test_vle16ff_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m4_t test_vle16ff_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_u16m4_tu(vd, rs1, new_vl, vl); } -vuint16m8_t test_vle16ff_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m8_t test_vle16ff_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_u16m8_tu(vd, rs1, new_vl, vl); } -vfloat16mf4_t test_vle16ff_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4_t test_vle16ff_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16mf4_tum(vm, vd, rs1, new_vl, vl); } -vfloat16mf2_t test_vle16ff_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2_t test_vle16ff_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16mf2_tum(vm, vd, rs1, new_vl, vl); } -vfloat16m1_t test_vle16ff_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1_t test_vle16ff_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m1_tum(vm, vd, rs1, new_vl, vl); } -vfloat16m2_t test_vle16ff_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m2_t test_vle16ff_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m2_tum(vm, vd, rs1, new_vl, vl); } 
-vfloat16m4_t test_vle16ff_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m4_t test_vle16ff_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m4_tum(vm, vd, rs1, new_vl, vl); } -vfloat16m8_t test_vle16ff_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m8_t test_vle16ff_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m8_tum(vm, vd, rs1, new_vl, vl); } -vint16mf4_t test_vle16ff_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4_t test_vle16ff_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16mf4_tum(vm, vd, rs1, new_vl, vl); } -vint16mf2_t test_vle16ff_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2_t test_vle16ff_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16mf2_tum(vm, vd, rs1, new_vl, vl); } -vint16m1_t test_vle16ff_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1_t test_vle16ff_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m1_tum(vm, vd, rs1, new_vl, vl); } -vint16m2_t test_vle16ff_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m2_t test_vle16ff_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m2_tum(vm, vd, rs1, new_vl, vl); } -vint16m4_t test_vle16ff_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m4_t test_vle16ff_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m4_tum(vm, vd, rs1, new_vl, vl); } -vint16m8_t test_vle16ff_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m8_t test_vle16ff_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m8_tum(vm, vd, rs1, new_vl, vl); } -vuint16mf4_t test_vle16ff_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4_t test_vle16ff_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16mf4_tum(vm, vd, rs1, new_vl, vl); } -vuint16mf2_t test_vle16ff_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2_t test_vle16ff_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16mf2_tum(vm, vd, rs1, new_vl, vl); } -vuint16m1_t test_vle16ff_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1_t test_vle16ff_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16m1_tum(vm, vd, rs1, new_vl, vl); } -vuint16m2_t test_vle16ff_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m2_t test_vle16ff_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t 
vl) { return __riscv_vle16ff_v_u16m2_tum(vm, vd, rs1, new_vl, vl); } -vuint16m4_t test_vle16ff_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m4_t test_vle16ff_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16m4_tum(vm, vd, rs1, new_vl, vl); } -vuint16m8_t test_vle16ff_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m8_t test_vle16ff_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16m8_tum(vm, vd, rs1, new_vl, vl); } -vfloat16mf4_t test_vle16ff_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4_t test_vle16ff_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16mf4_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16mf2_t test_vle16ff_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2_t test_vle16ff_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16mf2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16m1_t test_vle16ff_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1_t test_vle16ff_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m1_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16m2_t test_vle16ff_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m2_t test_vle16ff_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16m4_t test_vle16ff_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m4_t test_vle16ff_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m4_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16m8_t test_vle16ff_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m8_t test_vle16ff_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m8_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf4_t test_vle16ff_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4_t test_vle16ff_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16mf4_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf2_t test_vle16ff_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2_t test_vle16ff_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16mf2_tumu(vm, vd, rs1, new_vl, vl); } -vint16m1_t test_vle16ff_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1_t test_vle16ff_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m1_tumu(vm, vd, rs1, new_vl, vl); } -vint16m2_t test_vle16ff_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, size_t *new_vl, 
size_t vl) { +vint16m2_t test_vle16ff_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m2_tumu(vm, vd, rs1, new_vl, vl); } -vint16m4_t test_vle16ff_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m4_t test_vle16ff_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m4_tumu(vm, vd, rs1, new_vl, vl); } -vint16m8_t test_vle16ff_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m8_t test_vle16ff_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m8_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf4_t test_vle16ff_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4_t test_vle16ff_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16mf4_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf2_t test_vle16ff_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2_t test_vle16ff_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16mf2_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m1_t test_vle16ff_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1_t test_vle16ff_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16m1_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m2_t test_vle16ff_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m2_t test_vle16ff_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16m2_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m4_t test_vle16ff_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m4_t test_vle16ff_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16m4_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m8_t test_vle16ff_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m8_t test_vle16ff_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16m8_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16mf4_t test_vle16ff_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4_t test_vle16ff_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16mf4_mu(vm, vd, rs1, new_vl, vl); } -vfloat16mf2_t test_vle16ff_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2_t test_vle16ff_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16mf2_mu(vm, vd, rs1, new_vl, vl); } -vfloat16m1_t test_vle16ff_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1_t test_vle16ff_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m1_mu(vm, vd, rs1, new_vl, 
vl); } -vfloat16m2_t test_vle16ff_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m2_t test_vle16ff_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m2_mu(vm, vd, rs1, new_vl, vl); } -vfloat16m4_t test_vle16ff_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m4_t test_vle16ff_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m4_mu(vm, vd, rs1, new_vl, vl); } -vfloat16m8_t test_vle16ff_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m8_t test_vle16ff_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_f16m8_mu(vm, vd, rs1, new_vl, vl); } -vint16mf4_t test_vle16ff_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4_t test_vle16ff_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16mf4_mu(vm, vd, rs1, new_vl, vl); } -vint16mf2_t test_vle16ff_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2_t test_vle16ff_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16mf2_mu(vm, vd, rs1, new_vl, vl); } -vint16m1_t test_vle16ff_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1_t test_vle16ff_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m1_mu(vm, vd, rs1, new_vl, vl); } -vint16m2_t test_vle16ff_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m2_t test_vle16ff_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m2_mu(vm, vd, rs1, new_vl, vl); } -vint16m4_t test_vle16ff_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m4_t test_vle16ff_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m4_mu(vm, vd, rs1, new_vl, vl); } -vint16m8_t test_vle16ff_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m8_t test_vle16ff_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_i16m8_mu(vm, vd, rs1, new_vl, vl); } -vuint16mf4_t test_vle16ff_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4_t test_vle16ff_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16mf4_mu(vm, vd, rs1, new_vl, vl); } -vuint16mf2_t test_vle16ff_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2_t test_vle16ff_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16mf2_mu(vm, vd, rs1, new_vl, vl); } -vuint16m1_t test_vle16ff_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1_t test_vle16ff_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return 
__riscv_vle16ff_v_u16m1_mu(vm, vd, rs1, new_vl, vl); } -vuint16m2_t test_vle16ff_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m2_t test_vle16ff_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16m2_mu(vm, vd, rs1, new_vl, vl); } -vuint16m4_t test_vle16ff_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m4_t test_vle16ff_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16m4_mu(vm, vd, rs1, new_vl, vl); } -vuint16m8_t test_vle16ff_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m8_t test_vle16ff_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_u16m8_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle32.c b/auto-generated/policy_funcs/llvm-api-tests/vle32.c index e35c73a94..0f03f0331 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vle32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vle32.c @@ -6,27 +6,33 @@ #include -vfloat32mf2_t test_vle32_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, size_t vl) { +vfloat32mf2_t test_vle32_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, + size_t vl) { return __riscv_vle32_v_f32mf2_tu(vd, rs1, vl); } -vfloat32m1_t test_vle32_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, size_t vl) { +vfloat32m1_t test_vle32_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, + size_t vl) { return __riscv_vle32_v_f32m1_tu(vd, rs1, vl); } -vfloat32m2_t test_vle32_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, size_t vl) { +vfloat32m2_t test_vle32_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, + size_t vl) { return __riscv_vle32_v_f32m2_tu(vd, rs1, vl); } -vfloat32m4_t test_vle32_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, size_t vl) { +vfloat32m4_t test_vle32_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, + size_t vl) { return __riscv_vle32_v_f32m4_tu(vd, rs1, vl); } -vfloat32m8_t test_vle32_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, size_t vl) { +vfloat32m8_t test_vle32_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, + size_t vl) { return __riscv_vle32_v_f32m8_tu(vd, rs1, vl); } -vint32mf2_t test_vle32_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, size_t vl) { +vint32mf2_t test_vle32_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, + size_t vl) { return __riscv_vle32_v_i32mf2_tu(vd, rs1, vl); } @@ -46,202 +52,252 @@ vint32m8_t test_vle32_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32m8_tu(vd, rs1, vl); } -vuint32mf2_t test_vle32_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2_t test_vle32_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, + size_t vl) { return __riscv_vle32_v_u32mf2_tu(vd, rs1, vl); } -vuint32m1_t test_vle32_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1_t test_vle32_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, + size_t vl) { return __riscv_vle32_v_u32m1_tu(vd, rs1, vl); } -vuint32m2_t test_vle32_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m2_t test_vle32_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, + size_t vl) { return __riscv_vle32_v_u32m2_tu(vd, rs1, vl); } -vuint32m4_t test_vle32_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, size_t vl) { +vuint32m4_t test_vle32_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, + size_t vl) { 
return __riscv_vle32_v_u32m4_tu(vd, rs1, vl); } -vuint32m8_t test_vle32_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, size_t vl) { +vuint32m8_t test_vle32_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, + size_t vl) { return __riscv_vle32_v_u32m8_tu(vd, rs1, vl); } -vfloat32mf2_t test_vle32_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, size_t vl) { +vfloat32mf2_t test_vle32_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32mf2_tum(vm, vd, rs1, vl); } -vfloat32m1_t test_vle32_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, const float *rs1, size_t vl) { +vfloat32m1_t test_vle32_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m1_tum(vm, vd, rs1, vl); } -vfloat32m2_t test_vle32_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, const float *rs1, size_t vl) { +vfloat32m2_t test_vle32_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m2_tum(vm, vd, rs1, vl); } -vfloat32m4_t test_vle32_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, const float *rs1, size_t vl) { +vfloat32m4_t test_vle32_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m4_tum(vm, vd, rs1, vl); } -vfloat32m8_t test_vle32_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, const float *rs1, size_t vl) { +vfloat32m8_t test_vle32_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m8_tum(vm, vd, rs1, vl); } -vint32mf2_t test_vle32_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, size_t vl) { +vint32mf2_t test_vle32_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32mf2_tum(vm, vd, rs1, vl); } -vint32m1_t test_vle32_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, size_t vl) { +vint32m1_t test_vle32_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32m1_tum(vm, vd, rs1, vl); } -vint32m2_t test_vle32_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, size_t vl) { +vint32m2_t test_vle32_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32m2_tum(vm, vd, rs1, vl); } -vint32m4_t test_vle32_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, size_t vl) { +vint32m4_t test_vle32_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32m4_tum(vm, vd, rs1, vl); } -vint32m8_t test_vle32_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, size_t vl) { +vint32m8_t test_vle32_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32m8_tum(vm, vd, rs1, vl); } -vuint32mf2_t test_vle32_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2_t test_vle32_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32mf2_tum(vm, vd, rs1, vl); } -vuint32m1_t test_vle32_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1_t test_vle32_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32m1_tum(vm, vd, rs1, vl); } -vuint32m2_t test_vle32_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m2_t test_vle32_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32m2_tum(vm, vd, rs1, vl); 
} -vuint32m4_t test_vle32_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, size_t vl) { +vuint32m4_t test_vle32_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32m4_tum(vm, vd, rs1, vl); } -vuint32m8_t test_vle32_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, size_t vl) { +vuint32m8_t test_vle32_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32m8_tum(vm, vd, rs1, vl); } -vfloat32mf2_t test_vle32_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, size_t vl) { +vfloat32mf2_t test_vle32_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32mf2_tumu(vm, vd, rs1, vl); } -vfloat32m1_t test_vle32_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, size_t vl) { +vfloat32m1_t test_vle32_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m1_tumu(vm, vd, rs1, vl); } -vfloat32m2_t test_vle32_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, size_t vl) { +vfloat32m2_t test_vle32_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m2_tumu(vm, vd, rs1, vl); } -vfloat32m4_t test_vle32_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, size_t vl) { +vfloat32m4_t test_vle32_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m4_tumu(vm, vd, rs1, vl); } -vfloat32m8_t test_vle32_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, size_t vl) { +vfloat32m8_t test_vle32_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m8_tumu(vm, vd, rs1, vl); } -vint32mf2_t test_vle32_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, size_t vl) { +vint32mf2_t test_vle32_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32mf2_tumu(vm, vd, rs1, vl); } -vint32m1_t test_vle32_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, size_t vl) { +vint32m1_t test_vle32_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32m1_tumu(vm, vd, rs1, vl); } -vint32m2_t test_vle32_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, size_t vl) { +vint32m2_t test_vle32_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32m2_tumu(vm, vd, rs1, vl); } -vint32m4_t test_vle32_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, size_t vl) { +vint32m4_t test_vle32_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32m4_tumu(vm, vd, rs1, vl); } -vint32m8_t test_vle32_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, size_t vl) { +vint32m8_t test_vle32_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32m8_tumu(vm, vd, rs1, vl); } -vuint32mf2_t test_vle32_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2_t test_vle32_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32mf2_tumu(vm, vd, rs1, vl); } -vuint32m1_t test_vle32_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1_t test_vle32_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, size_t vl) { return 
__riscv_vle32_v_u32m1_tumu(vm, vd, rs1, vl); } -vuint32m2_t test_vle32_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m2_t test_vle32_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32m2_tumu(vm, vd, rs1, vl); } -vuint32m4_t test_vle32_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, size_t vl) { +vuint32m4_t test_vle32_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32m4_tumu(vm, vd, rs1, vl); } -vuint32m8_t test_vle32_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, size_t vl) { +vuint32m8_t test_vle32_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32m8_tumu(vm, vd, rs1, vl); } -vfloat32mf2_t test_vle32_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, size_t vl) { +vfloat32mf2_t test_vle32_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32mf2_mu(vm, vd, rs1, vl); } -vfloat32m1_t test_vle32_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, size_t vl) { +vfloat32m1_t test_vle32_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m1_mu(vm, vd, rs1, vl); } -vfloat32m2_t test_vle32_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, size_t vl) { +vfloat32m2_t test_vle32_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m2_mu(vm, vd, rs1, vl); } -vfloat32m4_t test_vle32_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, size_t vl) { +vfloat32m4_t test_vle32_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m4_mu(vm, vd, rs1, vl); } -vfloat32m8_t test_vle32_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, size_t vl) { +vfloat32m8_t test_vle32_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, size_t vl) { return __riscv_vle32_v_f32m8_mu(vm, vd, rs1, vl); } -vint32mf2_t test_vle32_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, size_t vl) { +vint32mf2_t test_vle32_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32mf2_mu(vm, vd, rs1, vl); } -vint32m1_t test_vle32_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, size_t vl) { +vint32m1_t test_vle32_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32m1_mu(vm, vd, rs1, vl); } -vint32m2_t test_vle32_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, size_t vl) { +vint32m2_t test_vle32_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vle32_v_i32m2_mu(vm, vd, rs1, vl); } -vint32m4_t test_vle32_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, size_t vl) { +vint32m4_t test_vle32_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, + size_t vl) { return __riscv_vle32_v_i32m4_mu(vm, vd, rs1, vl); } -vint32m8_t test_vle32_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, size_t vl) { +vint32m8_t test_vle32_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, + size_t vl) { return __riscv_vle32_v_i32m8_mu(vm, vd, rs1, vl); } -vuint32mf2_t test_vle32_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2_t test_vle32_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32mf2_mu(vm, vd, rs1, 
vl); } -vuint32m1_t test_vle32_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1_t test_vle32_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32m1_mu(vm, vd, rs1, vl); } -vuint32m2_t test_vle32_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m2_t test_vle32_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32m2_mu(vm, vd, rs1, vl); } -vuint32m4_t test_vle32_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, size_t vl) { +vuint32m4_t test_vle32_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32m4_mu(vm, vd, rs1, vl); } -vuint32m8_t test_vle32_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, size_t vl) { +vuint32m8_t test_vle32_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vle32_v_u32m8_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vle32ff.c index 0f94d01a0..7a14a5cce 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vle32ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vle32ff.c @@ -6,242 +6,347 @@ #include -vfloat32mf2_t test_vle32ff_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2_t test_vle32ff_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_f32mf2_tu(vd, rs1, new_vl, vl); } -vfloat32m1_t test_vle32ff_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1_t test_vle32ff_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_f32m1_tu(vd, rs1, new_vl, vl); } -vfloat32m2_t test_vle32ff_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2_t test_vle32ff_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_f32m2_tu(vd, rs1, new_vl, vl); } -vfloat32m4_t test_vle32ff_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m4_t test_vle32ff_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_f32m4_tu(vd, rs1, new_vl, vl); } -vfloat32m8_t test_vle32ff_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m8_t test_vle32ff_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_f32m8_tu(vd, rs1, new_vl, vl); } -vint32mf2_t test_vle32ff_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2_t test_vle32ff_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_i32mf2_tu(vd, rs1, new_vl, vl); } -vint32m1_t test_vle32ff_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1_t test_vle32ff_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_i32m1_tu(vd, rs1, new_vl, vl); } -vint32m2_t test_vle32ff_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2_t test_vle32ff_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_i32m2_tu(vd, rs1, new_vl, vl); } -vint32m4_t test_vle32ff_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m4_t 
test_vle32ff_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_i32m4_tu(vd, rs1, new_vl, vl); } -vint32m8_t test_vle32ff_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m8_t test_vle32ff_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_i32m8_tu(vd, rs1, new_vl, vl); } -vuint32mf2_t test_vle32ff_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2_t test_vle32ff_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_u32mf2_tu(vd, rs1, new_vl, vl); } -vuint32m1_t test_vle32ff_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1_t test_vle32ff_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_u32m1_tu(vd, rs1, new_vl, vl); } -vuint32m2_t test_vle32ff_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m2_t test_vle32ff_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_u32m2_tu(vd, rs1, new_vl, vl); } -vuint32m4_t test_vle32ff_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m4_t test_vle32ff_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_u32m4_tu(vd, rs1, new_vl, vl); } -vuint32m8_t test_vle32ff_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m8_t test_vle32ff_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle32ff_v_u32m8_tu(vd, rs1, new_vl, vl); } -vfloat32mf2_t test_vle32ff_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2_t test_vle32ff_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32mf2_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m1_t test_vle32ff_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1_t test_vle32ff_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32m1_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m2_t test_vle32ff_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2_t test_vle32ff_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32m2_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m4_t test_vle32ff_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m4_t test_vle32ff_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32m4_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m8_t test_vle32ff_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m8_t test_vle32ff_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32m8_tum(vm, vd, rs1, new_vl, vl); } -vint32mf2_t test_vle32ff_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2_t test_vle32ff_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32mf2_tum(vm, vd, rs1, new_vl, vl); 
} -vint32m1_t test_vle32ff_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1_t test_vle32ff_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m1_tum(vm, vd, rs1, new_vl, vl); } -vint32m2_t test_vle32ff_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2_t test_vle32ff_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m2_tum(vm, vd, rs1, new_vl, vl); } -vint32m4_t test_vle32ff_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m4_t test_vle32ff_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m4_tum(vm, vd, rs1, new_vl, vl); } -vint32m8_t test_vle32ff_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m8_t test_vle32ff_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m8_tum(vm, vd, rs1, new_vl, vl); } -vuint32mf2_t test_vle32ff_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2_t test_vle32ff_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32mf2_tum(vm, vd, rs1, new_vl, vl); } -vuint32m1_t test_vle32ff_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1_t test_vle32ff_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m1_tum(vm, vd, rs1, new_vl, vl); } -vuint32m2_t test_vle32ff_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m2_t test_vle32ff_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m2_tum(vm, vd, rs1, new_vl, vl); } -vuint32m4_t test_vle32ff_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m4_t test_vle32ff_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m4_tum(vm, vd, rs1, new_vl, vl); } -vuint32m8_t test_vle32ff_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m8_t test_vle32ff_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m8_tum(vm, vd, rs1, new_vl, vl); } -vfloat32mf2_t test_vle32ff_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2_t test_vle32ff_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32mf2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m1_t test_vle32ff_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1_t test_vle32ff_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32m1_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m2_t test_vle32ff_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2_t test_vle32ff_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, size_t *new_vl, + size_t 
vl) { return __riscv_vle32ff_v_f32m2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m4_t test_vle32ff_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m4_t test_vle32ff_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32m4_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m8_t test_vle32ff_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m8_t test_vle32ff_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32m8_tumu(vm, vd, rs1, new_vl, vl); } -vint32mf2_t test_vle32ff_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2_t test_vle32ff_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32mf2_tumu(vm, vd, rs1, new_vl, vl); } -vint32m1_t test_vle32ff_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1_t test_vle32ff_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m1_tumu(vm, vd, rs1, new_vl, vl); } -vint32m2_t test_vle32ff_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2_t test_vle32ff_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m2_tumu(vm, vd, rs1, new_vl, vl); } -vint32m4_t test_vle32ff_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m4_t test_vle32ff_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m4_tumu(vm, vd, rs1, new_vl, vl); } -vint32m8_t test_vle32ff_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m8_t test_vle32ff_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m8_tumu(vm, vd, rs1, new_vl, vl); } -vuint32mf2_t test_vle32ff_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2_t test_vle32ff_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32mf2_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m1_t test_vle32ff_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1_t test_vle32ff_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m1_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m2_t test_vle32ff_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m2_t test_vle32ff_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m2_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m4_t test_vle32ff_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m4_t test_vle32ff_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m4_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m8_t test_vle32ff_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m8_t 
test_vle32ff_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m8_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32mf2_t test_vle32ff_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2_t test_vle32ff_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32mf2_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m1_t test_vle32ff_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1_t test_vle32ff_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32m1_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m2_t test_vle32ff_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2_t test_vle32ff_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32m2_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m4_t test_vle32ff_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m4_t test_vle32ff_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32m4_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m8_t test_vle32ff_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m8_t test_vle32ff_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_f32m8_mu(vm, vd, rs1, new_vl, vl); } -vint32mf2_t test_vle32ff_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2_t test_vle32ff_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32mf2_mu(vm, vd, rs1, new_vl, vl); } -vint32m1_t test_vle32ff_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1_t test_vle32ff_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m1_mu(vm, vd, rs1, new_vl, vl); } -vint32m2_t test_vle32ff_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2_t test_vle32ff_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m2_mu(vm, vd, rs1, new_vl, vl); } -vint32m4_t test_vle32ff_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m4_t test_vle32ff_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m4_mu(vm, vd, rs1, new_vl, vl); } -vint32m8_t test_vle32ff_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m8_t test_vle32ff_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_i32m8_mu(vm, vd, rs1, new_vl, vl); } -vuint32mf2_t test_vle32ff_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2_t test_vle32ff_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32mf2_mu(vm, vd, rs1, new_vl, vl); } -vuint32m1_t test_vle32ff_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, size_t *new_vl, 
size_t vl) { +vuint32m1_t test_vle32ff_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m1_mu(vm, vd, rs1, new_vl, vl); } -vuint32m2_t test_vle32ff_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m2_t test_vle32ff_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m2_mu(vm, vd, rs1, new_vl, vl); } -vuint32m4_t test_vle32ff_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m4_t test_vle32ff_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m4_mu(vm, vd, rs1, new_vl, vl); } -vuint32m8_t test_vle32ff_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m8_t test_vle32ff_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle32ff_v_u32m8_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle64.c b/auto-generated/policy_funcs/llvm-api-tests/vle64.c index c1afd72a4..8582f9410 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vle64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vle64.c @@ -6,19 +6,23 @@ #include -vfloat64m1_t test_vle64_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, size_t vl) { +vfloat64m1_t test_vle64_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, + size_t vl) { return __riscv_vle64_v_f64m1_tu(vd, rs1, vl); } -vfloat64m2_t test_vle64_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, size_t vl) { +vfloat64m2_t test_vle64_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, + size_t vl) { return __riscv_vle64_v_f64m2_tu(vd, rs1, vl); } -vfloat64m4_t test_vle64_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, size_t vl) { +vfloat64m4_t test_vle64_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, + size_t vl) { return __riscv_vle64_v_f64m4_tu(vd, rs1, vl); } -vfloat64m8_t test_vle64_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, size_t vl) { +vfloat64m8_t test_vle64_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, + size_t vl) { return __riscv_vle64_v_f64m8_tu(vd, rs1, vl); } @@ -38,162 +42,202 @@ vint64m8_t test_vle64_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m8_tu(vd, rs1, vl); } -vuint64m1_t test_vle64_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1_t test_vle64_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, + size_t vl) { return __riscv_vle64_v_u64m1_tu(vd, rs1, vl); } -vuint64m2_t test_vle64_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2_t test_vle64_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, + size_t vl) { return __riscv_vle64_v_u64m2_tu(vd, rs1, vl); } -vuint64m4_t test_vle64_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, size_t vl) { +vuint64m4_t test_vle64_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, + size_t vl) { return __riscv_vle64_v_u64m4_tu(vd, rs1, vl); } -vuint64m8_t test_vle64_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, size_t vl) { +vuint64m8_t test_vle64_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, + size_t vl) { return __riscv_vle64_v_u64m8_tu(vd, rs1, vl); } -vfloat64m1_t test_vle64_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, const double *rs1, size_t vl) { +vfloat64m1_t test_vle64_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m1_tum(vm, vd, rs1, vl); } 
-vfloat64m2_t test_vle64_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, const double *rs1, size_t vl) { +vfloat64m2_t test_vle64_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m2_tum(vm, vd, rs1, vl); } -vfloat64m4_t test_vle64_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, const double *rs1, size_t vl) { +vfloat64m4_t test_vle64_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m4_tum(vm, vd, rs1, vl); } -vfloat64m8_t test_vle64_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, const double *rs1, size_t vl) { +vfloat64m8_t test_vle64_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m8_tum(vm, vd, rs1, vl); } -vint64m1_t test_vle64_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, size_t vl) { +vint64m1_t test_vle64_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m1_tum(vm, vd, rs1, vl); } -vint64m2_t test_vle64_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, size_t vl) { +vint64m2_t test_vle64_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m2_tum(vm, vd, rs1, vl); } -vint64m4_t test_vle64_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, size_t vl) { +vint64m4_t test_vle64_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m4_tum(vm, vd, rs1, vl); } -vint64m8_t test_vle64_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, size_t vl) { +vint64m8_t test_vle64_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m8_tum(vm, vd, rs1, vl); } -vuint64m1_t test_vle64_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1_t test_vle64_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m1_tum(vm, vd, rs1, vl); } -vuint64m2_t test_vle64_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2_t test_vle64_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m2_tum(vm, vd, rs1, vl); } -vuint64m4_t test_vle64_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, size_t vl) { +vuint64m4_t test_vle64_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m4_tum(vm, vd, rs1, vl); } -vuint64m8_t test_vle64_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, size_t vl) { +vuint64m8_t test_vle64_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m8_tum(vm, vd, rs1, vl); } -vfloat64m1_t test_vle64_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, size_t vl) { +vfloat64m1_t test_vle64_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m1_tumu(vm, vd, rs1, vl); } -vfloat64m2_t test_vle64_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, size_t vl) { +vfloat64m2_t test_vle64_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m2_tumu(vm, vd, rs1, vl); } -vfloat64m4_t test_vle64_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, size_t vl) { +vfloat64m4_t test_vle64_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m4_tumu(vm, vd, rs1, vl); } 
-vfloat64m8_t test_vle64_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, size_t vl) { +vfloat64m8_t test_vle64_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m8_tumu(vm, vd, rs1, vl); } -vint64m1_t test_vle64_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, size_t vl) { +vint64m1_t test_vle64_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m1_tumu(vm, vd, rs1, vl); } -vint64m2_t test_vle64_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, size_t vl) { +vint64m2_t test_vle64_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m2_tumu(vm, vd, rs1, vl); } -vint64m4_t test_vle64_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, size_t vl) { +vint64m4_t test_vle64_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m4_tumu(vm, vd, rs1, vl); } -vint64m8_t test_vle64_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, size_t vl) { +vint64m8_t test_vle64_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m8_tumu(vm, vd, rs1, vl); } -vuint64m1_t test_vle64_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1_t test_vle64_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m1_tumu(vm, vd, rs1, vl); } -vuint64m2_t test_vle64_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2_t test_vle64_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m2_tumu(vm, vd, rs1, vl); } -vuint64m4_t test_vle64_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, size_t vl) { +vuint64m4_t test_vle64_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m4_tumu(vm, vd, rs1, vl); } -vuint64m8_t test_vle64_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, size_t vl) { +vuint64m8_t test_vle64_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m8_tumu(vm, vd, rs1, vl); } -vfloat64m1_t test_vle64_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, size_t vl) { +vfloat64m1_t test_vle64_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m1_mu(vm, vd, rs1, vl); } -vfloat64m2_t test_vle64_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, size_t vl) { +vfloat64m2_t test_vle64_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m2_mu(vm, vd, rs1, vl); } -vfloat64m4_t test_vle64_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, size_t vl) { +vfloat64m4_t test_vle64_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m4_mu(vm, vd, rs1, vl); } -vfloat64m8_t test_vle64_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, size_t vl) { +vfloat64m8_t test_vle64_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, size_t vl) { return __riscv_vle64_v_f64m8_mu(vm, vd, rs1, vl); } -vint64m1_t test_vle64_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, size_t vl) { +vint64m1_t test_vle64_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m1_mu(vm, vd, rs1, vl); } 
-vint64m2_t test_vle64_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, size_t vl) { +vint64m2_t test_vle64_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m2_mu(vm, vd, rs1, vl); } -vint64m4_t test_vle64_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, size_t vl) { +vint64m4_t test_vle64_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vle64_v_i64m4_mu(vm, vd, rs1, vl); } -vint64m8_t test_vle64_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, size_t vl) { +vint64m8_t test_vle64_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, + size_t vl) { return __riscv_vle64_v_i64m8_mu(vm, vd, rs1, vl); } -vuint64m1_t test_vle64_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1_t test_vle64_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m1_mu(vm, vd, rs1, vl); } -vuint64m2_t test_vle64_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2_t test_vle64_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m2_mu(vm, vd, rs1, vl); } -vuint64m4_t test_vle64_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, size_t vl) { +vuint64m4_t test_vle64_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m4_mu(vm, vd, rs1, vl); } -vuint64m8_t test_vle64_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, size_t vl) { +vuint64m8_t test_vle64_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vle64_v_u64m8_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vle64ff.c index e67484826..4f33528d0 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vle64ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vle64ff.c @@ -6,194 +6,278 @@ #include -vfloat64m1_t test_vle64ff_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1_t test_vle64ff_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle64ff_v_f64m1_tu(vd, rs1, new_vl, vl); } -vfloat64m2_t test_vle64ff_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m2_t test_vle64ff_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle64ff_v_f64m2_tu(vd, rs1, new_vl, vl); } -vfloat64m4_t test_vle64ff_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m4_t test_vle64ff_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle64ff_v_f64m4_tu(vd, rs1, new_vl, vl); } -vfloat64m8_t test_vle64ff_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m8_t test_vle64ff_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle64ff_v_f64m8_tu(vd, rs1, new_vl, vl); } -vint64m1_t test_vle64ff_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1_t test_vle64ff_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle64ff_v_i64m1_tu(vd, rs1, new_vl, vl); } -vint64m2_t test_vle64ff_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m2_t test_vle64ff_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, + size_t 
*new_vl, size_t vl) { return __riscv_vle64ff_v_i64m2_tu(vd, rs1, new_vl, vl); } -vint64m4_t test_vle64ff_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m4_t test_vle64ff_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle64ff_v_i64m4_tu(vd, rs1, new_vl, vl); } -vint64m8_t test_vle64ff_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m8_t test_vle64ff_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle64ff_v_i64m8_tu(vd, rs1, new_vl, vl); } -vuint64m1_t test_vle64ff_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1_t test_vle64ff_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle64ff_v_u64m1_tu(vd, rs1, new_vl, vl); } -vuint64m2_t test_vle64ff_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m2_t test_vle64ff_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle64ff_v_u64m2_tu(vd, rs1, new_vl, vl); } -vuint64m4_t test_vle64ff_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m4_t test_vle64ff_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle64ff_v_u64m4_tu(vd, rs1, new_vl, vl); } -vuint64m8_t test_vle64ff_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m8_t test_vle64ff_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle64ff_v_u64m8_tu(vd, rs1, new_vl, vl); } -vfloat64m1_t test_vle64ff_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1_t test_vle64ff_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m1_tum(vm, vd, rs1, new_vl, vl); } -vfloat64m2_t test_vle64ff_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m2_t test_vle64ff_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m2_tum(vm, vd, rs1, new_vl, vl); } -vfloat64m4_t test_vle64ff_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m4_t test_vle64ff_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m4_tum(vm, vd, rs1, new_vl, vl); } -vfloat64m8_t test_vle64ff_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m8_t test_vle64ff_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m8_tum(vm, vd, rs1, new_vl, vl); } -vint64m1_t test_vle64ff_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1_t test_vle64ff_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_i64m1_tum(vm, vd, rs1, new_vl, vl); } -vint64m2_t test_vle64ff_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m2_t test_vle64ff_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_i64m2_tum(vm, vd, rs1, new_vl, vl); } -vint64m4_t test_vle64ff_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, 
size_t *new_vl, size_t vl) { +vint64m4_t test_vle64ff_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_i64m4_tum(vm, vd, rs1, new_vl, vl); } -vint64m8_t test_vle64ff_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m8_t test_vle64ff_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_i64m8_tum(vm, vd, rs1, new_vl, vl); } -vuint64m1_t test_vle64ff_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1_t test_vle64ff_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m1_tum(vm, vd, rs1, new_vl, vl); } -vuint64m2_t test_vle64ff_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m2_t test_vle64ff_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m2_tum(vm, vd, rs1, new_vl, vl); } -vuint64m4_t test_vle64ff_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m4_t test_vle64ff_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m4_tum(vm, vd, rs1, new_vl, vl); } -vuint64m8_t test_vle64ff_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m8_t test_vle64ff_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m8_tum(vm, vd, rs1, new_vl, vl); } -vfloat64m1_t test_vle64ff_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1_t test_vle64ff_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m1_tumu(vm, vd, rs1, new_vl, vl); } -vfloat64m2_t test_vle64ff_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m2_t test_vle64ff_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat64m4_t test_vle64ff_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m4_t test_vle64ff_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m4_tumu(vm, vd, rs1, new_vl, vl); } -vfloat64m8_t test_vle64ff_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m8_t test_vle64ff_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m8_tumu(vm, vd, rs1, new_vl, vl); } -vint64m1_t test_vle64ff_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1_t test_vle64ff_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_i64m1_tumu(vm, vd, rs1, new_vl, vl); } -vint64m2_t test_vle64ff_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m2_t test_vle64ff_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_i64m2_tumu(vm, vd, rs1, new_vl, vl); } -vint64m4_t 
test_vle64ff_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m4_t test_vle64ff_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_i64m4_tumu(vm, vd, rs1, new_vl, vl); } -vint64m8_t test_vle64ff_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m8_t test_vle64ff_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_i64m8_tumu(vm, vd, rs1, new_vl, vl); } -vuint64m1_t test_vle64ff_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1_t test_vle64ff_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m1_tumu(vm, vd, rs1, new_vl, vl); } -vuint64m2_t test_vle64ff_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m2_t test_vle64ff_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m2_tumu(vm, vd, rs1, new_vl, vl); } -vuint64m4_t test_vle64ff_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m4_t test_vle64ff_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m4_tumu(vm, vd, rs1, new_vl, vl); } -vuint64m8_t test_vle64ff_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m8_t test_vle64ff_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m8_tumu(vm, vd, rs1, new_vl, vl); } -vfloat64m1_t test_vle64ff_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1_t test_vle64ff_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m1_mu(vm, vd, rs1, new_vl, vl); } -vfloat64m2_t test_vle64ff_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m2_t test_vle64ff_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m2_mu(vm, vd, rs1, new_vl, vl); } -vfloat64m4_t test_vle64ff_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m4_t test_vle64ff_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m4_mu(vm, vd, rs1, new_vl, vl); } -vfloat64m8_t test_vle64ff_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m8_t test_vle64ff_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_f64m8_mu(vm, vd, rs1, new_vl, vl); } -vint64m1_t test_vle64ff_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1_t test_vle64ff_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_i64m1_mu(vm, vd, rs1, new_vl, vl); } -vint64m2_t test_vle64ff_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m2_t test_vle64ff_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return 
__riscv_vle64ff_v_i64m2_mu(vm, vd, rs1, new_vl, vl); } -vint64m4_t test_vle64ff_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m4_t test_vle64ff_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_i64m4_mu(vm, vd, rs1, new_vl, vl); } -vint64m8_t test_vle64ff_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m8_t test_vle64ff_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_i64m8_mu(vm, vd, rs1, new_vl, vl); } -vuint64m1_t test_vle64ff_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1_t test_vle64ff_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m1_mu(vm, vd, rs1, new_vl, vl); } -vuint64m2_t test_vle64ff_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m2_t test_vle64ff_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m2_mu(vm, vd, rs1, new_vl, vl); } -vuint64m4_t test_vle64ff_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m4_t test_vle64ff_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m4_mu(vm, vd, rs1, new_vl, vl); } -vuint64m8_t test_vle64ff_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m8_t test_vle64ff_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle64ff_v_u64m8_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle8.c b/auto-generated/policy_funcs/llvm-api-tests/vle8.c index 05e543f7e..cb9d473b4 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vle8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vle8.c @@ -34,15 +34,18 @@ vint8m8_t test_vle8_v_i8m8_tu(vint8m8_t vd, const int8_t *rs1, size_t vl) { return __riscv_vle8_v_i8m8_tu(vd, rs1, vl); } -vuint8mf8_t test_vle8_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8_t test_vle8_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8mf8_tu(vd, rs1, vl); } -vuint8mf4_t test_vle8_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4_t test_vle8_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8mf4_tu(vd, rs1, vl); } -vuint8mf2_t test_vle8_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2_t test_vle8_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8mf2_tu(vd, rs1, vl); } @@ -62,170 +65,212 @@ vuint8m8_t test_vle8_v_u8m8_tu(vuint8m8_t vd, const uint8_t *rs1, size_t vl) { return __riscv_vle8_v_u8m8_tu(vd, rs1, vl); } -vint8mf8_t test_vle8_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, size_t vl) { +vint8mf8_t test_vle8_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8mf8_tum(vm, vd, rs1, vl); } -vint8mf4_t test_vle8_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, size_t vl) { +vint8mf4_t test_vle8_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8mf4_tum(vm, vd, rs1, vl); } -vint8mf2_t 
test_vle8_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, size_t vl) { +vint8mf2_t test_vle8_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8mf2_tum(vm, vd, rs1, vl); } -vint8m1_t test_vle8_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, size_t vl) { +vint8m1_t test_vle8_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m1_tum(vm, vd, rs1, vl); } -vint8m2_t test_vle8_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, size_t vl) { +vint8m2_t test_vle8_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m2_tum(vm, vd, rs1, vl); } -vint8m4_t test_vle8_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, size_t vl) { +vint8m4_t test_vle8_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m4_tum(vm, vd, rs1, vl); } -vint8m8_t test_vle8_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, size_t vl) { +vint8m8_t test_vle8_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m8_tum(vm, vd, rs1, vl); } -vuint8mf8_t test_vle8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8_t test_vle8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vle8_v_u8mf8_tum(vm, vd, rs1, vl); } -vuint8mf4_t test_vle8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4_t test_vle8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vle8_v_u8mf4_tum(vm, vd, rs1, vl); } -vuint8mf2_t test_vle8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2_t test_vle8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vle8_v_u8mf2_tum(vm, vd, rs1, vl); } -vuint8m1_t test_vle8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1_t test_vle8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m1_tum(vm, vd, rs1, vl); } -vuint8m2_t test_vle8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, size_t vl) { +vuint8m2_t test_vle8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m2_tum(vm, vd, rs1, vl); } -vuint8m4_t test_vle8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, size_t vl) { +vuint8m4_t test_vle8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m4_tum(vm, vd, rs1, vl); } -vuint8m8_t test_vle8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, size_t vl) { +vuint8m8_t test_vle8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m8_tum(vm, vd, rs1, vl); } -vint8mf8_t test_vle8_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, size_t vl) { +vint8mf8_t test_vle8_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vle8_v_i8mf8_tumu(vm, vd, rs1, vl); } -vint8mf4_t test_vle8_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, size_t vl) { +vint8mf4_t test_vle8_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vle8_v_i8mf4_tumu(vm, vd, rs1, vl); } -vint8mf2_t test_vle8_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, size_t vl) { +vint8mf2_t test_vle8_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + const int8_t 
*rs1, size_t vl) { return __riscv_vle8_v_i8mf2_tumu(vm, vd, rs1, vl); } -vint8m1_t test_vle8_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, size_t vl) { +vint8m1_t test_vle8_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m1_tumu(vm, vd, rs1, vl); } -vint8m2_t test_vle8_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, size_t vl) { +vint8m2_t test_vle8_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m2_tumu(vm, vd, rs1, vl); } -vint8m4_t test_vle8_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, size_t vl) { +vint8m4_t test_vle8_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m4_tumu(vm, vd, rs1, vl); } -vint8m8_t test_vle8_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, size_t vl) { +vint8m8_t test_vle8_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m8_tumu(vm, vd, rs1, vl); } -vuint8mf8_t test_vle8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8_t test_vle8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vle8_v_u8mf8_tumu(vm, vd, rs1, vl); } -vuint8mf4_t test_vle8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4_t test_vle8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vle8_v_u8mf4_tumu(vm, vd, rs1, vl); } -vuint8mf2_t test_vle8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2_t test_vle8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vle8_v_u8mf2_tumu(vm, vd, rs1, vl); } -vuint8m1_t test_vle8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1_t test_vle8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m1_tumu(vm, vd, rs1, vl); } -vuint8m2_t test_vle8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, size_t vl) { +vuint8m2_t test_vle8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m2_tumu(vm, vd, rs1, vl); } -vuint8m4_t test_vle8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, size_t vl) { +vuint8m4_t test_vle8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m4_tumu(vm, vd, rs1, vl); } -vuint8m8_t test_vle8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, size_t vl) { +vuint8m8_t test_vle8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m8_tumu(vm, vd, rs1, vl); } -vint8mf8_t test_vle8_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, size_t vl) { +vint8mf8_t test_vle8_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8mf8_mu(vm, vd, rs1, vl); } -vint8mf4_t test_vle8_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, size_t vl) { +vint8mf4_t test_vle8_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8mf4_mu(vm, vd, rs1, vl); } -vint8mf2_t test_vle8_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, size_t vl) { +vint8mf2_t test_vle8_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8mf2_mu(vm, vd, rs1, vl); } -vint8m1_t test_vle8_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t 
*rs1, size_t vl) { +vint8m1_t test_vle8_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m1_mu(vm, vd, rs1, vl); } -vint8m2_t test_vle8_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, size_t vl) { +vint8m2_t test_vle8_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m2_mu(vm, vd, rs1, vl); } -vint8m4_t test_vle8_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, size_t vl) { +vint8m4_t test_vle8_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m4_mu(vm, vd, rs1, vl); } -vint8m8_t test_vle8_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, size_t vl) { +vint8m8_t test_vle8_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vle8_v_i8m8_mu(vm, vd, rs1, vl); } -vuint8mf8_t test_vle8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8_t test_vle8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vle8_v_u8mf8_mu(vm, vd, rs1, vl); } -vuint8mf4_t test_vle8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4_t test_vle8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vle8_v_u8mf4_mu(vm, vd, rs1, vl); } -vuint8mf2_t test_vle8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2_t test_vle8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vle8_v_u8mf2_mu(vm, vd, rs1, vl); } -vuint8m1_t test_vle8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1_t test_vle8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m1_mu(vm, vd, rs1, vl); } -vuint8m2_t test_vle8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, size_t vl) { +vuint8m2_t test_vle8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m2_mu(vm, vd, rs1, vl); } -vuint8m4_t test_vle8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, size_t vl) { +vuint8m4_t test_vle8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m4_mu(vm, vd, rs1, vl); } -vuint8m8_t test_vle8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, size_t vl) { +vuint8m8_t test_vle8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vle8_v_u8m8_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vle8ff.c index 0458c1620..b95ffecd4 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vle8ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vle8ff.c @@ -6,226 +6,308 @@ #include -vint8mf8_t test_vle8ff_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8_t test_vle8ff_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8mf8_tu(vd, rs1, new_vl, vl); } -vint8mf4_t test_vle8ff_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4_t test_vle8ff_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8mf4_tu(vd, rs1, new_vl, vl); } -vint8mf2_t test_vle8ff_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2_t test_vle8ff_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) 
{ return __riscv_vle8ff_v_i8mf2_tu(vd, rs1, new_vl, vl); } -vint8m1_t test_vle8ff_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1_t test_vle8ff_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8m1_tu(vd, rs1, new_vl, vl); } -vint8m2_t test_vle8ff_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m2_t test_vle8ff_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8m2_tu(vd, rs1, new_vl, vl); } -vint8m4_t test_vle8ff_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m4_t test_vle8ff_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8m4_tu(vd, rs1, new_vl, vl); } -vint8m8_t test_vle8ff_v_i8m8_tu(vint8m8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m8_t test_vle8ff_v_i8m8_tu(vint8m8_t vd, const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8m8_tu(vd, rs1, new_vl, vl); } -vuint8mf8_t test_vle8ff_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8_t test_vle8ff_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_u8mf8_tu(vd, rs1, new_vl, vl); } -vuint8mf4_t test_vle8ff_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4_t test_vle8ff_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_u8mf4_tu(vd, rs1, new_vl, vl); } -vuint8mf2_t test_vle8ff_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2_t test_vle8ff_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_u8mf2_tu(vd, rs1, new_vl, vl); } -vuint8m1_t test_vle8ff_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1_t test_vle8ff_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_u8m1_tu(vd, rs1, new_vl, vl); } -vuint8m2_t test_vle8ff_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m2_t test_vle8ff_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_u8m2_tu(vd, rs1, new_vl, vl); } -vuint8m4_t test_vle8ff_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m4_t test_vle8ff_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_u8m4_tu(vd, rs1, new_vl, vl); } -vuint8m8_t test_vle8ff_v_u8m8_tu(vuint8m8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m8_t test_vle8ff_v_u8m8_tu(vuint8m8_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_u8m8_tu(vd, rs1, new_vl, vl); } -vint8mf8_t test_vle8ff_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8_t test_vle8ff_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8mf8_tum(vm, vd, rs1, new_vl, vl); } -vint8mf4_t test_vle8ff_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4_t test_vle8ff_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8mf4_tum(vm, vd, rs1, new_vl, vl); } -vint8mf2_t test_vle8ff_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { 
+vint8mf2_t test_vle8ff_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8mf2_tum(vm, vd, rs1, new_vl, vl); } -vint8m1_t test_vle8ff_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1_t test_vle8ff_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m1_tum(vm, vd, rs1, new_vl, vl); } -vint8m2_t test_vle8ff_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m2_t test_vle8ff_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m2_tum(vm, vd, rs1, new_vl, vl); } -vint8m4_t test_vle8ff_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m4_t test_vle8ff_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m4_tum(vm, vd, rs1, new_vl, vl); } -vint8m8_t test_vle8ff_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m8_t test_vle8ff_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m8_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf8_t test_vle8ff_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8_t test_vle8ff_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8mf8_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf4_t test_vle8ff_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4_t test_vle8ff_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8mf4_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf2_t test_vle8ff_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2_t test_vle8ff_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8mf2_tum(vm, vd, rs1, new_vl, vl); } -vuint8m1_t test_vle8ff_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1_t test_vle8ff_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8m1_tum(vm, vd, rs1, new_vl, vl); } -vuint8m2_t test_vle8ff_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m2_t test_vle8ff_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8m2_tum(vm, vd, rs1, new_vl, vl); } -vuint8m4_t test_vle8ff_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m4_t test_vle8ff_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8m4_tum(vm, vd, rs1, new_vl, vl); } -vuint8m8_t test_vle8ff_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m8_t test_vle8ff_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8m8_tum(vm, vd, rs1, new_vl, vl); } -vint8mf8_t test_vle8ff_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8_t test_vle8ff_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, 
size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8mf8_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf4_t test_vle8ff_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4_t test_vle8ff_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8mf4_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf2_t test_vle8ff_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2_t test_vle8ff_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8mf2_tumu(vm, vd, rs1, new_vl, vl); } -vint8m1_t test_vle8ff_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1_t test_vle8ff_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m1_tumu(vm, vd, rs1, new_vl, vl); } -vint8m2_t test_vle8ff_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m2_t test_vle8ff_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m2_tumu(vm, vd, rs1, new_vl, vl); } -vint8m4_t test_vle8ff_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m4_t test_vle8ff_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m4_tumu(vm, vd, rs1, new_vl, vl); } -vint8m8_t test_vle8ff_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m8_t test_vle8ff_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m8_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf8_t test_vle8ff_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8_t test_vle8ff_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8mf8_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf4_t test_vle8ff_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4_t test_vle8ff_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8mf4_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf2_t test_vle8ff_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2_t test_vle8ff_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8mf2_tumu(vm, vd, rs1, new_vl, vl); } -vuint8m1_t test_vle8ff_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1_t test_vle8ff_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8m1_tumu(vm, vd, rs1, new_vl, vl); } -vuint8m2_t test_vle8ff_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m2_t test_vle8ff_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8m2_tumu(vm, vd, rs1, new_vl, vl); } -vuint8m4_t test_vle8ff_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m4_t test_vle8ff_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return 
__riscv_vle8ff_v_u8m4_tumu(vm, vd, rs1, new_vl, vl); } -vuint8m8_t test_vle8ff_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m8_t test_vle8ff_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8m8_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf8_t test_vle8ff_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8_t test_vle8ff_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8mf8_mu(vm, vd, rs1, new_vl, vl); } -vint8mf4_t test_vle8ff_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4_t test_vle8ff_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8mf4_mu(vm, vd, rs1, new_vl, vl); } -vint8mf2_t test_vle8ff_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2_t test_vle8ff_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_i8mf2_mu(vm, vd, rs1, new_vl, vl); } -vint8m1_t test_vle8ff_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1_t test_vle8ff_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m1_mu(vm, vd, rs1, new_vl, vl); } -vint8m2_t test_vle8ff_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m2_t test_vle8ff_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m2_mu(vm, vd, rs1, new_vl, vl); } -vint8m4_t test_vle8ff_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m4_t test_vle8ff_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m4_mu(vm, vd, rs1, new_vl, vl); } -vint8m8_t test_vle8ff_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m8_t test_vle8ff_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_i8m8_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf8_t test_vle8ff_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8_t test_vle8ff_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8mf8_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf4_t test_vle8ff_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4_t test_vle8ff_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8mf4_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf2_t test_vle8ff_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2_t test_vle8ff_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle8ff_v_u8mf2_mu(vm, vd, rs1, new_vl, vl); } -vuint8m1_t test_vle8ff_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1_t test_vle8ff_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_u8m1_mu(vm, vd, rs1, new_vl, vl); } -vuint8m2_t test_vle8ff_v_u8m2_mu(vbool4_t vm, vuint8m2_t 
vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m2_t test_vle8ff_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_u8m2_mu(vm, vd, rs1, new_vl, vl); } -vuint8m4_t test_vle8ff_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m4_t test_vle8ff_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_u8m4_mu(vm, vd, rs1, new_vl, vl); } -vuint8m8_t test_vle8ff_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m8_t test_vle8ff_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle8ff_v_u8m8_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxei16.c index b653e494d..a03b3dd75 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxei16.c @@ -6,914 +6,1307 @@ #include -vfloat16mf4_t test_vloxei16_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4_t test_vloxei16_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_f16mf4_tu(vd, rs1, rs2, vl); } -vfloat16mf2_t test_vloxei16_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2_t test_vloxei16_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_f16mf2_tu(vd, rs1, rs2, vl); } -vfloat16m1_t test_vloxei16_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1_t test_vloxei16_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_f16m1_tu(vd, rs1, rs2, vl); } -vfloat16m2_t test_vloxei16_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2_t test_vloxei16_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_f16m2_tu(vd, rs1, rs2, vl); } -vfloat16m4_t test_vloxei16_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4_t test_vloxei16_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxei16_v_f16m4_tu(vd, rs1, rs2, vl); } -vfloat16m8_t test_vloxei16_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, vuint16m8_t rs2, size_t vl) { +vfloat16m8_t test_vloxei16_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vloxei16_v_f16m8_tu(vd, rs1, rs2, vl); } -vfloat32mf2_t test_vloxei16_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2_t test_vloxei16_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_f32mf2_tu(vd, rs1, rs2, vl); } -vfloat32m1_t test_vloxei16_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1_t test_vloxei16_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_f32m1_tu(vd, rs1, rs2, vl); } -vfloat32m2_t test_vloxei16_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2_t test_vloxei16_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_f32m2_tu(vd, rs1, rs2, 
vl); } -vfloat32m4_t test_vloxei16_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4_t test_vloxei16_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_f32m4_tu(vd, rs1, rs2, vl); } -vfloat32m8_t test_vloxei16_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, vuint16m4_t rs2, size_t vl) { +vfloat32m8_t test_vloxei16_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxei16_v_f32m8_tu(vd, rs1, rs2, vl); } -vfloat64m1_t test_vloxei16_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1_t test_vloxei16_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_f64m1_tu(vd, rs1, rs2, vl); } -vfloat64m2_t test_vloxei16_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2_t test_vloxei16_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_f64m2_tu(vd, rs1, rs2, vl); } -vfloat64m4_t test_vloxei16_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4_t test_vloxei16_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_f64m4_tu(vd, rs1, rs2, vl); } -vfloat64m8_t test_vloxei16_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, vuint16m2_t rs2, size_t vl) { +vfloat64m8_t test_vloxei16_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_f64m8_tu(vd, rs1, rs2, vl); } -vint8mf8_t test_vloxei16_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8_t test_vloxei16_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_i8mf8_tu(vd, rs1, rs2, vl); } -vint8mf4_t test_vloxei16_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4_t test_vloxei16_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_i8mf4_tu(vd, rs1, rs2, vl); } -vint8mf2_t test_vloxei16_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2_t test_vloxei16_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_i8mf2_tu(vd, rs1, rs2, vl); } -vint8m1_t test_vloxei16_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1_t test_vloxei16_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_i8m1_tu(vd, rs1, rs2, vl); } -vint8m2_t test_vloxei16_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2_t test_vloxei16_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxei16_v_i8m2_tu(vd, rs1, rs2, vl); } -vint8m4_t test_vloxei16_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4_t test_vloxei16_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vloxei16_v_i8m4_tu(vd, rs1, rs2, vl); } -vint16mf4_t test_vloxei16_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4_t test_vloxei16_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_i16mf4_tu(vd, rs1, rs2, vl); } -vint16mf2_t test_vloxei16_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2_t 
test_vloxei16_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_i16mf2_tu(vd, rs1, rs2, vl); } -vint16m1_t test_vloxei16_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1_t test_vloxei16_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_i16m1_tu(vd, rs1, rs2, vl); } -vint16m2_t test_vloxei16_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2_t test_vloxei16_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_i16m2_tu(vd, rs1, rs2, vl); } -vint16m4_t test_vloxei16_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4_t test_vloxei16_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxei16_v_i16m4_tu(vd, rs1, rs2, vl); } -vint16m8_t test_vloxei16_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, vuint16m8_t rs2, size_t vl) { +vint16m8_t test_vloxei16_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vloxei16_v_i16m8_tu(vd, rs1, rs2, vl); } -vint32mf2_t test_vloxei16_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2_t test_vloxei16_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_i32mf2_tu(vd, rs1, rs2, vl); } -vint32m1_t test_vloxei16_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1_t test_vloxei16_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_i32m1_tu(vd, rs1, rs2, vl); } -vint32m2_t test_vloxei16_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2_t test_vloxei16_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_i32m2_tu(vd, rs1, rs2, vl); } -vint32m4_t test_vloxei16_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4_t test_vloxei16_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_i32m4_tu(vd, rs1, rs2, vl); } -vint32m8_t test_vloxei16_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, vuint16m4_t rs2, size_t vl) { +vint32m8_t test_vloxei16_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxei16_v_i32m8_tu(vd, rs1, rs2, vl); } -vint64m1_t test_vloxei16_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1_t test_vloxei16_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_i64m1_tu(vd, rs1, rs2, vl); } -vint64m2_t test_vloxei16_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2_t test_vloxei16_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_i64m2_tu(vd, rs1, rs2, vl); } -vint64m4_t test_vloxei16_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4_t test_vloxei16_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_i64m4_tu(vd, rs1, rs2, vl); } -vint64m8_t test_vloxei16_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, vuint16m2_t rs2, size_t vl) { +vint64m8_t test_vloxei16_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, + vuint16m2_t rs2, size_t vl) { return 
__riscv_vloxei16_v_i64m8_tu(vd, rs1, rs2, vl); } -vuint8mf8_t test_vloxei16_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8_t test_vloxei16_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_u8mf8_tu(vd, rs1, rs2, vl); } -vuint8mf4_t test_vloxei16_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4_t test_vloxei16_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_u8mf4_tu(vd, rs1, rs2, vl); } -vuint8mf2_t test_vloxei16_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2_t test_vloxei16_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_u8mf2_tu(vd, rs1, rs2, vl); } -vuint8m1_t test_vloxei16_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1_t test_vloxei16_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_u8m1_tu(vd, rs1, rs2, vl); } -vuint8m2_t test_vloxei16_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2_t test_vloxei16_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxei16_v_u8m2_tu(vd, rs1, rs2, vl); } -vuint8m4_t test_vloxei16_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4_t test_vloxei16_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vloxei16_v_u8m4_tu(vd, rs1, rs2, vl); } -vuint16mf4_t test_vloxei16_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4_t test_vloxei16_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_u16mf4_tu(vd, rs1, rs2, vl); } -vuint16mf2_t test_vloxei16_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2_t test_vloxei16_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_u16mf2_tu(vd, rs1, rs2, vl); } -vuint16m1_t test_vloxei16_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1_t test_vloxei16_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_u16m1_tu(vd, rs1, rs2, vl); } -vuint16m2_t test_vloxei16_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2_t test_vloxei16_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_u16m2_tu(vd, rs1, rs2, vl); } -vuint16m4_t test_vloxei16_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4_t test_vloxei16_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxei16_v_u16m4_tu(vd, rs1, rs2, vl); } -vuint16m8_t test_vloxei16_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint16m8_t test_vloxei16_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vloxei16_v_u16m8_tu(vd, rs1, rs2, vl); } -vuint32mf2_t test_vloxei16_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2_t test_vloxei16_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_u32mf2_tu(vd, rs1, rs2, vl); } -vuint32m1_t 
test_vloxei16_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1_t test_vloxei16_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_u32m1_tu(vd, rs1, rs2, vl); } -vuint32m2_t test_vloxei16_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2_t test_vloxei16_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_u32m2_tu(vd, rs1, rs2, vl); } -vuint32m4_t test_vloxei16_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4_t test_vloxei16_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_u32m4_tu(vd, rs1, rs2, vl); } -vuint32m8_t test_vloxei16_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint32m8_t test_vloxei16_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxei16_v_u32m8_tu(vd, rs1, rs2, vl); } -vuint64m1_t test_vloxei16_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1_t test_vloxei16_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_u64m1_tu(vd, rs1, rs2, vl); } -vuint64m2_t test_vloxei16_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2_t test_vloxei16_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_u64m2_tu(vd, rs1, rs2, vl); } -vuint64m4_t test_vloxei16_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4_t test_vloxei16_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_u64m4_tu(vd, rs1, rs2, vl); } -vuint64m8_t test_vloxei16_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint64m8_t test_vloxei16_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_u64m8_tu(vd, rs1, rs2, vl); } -vfloat16mf4_t test_vloxei16_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4_t test_vloxei16_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_f16mf4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vloxei16_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2_t test_vloxei16_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f16mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vloxei16_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1_t test_vloxei16_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_f16m1_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vloxei16_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2_t test_vloxei16_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f16m2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vloxei16_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4_t 
test_vloxei16_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_f16m4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vloxei16_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint16m8_t rs2, size_t vl) { +vfloat16m8_t test_vloxei16_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxei16_v_f16m8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vloxei16_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2_t test_vloxei16_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_f32mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vloxei16_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1_t test_vloxei16_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f32m1_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vloxei16_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2_t test_vloxei16_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_f32m2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vloxei16_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4_t test_vloxei16_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f32m4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vloxei16_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint16m4_t rs2, size_t vl) { +vfloat32m8_t test_vloxei16_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_f32m8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vloxei16_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1_t test_vloxei16_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_f64m1_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vloxei16_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2_t test_vloxei16_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f64m2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vloxei16_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4_t test_vloxei16_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_f64m4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vloxei16_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint16m2_t rs2, size_t vl) { +vfloat64m8_t test_vloxei16_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f64m8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vloxei16_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8_t test_vloxei16_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_i8mf8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4_t 
test_vloxei16_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4_t test_vloxei16_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_i8mf4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vloxei16_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2_t test_vloxei16_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_i8mf2_tum(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vloxei16_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1_t test_vloxei16_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_i8m1_tum(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vloxei16_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2_t test_vloxei16_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxei16_v_i8m2_tum(vm, vd, rs1, rs2, vl); } -vint8m4_t test_vloxei16_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4_t test_vloxei16_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vloxei16_v_i8m4_tum(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vloxei16_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4_t test_vloxei16_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_i16mf4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vloxei16_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2_t test_vloxei16_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_i16mf2_tum(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vloxei16_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1_t test_vloxei16_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_i16m1_tum(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vloxei16_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2_t test_vloxei16_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_i16m2_tum(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vloxei16_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4_t test_vloxei16_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_i16m4_tum(vm, vd, rs1, rs2, vl); } -vint16m8_t test_vloxei16_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint16m8_t rs2, size_t vl) { +vint16m8_t test_vloxei16_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxei16_v_i16m8_tum(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vloxei16_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2_t test_vloxei16_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_i32mf2_tum(vm, vd, rs1, rs2, vl); 
} -vint32m1_t test_vloxei16_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1_t test_vloxei16_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_i32m1_tum(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vloxei16_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2_t test_vloxei16_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_i32m2_tum(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vloxei16_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4_t test_vloxei16_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_i32m4_tum(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vloxei16_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint16m4_t rs2, size_t vl) { +vint32m8_t test_vloxei16_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_i32m8_tum(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vloxei16_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1_t test_vloxei16_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_i64m1_tum(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vloxei16_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2_t test_vloxei16_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_i64m2_tum(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vloxei16_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4_t test_vloxei16_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_i64m4_tum(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vloxei16_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint16m2_t rs2, size_t vl) { +vint64m8_t test_vloxei16_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_i64m8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vloxei16_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8_t test_vloxei16_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_u8mf8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vloxei16_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4_t test_vloxei16_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_u8mf4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vloxei16_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2_t test_vloxei16_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_u8mf2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vloxei16_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1_t test_vloxei16_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return 
__riscv_vloxei16_v_u8m1_tum(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vloxei16_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2_t test_vloxei16_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_u8m2_tum(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vloxei16_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4_t test_vloxei16_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxei16_v_u8m4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vloxei16_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4_t test_vloxei16_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_u16mf4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vloxei16_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2_t test_vloxei16_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_u16mf2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vloxei16_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1_t test_vloxei16_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_u16m1_tum(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vloxei16_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2_t test_vloxei16_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_u16m2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vloxei16_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4_t test_vloxei16_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_u16m4_tum(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vloxei16_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint16m8_t test_vloxei16_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxei16_v_u16m8_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vloxei16_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2_t test_vloxei16_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_u32mf2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vloxei16_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1_t test_vloxei16_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_u32m1_tum(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vloxei16_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2_t test_vloxei16_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_u32m2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vloxei16_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4_t 
test_vloxei16_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_u32m4_tum(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vloxei16_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint32m8_t test_vloxei16_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_u32m8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vloxei16_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1_t test_vloxei16_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_u64m1_tum(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vloxei16_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2_t test_vloxei16_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_u64m2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vloxei16_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4_t test_vloxei16_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_u64m4_tum(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vloxei16_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint64m8_t test_vloxei16_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_u64m8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vloxei16_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4_t test_vloxei16_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_f16mf4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vloxei16_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2_t test_vloxei16_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f16mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vloxei16_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1_t test_vloxei16_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_f16m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vloxei16_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2_t test_vloxei16_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f16m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vloxei16_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4_t test_vloxei16_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_f16m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vloxei16_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint16m8_t rs2, size_t vl) { +vfloat16m8_t test_vloxei16_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, vuint16m8_t rs2, + size_t vl) { return 
__riscv_vloxei16_v_f16m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vloxei16_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2_t test_vloxei16_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_f32mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vloxei16_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1_t test_vloxei16_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f32m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vloxei16_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2_t test_vloxei16_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_f32m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vloxei16_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4_t test_vloxei16_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f32m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vloxei16_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint16m4_t rs2, size_t vl) { +vfloat32m8_t test_vloxei16_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_f32m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vloxei16_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1_t test_vloxei16_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_f64m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vloxei16_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2_t test_vloxei16_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f64m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vloxei16_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4_t test_vloxei16_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_f64m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vloxei16_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint16m2_t rs2, size_t vl) { +vfloat64m8_t test_vloxei16_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_f64m8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vloxei16_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8_t test_vloxei16_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_i8mf8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vloxei16_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4_t test_vloxei16_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_i8mf4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vloxei16_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { 
+vint8mf2_t test_vloxei16_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd,
+ const int8_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i8mf2_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8m1_t test_vloxei16_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1_t test_vloxei16_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd,
+ const int8_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i8m1_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8m2_t test_vloxei16_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vint8m2_t test_vloxei16_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd,
+ const int8_t *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i8m2_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8m4_t test_vloxei16_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) {
+vint8m4_t test_vloxei16_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd,
+ const int8_t *rs1, vuint16m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i8m4_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16mf4_t test_vloxei16_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint16mf4_t test_vloxei16_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd,
+ const int16_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16mf4_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16mf2_t test_vloxei16_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint16mf2_t test_vloxei16_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd,
+ const int16_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16mf2_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16m1_t test_vloxei16_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint16m1_t test_vloxei16_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd,
+ const int16_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16m1_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16m2_t test_vloxei16_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint16m2_t test_vloxei16_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd,
+ const int16_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16m2_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16m4_t test_vloxei16_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) {
+vint16m4_t test_vloxei16_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd,
+ const int16_t *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16m4_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16m8_t test_vloxei16_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint16m8_t rs2, size_t vl) {
+vint16m8_t test_vloxei16_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd,
+ const int16_t *rs1, vuint16m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16m8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint32mf2_t test_vloxei16_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint32mf2_t test_vloxei16_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd,
+ const int32_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i32mf2_tumu(vm, vd, rs1, rs2, vl);
 }
-vint32m1_t test_vloxei16_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint32m1_t test_vloxei16_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd,
+ const int32_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i32m1_tumu(vm, vd, rs1, rs2, vl);
 }
-vint32m2_t test_vloxei16_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint32m2_t test_vloxei16_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd,
+ const int32_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i32m2_tumu(vm, vd, rs1, rs2, vl);
 }
-vint32m4_t test_vloxei16_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint32m4_t test_vloxei16_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd,
+ const int32_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i32m4_tumu(vm, vd, rs1, rs2, vl);
 }
-vint32m8_t test_vloxei16_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint16m4_t rs2, size_t vl) {
+vint32m8_t test_vloxei16_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd,
+ const int32_t *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i32m8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint64m1_t test_vloxei16_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint64m1_t test_vloxei16_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd,
+ const int64_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i64m1_tumu(vm, vd, rs1, rs2, vl);
 }
-vint64m2_t test_vloxei16_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint64m2_t test_vloxei16_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd,
+ const int64_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i64m2_tumu(vm, vd, rs1, rs2, vl);
 }
-vint64m4_t test_vloxei16_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint64m4_t test_vloxei16_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd,
+ const int64_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i64m4_tumu(vm, vd, rs1, rs2, vl);
 }
-vint64m8_t test_vloxei16_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint64m8_t test_vloxei16_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd,
+ const int64_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i64m8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf8_t test_vloxei16_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint8mf8_t test_vloxei16_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+ const uint8_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8mf8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf4_t test_vloxei16_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint8mf4_t test_vloxei16_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+ const uint8_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8mf4_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf2_t test_vloxei16_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2_t test_vloxei16_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+ const uint8_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8mf2_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8m1_t test_vloxei16_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1_t test_vloxei16_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd,
+ const uint8_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8m1_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8m2_t test_vloxei16_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vuint8m2_t test_vloxei16_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd,
+ const uint8_t *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8m2_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8m4_t test_vloxei16_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) {
+vuint8m4_t test_vloxei16_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd,
+ const uint8_t *rs1, vuint16m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8m4_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf4_t test_vloxei16_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4_t test_vloxei16_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+ const uint16_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16mf4_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf2_t test_vloxei16_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2_t test_vloxei16_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+ const uint16_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16mf2_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16m1_t test_vloxei16_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1_t test_vloxei16_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+ const uint16_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16m1_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16m2_t test_vloxei16_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint16m2_t test_vloxei16_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+ const uint16_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16m2_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16m4_t test_vloxei16_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) {
+vuint16m4_t test_vloxei16_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+ const uint16_t *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16m4_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16m8_t test_vloxei16_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint16m8_t rs2, size_t vl) {
+vuint16m8_t test_vloxei16_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+ const uint16_t *rs1, vuint16m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16m8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint32mf2_t test_vloxei16_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2_t test_vloxei16_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+ const uint32_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u32mf2_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint32m1_t test_vloxei16_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1_t test_vloxei16_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+ const uint32_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u32m1_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint32m2_t test_vloxei16_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint32m2_t test_vloxei16_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+ const uint32_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u32m2_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint32m4_t test_vloxei16_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint32m4_t test_vloxei16_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+ const uint32_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u32m4_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint32m8_t test_vloxei16_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint16m4_t rs2, size_t vl) {
+vuint32m8_t test_vloxei16_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+ const uint32_t *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u32m8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint64m1_t test_vloxei16_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1_t test_vloxei16_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+ const uint64_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u64m1_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint64m2_t test_vloxei16_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint64m2_t test_vloxei16_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+ const uint64_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u64m2_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint64m4_t test_vloxei16_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint64m4_t test_vloxei16_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+ const uint64_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u64m4_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint64m8_t test_vloxei16_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint64m8_t test_vloxei16_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+ const uint64_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u64m8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf4_t test_vloxei16_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4_t test_vloxei16_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd,
+ const _Float16 *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f16mf4_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf2_t test_vloxei16_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2_t test_vloxei16_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd,
+ const _Float16 *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f16mf2_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat16m1_t test_vloxei16_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1_t test_vloxei16_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd,
+ const _Float16 *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f16m1_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat16m2_t test_vloxei16_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) {
+vfloat16m2_t test_vloxei16_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd,
+ const _Float16 *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f16m2_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat16m4_t test_vloxei16_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) {
+vfloat16m4_t test_vloxei16_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd,
+ const _Float16 *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f16m4_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat16m8_t test_vloxei16_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint16m8_t rs2, size_t vl) {
+vfloat16m8_t test_vloxei16_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd,
+ const _Float16 *rs1, vuint16m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f16m8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat32mf2_t test_vloxei16_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2_t test_vloxei16_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd,
+ const float *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f32mf2_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m1_t test_vloxei16_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1_t test_vloxei16_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd,
+ const float *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f32m1_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m2_t test_vloxei16_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat32m2_t test_vloxei16_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd,
+ const float *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f32m2_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m4_t test_vloxei16_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) {
+vfloat32m4_t test_vloxei16_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd,
+ const float *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f32m4_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m8_t test_vloxei16_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint16m4_t rs2, size_t vl) {
+vfloat32m8_t test_vloxei16_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+ const float *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f32m8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m1_t test_vloxei16_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1_t test_vloxei16_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+ const double *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f64m1_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m2_t test_vloxei16_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat64m2_t test_vloxei16_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+ const double *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f64m2_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m4_t test_vloxei16_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat64m4_t test_vloxei16_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+ const double *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f64m4_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m8_t test_vloxei16_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint16m2_t rs2, size_t vl) {
+vfloat64m8_t test_vloxei16_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+ const double *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_f64m8_mu(vm, vd, rs1, rs2, vl);
 }
-vint8mf8_t test_vloxei16_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8_t test_vloxei16_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd,
+ const int8_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i8mf8_mu(vm, vd, rs1, rs2, vl);
 }
-vint8mf4_t test_vloxei16_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4_t test_vloxei16_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd,
+ const int8_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i8mf4_mu(vm, vd, rs1, rs2, vl);
 }
-vint8mf2_t test_vloxei16_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2_t test_vloxei16_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd,
+ const int8_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i8mf2_mu(vm, vd, rs1, rs2, vl);
 }
-vint8m1_t test_vloxei16_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1_t test_vloxei16_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1,
+ vuint16m2_t rs2, size_t vl) {
 return __riscv_vloxei16_v_i8m1_mu(vm, vd, rs1, rs2, vl);
 }
-vint8m2_t test_vloxei16_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vint8m2_t test_vloxei16_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1,
+ vuint16m4_t rs2, size_t vl) {
 return __riscv_vloxei16_v_i8m2_mu(vm, vd, rs1, rs2, vl);
 }
-vint8m4_t test_vloxei16_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) {
+vint8m4_t test_vloxei16_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1,
+ vuint16m8_t rs2, size_t vl) {
 return __riscv_vloxei16_v_i8m4_mu(vm, vd, rs1, rs2, vl);
 }
-vint16mf4_t test_vloxei16_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint16mf4_t test_vloxei16_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd,
+ const int16_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16mf4_mu(vm, vd, rs1, rs2, vl);
 }
-vint16mf2_t test_vloxei16_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint16mf2_t test_vloxei16_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd,
+ const int16_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16mf2_mu(vm, vd, rs1, rs2, vl);
 }
-vint16m1_t test_vloxei16_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint16m1_t test_vloxei16_v_i16m1_mu(vbool16_t vm, vint16m1_t vd,
+ const int16_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16m1_mu(vm, vd, rs1, rs2, vl);
 }
-vint16m2_t test_vloxei16_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint16m2_t test_vloxei16_v_i16m2_mu(vbool8_t vm, vint16m2_t vd,
+ const int16_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16m2_mu(vm, vd, rs1, rs2, vl);
 }
-vint16m4_t test_vloxei16_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) {
+vint16m4_t test_vloxei16_v_i16m4_mu(vbool4_t vm, vint16m4_t vd,
+ const int16_t *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16m4_mu(vm, vd, rs1, rs2, vl);
 }
-vint16m8_t test_vloxei16_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint16m8_t rs2, size_t vl) {
+vint16m8_t test_vloxei16_v_i16m8_mu(vbool2_t vm, vint16m8_t vd,
+ const int16_t *rs1, vuint16m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i16m8_mu(vm, vd, rs1, rs2, vl);
 }
-vint32mf2_t test_vloxei16_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint32mf2_t test_vloxei16_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd,
+ const int32_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i32mf2_mu(vm, vd, rs1, rs2, vl);
 }
-vint32m1_t test_vloxei16_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint32m1_t test_vloxei16_v_i32m1_mu(vbool32_t vm, vint32m1_t vd,
+ const int32_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i32m1_mu(vm, vd, rs1, rs2, vl);
 }
-vint32m2_t test_vloxei16_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint32m2_t test_vloxei16_v_i32m2_mu(vbool16_t vm, vint32m2_t vd,
+ const int32_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i32m2_mu(vm, vd, rs1, rs2, vl);
 }
-vint32m4_t test_vloxei16_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint32m4_t test_vloxei16_v_i32m4_mu(vbool8_t vm, vint32m4_t vd,
+ const int32_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i32m4_mu(vm, vd, rs1, rs2, vl);
 }
-vint32m8_t test_vloxei16_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint16m4_t rs2, size_t vl) {
+vint32m8_t test_vloxei16_v_i32m8_mu(vbool4_t vm, vint32m8_t vd,
+ const int32_t *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i32m8_mu(vm, vd, rs1, rs2, vl);
 }
-vint64m1_t test_vloxei16_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint64m1_t test_vloxei16_v_i64m1_mu(vbool64_t vm, vint64m1_t vd,
+ const int64_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i64m1_mu(vm, vd, rs1, rs2, vl);
 }
-vint64m2_t test_vloxei16_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint64m2_t test_vloxei16_v_i64m2_mu(vbool32_t vm, vint64m2_t vd,
+ const int64_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i64m2_mu(vm, vd, rs1, rs2, vl);
 }
-vint64m4_t test_vloxei16_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint64m4_t test_vloxei16_v_i64m4_mu(vbool16_t vm, vint64m4_t vd,
+ const int64_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i64m4_mu(vm, vd, rs1, rs2, vl);
 }
-vint64m8_t test_vloxei16_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint64m8_t test_vloxei16_v_i64m8_mu(vbool8_t vm, vint64m8_t vd,
+ const int64_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_i64m8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf8_t test_vloxei16_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint8mf8_t test_vloxei16_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd,
+ const uint8_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8mf8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf4_t test_vloxei16_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint8mf4_t test_vloxei16_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd,
+ const uint8_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8mf4_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf2_t test_vloxei16_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2_t test_vloxei16_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd,
+ const uint8_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8mf2_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8m1_t test_vloxei16_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1_t test_vloxei16_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd,
+ const uint8_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8m1_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8m2_t test_vloxei16_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vuint8m2_t test_vloxei16_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd,
+ const uint8_t *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8m2_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8m4_t test_vloxei16_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) {
+vuint8m4_t test_vloxei16_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd,
+ const uint8_t *rs1, vuint16m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u8m4_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf4_t test_vloxei16_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4_t test_vloxei16_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+ const uint16_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16mf4_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf2_t test_vloxei16_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2_t test_vloxei16_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+ const uint16_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16mf2_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16m1_t test_vloxei16_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1_t test_vloxei16_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd,
+ const uint16_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16m1_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16m2_t test_vloxei16_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint16m2_t test_vloxei16_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd,
+ const uint16_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16m2_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16m4_t test_vloxei16_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) {
+vuint16m4_t test_vloxei16_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd,
+ const uint16_t *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16m4_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16m8_t test_vloxei16_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint16m8_t rs2, size_t vl) {
+vuint16m8_t test_vloxei16_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd,
+ const uint16_t *rs1, vuint16m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u16m8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint32mf2_t test_vloxei16_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2_t test_vloxei16_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+ const uint32_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u32mf2_mu(vm, vd, rs1, rs2, vl);
 }
-vuint32m1_t test_vloxei16_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1_t test_vloxei16_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+ const uint32_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u32m1_mu(vm, vd, rs1, rs2, vl);
 }
-vuint32m2_t test_vloxei16_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint32m2_t test_vloxei16_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+ const uint32_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u32m2_mu(vm, vd, rs1, rs2, vl);
 }
-vuint32m4_t test_vloxei16_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint32m4_t test_vloxei16_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+ const uint32_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u32m4_mu(vm, vd, rs1, rs2, vl);
 }
-vuint32m8_t test_vloxei16_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint16m4_t rs2, size_t vl) {
+vuint32m8_t test_vloxei16_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+ const uint32_t *rs1, vuint16m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u32m8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint64m1_t test_vloxei16_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1_t test_vloxei16_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+ const uint64_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u64m1_mu(vm, vd, rs1, rs2, vl);
 }
-vuint64m2_t test_vloxei16_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint64m2_t test_vloxei16_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+ const uint64_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u64m2_mu(vm, vd, rs1, rs2, vl);
 }
-vuint64m4_t test_vloxei16_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint64m4_t test_vloxei16_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+ const uint64_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u64m4_mu(vm, vd, rs1, rs2, vl);
 }
-vuint64m8_t test_vloxei16_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint64m8_t test_vloxei16_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+ const uint64_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei16_v_u64m8_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxei32.c
index a5da2f9f7..0fa2a2b4b 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxei32.c
@@ -6,834 +6,1194 @@
 #include <riscv_vector.h>
-vfloat16mf4_t test_vloxei32_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4_t test_vloxei32_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f16mf4_tu(vd, rs1, rs2, vl);
 }
-vfloat16mf2_t test_vloxei32_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2_t test_vloxei32_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f16mf2_tu(vd, rs1, rs2, vl);
 }
-vfloat16m1_t test_vloxei32_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1_t test_vloxei32_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f16m1_tu(vd, rs1, rs2, vl);
 }
-vfloat16m2_t test_vloxei32_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat16m2_t test_vloxei32_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f16m2_tu(vd, rs1, rs2, vl);
 }
-vfloat16m4_t test_vloxei32_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) {
+vfloat16m4_t test_vloxei32_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1,
+ vuint32m8_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f16m4_tu(vd, rs1, rs2, vl);
 }
-vfloat32mf2_t test_vloxei32_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat32mf2_t test_vloxei32_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f32mf2_tu(vd, rs1, rs2, vl);
 }
-vfloat32m1_t test_vloxei32_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat32m1_t test_vloxei32_v_f32m1_tu(vfloat32m1_t vd, const float *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f32m1_tu(vd, rs1, rs2, vl);
 }
-vfloat32m2_t test_vloxei32_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat32m2_t test_vloxei32_v_f32m2_tu(vfloat32m2_t vd, const float *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f32m2_tu(vd, rs1, rs2, vl);
 }
-vfloat32m4_t test_vloxei32_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat32m4_t test_vloxei32_v_f32m4_tu(vfloat32m4_t vd, const float *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f32m4_tu(vd, rs1, rs2, vl);
 }
-vfloat32m8_t test_vloxei32_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, vuint32m8_t rs2, size_t vl) {
+vfloat32m8_t test_vloxei32_v_f32m8_tu(vfloat32m8_t vd, const float *rs1,
+ vuint32m8_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f32m8_tu(vd, rs1, rs2, vl);
 }
-vfloat64m1_t test_vloxei32_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat64m1_t test_vloxei32_v_f64m1_tu(vfloat64m1_t vd, const double *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f64m1_tu(vd, rs1, rs2, vl);
 }
-vfloat64m2_t test_vloxei32_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat64m2_t test_vloxei32_v_f64m2_tu(vfloat64m2_t vd, const double *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f64m2_tu(vd, rs1, rs2, vl);
 }
-vfloat64m4_t test_vloxei32_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat64m4_t test_vloxei32_v_f64m4_tu(vfloat64m4_t vd, const double *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f64m4_tu(vd, rs1, rs2, vl);
 }
-vfloat64m8_t test_vloxei32_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat64m8_t test_vloxei32_v_f64m8_tu(vfloat64m8_t vd, const double *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_f64m8_tu(vd, rs1, rs2, vl);
 }
-vint8mf8_t test_vloxei32_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint8mf8_t test_vloxei32_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i8mf8_tu(vd, rs1, rs2, vl);
 }
-vint8mf4_t test_vloxei32_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint8mf4_t test_vloxei32_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i8mf4_tu(vd, rs1, rs2, vl);
 }
-vint8mf2_t test_vloxei32_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint8mf2_t test_vloxei32_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i8mf2_tu(vd, rs1, rs2, vl);
 }
-vint8m1_t test_vloxei32_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint8m1_t test_vloxei32_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i8m1_tu(vd, rs1, rs2, vl);
 }
-vint8m2_t test_vloxei32_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) {
+vint8m2_t test_vloxei32_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1,
+ vuint32m8_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i8m2_tu(vd, rs1, rs2, vl);
 }
-vint16mf4_t test_vloxei32_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint16mf4_t test_vloxei32_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i16mf4_tu(vd, rs1, rs2, vl);
 }
-vint16mf2_t test_vloxei32_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint16mf2_t test_vloxei32_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i16mf2_tu(vd, rs1, rs2, vl);
 }
-vint16m1_t test_vloxei32_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint16m1_t test_vloxei32_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i16m1_tu(vd, rs1, rs2, vl);
 }
-vint16m2_t test_vloxei32_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint16m2_t test_vloxei32_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i16m2_tu(vd, rs1, rs2, vl);
 }
-vint16m4_t test_vloxei32_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) {
+vint16m4_t test_vloxei32_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1,
+ vuint32m8_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i16m4_tu(vd, rs1, rs2, vl);
 }
-vint32mf2_t test_vloxei32_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint32mf2_t test_vloxei32_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i32mf2_tu(vd, rs1, rs2, vl);
 }
-vint32m1_t test_vloxei32_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint32m1_t test_vloxei32_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i32m1_tu(vd, rs1, rs2, vl);
 }
-vint32m2_t test_vloxei32_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint32m2_t test_vloxei32_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i32m2_tu(vd, rs1, rs2, vl);
 }
-vint32m4_t test_vloxei32_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint32m4_t test_vloxei32_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i32m4_tu(vd, rs1, rs2, vl);
 }
-vint32m8_t test_vloxei32_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, vuint32m8_t rs2, size_t vl) {
+vint32m8_t test_vloxei32_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1,
+ vuint32m8_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i32m8_tu(vd, rs1, rs2, vl);
 }
-vint64m1_t test_vloxei32_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint64m1_t test_vloxei32_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i64m1_tu(vd, rs1, rs2, vl);
 }
-vint64m2_t test_vloxei32_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint64m2_t test_vloxei32_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i64m2_tu(vd, rs1, rs2, vl);
 }
-vint64m4_t test_vloxei32_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint64m4_t test_vloxei32_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i64m4_tu(vd, rs1, rs2, vl);
 }
-vint64m8_t test_vloxei32_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint64m8_t test_vloxei32_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i64m8_tu(vd, rs1, rs2, vl);
 }
-vuint8mf8_t test_vloxei32_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint8mf8_t test_vloxei32_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u8mf8_tu(vd, rs1, rs2, vl);
 }
-vuint8mf4_t test_vloxei32_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint8mf4_t test_vloxei32_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u8mf4_tu(vd, rs1, rs2, vl);
 }
-vuint8mf2_t test_vloxei32_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint8mf2_t test_vloxei32_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u8mf2_tu(vd, rs1, rs2, vl);
 }
-vuint8m1_t test_vloxei32_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint8m1_t test_vloxei32_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u8m1_tu(vd, rs1, rs2, vl);
 }
-vuint8m2_t test_vloxei32_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) {
+vuint8m2_t test_vloxei32_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1,
+ vuint32m8_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u8m2_tu(vd, rs1, rs2, vl);
 }
-vuint16mf4_t test_vloxei32_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint16mf4_t test_vloxei32_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u16mf4_tu(vd, rs1, rs2, vl);
 }
-vuint16mf2_t test_vloxei32_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint16mf2_t test_vloxei32_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u16mf2_tu(vd, rs1, rs2, vl);
 }
-vuint16m1_t test_vloxei32_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint16m1_t test_vloxei32_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u16m1_tu(vd, rs1, rs2, vl);
 }
-vuint16m2_t test_vloxei32_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint16m2_t test_vloxei32_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u16m2_tu(vd, rs1, rs2, vl);
 }
-vuint16m4_t test_vloxei32_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) {
+vuint16m4_t test_vloxei32_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1,
+ vuint32m8_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u16m4_tu(vd, rs1, rs2, vl);
 }
-vuint32mf2_t test_vloxei32_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint32mf2_t test_vloxei32_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u32mf2_tu(vd, rs1, rs2, vl);
 }
-vuint32m1_t test_vloxei32_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint32m1_t test_vloxei32_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u32m1_tu(vd, rs1, rs2, vl);
 }
-vuint32m2_t test_vloxei32_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint32m2_t test_vloxei32_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u32m2_tu(vd, rs1, rs2, vl);
 }
-vuint32m4_t test_vloxei32_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint32m4_t test_vloxei32_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u32m4_tu(vd, rs1, rs2, vl);
 }
-vuint32m8_t test_vloxei32_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, vuint32m8_t rs2, size_t vl) {
+vuint32m8_t test_vloxei32_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1,
+ vuint32m8_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u32m8_tu(vd, rs1, rs2, vl);
 }
-vuint64m1_t test_vloxei32_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint64m1_t test_vloxei32_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u64m1_tu(vd, rs1, rs2, vl);
 }
-vuint64m2_t test_vloxei32_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint64m2_t test_vloxei32_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u64m2_tu(vd, rs1, rs2, vl);
 }
-vuint64m4_t test_vloxei32_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint64m4_t test_vloxei32_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u64m4_tu(vd, rs1, rs2, vl);
 }
-vuint64m8_t test_vloxei32_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint64m8_t test_vloxei32_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_u64m8_tu(vd, rs1, rs2, vl);
 }
-vfloat16mf4_t test_vloxei32_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4_t test_vloxei32_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd,
+ const _Float16 *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f16mf4_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf2_t test_vloxei32_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2_t test_vloxei32_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd,
+ const _Float16 *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f16mf2_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat16m1_t test_vloxei32_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1_t test_vloxei32_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd,
+ const _Float16 *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f16m1_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat16m2_t test_vloxei32_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat16m2_t test_vloxei32_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd,
+ const _Float16 *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f16m2_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat16m4_t test_vloxei32_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) {
+vfloat16m4_t test_vloxei32_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd,
+ const _Float16 *rs1, vuint32m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f16m4_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat32mf2_t test_vloxei32_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat32mf2_t test_vloxei32_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd,
+ const float *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f32mf2_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat32m1_t test_vloxei32_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat32m1_t test_vloxei32_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+ const float *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f32m1_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat32m2_t test_vloxei32_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat32m2_t test_vloxei32_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd,
+ const float *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f32m2_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat32m4_t test_vloxei32_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat32m4_t test_vloxei32_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd,
+ const float *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f32m4_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat32m8_t test_vloxei32_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint32m8_t rs2, size_t vl) {
+vfloat32m8_t test_vloxei32_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd,
+ const float *rs1, vuint32m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f32m8_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat64m1_t test_vloxei32_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat64m1_t test_vloxei32_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+ const double *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f64m1_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat64m2_t test_vloxei32_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat64m2_t test_vloxei32_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd,
+ const double *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f64m2_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat64m4_t test_vloxei32_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat64m4_t test_vloxei32_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd,
+ const double *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f64m4_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat64m8_t test_vloxei32_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat64m8_t test_vloxei32_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd,
+ const double *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f64m8_tum(vm, vd, rs1, rs2, vl);
 }
-vint8mf8_t test_vloxei32_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint8mf8_t test_vloxei32_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd,
+ const int8_t *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i8mf8_tum(vm, vd, rs1, rs2, vl);
 }
-vint8mf4_t test_vloxei32_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint8mf4_t test_vloxei32_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd,
+ const int8_t *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i8mf4_tum(vm, vd, rs1, rs2, vl);
 }
-vint8mf2_t test_vloxei32_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint8mf2_t test_vloxei32_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd,
+ const int8_t *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i8mf2_tum(vm, vd, rs1, rs2, vl);
 }
-vint8m1_t test_vloxei32_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint8m1_t test_vloxei32_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i8m1_tum(vm, vd, rs1, rs2, vl);
 }
-vint8m2_t test_vloxei32_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) {
+vint8m2_t test_vloxei32_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1,
+ vuint32m8_t rs2, size_t vl) {
 return __riscv_vloxei32_v_i8m2_tum(vm, vd, rs1, rs2, vl);
 }
-vint16mf4_t test_vloxei32_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint16mf4_t test_vloxei32_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd,
+ const int16_t *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i16mf4_tum(vm, vd, rs1, rs2, vl);
 }
-vint16mf2_t test_vloxei32_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint16mf2_t test_vloxei32_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd,
+ const int16_t *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i16mf2_tum(vm, vd, rs1, rs2, vl);
 }
-vint16m1_t test_vloxei32_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint16m1_t test_vloxei32_v_i16m1_tum(vbool16_t vm, vint16m1_t vd,
+ const int16_t *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i16m1_tum(vm, vd, rs1, rs2, vl);
 }
-vint16m2_t test_vloxei32_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint16m2_t test_vloxei32_v_i16m2_tum(vbool8_t vm, vint16m2_t vd,
+ const int16_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i16m2_tum(vm, vd, rs1, rs2, vl);
 }
-vint16m4_t test_vloxei32_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) {
+vint16m4_t test_vloxei32_v_i16m4_tum(vbool4_t vm, vint16m4_t vd,
+ const int16_t *rs1, vuint32m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i16m4_tum(vm, vd, rs1, rs2, vl);
 }
-vint32mf2_t test_vloxei32_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint32mf2_t test_vloxei32_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd,
+ const int32_t *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i32mf2_tum(vm, vd, rs1, rs2, vl);
 }
-vint32m1_t test_vloxei32_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint32m1_t test_vloxei32_v_i32m1_tum(vbool32_t vm, vint32m1_t vd,
+ const int32_t *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i32m1_tum(vm, vd, rs1, rs2, vl);
 }
-vint32m2_t test_vloxei32_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint32m2_t test_vloxei32_v_i32m2_tum(vbool16_t vm, vint32m2_t vd,
+ const int32_t *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i32m2_tum(vm, vd, rs1, rs2, vl);
 }
-vint32m4_t test_vloxei32_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint32m4_t test_vloxei32_v_i32m4_tum(vbool8_t vm, vint32m4_t vd,
+ const int32_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i32m4_tum(vm, vd, rs1, rs2, vl);
 }
-vint32m8_t test_vloxei32_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint32m8_t rs2, size_t vl) {
+vint32m8_t test_vloxei32_v_i32m8_tum(vbool4_t vm, vint32m8_t vd,
+ const int32_t *rs1, vuint32m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i32m8_tum(vm, vd, rs1, rs2, vl);
 }
-vint64m1_t test_vloxei32_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint64m1_t test_vloxei32_v_i64m1_tum(vbool64_t vm, vint64m1_t vd,
+ const int64_t *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i64m1_tum(vm, vd, rs1, rs2, vl);
 }
-vint64m2_t test_vloxei32_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint64m2_t test_vloxei32_v_i64m2_tum(vbool32_t vm, vint64m2_t vd,
+ const int64_t *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i64m2_tum(vm, vd, rs1, rs2, vl);
 }
-vint64m4_t test_vloxei32_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint64m4_t test_vloxei32_v_i64m4_tum(vbool16_t vm, vint64m4_t vd,
+ const int64_t *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i64m4_tum(vm, vd, rs1, rs2, vl);
 }
-vint64m8_t test_vloxei32_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint64m8_t test_vloxei32_v_i64m8_tum(vbool8_t vm, vint64m8_t vd,
+ const int64_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i64m8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint8mf8_t test_vloxei32_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint8mf8_t test_vloxei32_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+ const uint8_t *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u8mf8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint8mf4_t test_vloxei32_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint8mf4_t test_vloxei32_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+ const uint8_t *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u8mf4_tum(vm, vd, rs1, rs2, vl);
 }
-vuint8mf2_t test_vloxei32_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint8mf2_t test_vloxei32_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+ const uint8_t *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u8mf2_tum(vm, vd, rs1, rs2, vl);
 }
-vuint8m1_t test_vloxei32_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint8m1_t test_vloxei32_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd,
+ const uint8_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u8m1_tum(vm, vd, rs1, rs2, vl);
 }
-vuint8m2_t test_vloxei32_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) {
+vuint8m2_t test_vloxei32_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd,
+ const uint8_t *rs1, vuint32m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u8m2_tum(vm, vd, rs1, rs2, vl);
 }
-vuint16mf4_t test_vloxei32_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint16mf4_t test_vloxei32_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+ const uint16_t *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u16mf4_tum(vm, vd, rs1, rs2, vl);
 }
-vuint16mf2_t test_vloxei32_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint16mf2_t test_vloxei32_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+ const uint16_t *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u16mf2_tum(vm, vd, rs1, rs2, vl);
 }
-vuint16m1_t test_vloxei32_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint16m1_t test_vloxei32_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+ const uint16_t *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u16m1_tum(vm, vd, rs1, rs2, vl);
 }
-vuint16m2_t test_vloxei32_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint16m2_t test_vloxei32_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd,
+ const uint16_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u16m2_tum(vm, vd, rs1, rs2, vl);
 }
-vuint16m4_t test_vloxei32_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) {
+vuint16m4_t test_vloxei32_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd,
+ const uint16_t *rs1, vuint32m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u16m4_tum(vm, vd, rs1, rs2, vl);
 }
-vuint32mf2_t test_vloxei32_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint32mf2_t test_vloxei32_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+ const uint32_t *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u32mf2_tum(vm, vd, rs1, rs2, vl);
 }
-vuint32m1_t test_vloxei32_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint32m1_t test_vloxei32_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+ const uint32_t *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u32m1_tum(vm, vd, rs1, rs2, vl);
 }
-vuint32m2_t test_vloxei32_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint32m2_t test_vloxei32_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+ const uint32_t *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u32m2_tum(vm, vd, rs1, rs2, vl);
 }
-vuint32m4_t test_vloxei32_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint32m4_t test_vloxei32_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd,
+ const uint32_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u32m4_tum(vm, vd, rs1, rs2, vl);
 }
-vuint32m8_t test_vloxei32_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint32m8_t rs2, size_t vl) {
+vuint32m8_t test_vloxei32_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd,
+ const uint32_t *rs1, vuint32m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u32m8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint64m1_t test_vloxei32_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint64m1_t test_vloxei32_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+ const uint64_t *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u64m1_tum(vm, vd, rs1, rs2, vl);
 }
-vuint64m2_t test_vloxei32_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint64m2_t test_vloxei32_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+ const uint64_t *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u64m2_tum(vm, vd, rs1, rs2, vl);
 }
-vuint64m4_t test_vloxei32_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint64m4_t test_vloxei32_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+ const uint64_t *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u64m4_tum(vm, vd, rs1, rs2, vl);
 }
-vuint64m8_t test_vloxei32_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint64m8_t test_vloxei32_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+ const uint64_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_u64m8_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf4_t test_vloxei32_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4_t test_vloxei32_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd,
+ const _Float16 *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f16mf4_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf2_t test_vloxei32_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2_t test_vloxei32_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd,
+ const _Float16 *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f16mf2_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat16m1_t test_vloxei32_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1_t test_vloxei32_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd,
+ const _Float16 *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f16m1_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat16m2_t test_vloxei32_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat16m2_t test_vloxei32_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd,
+ const _Float16 *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f16m2_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat16m4_t test_vloxei32_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) {
+vfloat16m4_t test_vloxei32_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd,
+ const _Float16 *rs1, vuint32m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f16m4_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat32mf2_t test_vloxei32_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat32mf2_t test_vloxei32_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd,
+ const float *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f32mf2_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m1_t test_vloxei32_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat32m1_t test_vloxei32_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd,
+ const float *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f32m1_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m2_t test_vloxei32_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat32m2_t test_vloxei32_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd,
+ const float *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f32m2_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m4_t test_vloxei32_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat32m4_t test_vloxei32_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd,
+ const float *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f32m4_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m8_t test_vloxei32_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint32m8_t rs2, size_t vl) {
+vfloat32m8_t test_vloxei32_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd,
+ const float *rs1, vuint32m8_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f32m8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m1_t test_vloxei32_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat64m1_t test_vloxei32_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd,
+ const double *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f64m1_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m2_t test_vloxei32_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat64m2_t test_vloxei32_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd,
+ const double *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f64m2_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m4_t test_vloxei32_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat64m4_t test_vloxei32_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd,
+ const double *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f64m4_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m8_t test_vloxei32_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat64m8_t test_vloxei32_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd,
+ const double *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_f64m8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8mf8_t test_vloxei32_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint8mf8_t test_vloxei32_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd,
+ const int8_t *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxei32_v_i8mf8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8mf4_t test_vloxei32_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint8mf4_t test_vloxei32_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_i8mf4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vloxei32_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2_t test_vloxei32_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i8mf2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vloxei32_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1_t test_vloxei32_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_i8m1_tumu(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vloxei32_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2_t test_vloxei32_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_i8m2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vloxei32_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4_t test_vloxei32_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i16mf4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vloxei32_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2_t test_vloxei32_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_i16mf2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vloxei32_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1_t test_vloxei32_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i16m1_tumu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vloxei32_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2_t test_vloxei32_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_i16m2_tumu(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vloxei32_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4_t test_vloxei32_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_i16m4_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vloxei32_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2_t test_vloxei32_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i32mf2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vloxei32_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1_t test_vloxei32_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_i32m1_tumu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vloxei32_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2_t test_vloxei32_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i32m2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vloxei32_v_i32m4_tumu(vbool8_t vm, 
vint32m4_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4_t test_vloxei32_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_i32m4_tumu(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vloxei32_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint32m8_t rs2, size_t vl) { +vint32m8_t test_vloxei32_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_i32m8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vloxei32_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1_t test_vloxei32_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i64m1_tumu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vloxei32_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2_t test_vloxei32_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_i64m2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vloxei32_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4_t test_vloxei32_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i64m4_tumu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vloxei32_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint32m4_t rs2, size_t vl) { +vint64m8_t test_vloxei32_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_i64m8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vloxei32_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8_t test_vloxei32_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u8mf8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vloxei32_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4_t test_vloxei32_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_u8mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vloxei32_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2_t test_vloxei32_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u8mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vloxei32_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1_t test_vloxei32_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_u8m1_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vloxei32_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2_t test_vloxei32_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_u8m2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vloxei32_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4_t test_vloxei32_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint32mf2_t rs2, + size_t vl) { return 
__riscv_vloxei32_v_u16mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vloxei32_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2_t test_vloxei32_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_u16mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vloxei32_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1_t test_vloxei32_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u16m1_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vloxei32_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2_t test_vloxei32_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_u16m2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vloxei32_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4_t test_vloxei32_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_u16m4_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vloxei32_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2_t test_vloxei32_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u32mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vloxei32_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1_t test_vloxei32_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_u32m1_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vloxei32_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2_t test_vloxei32_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u32m2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vloxei32_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4_t test_vloxei32_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_u32m4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vloxei32_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint32m8_t test_vloxei32_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_u32m8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vloxei32_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1_t test_vloxei32_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u64m1_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vloxei32_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2_t test_vloxei32_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_u64m2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vloxei32_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, 
vuint32m2_t rs2, size_t vl) { +vuint64m4_t test_vloxei32_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u64m4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vloxei32_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint64m8_t test_vloxei32_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_u64m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vloxei32_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4_t test_vloxei32_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_f16mf4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vloxei32_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2_t test_vloxei32_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_f16mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vloxei32_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1_t test_vloxei32_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_f16m1_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vloxei32_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2_t test_vloxei32_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_f16m2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vloxei32_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4_t test_vloxei32_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_f16m4_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vloxei32_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2_t test_vloxei32_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_f32mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vloxei32_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1_t test_vloxei32_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_f32m1_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vloxei32_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2_t test_vloxei32_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_f32m2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vloxei32_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4_t test_vloxei32_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_f32m4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vloxei32_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint32m8_t rs2, size_t vl) { +vfloat32m8_t test_vloxei32_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_f32m8_mu(vm, vd, 
rs1, rs2, vl); } -vfloat64m1_t test_vloxei32_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1_t test_vloxei32_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_f64m1_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vloxei32_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2_t test_vloxei32_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_f64m2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vloxei32_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4_t test_vloxei32_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_f64m4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vloxei32_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint32m4_t rs2, size_t vl) { +vfloat64m8_t test_vloxei32_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_f64m8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vloxei32_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8_t test_vloxei32_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i8mf8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vloxei32_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4_t test_vloxei32_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_i8mf4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vloxei32_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2_t test_vloxei32_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i8mf2_mu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vloxei32_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1_t test_vloxei32_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxei32_v_i8m1_mu(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vloxei32_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2_t test_vloxei32_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxei32_v_i8m2_mu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vloxei32_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4_t test_vloxei32_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i16mf4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vloxei32_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2_t test_vloxei32_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_i16mf2_mu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vloxei32_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1_t test_vloxei32_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return 
__riscv_vloxei32_v_i16m1_mu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vloxei32_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2_t test_vloxei32_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_i16m2_mu(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vloxei32_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4_t test_vloxei32_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_i16m4_mu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vloxei32_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2_t test_vloxei32_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i32mf2_mu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vloxei32_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1_t test_vloxei32_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_i32m1_mu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vloxei32_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2_t test_vloxei32_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i32m2_mu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vloxei32_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4_t test_vloxei32_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_i32m4_mu(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vloxei32_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint32m8_t rs2, size_t vl) { +vint32m8_t test_vloxei32_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_i32m8_mu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vloxei32_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1_t test_vloxei32_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i64m1_mu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vloxei32_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2_t test_vloxei32_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_i64m2_mu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vloxei32_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4_t test_vloxei32_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_i64m4_mu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vloxei32_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint32m4_t rs2, size_t vl) { +vint64m8_t test_vloxei32_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_i64m8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vloxei32_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8_t test_vloxei32_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint32mf2_t rs2, + size_t vl) { 
return __riscv_vloxei32_v_u8mf8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vloxei32_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4_t test_vloxei32_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_u8mf4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vloxei32_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2_t test_vloxei32_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u8mf2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vloxei32_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1_t test_vloxei32_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_u8m1_mu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vloxei32_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2_t test_vloxei32_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_u8m2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vloxei32_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4_t test_vloxei32_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u16mf4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vloxei32_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2_t test_vloxei32_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_u16mf2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vloxei32_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1_t test_vloxei32_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u16m1_mu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vloxei32_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2_t test_vloxei32_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_u16m2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vloxei32_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4_t test_vloxei32_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_u16m4_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vloxei32_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2_t test_vloxei32_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u32mf2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vloxei32_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1_t test_vloxei32_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_u32m1_mu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vloxei32_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2_t test_vloxei32_v_u32m2_mu(vbool16_t vm, 
vuint32m2_t vd, + const uint32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u32m2_mu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vloxei32_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4_t test_vloxei32_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_u32m4_mu(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vloxei32_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint32m8_t test_vloxei32_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxei32_v_u32m8_mu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vloxei32_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1_t test_vloxei32_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u64m1_mu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vloxei32_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2_t test_vloxei32_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxei32_v_u64m2_mu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vloxei32_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4_t test_vloxei32_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxei32_v_u64m4_mu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vloxei32_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint64m8_t test_vloxei32_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxei32_v_u64m8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxei64.c index 008d38156..fdfe8e95a 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxei64.c @@ -6,706 +6,1012 @@ #include <riscv_vector.h> -vfloat16mf4_t test_vloxei64_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4_t test_vloxei64_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxei64_v_f16mf4_tu(vd, rs1, rs2, vl); } -vfloat16mf2_t test_vloxei64_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2_t test_vloxei64_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxei64_v_f16mf2_tu(vd, rs1, rs2, vl); } -vfloat16m1_t test_vloxei64_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1_t test_vloxei64_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxei64_v_f16m1_tu(vd, rs1, rs2, vl); } -vfloat16m2_t test_vloxei64_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2_t test_vloxei64_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_f16m2_tu(vd, rs1, rs2, vl); } -vfloat32mf2_t test_vloxei64_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2_t test_vloxei64_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, + vuint64m1_t rs2,
size_t vl) { return __riscv_vloxei64_v_f32mf2_tu(vd, rs1, rs2, vl); } -vfloat32m1_t test_vloxei64_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1_t test_vloxei64_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxei64_v_f32m1_tu(vd, rs1, rs2, vl); } -vfloat32m2_t test_vloxei64_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2_t test_vloxei64_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxei64_v_f32m2_tu(vd, rs1, rs2, vl); } -vfloat32m4_t test_vloxei64_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4_t test_vloxei64_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_f32m4_tu(vd, rs1, rs2, vl); } -vfloat64m1_t test_vloxei64_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1_t test_vloxei64_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxei64_v_f64m1_tu(vd, rs1, rs2, vl); } -vfloat64m2_t test_vloxei64_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2_t test_vloxei64_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxei64_v_f64m2_tu(vd, rs1, rs2, vl); } -vfloat64m4_t test_vloxei64_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4_t test_vloxei64_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxei64_v_f64m4_tu(vd, rs1, rs2, vl); } -vfloat64m8_t test_vloxei64_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, vuint64m8_t rs2, size_t vl) { +vfloat64m8_t test_vloxei64_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_f64m8_tu(vd, rs1, rs2, vl); } -vint8mf8_t test_vloxei64_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8_t test_vloxei64_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxei64_v_i8mf8_tu(vd, rs1, rs2, vl); } -vint8mf4_t test_vloxei64_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4_t test_vloxei64_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxei64_v_i8mf4_tu(vd, rs1, rs2, vl); } -vint8mf2_t test_vloxei64_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2_t test_vloxei64_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxei64_v_i8mf2_tu(vd, rs1, rs2, vl); } -vint8m1_t test_vloxei64_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1_t test_vloxei64_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_i8m1_tu(vd, rs1, rs2, vl); } -vint16mf4_t test_vloxei64_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4_t test_vloxei64_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxei64_v_i16mf4_tu(vd, rs1, rs2, vl); } -vint16mf2_t test_vloxei64_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2_t test_vloxei64_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxei64_v_i16mf2_tu(vd, rs1, rs2, vl); } -vint16m1_t 
test_vloxei64_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1_t test_vloxei64_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxei64_v_i16m1_tu(vd, rs1, rs2, vl); } -vint16m2_t test_vloxei64_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2_t test_vloxei64_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_i16m2_tu(vd, rs1, rs2, vl); } -vint32mf2_t test_vloxei64_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2_t test_vloxei64_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxei64_v_i32mf2_tu(vd, rs1, rs2, vl); } -vint32m1_t test_vloxei64_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1_t test_vloxei64_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxei64_v_i32m1_tu(vd, rs1, rs2, vl); } -vint32m2_t test_vloxei64_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2_t test_vloxei64_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxei64_v_i32m2_tu(vd, rs1, rs2, vl); } -vint32m4_t test_vloxei64_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) { +vint32m4_t test_vloxei64_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_i32m4_tu(vd, rs1, rs2, vl); } -vint64m1_t test_vloxei64_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1_t test_vloxei64_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxei64_v_i64m1_tu(vd, rs1, rs2, vl); } -vint64m2_t test_vloxei64_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2_t test_vloxei64_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxei64_v_i64m2_tu(vd, rs1, rs2, vl); } -vint64m4_t test_vloxei64_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) { +vint64m4_t test_vloxei64_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxei64_v_i64m4_tu(vd, rs1, rs2, vl); } -vint64m8_t test_vloxei64_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, vuint64m8_t rs2, size_t vl) { +vint64m8_t test_vloxei64_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_i64m8_tu(vd, rs1, rs2, vl); } -vuint8mf8_t test_vloxei64_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8_t test_vloxei64_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxei64_v_u8mf8_tu(vd, rs1, rs2, vl); } -vuint8mf4_t test_vloxei64_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4_t test_vloxei64_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxei64_v_u8mf4_tu(vd, rs1, rs2, vl); } -vuint8mf2_t test_vloxei64_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2_t test_vloxei64_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxei64_v_u8mf2_tu(vd, rs1, rs2, vl); } -vuint8m1_t test_vloxei64_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1_t 
test_vloxei64_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_u8m1_tu(vd, rs1, rs2, vl); } -vuint16mf4_t test_vloxei64_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4_t test_vloxei64_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxei64_v_u16mf4_tu(vd, rs1, rs2, vl); } -vuint16mf2_t test_vloxei64_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2_t test_vloxei64_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxei64_v_u16mf2_tu(vd, rs1, rs2, vl); } -vuint16m1_t test_vloxei64_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1_t test_vloxei64_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxei64_v_u16m1_tu(vd, rs1, rs2, vl); } -vuint16m2_t test_vloxei64_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2_t test_vloxei64_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_u16m2_tu(vd, rs1, rs2, vl); } -vuint32mf2_t test_vloxei64_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2_t test_vloxei64_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxei64_v_u32mf2_tu(vd, rs1, rs2, vl); } -vuint32m1_t test_vloxei64_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1_t test_vloxei64_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxei64_v_u32m1_tu(vd, rs1, rs2, vl); } -vuint32m2_t test_vloxei64_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2_t test_vloxei64_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxei64_v_u32m2_tu(vd, rs1, rs2, vl); } -vuint32m4_t test_vloxei64_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint32m4_t test_vloxei64_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_u32m4_tu(vd, rs1, rs2, vl); } -vuint64m1_t test_vloxei64_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1_t test_vloxei64_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxei64_v_u64m1_tu(vd, rs1, rs2, vl); } -vuint64m2_t test_vloxei64_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2_t test_vloxei64_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxei64_v_u64m2_tu(vd, rs1, rs2, vl); } -vuint64m4_t test_vloxei64_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint64m4_t test_vloxei64_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxei64_v_u64m4_tu(vd, rs1, rs2, vl); } -vuint64m8_t test_vloxei64_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint64m8_t test_vloxei64_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_u64m8_tu(vd, rs1, rs2, vl); } -vfloat16mf4_t test_vloxei64_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4_t 
test_vloxei64_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_f16mf4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vloxei64_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2_t test_vloxei64_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_f16mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vloxei64_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1_t test_vloxei64_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_f16m1_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vloxei64_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2_t test_vloxei64_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_f16m2_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vloxei64_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2_t test_vloxei64_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_f32mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vloxei64_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1_t test_vloxei64_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_f32m1_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vloxei64_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2_t test_vloxei64_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_f32m2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vloxei64_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4_t test_vloxei64_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_f32m4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vloxei64_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1_t test_vloxei64_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_f64m1_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vloxei64_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2_t test_vloxei64_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_f64m2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vloxei64_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4_t test_vloxei64_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_f64m4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vloxei64_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint64m8_t rs2, size_t vl) { +vfloat64m8_t test_vloxei64_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_f64m8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8_t 
test_vloxei64_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8_t test_vloxei64_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_i8mf8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vloxei64_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4_t test_vloxei64_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_i8mf4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vloxei64_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2_t test_vloxei64_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_i8mf2_tum(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vloxei64_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1_t test_vloxei64_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxei64_v_i8m1_tum(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vloxei64_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4_t test_vloxei64_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_i16mf4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vloxei64_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2_t test_vloxei64_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_i16mf2_tum(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vloxei64_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1_t test_vloxei64_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_i16m1_tum(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vloxei64_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2_t test_vloxei64_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_i16m2_tum(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vloxei64_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2_t test_vloxei64_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_i32mf2_tum(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vloxei64_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1_t test_vloxei64_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_i32m1_tum(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vloxei64_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2_t test_vloxei64_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_i32m2_tum(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vloxei64_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) { +vint32m4_t test_vloxei64_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_i32m4_tum(vm, 
vd, rs1, rs2, vl); } -vint64m1_t test_vloxei64_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1_t test_vloxei64_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_i64m1_tum(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vloxei64_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2_t test_vloxei64_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_i64m2_tum(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vloxei64_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) { +vint64m4_t test_vloxei64_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_i64m4_tum(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vloxei64_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint64m8_t rs2, size_t vl) { +vint64m8_t test_vloxei64_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_i64m8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vloxei64_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8_t test_vloxei64_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_u8mf8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vloxei64_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4_t test_vloxei64_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_u8mf4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vloxei64_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2_t test_vloxei64_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_u8mf2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vloxei64_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1_t test_vloxei64_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_u8m1_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vloxei64_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4_t test_vloxei64_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_u16mf4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vloxei64_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2_t test_vloxei64_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_u16mf2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vloxei64_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1_t test_vloxei64_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_u16m1_tum(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vloxei64_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2_t test_vloxei64_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + const uint16_t 
*rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_u16m2_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vloxei64_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2_t test_vloxei64_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_u32mf2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vloxei64_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1_t test_vloxei64_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_u32m1_tum(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vloxei64_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2_t test_vloxei64_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_u32m2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vloxei64_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint32m4_t test_vloxei64_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_u32m4_tum(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vloxei64_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1_t test_vloxei64_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_u64m1_tum(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vloxei64_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2_t test_vloxei64_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_u64m2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vloxei64_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint64m4_t test_vloxei64_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_u64m4_tum(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vloxei64_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint64m8_t test_vloxei64_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_u64m8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vloxei64_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4_t test_vloxei64_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_f16mf4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vloxei64_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2_t test_vloxei64_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_f16mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vloxei64_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1_t test_vloxei64_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_f16m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vloxei64_v_f16m2_tumu(vbool8_t vm, 
vfloat16m2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2_t test_vloxei64_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_f16m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vloxei64_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2_t test_vloxei64_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_f32mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vloxei64_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1_t test_vloxei64_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_f32m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vloxei64_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2_t test_vloxei64_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_f32m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vloxei64_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4_t test_vloxei64_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_f32m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vloxei64_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1_t test_vloxei64_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_f64m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vloxei64_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2_t test_vloxei64_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_f64m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vloxei64_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4_t test_vloxei64_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxei64_v_f64m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vloxei64_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint64m8_t rs2, size_t vl) { +vfloat64m8_t test_vloxei64_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxei64_v_f64m8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vloxei64_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8_t test_vloxei64_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxei64_v_i8mf8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vloxei64_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4_t test_vloxei64_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxei64_v_i8mf4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vloxei64_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2_t test_vloxei64_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { 
  return __riscv_vloxei64_v_i8mf2_tumu(vm, vd, rs1, rs2, vl);
}
-vint8m1_t test_vloxei64_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint8m1_t test_vloxei64_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd,
+ const int8_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i8m1_tumu(vm, vd, rs1, rs2, vl);
}
-vint16mf4_t test_vloxei64_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint16mf4_t test_vloxei64_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd,
+ const int16_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i16mf4_tumu(vm, vd, rs1, rs2, vl);
}
-vint16mf2_t test_vloxei64_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint16mf2_t test_vloxei64_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd,
+ const int16_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i16mf2_tumu(vm, vd, rs1, rs2, vl);
}
-vint16m1_t test_vloxei64_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint16m1_t test_vloxei64_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd,
+ const int16_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i16m1_tumu(vm, vd, rs1, rs2, vl);
}
-vint16m2_t test_vloxei64_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint16m2_t test_vloxei64_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd,
+ const int16_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i16m2_tumu(vm, vd, rs1, rs2, vl);
}
-vint32mf2_t test_vloxei64_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint32mf2_t test_vloxei64_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd,
+ const int32_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i32mf2_tumu(vm, vd, rs1, rs2, vl);
}
-vint32m1_t test_vloxei64_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint32m1_t test_vloxei64_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd,
+ const int32_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i32m1_tumu(vm, vd, rs1, rs2, vl);
}
-vint32m2_t test_vloxei64_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint32m2_t test_vloxei64_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd,
+ const int32_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i32m2_tumu(vm, vd, rs1, rs2, vl);
}
-vint32m4_t test_vloxei64_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint32m4_t test_vloxei64_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd,
+ const int32_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i32m4_tumu(vm, vd, rs1, rs2, vl);
}
-vint64m1_t test_vloxei64_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint64m1_t test_vloxei64_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd,
+ const int64_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i64m1_tumu(vm, vd, rs1, rs2, vl);
}
-vint64m2_t test_vloxei64_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint64m2_t test_vloxei64_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd,
+ const int64_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i64m2_tumu(vm, vd, rs1, rs2, vl);
}
-vint64m4_t test_vloxei64_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint64m4_t test_vloxei64_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd,
+ const int64_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i64m4_tumu(vm, vd, rs1, rs2, vl);
}
-vint64m8_t test_vloxei64_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint64m8_t test_vloxei64_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd,
+ const int64_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i64m8_tumu(vm, vd, rs1, rs2, vl);
}
-vuint8mf8_t test_vloxei64_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint8mf8_t test_vloxei64_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+ const uint8_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u8mf8_tumu(vm, vd, rs1, rs2, vl);
}
-vuint8mf4_t test_vloxei64_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint8mf4_t test_vloxei64_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+ const uint8_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u8mf4_tumu(vm, vd, rs1, rs2, vl);
}
-vuint8mf2_t test_vloxei64_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint8mf2_t test_vloxei64_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+ const uint8_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u8mf2_tumu(vm, vd, rs1, rs2, vl);
}
-vuint8m1_t test_vloxei64_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1_t test_vloxei64_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd,
+ const uint8_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u8m1_tumu(vm, vd, rs1, rs2, vl);
}
-vuint16mf4_t test_vloxei64_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4_t test_vloxei64_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+ const uint16_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u16mf4_tumu(vm, vd, rs1, rs2, vl);
}
-vuint16mf2_t test_vloxei64_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2_t test_vloxei64_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+ const uint16_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u16mf2_tumu(vm, vd, rs1, rs2, vl);
}
-vuint16m1_t test_vloxei64_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1_t test_vloxei64_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+ const uint16_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u16m1_tumu(vm, vd, rs1, rs2, vl);
}
-vuint16m2_t test_vloxei64_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint16m2_t test_vloxei64_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+ const uint16_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u16m2_tumu(vm, vd, rs1, rs2, vl);
}
-vuint32mf2_t test_vloxei64_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2_t test_vloxei64_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+ const uint32_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u32mf2_tumu(vm, vd, rs1, rs2, vl);
}
-vuint32m1_t test_vloxei64_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1_t test_vloxei64_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+ const uint32_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u32m1_tumu(vm, vd, rs1, rs2, vl);
}
-vuint32m2_t test_vloxei64_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint32m2_t test_vloxei64_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+ const uint32_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u32m2_tumu(vm, vd, rs1, rs2, vl);
}
-vuint32m4_t test_vloxei64_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint32m4_t test_vloxei64_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+ const uint32_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u32m4_tumu(vm, vd, rs1, rs2, vl);
}
-vuint64m1_t test_vloxei64_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1_t test_vloxei64_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+ const uint64_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u64m1_tumu(vm, vd, rs1, rs2, vl);
}
-vuint64m2_t test_vloxei64_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint64m2_t test_vloxei64_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+ const uint64_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u64m2_tumu(vm, vd, rs1, rs2, vl);
}
-vuint64m4_t test_vloxei64_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint64m4_t test_vloxei64_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+ const uint64_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u64m4_tumu(vm, vd, rs1, rs2, vl);
}
-vuint64m8_t test_vloxei64_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint64m8_t test_vloxei64_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+ const uint64_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u64m8_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat16mf4_t test_vloxei64_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4_t test_vloxei64_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd,
+ const _Float16 *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f16mf4_mu(vm, vd, rs1, rs2, vl);
}
-vfloat16mf2_t test_vloxei64_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2_t test_vloxei64_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd,
+ const _Float16 *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f16mf2_mu(vm, vd, rs1, rs2, vl);
}
-vfloat16m1_t test_vloxei64_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1_t test_vloxei64_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd,
+ const _Float16 *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f16m1_mu(vm, vd, rs1, rs2, vl);
}
-vfloat16m2_t test_vloxei64_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) {
+vfloat16m2_t test_vloxei64_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd,
+ const _Float16 *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f16m2_mu(vm, vd, rs1, rs2, vl);
}
-vfloat32mf2_t test_vloxei64_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2_t test_vloxei64_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd,
+ const float *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f32mf2_mu(vm, vd, rs1, rs2, vl);
}
-vfloat32m1_t test_vloxei64_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1_t test_vloxei64_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd,
+ const float *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f32m1_mu(vm, vd, rs1, rs2, vl);
}
-vfloat32m2_t test_vloxei64_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat32m2_t test_vloxei64_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd,
+ const float *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f32m2_mu(vm, vd, rs1, rs2, vl);
}
-vfloat32m4_t test_vloxei64_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) {
+vfloat32m4_t test_vloxei64_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd,
+ const float *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f32m4_mu(vm, vd, rs1, rs2, vl);
}
-vfloat64m1_t test_vloxei64_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1_t test_vloxei64_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd,
+ const double *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f64m1_mu(vm, vd, rs1, rs2, vl);
}
-vfloat64m2_t test_vloxei64_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat64m2_t test_vloxei64_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd,
+ const double *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f64m2_mu(vm, vd, rs1, rs2, vl);
}
-vfloat64m4_t test_vloxei64_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat64m4_t test_vloxei64_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd,
+ const double *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f64m4_mu(vm, vd, rs1, rs2, vl);
}
-vfloat64m8_t test_vloxei64_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint64m8_t rs2, size_t vl) {
+vfloat64m8_t test_vloxei64_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+ const double *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_f64m8_mu(vm, vd, rs1, rs2, vl);
}
-vint8mf8_t test_vloxei64_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint8mf8_t test_vloxei64_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd,
+ const int8_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i8mf8_mu(vm, vd, rs1, rs2, vl);
}
-vint8mf4_t test_vloxei64_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint8mf4_t test_vloxei64_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd,
+ const int8_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i8mf4_mu(vm, vd, rs1, rs2, vl);
}
-vint8mf2_t test_vloxei64_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint8mf2_t test_vloxei64_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd,
+ const int8_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i8mf2_mu(vm, vd, rs1, rs2, vl);
}
-vint8m1_t test_vloxei64_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint8m1_t test_vloxei64_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1,
+ vuint64m8_t rs2, size_t vl) {
  return __riscv_vloxei64_v_i8m1_mu(vm, vd, rs1, rs2, vl);
}
-vint16mf4_t test_vloxei64_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint16mf4_t test_vloxei64_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd,
+ const int16_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i16mf4_mu(vm, vd, rs1, rs2, vl);
}
-vint16mf2_t test_vloxei64_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint16mf2_t test_vloxei64_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd,
+ const int16_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i16mf2_mu(vm, vd, rs1, rs2, vl);
}
-vint16m1_t test_vloxei64_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint16m1_t test_vloxei64_v_i16m1_mu(vbool16_t vm, vint16m1_t vd,
+ const int16_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i16m1_mu(vm, vd, rs1, rs2, vl);
}
-vint16m2_t test_vloxei64_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint16m2_t test_vloxei64_v_i16m2_mu(vbool8_t vm, vint16m2_t vd,
+ const int16_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i16m2_mu(vm, vd, rs1, rs2, vl);
}
-vint32mf2_t test_vloxei64_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint32mf2_t test_vloxei64_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd,
+ const int32_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i32mf2_mu(vm, vd, rs1, rs2, vl);
}
-vint32m1_t test_vloxei64_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint32m1_t test_vloxei64_v_i32m1_mu(vbool32_t vm, vint32m1_t vd,
+ const int32_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i32m1_mu(vm, vd, rs1, rs2, vl);
}
-vint32m2_t test_vloxei64_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint32m2_t test_vloxei64_v_i32m2_mu(vbool16_t vm, vint32m2_t vd,
+ const int32_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i32m2_mu(vm, vd, rs1, rs2, vl);
}
-vint32m4_t test_vloxei64_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint32m4_t test_vloxei64_v_i32m4_mu(vbool8_t vm, vint32m4_t vd,
+ const int32_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i32m4_mu(vm, vd, rs1, rs2, vl);
}
-vint64m1_t test_vloxei64_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint64m1_t test_vloxei64_v_i64m1_mu(vbool64_t vm, vint64m1_t vd,
+ const int64_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i64m1_mu(vm, vd, rs1, rs2, vl);
}
-vint64m2_t test_vloxei64_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint64m2_t test_vloxei64_v_i64m2_mu(vbool32_t vm, vint64m2_t vd,
+ const int64_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i64m2_mu(vm, vd, rs1, rs2, vl);
}
-vint64m4_t test_vloxei64_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint64m4_t test_vloxei64_v_i64m4_mu(vbool16_t vm, vint64m4_t vd,
+ const int64_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i64m4_mu(vm, vd, rs1, rs2, vl);
}
-vint64m8_t test_vloxei64_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint64m8_t test_vloxei64_v_i64m8_mu(vbool8_t vm, vint64m8_t vd,
+ const int64_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_i64m8_mu(vm, vd, rs1, rs2, vl);
}
-vuint8mf8_t test_vloxei64_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint8mf8_t test_vloxei64_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd,
+ const uint8_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u8mf8_mu(vm, vd, rs1, rs2, vl);
}
-vuint8mf4_t test_vloxei64_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint8mf4_t test_vloxei64_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd,
+ const uint8_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u8mf4_mu(vm, vd, rs1, rs2, vl);
}
-vuint8mf2_t test_vloxei64_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint8mf2_t test_vloxei64_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd,
+ const uint8_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u8mf2_mu(vm, vd, rs1, rs2, vl);
}
-vuint8m1_t test_vloxei64_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1_t test_vloxei64_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd,
+ const uint8_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u8m1_mu(vm, vd, rs1, rs2, vl);
}
-vuint16mf4_t test_vloxei64_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4_t test_vloxei64_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+ const uint16_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u16mf4_mu(vm, vd, rs1, rs2, vl);
}
-vuint16mf2_t test_vloxei64_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2_t test_vloxei64_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+ const uint16_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u16mf2_mu(vm, vd, rs1, rs2, vl);
}
-vuint16m1_t test_vloxei64_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1_t test_vloxei64_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd,
+ const uint16_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u16m1_mu(vm, vd, rs1, rs2, vl);
}
-vuint16m2_t test_vloxei64_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint16m2_t test_vloxei64_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd,
+ const uint16_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u16m2_mu(vm, vd, rs1, rs2, vl);
}
-vuint32mf2_t test_vloxei64_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2_t test_vloxei64_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+ const uint32_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u32mf2_mu(vm, vd, rs1, rs2, vl);
}
-vuint32m1_t test_vloxei64_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1_t test_vloxei64_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+ const uint32_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u32m1_mu(vm, vd, rs1, rs2, vl);
}
-vuint32m2_t test_vloxei64_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint32m2_t test_vloxei64_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+ const uint32_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u32m2_mu(vm, vd, rs1, rs2, vl);
}
-vuint32m4_t test_vloxei64_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint32m4_t test_vloxei64_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+ const uint32_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u32m4_mu(vm, vd, rs1, rs2, vl);
}
-vuint64m1_t test_vloxei64_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1_t test_vloxei64_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+ const uint64_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u64m1_mu(vm, vd, rs1, rs2, vl);
}
-vuint64m2_t test_vloxei64_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint64m2_t test_vloxei64_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+ const uint64_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u64m2_mu(vm, vd, rs1, rs2, vl);
}
-vuint64m4_t test_vloxei64_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint64m4_t test_vloxei64_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+ const uint64_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u64m4_mu(vm, vd, rs1, rs2, vl);
}
-vuint64m8_t test_vloxei64_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint64m8_t test_vloxei64_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+ const uint64_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei64_v_u64m8_mu(vm, vd, rs1, rs2, vl);
}
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxei8.c
index 1af6991d2..229f3c1fe 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxei8.c
@@ -6,946 +6,1347 @@

#include <riscv_vector.h>

-vfloat16mf4_t test_vloxei8_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4_t test_vloxei8_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1,
+ vuint8mf8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f16mf4_tu(vd, rs1, rs2, vl);
}
-vfloat16mf2_t test_vloxei8_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2_t test_vloxei8_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1,
+ vuint8mf4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f16mf2_tu(vd, rs1, rs2, vl);
}
-vfloat16m1_t test_vloxei8_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1_t test_vloxei8_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1,
+ vuint8mf2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f16m1_tu(vd, rs1, rs2, vl);
}
-vfloat16m2_t test_vloxei8_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat16m2_t test_vloxei8_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f16m2_tu(vd, rs1, rs2, vl);
}
-vfloat16m4_t test_vloxei8_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) {
+vfloat16m4_t test_vloxei8_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1,
+ vuint8m2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f16m4_tu(vd, rs1, rs2, vl);
}
-vfloat16m8_t test_vloxei8_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, vuint8m4_t rs2, size_t vl) {
+vfloat16m8_t test_vloxei8_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1,
+ vuint8m4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f16m8_tu(vd, rs1, rs2, vl);
}
-vfloat32mf2_t test_vloxei8_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2_t test_vloxei8_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1,
+ vuint8mf8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f32mf2_tu(vd, rs1, rs2, vl);
}
-vfloat32m1_t test_vloxei8_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1_t test_vloxei8_v_f32m1_tu(vfloat32m1_t vd, const float *rs1,
+ vuint8mf4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f32m1_tu(vd, rs1, rs2, vl);
}
-vfloat32m2_t test_vloxei8_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat32m2_t test_vloxei8_v_f32m2_tu(vfloat32m2_t vd, const float *rs1,
+ vuint8mf2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f32m2_tu(vd, rs1, rs2, vl);
}
-vfloat32m4_t test_vloxei8_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat32m4_t test_vloxei8_v_f32m4_tu(vfloat32m4_t vd, const float *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f32m4_tu(vd, rs1, rs2, vl);
}
-vfloat32m8_t test_vloxei8_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, vuint8m2_t rs2, size_t vl) {
+vfloat32m8_t test_vloxei8_v_f32m8_tu(vfloat32m8_t vd, const float *rs1,
+ vuint8m2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f32m8_tu(vd, rs1, rs2, vl);
}
-vfloat64m1_t test_vloxei8_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1_t test_vloxei8_v_f64m1_tu(vfloat64m1_t vd, const double *rs1,
+ vuint8mf8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f64m1_tu(vd, rs1, rs2, vl);
}
-vfloat64m2_t test_vloxei8_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat64m2_t test_vloxei8_v_f64m2_tu(vfloat64m2_t vd, const double *rs1,
+ vuint8mf4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f64m2_tu(vd, rs1, rs2, vl);
}
-vfloat64m4_t test_vloxei8_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat64m4_t test_vloxei8_v_f64m4_tu(vfloat64m4_t vd, const double *rs1,
+ vuint8mf2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f64m4_tu(vd, rs1, rs2, vl);
}
-vfloat64m8_t test_vloxei8_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat64m8_t test_vloxei8_v_f64m8_tu(vfloat64m8_t vd, const double *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_f64m8_tu(vd, rs1, rs2, vl);
}
-vint8mf8_t test_vloxei8_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8_t test_vloxei8_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8mf8_tu(vd, rs1, rs2, vl);
}
-vint8mf4_t test_vloxei8_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4_t test_vloxei8_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8mf4_tu(vd, rs1, rs2, vl);
}
-vint8mf2_t test_vloxei8_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2_t test_vloxei8_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8mf2_tu(vd, rs1, rs2, vl);
}
-vint8m1_t test_vloxei8_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1_t test_vloxei8_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m1_tu(vd, rs1, rs2, vl);
}
-vint8m2_t test_vloxei8_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint8m2_t test_vloxei8_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1,
+ vuint8m2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m2_tu(vd, rs1, rs2, vl);
}
-vint8m4_t test_vloxei8_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) {
+vint8m4_t test_vloxei8_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1,
+ vuint8m4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m4_tu(vd, rs1, rs2, vl);
}
-vint8m8_t test_vloxei8_v_i8m8_tu(vint8m8_t vd, const int8_t *rs1, vuint8m8_t rs2, size_t vl) {
+vint8m8_t test_vloxei8_v_i8m8_tu(vint8m8_t vd, const int8_t *rs1,
+ vuint8m8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m8_tu(vd, rs1, rs2, vl);
}
-vint16mf4_t test_vloxei8_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4_t test_vloxei8_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i16mf4_tu(vd, rs1, rs2, vl);
}
-vint16mf2_t test_vloxei8_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2_t test_vloxei8_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i16mf2_tu(vd, rs1, rs2, vl);
}
-vint16m1_t test_vloxei8_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1_t test_vloxei8_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i16m1_tu(vd, rs1, rs2, vl);
}
-vint16m2_t test_vloxei8_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint16m2_t test_vloxei8_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i16m2_tu(vd, rs1, rs2, vl);
}
-vint16m4_t test_vloxei8_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint16m4_t test_vloxei8_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1,
+ vuint8m2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i16m4_tu(vd, rs1, rs2, vl);
}
-vint16m8_t test_vloxei8_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, vuint8m4_t rs2, size_t vl) {
+vint16m8_t test_vloxei8_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1,
+ vuint8m4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i16m8_tu(vd, rs1, rs2, vl);
}
-vint32mf2_t test_vloxei8_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2_t test_vloxei8_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i32mf2_tu(vd, rs1, rs2, vl);
}
-vint32m1_t test_vloxei8_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1_t test_vloxei8_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i32m1_tu(vd, rs1, rs2, vl);
}
-vint32m2_t test_vloxei8_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint32m2_t test_vloxei8_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i32m2_tu(vd, rs1, rs2, vl);
}
-vint32m4_t test_vloxei8_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint32m4_t test_vloxei8_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i32m4_tu(vd, rs1, rs2, vl);
}
-vint32m8_t test_vloxei8_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint32m8_t test_vloxei8_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1,
+ vuint8m2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i32m8_tu(vd, rs1, rs2, vl);
}
-vint64m1_t test_vloxei8_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1_t test_vloxei8_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i64m1_tu(vd, rs1, rs2, vl);
}
-vint64m2_t test_vloxei8_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint64m2_t test_vloxei8_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i64m2_tu(vd, rs1, rs2, vl);
}
-vint64m4_t test_vloxei8_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint64m4_t test_vloxei8_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i64m4_tu(vd, rs1, rs2, vl);
}
-vint64m8_t test_vloxei8_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint64m8_t test_vloxei8_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i64m8_tu(vd, rs1, rs2, vl);
}
-vuint8mf8_t test_vloxei8_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8_t test_vloxei8_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u8mf8_tu(vd, rs1, rs2, vl);
}
-vuint8mf4_t test_vloxei8_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4_t test_vloxei8_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u8mf4_tu(vd, rs1, rs2, vl);
}
-vuint8mf2_t test_vloxei8_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2_t test_vloxei8_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u8mf2_tu(vd, rs1, rs2, vl);
}
-vuint8m1_t test_vloxei8_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1_t test_vloxei8_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u8m1_tu(vd, rs1, rs2, vl);
}
-vuint8m2_t test_vloxei8_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) {
+vuint8m2_t test_vloxei8_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1,
+ vuint8m2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u8m2_tu(vd, rs1, rs2, vl);
}
-vuint8m4_t test_vloxei8_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) {
+vuint8m4_t test_vloxei8_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1,
+ vuint8m4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u8m4_tu(vd, rs1, rs2, vl);
}
-vuint8m8_t test_vloxei8_v_u8m8_tu(vuint8m8_t vd, const uint8_t *rs1, vuint8m8_t rs2, size_t vl) {
+vuint8m8_t test_vloxei8_v_u8m8_tu(vuint8m8_t vd, const uint8_t *rs1,
+ vuint8m8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u8m8_tu(vd, rs1, rs2, vl);
}
-vuint16mf4_t test_vloxei8_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4_t test_vloxei8_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u16mf4_tu(vd, rs1, rs2, vl);
}
-vuint16mf2_t test_vloxei8_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2_t test_vloxei8_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u16mf2_tu(vd, rs1, rs2, vl);
}
-vuint16m1_t test_vloxei8_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1_t test_vloxei8_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u16m1_tu(vd, rs1, rs2, vl);
}
-vuint16m2_t test_vloxei8_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint16m2_t test_vloxei8_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u16m2_tu(vd, rs1, rs2, vl);
}
-vuint16m4_t test_vloxei8_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) {
+vuint16m4_t test_vloxei8_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1,
+ vuint8m2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u16m4_tu(vd, rs1, rs2, vl);
}
-vuint16m8_t test_vloxei8_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, vuint8m4_t rs2, size_t vl) {
+vuint16m8_t test_vloxei8_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1,
+ vuint8m4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u16m8_tu(vd, rs1, rs2, vl);
}
-vuint32mf2_t test_vloxei8_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2_t test_vloxei8_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u32mf2_tu(vd, rs1, rs2, vl);
}
-vuint32m1_t test_vloxei8_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1_t test_vloxei8_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u32m1_tu(vd, rs1, rs2, vl);
}
-vuint32m2_t test_vloxei8_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint32m2_t test_vloxei8_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u32m2_tu(vd, rs1, rs2, vl);
}
-vuint32m4_t test_vloxei8_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint32m4_t test_vloxei8_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u32m4_tu(vd, rs1, rs2, vl);
}
-vuint32m8_t test_vloxei8_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, vuint8m2_t rs2, size_t vl) {
+vuint32m8_t test_vloxei8_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1,
+ vuint8m2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u32m8_tu(vd, rs1, rs2, vl);
}
-vuint64m1_t test_vloxei8_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1_t test_vloxei8_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u64m1_tu(vd, rs1, rs2, vl);
}
-vuint64m2_t test_vloxei8_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint64m2_t test_vloxei8_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u64m2_tu(vd, rs1, rs2, vl);
}
-vuint64m4_t test_vloxei8_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint64m4_t test_vloxei8_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u64m4_tu(vd, rs1, rs2, vl);
}
-vuint64m8_t test_vloxei8_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint64m8_t test_vloxei8_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_u64m8_tu(vd, rs1, rs2, vl);
}
-vfloat16mf4_t test_vloxei8_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4_t test_vloxei8_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd,
+ const _Float16 *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16mf4_tum(vm, vd, rs1, rs2, vl);
}
-vfloat16mf2_t test_vloxei8_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2_t test_vloxei8_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd,
+ const _Float16 *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16mf2_tum(vm, vd, rs1, rs2, vl);
}
-vfloat16m1_t test_vloxei8_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1_t test_vloxei8_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd,
+ const _Float16 *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16m1_tum(vm, vd, rs1, rs2, vl);
}
-vfloat16m2_t test_vloxei8_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat16m2_t test_vloxei8_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd,
+ const _Float16 *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16m2_tum(vm, vd, rs1, rs2, vl);
}
-vfloat16m4_t test_vloxei8_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) {
+vfloat16m4_t test_vloxei8_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd,
+ const _Float16 *rs1, vuint8m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16m4_tum(vm, vd, rs1, rs2, vl);
}
-vfloat16m8_t test_vloxei8_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint8m4_t rs2, size_t vl) {
+vfloat16m8_t test_vloxei8_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd,
+ const _Float16 *rs1, vuint8m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16m8_tum(vm, vd, rs1, rs2, vl);
}
-vfloat32mf2_t test_vloxei8_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2_t test_vloxei8_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd,
+ const float *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f32mf2_tum(vm, vd, rs1, rs2, vl);
}
-vfloat32m1_t test_vloxei8_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1_t test_vloxei8_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+ const float *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f32m1_tum(vm, vd, rs1, rs2, vl);
}
-vfloat32m2_t test_vloxei8_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat32m2_t test_vloxei8_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd,
+ const float *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f32m2_tum(vm, vd, rs1, rs2, vl);
}
-vfloat32m4_t test_vloxei8_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat32m4_t test_vloxei8_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd,
+ const float *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f32m4_tum(vm, vd, rs1, rs2, vl);
}
-vfloat32m8_t test_vloxei8_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint8m2_t rs2, size_t vl) {
+vfloat32m8_t test_vloxei8_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd,
+ const float *rs1, vuint8m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f32m8_tum(vm, vd, rs1, rs2, vl);
}
-vfloat64m1_t test_vloxei8_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1_t test_vloxei8_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+ const double *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f64m1_tum(vm, vd, rs1, rs2, vl);
}
-vfloat64m2_t test_vloxei8_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat64m2_t test_vloxei8_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd,
+ const double *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f64m2_tum(vm, vd, rs1, rs2, vl);
}
-vfloat64m4_t test_vloxei8_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat64m4_t test_vloxei8_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd,
+ const double *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f64m4_tum(vm, vd, rs1, rs2, vl);
}
-vfloat64m8_t test_vloxei8_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat64m8_t test_vloxei8_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd,
+ const double *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f64m8_tum(vm, vd, rs1, rs2, vl);
}
-vint8mf8_t test_vloxei8_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8_t test_vloxei8_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd,
+ const int8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i8mf8_tum(vm, vd, rs1, rs2, vl);
}
-vint8mf4_t test_vloxei8_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4_t test_vloxei8_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd,
+ const int8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i8mf4_tum(vm, vd, rs1, rs2, vl);
}
-vint8mf2_t test_vloxei8_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2_t test_vloxei8_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd,
+ const int8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i8mf2_tum(vm, vd, rs1, rs2, vl);
}
-vint8m1_t test_vloxei8_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1_t test_vloxei8_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m1_tum(vm, vd, rs1, rs2, vl);
}
-vint8m2_t test_vloxei8_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint8m2_t test_vloxei8_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1,
+ vuint8m2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m2_tum(vm, vd, rs1, rs2, vl);
}
-vint8m4_t test_vloxei8_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) {
+vint8m4_t test_vloxei8_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1,
+ vuint8m4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m4_tum(vm, vd, rs1, rs2, vl);
}
-vint8m8_t test_vloxei8_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, vuint8m8_t rs2, size_t vl) {
+vint8m8_t test_vloxei8_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, const int8_t *rs1,
+ vuint8m8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m8_tum(vm, vd, rs1, rs2, vl);
}
-vint16mf4_t test_vloxei8_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4_t test_vloxei8_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd,
+ const int16_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16mf4_tum(vm, vd, rs1, rs2, vl);
}
-vint16mf2_t test_vloxei8_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2_t test_vloxei8_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd,
+ const int16_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16mf2_tum(vm, vd, rs1, rs2, vl);
}
-vint16m1_t test_vloxei8_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1_t test_vloxei8_v_i16m1_tum(vbool16_t vm, vint16m1_t vd,
+ const int16_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16m1_tum(vm, vd, rs1, rs2, vl);
}
-vint16m2_t test_vloxei8_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint16m2_t test_vloxei8_v_i16m2_tum(vbool8_t vm, vint16m2_t vd,
+ const int16_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16m2_tum(vm, vd, rs1, rs2, vl);
}
-vint16m4_t test_vloxei8_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint16m4_t test_vloxei8_v_i16m4_tum(vbool4_t vm, vint16m4_t vd,
+ const int16_t *rs1, vuint8m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16m4_tum(vm, vd, rs1, rs2, vl);
}
-vint16m8_t test_vloxei8_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint8m4_t rs2, size_t vl) {
+vint16m8_t test_vloxei8_v_i16m8_tum(vbool2_t vm, vint16m8_t vd,
+ const int16_t *rs1, vuint8m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16m8_tum(vm, vd, rs1, rs2, vl);
}
-vint32mf2_t test_vloxei8_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2_t test_vloxei8_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd,
+ const int32_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i32mf2_tum(vm, vd, rs1, rs2, vl);
}
-vint32m1_t test_vloxei8_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1_t test_vloxei8_v_i32m1_tum(vbool32_t vm, vint32m1_t vd,
+ const int32_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i32m1_tum(vm, vd, rs1, rs2, vl);
}
-vint32m2_t test_vloxei8_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint32m2_t test_vloxei8_v_i32m2_tum(vbool16_t vm, vint32m2_t vd,
+ const int32_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i32m2_tum(vm, vd, rs1, rs2, vl);
}
-vint32m4_t test_vloxei8_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint32m4_t test_vloxei8_v_i32m4_tum(vbool8_t vm, vint32m4_t vd,
+ const int32_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i32m4_tum(vm, vd, rs1, rs2, vl);
}
-vint32m8_t test_vloxei8_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint32m8_t test_vloxei8_v_i32m8_tum(vbool4_t vm, vint32m8_t vd,
+ const int32_t *rs1, vuint8m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i32m8_tum(vm, vd, rs1, rs2, vl);
}
-vint64m1_t test_vloxei8_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1_t test_vloxei8_v_i64m1_tum(vbool64_t vm, vint64m1_t vd,
+ const int64_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i64m1_tum(vm, vd, rs1, rs2, vl);
}
-vint64m2_t test_vloxei8_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint64m2_t test_vloxei8_v_i64m2_tum(vbool32_t vm, vint64m2_t vd,
+ const int64_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i64m2_tum(vm, vd, rs1, rs2, vl);
}
-vint64m4_t test_vloxei8_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint64m4_t test_vloxei8_v_i64m4_tum(vbool16_t vm, vint64m4_t vd,
+ const int64_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i64m4_tum(vm, vd, rs1, rs2, vl);
}
-vint64m8_t test_vloxei8_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint64m8_t test_vloxei8_v_i64m8_tum(vbool8_t vm, vint64m8_t vd,
+ const int64_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i64m8_tum(vm, vd, rs1, rs2, vl);
}
-vuint8mf8_t test_vloxei8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8_t test_vloxei8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+ const uint8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u8mf8_tum(vm, vd, rs1, rs2, vl);
}
-vuint8mf4_t test_vloxei8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4_t test_vloxei8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+ const uint8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u8mf4_tum(vm, vd, rs1, rs2, vl);
}
-vuint8mf2_t test_vloxei8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2_t test_vloxei8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+ const uint8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u8mf2_tum(vm, vd, rs1, rs2, vl);
}
-vuint8m1_t test_vloxei8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1_t test_vloxei8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd,
+ const uint8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u8m1_tum(vm, vd, rs1, rs2, vl);
}
-vuint8m2_t test_vloxei8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) {
+vuint8m2_t test_vloxei8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd,
+ const uint8_t *rs1, vuint8m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u8m2_tum(vm, vd, rs1, rs2, vl);
}
-vuint8m4_t test_vloxei8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) {
+vuint8m4_t test_vloxei8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd,
+ const uint8_t *rs1, vuint8m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u8m4_tum(vm, vd, rs1, rs2, vl);
}
-vuint8m8_t test_vloxei8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, vuint8m8_t rs2, size_t vl) {
+vuint8m8_t test_vloxei8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd,
+ const uint8_t *rs1, vuint8m8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u8m8_tum(vm, vd, rs1, rs2, vl);
}
-vuint16mf4_t test_vloxei8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4_t test_vloxei8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+ const uint16_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u16mf4_tum(vm, vd, rs1, rs2, vl);
}
-vuint16mf2_t test_vloxei8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2_t test_vloxei8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+ const uint16_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u16mf2_tum(vm, vd, rs1, rs2, vl);
}
-vuint16m1_t test_vloxei8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1_t test_vloxei8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+ const uint16_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u16m1_tum(vm, vd, rs1, rs2, vl);
}
-vuint16m2_t test_vloxei8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint16m2_t test_vloxei8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd,
+ const uint16_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u16m2_tum(vm, vd, rs1, rs2, vl);
}
-vuint16m4_t test_vloxei8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) {
+vuint16m4_t test_vloxei8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd,
+ const uint16_t *rs1, vuint8m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u16m4_tum(vm, vd, rs1, rs2, vl);
}
-vuint16m8_t test_vloxei8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint8m4_t rs2, size_t vl) {
+vuint16m8_t test_vloxei8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd,
+ const uint16_t *rs1, vuint8m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u16m8_tum(vm, vd, rs1, rs2, vl);
}
-vuint32mf2_t test_vloxei8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2_t test_vloxei8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+ const uint32_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u32mf2_tum(vm, vd, rs1, rs2, vl);
}
-vuint32m1_t test_vloxei8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1_t test_vloxei8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+ const uint32_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u32m1_tum(vm, vd, rs1, rs2, vl);
}
-vuint32m2_t test_vloxei8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint32m2_t test_vloxei8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+ const uint32_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u32m2_tum(vm, vd, rs1, rs2, vl);
}
-vuint32m4_t test_vloxei8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint32m4_t test_vloxei8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd,
+ const uint32_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u32m4_tum(vm, vd, rs1, rs2, vl);
}
-vuint32m8_t test_vloxei8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint8m2_t rs2, size_t vl) {
+vuint32m8_t test_vloxei8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd,
+ const uint32_t *rs1, vuint8m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u32m8_tum(vm, vd, rs1, rs2, vl);
}
-vuint64m1_t test_vloxei8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1_t test_vloxei8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+ const uint64_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u64m1_tum(vm, vd, rs1, rs2, vl);
}
-vuint64m2_t test_vloxei8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint64m2_t test_vloxei8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+ const uint64_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u64m2_tum(vm, vd, rs1, rs2, vl);
}
-vuint64m4_t test_vloxei8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint64m4_t test_vloxei8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+ const uint64_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u64m4_tum(vm, vd, rs1, rs2, vl);
}
-vuint64m8_t test_vloxei8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint64m8_t test_vloxei8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+ const uint64_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u64m8_tum(vm, vd, rs1, rs2, vl);
}
-vfloat16mf4_t test_vloxei8_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4_t test_vloxei8_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd,
+ const _Float16 *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16mf4_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat16mf2_t test_vloxei8_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2_t test_vloxei8_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd,
+ const _Float16 *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16mf2_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat16m1_t test_vloxei8_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1_t test_vloxei8_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd,
+ const _Float16 *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16m1_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat16m2_t test_vloxei8_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat16m2_t test_vloxei8_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd,
+ const _Float16 *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16m2_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat16m4_t test_vloxei8_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) {
+vfloat16m4_t test_vloxei8_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd,
+ const _Float16 *rs1, vuint8m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16m4_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat16m8_t test_vloxei8_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint8m4_t rs2, size_t vl) {
+vfloat16m8_t test_vloxei8_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd,
+ const _Float16 *rs1, vuint8m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f16m8_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat32mf2_t test_vloxei8_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2_t test_vloxei8_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd,
+ const float *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f32mf2_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat32m1_t test_vloxei8_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1_t test_vloxei8_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd,
+ const float *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f32m1_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat32m2_t test_vloxei8_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat32m2_t test_vloxei8_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd,
+ const float *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f32m2_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat32m4_t test_vloxei8_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat32m4_t test_vloxei8_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd,
+ const float *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f32m4_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat32m8_t test_vloxei8_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint8m2_t rs2, size_t vl) {
+vfloat32m8_t test_vloxei8_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd,
+ const float *rs1, vuint8m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f32m8_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat64m1_t test_vloxei8_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1_t test_vloxei8_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd,
+ const double *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f64m1_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat64m2_t test_vloxei8_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat64m2_t test_vloxei8_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd,
+ const double *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f64m2_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat64m4_t test_vloxei8_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat64m4_t test_vloxei8_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd,
+ const double *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f64m4_tumu(vm, vd, rs1, rs2, vl);
}
-vfloat64m8_t test_vloxei8_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat64m8_t test_vloxei8_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd,
+ const double *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_f64m8_tumu(vm, vd, rs1, rs2, vl);
}
-vint8mf8_t test_vloxei8_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8_t test_vloxei8_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd,
+ const int8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i8mf8_tumu(vm, vd, rs1, rs2, vl);
}
-vint8mf4_t test_vloxei8_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4_t test_vloxei8_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd,
+ const int8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i8mf4_tumu(vm, vd, rs1, rs2, vl);
}
-vint8mf2_t test_vloxei8_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2_t test_vloxei8_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd,
+ const int8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i8mf2_tumu(vm, vd, rs1, rs2, vl);
}
-vint8m1_t test_vloxei8_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1_t test_vloxei8_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m1_tumu(vm, vd, rs1, rs2, vl);
}
-vint8m2_t test_vloxei8_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint8m2_t test_vloxei8_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1,
+ vuint8m2_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m2_tumu(vm, vd, rs1, rs2, vl);
}
-vint8m4_t test_vloxei8_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) {
+vint8m4_t test_vloxei8_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1,
+ vuint8m4_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m4_tumu(vm, vd, rs1, rs2, vl);
}
-vint8m8_t test_vloxei8_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, vuint8m8_t rs2, size_t vl) {
+vint8m8_t test_vloxei8_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1,
+ vuint8m8_t rs2, size_t vl) {
  return __riscv_vloxei8_v_i8m8_tumu(vm, vd, rs1, rs2, vl);
}
-vint16mf4_t test_vloxei8_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4_t test_vloxei8_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd,
+ const int16_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16mf4_tumu(vm, vd, rs1, rs2, vl);
}
-vint16mf2_t test_vloxei8_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2_t test_vloxei8_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd,
+ const int16_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16mf2_tumu(vm, vd, rs1, rs2, vl);
}
-vint16m1_t test_vloxei8_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1_t test_vloxei8_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd,
+ const int16_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16m1_tumu(vm, vd, rs1, rs2, vl);
}
-vint16m2_t test_vloxei8_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint16m2_t test_vloxei8_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd,
+ const int16_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16m2_tumu(vm, vd, rs1, rs2, vl);
}
-vint16m4_t test_vloxei8_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint16m4_t test_vloxei8_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd,
+ const int16_t *rs1, vuint8m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16m4_tumu(vm, vd, rs1, rs2, vl);
}
-vint16m8_t test_vloxei8_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint8m4_t rs2, size_t vl) {
+vint16m8_t test_vloxei8_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd,
+ const int16_t *rs1, vuint8m4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i16m8_tumu(vm, vd, rs1, rs2, vl);
}
-vint32mf2_t test_vloxei8_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2_t test_vloxei8_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd,
+ const int32_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i32mf2_tumu(vm, vd, rs1, rs2, vl);
}
-vint32m1_t test_vloxei8_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1_t test_vloxei8_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd,
+ const int32_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i32m1_tumu(vm, vd, rs1, rs2, vl);
}
-vint32m2_t test_vloxei8_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint32m2_t test_vloxei8_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd,
+ const int32_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i32m2_tumu(vm, vd, rs1, rs2, vl);
}
-vint32m4_t test_vloxei8_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint32m4_t test_vloxei8_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd,
+ const int32_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i32m4_tumu(vm, vd, rs1, rs2, vl);
}
-vint32m8_t test_vloxei8_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint32m8_t test_vloxei8_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd,
+ const int32_t *rs1, vuint8m2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i32m8_tumu(vm, vd, rs1, rs2, vl);
}
-vint64m1_t test_vloxei8_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1_t test_vloxei8_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd,
+ const int64_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i64m1_tumu(vm, vd, rs1, rs2, vl);
}
-vint64m2_t test_vloxei8_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint64m2_t test_vloxei8_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd,
+ const int64_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i64m2_tumu(vm, vd, rs1, rs2, vl);
}
-vint64m4_t test_vloxei8_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint64m4_t test_vloxei8_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd,
+ const int64_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i64m4_tumu(vm, vd, rs1, rs2, vl);
}
-vint64m8_t test_vloxei8_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint64m8_t test_vloxei8_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd,
+ const int64_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_i64m8_tumu(vm, vd, rs1, rs2, vl);
}
-vuint8mf8_t test_vloxei8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8_t test_vloxei8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+ const uint8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u8mf8_tumu(vm, vd, rs1, rs2, vl);
}
-vuint8mf4_t test_vloxei8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4_t test_vloxei8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+ const uint8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
  return __riscv_vloxei8_v_u8mf4_tumu(vm, vd, rs1, rs2, vl);
}
-vuint8mf2_t test_vloxei8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2_t test_vloxei8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+ const uint8_t
*rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vloxei8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1_t test_vloxei8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8m1_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vloxei8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2_t test_vloxei8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8m2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vloxei8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4_t test_vloxei8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8m4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m8_t test_vloxei8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, vuint8m8_t rs2, size_t vl) { +vuint8m8_t test_vloxei8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + const uint8_t *rs1, vuint8m8_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8m8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vloxei8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4_t test_vloxei8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vloxei8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2_t test_vloxei8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vloxei8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1_t test_vloxei8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16m1_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vloxei8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2_t test_vloxei8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16m2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vloxei8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4_t test_vloxei8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16m4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vloxei8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint16m8_t test_vloxei8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16m8_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vloxei8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2_t test_vloxei8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_u32mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vloxei8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1_t 
test_vloxei8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_u32m1_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vloxei8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2_t test_vloxei8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_u32m2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vloxei8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint32m4_t test_vloxei8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_u32m4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vloxei8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint32m8_t test_vloxei8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxei8_v_u32m8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vloxei8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1_t test_vloxei8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_u64m1_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vloxei8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2_t test_vloxei8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_u64m2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vloxei8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint64m4_t test_vloxei8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_u64m4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vloxei8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint64m8_t test_vloxei8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_u64m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vloxei8_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4_t test_vloxei8_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_f16mf4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vloxei8_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2_t test_vloxei8_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_f16mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vloxei8_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1_t test_vloxei8_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_f16m1_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vloxei8_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2_t test_vloxei8_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_f16m2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4_t 
test_vloxei8_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) { +vfloat16m4_t test_vloxei8_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxei8_v_f16m4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vloxei8_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint8m4_t rs2, size_t vl) { +vfloat16m8_t test_vloxei8_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxei8_v_f16m8_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vloxei8_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2_t test_vloxei8_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_f32mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vloxei8_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1_t test_vloxei8_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_f32m1_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vloxei8_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2_t test_vloxei8_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_f32m2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vloxei8_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) { +vfloat32m4_t test_vloxei8_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_f32m4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vloxei8_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint8m2_t rs2, size_t vl) { +vfloat32m8_t test_vloxei8_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxei8_v_f32m8_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vloxei8_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1_t test_vloxei8_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_f64m1_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vloxei8_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2_t test_vloxei8_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_f64m2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vloxei8_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat64m4_t test_vloxei8_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_f64m4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vloxei8_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint8m1_t rs2, size_t vl) { +vfloat64m8_t test_vloxei8_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_f64m8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vloxei8_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8_t test_vloxei8_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_i8mf8_mu(vm, vd, rs1, rs2, vl); } 
-vint8mf4_t test_vloxei8_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4_t test_vloxei8_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_i8mf4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vloxei8_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2_t test_vloxei8_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_i8mf2_mu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vloxei8_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1_t test_vloxei8_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxei8_v_i8m1_mu(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vloxei8_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2_t test_vloxei8_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxei8_v_i8m2_mu(vm, vd, rs1, rs2, vl); } -vint8m4_t test_vloxei8_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) { +vint8m4_t test_vloxei8_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + vuint8m4_t rs2, size_t vl) { return __riscv_vloxei8_v_i8m4_mu(vm, vd, rs1, rs2, vl); } -vint8m8_t test_vloxei8_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, vuint8m8_t rs2, size_t vl) { +vint8m8_t test_vloxei8_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + vuint8m8_t rs2, size_t vl) { return __riscv_vloxei8_v_i8m8_mu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vloxei8_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4_t test_vloxei8_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_i16mf4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vloxei8_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2_t test_vloxei8_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_i16mf2_mu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vloxei8_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1_t test_vloxei8_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_i16m1_mu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vloxei8_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2_t test_vloxei8_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_i16m2_mu(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vloxei8_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4_t test_vloxei8_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxei8_v_i16m4_mu(vm, vd, rs1, rs2, vl); } -vint16m8_t test_vloxei8_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint8m4_t rs2, size_t vl) { +vint16m8_t test_vloxei8_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxei8_v_i16m8_mu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vloxei8_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint8mf8_t 
rs2, size_t vl) { +vint32mf2_t test_vloxei8_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_i32mf2_mu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vloxei8_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1_t test_vloxei8_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_i32m1_mu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vloxei8_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2_t test_vloxei8_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_i32m2_mu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vloxei8_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4_t test_vloxei8_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_i32m4_mu(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vloxei8_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint8m2_t rs2, size_t vl) { +vint32m8_t test_vloxei8_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxei8_v_i32m8_mu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vloxei8_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1_t test_vloxei8_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_i64m1_mu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vloxei8_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2_t test_vloxei8_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_i64m2_mu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vloxei8_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4_t test_vloxei8_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_i64m4_mu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vloxei8_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint8m1_t rs2, size_t vl) { +vint64m8_t test_vloxei8_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_i64m8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vloxei8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8_t test_vloxei8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8mf8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vloxei8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4_t test_vloxei8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8mf4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vloxei8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2_t test_vloxei8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8mf2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vloxei8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1_t 
test_vloxei8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8m1_mu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vloxei8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2_t test_vloxei8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8m2_mu(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vloxei8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4_t test_vloxei8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8m4_mu(vm, vd, rs1, rs2, vl); } -vuint8m8_t test_vloxei8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, vuint8m8_t rs2, size_t vl) { +vuint8m8_t test_vloxei8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, + const uint8_t *rs1, vuint8m8_t rs2, + size_t vl) { return __riscv_vloxei8_v_u8m8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vloxei8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4_t test_vloxei8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16mf4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vloxei8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2_t test_vloxei8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16mf2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vloxei8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1_t test_vloxei8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16m1_mu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vloxei8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2_t test_vloxei8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16m2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vloxei8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4_t test_vloxei8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16m4_mu(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vloxei8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint16m8_t test_vloxei8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxei8_v_u16m8_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vloxei8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2_t test_vloxei8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxei8_v_u32mf2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vloxei8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1_t test_vloxei8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxei8_v_u32m1_mu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vloxei8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2_t 
test_vloxei8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+                                    const uint32_t *rs1, vuint8mf2_t rs2,
+                                    size_t vl) {
   return __riscv_vloxei8_v_u32m2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m4_t test_vloxei8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint32m4_t test_vloxei8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+                                    const uint32_t *rs1, vuint8m1_t rs2,
+                                    size_t vl) {
   return __riscv_vloxei8_v_u32m4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m8_t test_vloxei8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint8m2_t rs2, size_t vl) {
+vuint32m8_t test_vloxei8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+                                    const uint32_t *rs1, vuint8m2_t rs2,
+                                    size_t vl) {
   return __riscv_vloxei8_v_u32m8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1_t test_vloxei8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1_t test_vloxei8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                    const uint64_t *rs1, vuint8mf8_t rs2,
+                                    size_t vl) {
   return __riscv_vloxei8_v_u64m1_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m2_t test_vloxei8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint64m2_t test_vloxei8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                    const uint64_t *rs1, vuint8mf4_t rs2,
+                                    size_t vl) {
   return __riscv_vloxei8_v_u64m2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m4_t test_vloxei8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint64m4_t test_vloxei8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                    const uint64_t *rs1, vuint8mf2_t rs2,
+                                    size_t vl) {
   return __riscv_vloxei8_v_u64m4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m8_t test_vloxei8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint64m8_t test_vloxei8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                    const uint64_t *rs1, vuint8m1_t rs2,
+                                    size_t vl) {
   return __riscv_vloxei8_v_u64m8_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei16.c
index 0ae0d2c29..b9dc6f126 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei16.c
@@ -6,770 +6,1148 @@
 #include <riscv_vector.h>
 
-vfloat16mf4x2_t test_vloxseg2ei16_v_f16mf4x2_tu(vfloat16mf4x2_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x2_t test_vloxseg2ei16_v_f16mf4x2_tu(vfloat16mf4x2_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_f16mf4x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x2_t test_vloxseg2ei16_v_f16mf2x2_tu(vfloat16mf2x2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x2_t test_vloxseg2ei16_v_f16mf2x2_tu(vfloat16mf2x2_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_f16mf2x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1x2_t test_vloxseg2ei16_v_f16m1x2_tu(vfloat16m1x2_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x2_t test_vloxseg2ei16_v_f16m1x2_tu(vfloat16m1x2_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_f16m1x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m2x2_t test_vloxseg2ei16_v_f16m2x2_tu(vfloat16m2x2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) {
+vfloat16m2x2_t test_vloxseg2ei16_v_f16m2x2_tu(vfloat16m2x2_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_f16m2x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m4x2_t test_vloxseg2ei16_v_f16m4x2_tu(vfloat16m4x2_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) {
+vfloat16m4x2_t test_vloxseg2ei16_v_f16m4x2_tu(vfloat16m4x2_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m4_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_f16m4x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x2_t test_vloxseg2ei16_v_f32mf2x2_tu(vfloat32mf2x2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x2_t test_vloxseg2ei16_v_f32mf2x2_tu(vfloat32mf2x2_t vd,
+                                                const float *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_f32mf2x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m1x2_t test_vloxseg2ei16_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x2_t test_vloxseg2ei16_v_f32m1x2_tu(vfloat32m1x2_t vd,
+                                              const float *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_f32m1x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m2x2_t test_vloxseg2ei16_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat32m2x2_t test_vloxseg2ei16_v_f32m2x2_tu(vfloat32m2x2_t vd,
+                                              const float *rs1, vuint16m1_t rs2,
+                                              size_t vl) {
   return __riscv_vloxseg2ei16_v_f32m2x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m4x2_t test_vloxseg2ei16_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) {
+vfloat32m4x2_t test_vloxseg2ei16_v_f32m4x2_tu(vfloat32m4x2_t vd,
+                                              const float *rs1, vuint16m2_t rs2,
+                                              size_t vl) {
   return __riscv_vloxseg2ei16_v_f32m4x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m1x2_t test_vloxseg2ei16_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x2_t test_vloxseg2ei16_v_f64m1x2_tu(vfloat64m1x2_t vd,
+                                              const double *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_f64m1x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m2x2_t test_vloxseg2ei16_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat64m2x2_t test_vloxseg2ei16_v_f64m2x2_tu(vfloat64m2x2_t vd,
+                                              const double *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_f64m2x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m4x2_t test_vloxseg2ei16_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat64m4x2_t test_vloxseg2ei16_v_f64m4x2_tu(vfloat64m4x2_t vd,
+                                              const double *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_f64m4x2_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf8x2_t test_vloxseg2ei16_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x2_t test_vloxseg2ei16_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1,
+                                            vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_i8mf8x2_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf4x2_t test_vloxseg2ei16_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x2_t test_vloxseg2ei16_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1,
+                                            vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_i8mf4x2_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf2x2_t test_vloxseg2ei16_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2x2_t test_vloxseg2ei16_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1,
+                                            vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_i8mf2x2_tu(vd, rs1, rs2, vl);
 }
 
-vint8m1x2_t test_vloxseg2ei16_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1x2_t test_vloxseg2ei16_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1,
+                                          vuint16m2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei16_v_i8m1x2_tu(vd, rs1, rs2, vl);
 }
 
-vint8m2x2_t test_vloxseg2ei16_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, vuint16m4_t rs2,
size_t vl) { +vint8m2x2_t test_vloxseg2ei16_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i8m2x2_tu(vd, rs1, rs2, vl); } -vint8m4x2_t test_vloxseg2ei16_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4x2_t test_vloxseg2ei16_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i8m4x2_tu(vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei16_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei16_v_i16mf4x2_tu(vint16mf4x2_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16mf4x2_tu(vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei16_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei16_v_i16mf2x2_tu(vint16mf2x2_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16mf2x2_tu(vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei16_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei16_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16m1x2_tu(vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei16_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei16_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16m2x2_tu(vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei16_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei16_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16m4x2_tu(vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei16_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei16_v_i32mf2x2_tu(vint32mf2x2_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32mf2x2_tu(vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei16_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei16_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32m1x2_tu(vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei16_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei16_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32m2x2_tu(vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei16_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei16_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32m4x2_tu(vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei16_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei16_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i64m1x2_tu(vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei16_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei16_v_i64m2x2_tu(vint64m2x2_t vd, 
const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i64m2x2_tu(vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei16_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei16_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i64m4x2_tu(vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei16_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei16_v_u8mf8x2_tu(vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf8x2_tu(vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei16_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei16_v_u8mf4x2_tu(vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf4x2_tu(vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei16_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei16_v_u8mf2x2_tu(vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf2x2_tu(vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei16_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei16_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8m1x2_tu(vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei16_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x2_t test_vloxseg2ei16_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8m2x2_tu(vd, rs1, rs2, vl); } -vuint8m4x2_t test_vloxseg2ei16_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4x2_t test_vloxseg2ei16_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8m4x2_tu(vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei16_v_u16mf4x2_tu(vuint16mf4x2_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei16_v_u16mf4x2_tu(vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16mf4x2_tu(vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei16_v_u16mf2x2_tu(vuint16mf2x2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei16_v_u16mf2x2_tu(vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16mf2x2_tu(vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei16_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei16_v_u16m1x2_tu(vuint16m1x2_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m1x2_tu(vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei16_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei16_v_u16m2x2_tu(vuint16m2x2_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m2x2_tu(vd, rs1, rs2, vl); } -vuint16m4x2_t test_vloxseg2ei16_v_u16m4x2_tu(vuint16m4x2_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4x2_t test_vloxseg2ei16_v_u16m4x2_tu(vuint16m4x2_t vd, + const uint16_t *rs1, + 
vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m4x2_tu(vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei16_v_u32mf2x2_tu(vuint32mf2x2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei16_v_u32mf2x2_tu(vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32mf2x2_tu(vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei16_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei16_v_u32m1x2_tu(vuint32m1x2_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m1x2_tu(vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei16_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei16_v_u32m2x2_tu(vuint32m2x2_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m2x2_tu(vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei16_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei16_v_u32m4x2_tu(vuint32m4x2_t vd, + const uint32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m4x2_tu(vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei16_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei16_v_u64m1x2_tu(vuint64m1x2_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m1x2_tu(vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei16_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei16_v_u64m2x2_tu(vuint64m2x2_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m2x2_tu(vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei16_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei16_v_u64m4x2_tu(vuint64m4x2_t vd, + const uint64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei16_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei16_v_f16mf4x2_tum(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei16_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei16_v_f16mf2x2_tum(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei16_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei16_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei16_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei16_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t 
test_vloxseg2ei16_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4x2_t test_vloxseg2ei16_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei16_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei16_v_f32mf2x2_tum(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei16_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei16_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f32m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei16_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei16_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f32m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei16_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei16_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f32m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei16_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei16_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f64m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei16_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei16_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f64m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei16_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei16_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f64m4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei16_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei16_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei16_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei16_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei16_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei16_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return 
__riscv_vloxseg2ei16_v_i8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei16_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei16_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8m1x2_tum(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vloxseg2ei16_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x2_t test_vloxseg2ei16_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8m2x2_tum(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vloxseg2ei16_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4x2_t test_vloxseg2ei16_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8m4x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei16_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei16_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei16_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei16_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei16_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei16_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16m1x2_tum(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei16_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei16_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16m2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei16_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei16_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16m4x2_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei16_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei16_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei16_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei16_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32m1x2_tum(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei16_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei16_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint16m1_t rs2, size_t 
vl) { return __riscv_vloxseg2ei16_v_i32m2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei16_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei16_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32m4x2_tum(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei16_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei16_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i64m1x2_tum(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei16_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei16_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i64m2x2_tum(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei16_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei16_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i64m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei16_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei16_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei16_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei16_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei16_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei16_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei16_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei16_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_u8m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei16_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x2_t test_vloxseg2ei16_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_u8m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vloxseg2ei16_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4x2_t test_vloxseg2ei16_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_u8m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei16_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei16_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, + const 
uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei16_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei16_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei16_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei16_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei16_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei16_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vloxseg2ei16_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4x2_t test_vloxseg2ei16_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei16_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei16_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei16_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei16_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei16_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei16_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei16_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei16_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei16_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei16_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei16_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei16_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei16_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, 
vuint16m1_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei16_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei16_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei16_v_f16mf4x2_tumu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei16_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei16_v_f16mf2x2_tumu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei16_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei16_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei16_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei16_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vloxseg2ei16_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4x2_t test_vloxseg2ei16_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei16_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei16_v_f32mf2x2_tumu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei16_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei16_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei16_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei16_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei16_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei16_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei16_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei16_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return 
__riscv_vloxseg2ei16_v_f64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei16_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei16_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei16_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei16_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei16_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei16_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei16_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei16_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei16_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei16_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei16_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei16_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vloxseg2ei16_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x2_t test_vloxseg2ei16_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vloxseg2ei16_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4x2_t test_vloxseg2ei16_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei16_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei16_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei16_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei16_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei16_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei16_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, + const 
int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei16_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei16_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei16_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei16_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei16_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei16_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei16_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei16_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei16_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei16_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei16_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei16_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei16_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei16_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei16_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei16_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei16_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei16_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei16_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei16_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei16_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x2_t 
test_vloxseg2ei16_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei16_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei16_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei16_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei16_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei16_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x2_t test_vloxseg2ei16_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vloxseg2ei16_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4x2_t test_vloxseg2ei16_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei16_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei16_v_u16mf4x2_tumu(vbool64_t vm, + vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei16_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei16_v_u16mf2x2_tumu(vbool32_t vm, + vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei16_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei16_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei16_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei16_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vloxseg2ei16_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4x2_t test_vloxseg2ei16_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei16_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei16_v_u32mf2x2_tumu(vbool64_t vm, + vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t 
test_vloxseg2ei16_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei16_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei16_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei16_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei16_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei16_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei16_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei16_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei16_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei16_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei16_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei16_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei16_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei16_v_f16mf4x2_mu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei16_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei16_v_f16mf2x2_mu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei16_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei16_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei16_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei16_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vloxseg2ei16_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4x2_t test_vloxseg2ei16_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, + 
const _Float16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f16m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei16_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei16_v_f32mf2x2_mu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei16_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei16_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f32m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei16_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei16_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_f32m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei16_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei16_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_f32m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei16_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei16_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f64m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei16_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei16_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f64m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei16_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei16_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_f64m4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei16_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei16_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei16_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei16_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei16_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei16_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei16_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x2_t 
test_vloxseg2ei16_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8m1x2_mu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vloxseg2ei16_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x2_t test_vloxseg2ei16_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8m2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vloxseg2ei16_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4x2_t test_vloxseg2ei16_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i8m4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei16_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei16_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei16_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei16_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei16_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei16_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i16m1x2_mu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei16_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei16_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i16m2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei16_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei16_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i16m4x2_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei16_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei16_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei16_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei16_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i32m1x2_mu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei16_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei16_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i32m2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei16_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4x2_t 
test_vloxseg2ei16_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i32m4x2_mu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei16_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei16_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i64m1x2_mu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei16_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei16_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_i64m2x2_mu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei16_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei16_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_i64m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei16_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei16_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei16_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei16_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei16_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei16_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei16_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei16_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_u8m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei16_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x2_t test_vloxseg2ei16_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_u8m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vloxseg2ei16_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4x2_t test_vloxseg2ei16_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_u8m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei16_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei16_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei16_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { 
+vuint16mf2x2_t test_vloxseg2ei16_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei16_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei16_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei16_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei16_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vloxseg2ei16_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4x2_t test_vloxseg2ei16_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u16m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei16_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei16_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei16_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei16_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei16_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei16_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei16_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei16_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u32m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei16_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei16_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei16_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei16_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei16_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei16_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_u64m4x2_mu(vm, vd, rs1, rs2, vl); }
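The hunks above are formatting-only: long prototypes are reflowed onto continuation lines, while each intrinsic's name, operand order, and behavior stay unchanged. As a minimal sketch of how one of the tail-undisturbed, mask-undisturbed (`_tum`) indexed segment loads exercised in this file might be driven, where the wrapper name, buffer setup, and `vsetvl` call are illustrative assumptions rather than part of the generated tests:

#include <riscv_vector.h>

/* Sketch only: gather two consecutive int32 fields per active element.
   rs2 holds unsigned byte offsets into src; element i loads field 0 from
   (const char *)src + idx[i] and field 1 from the int32 immediately after
   it. Under the _tum policy, masked-off and tail elements keep the values
   already present in vd. */
vint32m1x2_t gather_pairs_tum(vbool32_t vm, vint32m1x2_t vd,
                              const int32_t *src, const uint16_t *idx,
                              size_t n) {
  size_t vl = __riscv_vsetvl_e32m1(n);                /* n capped to VLMAX */
  vuint16mf2_t rs2 = __riscv_vle16_v_u16mf2(idx, vl); /* EEW=16 indices */
  return __riscv_vloxseg2ei16_v_i32m1x2_tum(vm, vd, src, rs2, vl);
}

The EEW=16 index vector uses EMUL=1/2 (`vuint16mf2_t`) because the data elements are EEW=32 at EMUL=1, which is exactly the operand pairing these tests cover.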
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei32.c index d0ba2b94f..a589d196d 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei32.c @@ -6,738 +6,1102 @@ #include <riscv_vector.h> -vfloat16mf4x2_t test_vloxseg2ei32_v_f16mf4x2_tu(vfloat16mf4x2_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei32_v_f16mf4x2_tu(vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16mf4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei32_v_f16mf2x2_tu(vfloat16mf2x2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei32_v_f16mf2x2_tu(vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16mf2x2_tu(vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei32_v_f16m1x2_tu(vfloat16m1x2_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei32_v_f16m1x2_tu(vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m1x2_tu(vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei32_v_f16m2x2_tu(vfloat16m2x2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei32_v_f16m2x2_tu(vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m2x2_tu(vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vloxseg2ei32_v_f16m4x2_tu(vfloat16m4x2_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4x2_t test_vloxseg2ei32_v_f16m4x2_tu(vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m4x2_tu(vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei32_v_f32mf2x2_tu(vfloat32mf2x2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei32_v_f32mf2x2_tu(vfloat32mf2x2_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f32mf2x2_tu(vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei32_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei32_v_f32m1x2_tu(vfloat32m1x2_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_f32m1x2_tu(vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei32_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei32_v_f32m2x2_tu(vfloat32m2x2_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_f32m2x2_tu(vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei32_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei32_v_f32m4x2_tu(vfloat32m4x2_t vd, + const float *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_f32m4x2_tu(vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei32_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei32_v_f64m1x2_tu(vfloat64m1x2_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f64m1x2_tu(vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei32_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei32_v_f64m2x2_tu(vfloat64m2x2_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return
__riscv_vloxseg2ei32_v_f64m2x2_tu(vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei32_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei32_v_f64m4x2_tu(vfloat64m4x2_t vd, + const double *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f64m4x2_tu(vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei32_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei32_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i8mf8x2_tu(vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei32_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei32_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i8mf4x2_tu(vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei32_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei32_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i8mf2x2_tu(vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei32_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei32_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i8m1x2_tu(vd, rs1, rs2, vl); } -vint8m2x2_t test_vloxseg2ei32_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x2_t test_vloxseg2ei32_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i8m2x2_tu(vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei32_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei32_v_i16mf4x2_tu(vint16mf4x2_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16mf4x2_tu(vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei32_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei32_v_i16mf2x2_tu(vint16mf2x2_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16mf2x2_tu(vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei32_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei32_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16m1x2_tu(vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei32_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei32_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16m2x2_tu(vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei32_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei32_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16m4x2_tu(vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei32_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei32_v_i32mf2x2_tu(vint32mf2x2_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32mf2x2_tu(vd, rs1, rs2, vl); } -vint32m1x2_t 
test_vloxseg2ei32_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei32_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32m1x2_tu(vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei32_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei32_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32m2x2_tu(vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei32_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei32_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32m4x2_tu(vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei32_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei32_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i64m1x2_tu(vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei32_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei32_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i64m2x2_tu(vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei32_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei32_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i64m4x2_tu(vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei32_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei32_v_u8mf8x2_tu(vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf8x2_tu(vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei32_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei32_v_u8mf4x2_tu(vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf4x2_tu(vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei32_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei32_v_u8mf2x2_tu(vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf2x2_tu(vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei32_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei32_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8m1x2_tu(vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei32_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x2_t test_vloxseg2ei32_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8m2x2_tu(vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei32_v_u16mf4x2_tu(vuint16mf4x2_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei32_v_u16mf4x2_tu(vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16mf4x2_tu(vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei32_v_u16mf2x2_tu(vuint16mf2x2_t vd, const 
uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei32_v_u16mf2x2_tu(vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16mf2x2_tu(vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei32_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei32_v_u16m1x2_tu(vuint16m1x2_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m1x2_tu(vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei32_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei32_v_u16m2x2_tu(vuint16m2x2_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m2x2_tu(vd, rs1, rs2, vl); } -vuint16m4x2_t test_vloxseg2ei32_v_u16m4x2_tu(vuint16m4x2_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4x2_t test_vloxseg2ei32_v_u16m4x2_tu(vuint16m4x2_t vd, + const uint16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m4x2_tu(vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei32_v_u32mf2x2_tu(vuint32mf2x2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei32_v_u32mf2x2_tu(vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32mf2x2_tu(vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei32_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei32_v_u32m1x2_tu(vuint32m1x2_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32m1x2_tu(vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei32_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei32_v_u32m2x2_tu(vuint32m2x2_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32m2x2_tu(vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei32_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei32_v_u32m4x2_tu(vuint32m4x2_t vd, + const uint32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32m4x2_tu(vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei32_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei32_v_u64m1x2_tu(vuint64m1x2_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m1x2_tu(vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei32_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei32_v_u64m2x2_tu(vuint64m2x2_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m2x2_tu(vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei32_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei32_v_u64m4x2_tu(vuint64m4x2_t vd, + const uint64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei32_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei32_v_f16mf4x2_tum(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16mf4x2_tum(vm, vd, rs1, rs2, vl); } 
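Most of the hunks above cover the unmasked `_tu` variants, which load only the first `vl` elements and leave the remaining elements of `vd` untouched; the `_tum` hunks beginning just above add the leading mask operand `vm`. A minimal sketch of consuming such a `_tu` segment load, assuming the tuple-type `__riscv_vget` utilities (the wrapper name and buffer setup are again illustrative, not part of the generated tests):

#include <riscv_vector.h>

/* Sketch only: unmasked tail-undisturbed (_tu) ordered-indexed segment-2
   load, then per-field use. rs2 holds unsigned byte offsets into src;
   elements at positions >= vl keep the values carried in by vd. */
vfloat32m1_t sum_fields_tu(vfloat32m1x2_t vd, const float *src,
                           const uint32_t *idx, size_t n) {
  size_t vl = __riscv_vsetvl_e32m1(n);
  vuint32m1_t rs2 = __riscv_vle32_v_u32m1(idx, vl);      /* EEW=32 indices */
  vfloat32m1x2_t v = __riscv_vloxseg2ei32_v_f32m1x2_tu(vd, src, rs2, vl);
  vfloat32m1_t f0 = __riscv_vget_v_f32m1x2_f32m1(v, 0);  /* field 0 */
  vfloat32m1_t f1 = __riscv_vget_v_f32m1x2_f32m1(v, 1);  /* field 1 */
  return __riscv_vfadd_vv_f32m1(f0, f1, vl);
}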
-vfloat16mf2x2_t test_vloxseg2ei32_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei32_v_f16mf2x2_tum(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei32_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei32_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei32_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei32_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vloxseg2ei32_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4x2_t test_vloxseg2ei32_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei32_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei32_v_f32mf2x2_tum(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei32_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei32_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f32m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei32_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei32_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f32m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei32_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei32_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f32m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei32_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei32_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f64m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei32_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei32_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f64m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei32_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei32_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, + const 
double *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f64m4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei32_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei32_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei32_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei32_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei32_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei32_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei32_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei32_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i8m1x2_tum(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vloxseg2ei32_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x2_t test_vloxseg2ei32_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i8m2x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei32_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei32_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei32_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei32_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei32_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei32_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16m1x2_tum(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei32_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei32_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16m2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei32_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei32_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16m4x2_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei32_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei32_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t 
vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei32_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei32_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32m1x2_tum(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei32_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei32_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32m2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei32_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei32_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32m4x2_tum(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei32_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei32_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i64m1x2_tum(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei32_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei32_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i64m2x2_tum(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei32_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei32_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i64m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei32_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei32_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei32_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei32_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei32_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei32_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei32_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei32_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_u8m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei32_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x2_t 
test_vloxseg2ei32_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_u8m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei32_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei32_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei32_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei32_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei32_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei32_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei32_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei32_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vloxseg2ei32_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4x2_t test_vloxseg2ei32_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei32_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei32_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei32_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei32_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei32_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei32_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei32_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei32_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei32_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei32_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t 
test_vloxseg2ei32_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei32_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei32_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei32_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei32_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei32_v_f16mf4x2_tumu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei32_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei32_v_f16mf2x2_tumu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei32_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei32_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei32_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei32_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vloxseg2ei32_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4x2_t test_vloxseg2ei32_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei32_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei32_v_f32mf2x2_tumu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei32_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei32_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei32_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei32_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei32_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei32_v_f32m4x2_tumu(vbool8_t vm, 
vfloat32m4x2_t vd, + const float *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei32_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei32_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei32_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei32_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei32_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei32_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei32_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei32_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei32_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei32_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei32_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei32_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei32_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei32_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vloxseg2ei32_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x2_t test_vloxseg2ei32_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei32_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei32_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei32_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei32_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei32_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) 
{ +vint16m1x2_t test_vloxseg2ei32_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei32_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei32_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei32_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei32_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei32_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei32_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei32_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei32_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei32_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei32_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei32_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei32_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei32_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei32_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei32_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei32_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei32_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei32_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei32_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei32_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei32_v_u8mf4x2_tumu(vbool32_t vm, 
vuint8mf4x2_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei32_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei32_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei32_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei32_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei32_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei32_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x2_t test_vloxseg2ei32_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei32_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei32_v_u16mf4x2_tumu(vbool64_t vm, + vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei32_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei32_v_u16mf2x2_tumu(vbool32_t vm, + vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei32_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei32_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei32_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei32_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vloxseg2ei32_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4x2_t test_vloxseg2ei32_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei32_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei32_v_u32mf2x2_tumu(vbool64_t vm, + vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei32_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei32_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return 
__riscv_vloxseg2ei32_v_u32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei32_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei32_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei32_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei32_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei32_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei32_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei32_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei32_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei32_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei32_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei32_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei32_v_f16mf4x2_mu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei32_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei32_v_f16mf2x2_mu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei32_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei32_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei32_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei32_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vloxseg2ei32_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4x2_t test_vloxseg2ei32_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f16m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei32_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { 
+vfloat32mf2x2_t test_vloxseg2ei32_v_f32mf2x2_mu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei32_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei32_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_f32m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei32_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei32_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_f32m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei32_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei32_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_f32m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei32_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei32_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f64m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei32_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei32_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f64m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei32_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei32_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_f64m4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei32_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei32_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei32_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei32_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei32_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei32_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei32_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei32_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i8m1x2_mu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vloxseg2ei32_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t 
vl) { +vint8m2x2_t test_vloxseg2ei32_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i8m2x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei32_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei32_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei32_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei32_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei32_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei32_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i16m1x2_mu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei32_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei32_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i16m2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei32_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei32_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i16m4x2_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei32_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei32_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei32_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei32_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i32m1x2_mu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei32_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei32_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i32m2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei32_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei32_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i32m4x2_mu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei32_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei32_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_i64m1x2_mu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei32_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint32m1_t rs2, 
size_t vl) { +vint64m2x2_t test_vloxseg2ei32_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i64m2x2_mu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei32_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei32_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_i64m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei32_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei32_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei32_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei32_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei32_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei32_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei32_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei32_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_u8m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei32_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x2_t test_vloxseg2ei32_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei32_v_u8m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei32_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei32_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei32_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei32_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei32_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei32_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei32_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei32_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vloxseg2ei32_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, const 
uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4x2_t test_vloxseg2ei32_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u16m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei32_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei32_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei32_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei32_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei32_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei32_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei32_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei32_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u32m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei32_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei32_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei32_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei32_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei32_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei32_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg2ei32_v_u64m4x2_mu(vm, vd, rs1, rs2, vl); }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei64.c
index 960204bfc..7dc64ef86 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei64.c
@@ -6,658 +6,985 @@
 #include <riscv_vector.h>
-vfloat16mf4x2_t test_vloxseg2ei64_v_f16mf4x2_tu(vfloat16mf4x2_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei64_v_f16mf4x2_tu(vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16mf4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei64_v_f16mf2x2_tu(vfloat16mf2x2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei64_v_f16mf2x2_tu(vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16mf2x2_tu(vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei64_v_f16m1x2_tu(vfloat16m1x2_t vd,
const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei64_v_f16m1x2_tu(vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16m1x2_tu(vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei64_v_f16m2x2_tu(vfloat16m2x2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei64_v_f16m2x2_tu(vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16m2x2_tu(vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei64_v_f32mf2x2_tu(vfloat32mf2x2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei64_v_f32mf2x2_tu(vfloat32mf2x2_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f32mf2x2_tu(vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei64_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei64_v_f32m1x2_tu(vfloat32m1x2_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_f32m1x2_tu(vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei64_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei64_v_f32m2x2_tu(vfloat32m2x2_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_f32m2x2_tu(vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei64_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei64_v_f32m4x2_tu(vfloat32m4x2_t vd, + const float *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_f32m4x2_tu(vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei64_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei64_v_f64m1x2_tu(vfloat64m1x2_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m1x2_tu(vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei64_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei64_v_f64m2x2_tu(vfloat64m2x2_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m2x2_tu(vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei64_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei64_v_f64m4x2_tu(vfloat64m4x2_t vd, + const double *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m4x2_tu(vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei64_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei64_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i8mf8x2_tu(vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei64_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei64_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i8mf4x2_tu(vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei64_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei64_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i8mf2x2_tu(vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei64_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, vuint64m8_t 
rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei64_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i8m1x2_tu(vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei64_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei64_v_i16mf4x2_tu(vint16mf4x2_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16mf4x2_tu(vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei64_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei64_v_i16mf2x2_tu(vint16mf2x2_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16mf2x2_tu(vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei64_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei64_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16m1x2_tu(vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei64_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei64_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16m2x2_tu(vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei64_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei64_v_i32mf2x2_tu(vint32mf2x2_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32mf2x2_tu(vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei64_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei64_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32m1x2_tu(vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei64_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei64_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32m2x2_tu(vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei64_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei64_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32m4x2_tu(vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei64_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei64_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i64m1x2_tu(vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei64_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei64_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i64m2x2_tu(vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei64_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei64_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i64m4x2_tu(vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei64_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x2_t 
test_vloxseg2ei64_v_u8mf8x2_tu(vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf8x2_tu(vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei64_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei64_v_u8mf4x2_tu(vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf4x2_tu(vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei64_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei64_v_u8mf2x2_tu(vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf2x2_tu(vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei64_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei64_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8m1x2_tu(vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei64_v_u16mf4x2_tu(vuint16mf4x2_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei64_v_u16mf4x2_tu(vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16mf4x2_tu(vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei64_v_u16mf2x2_tu(vuint16mf2x2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei64_v_u16mf2x2_tu(vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16mf2x2_tu(vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei64_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei64_v_u16m1x2_tu(vuint16m1x2_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16m1x2_tu(vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei64_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei64_v_u16m2x2_tu(vuint16m2x2_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16m2x2_tu(vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei64_v_u32mf2x2_tu(vuint32mf2x2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei64_v_u32mf2x2_tu(vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32mf2x2_tu(vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei64_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei64_v_u32m1x2_tu(vuint32m1x2_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32m1x2_tu(vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei64_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei64_v_u32m2x2_tu(vuint32m2x2_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32m2x2_tu(vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei64_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei64_v_u32m4x2_tu(vuint32m4x2_t vd, + const uint32_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32m4x2_tu(vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei64_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { 
+vuint64m1x2_t test_vloxseg2ei64_v_u64m1x2_tu(vuint64m1x2_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u64m1x2_tu(vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei64_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei64_v_u64m2x2_tu(vuint64m2x2_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u64m2x2_tu(vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei64_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei64_v_u64m4x2_tu(vuint64m4x2_t vd, + const uint64_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u64m4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei64_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei64_v_f16mf4x2_tum(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei64_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei64_v_f16mf2x2_tum(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei64_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei64_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei64_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei64_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei64_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei64_v_f32mf2x2_tum(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei64_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei64_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f32m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei64_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei64_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f32m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei64_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei64_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f32m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei64_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, 
vuint64m1_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei64_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei64_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei64_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei64_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei64_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei64_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei64_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei64_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei64_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei64_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei64_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei64_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei64_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i8m1x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei64_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei64_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei64_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei64_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei64_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei64_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16m1x2_tum(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei64_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei64_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16m2x2_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei64_v_i32mf2x2_tum(vbool64_t vm, 
vint32mf2x2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei64_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei64_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei64_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32m1x2_tum(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei64_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei64_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32m2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei64_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei64_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32m4x2_tum(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei64_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei64_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i64m1x2_tum(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei64_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei64_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i64m2x2_tum(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei64_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei64_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i64m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei64_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei64_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei64_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei64_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei64_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei64_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei64_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei64_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_u8m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t 
test_vloxseg2ei64_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei64_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei64_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei64_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei64_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei64_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei64_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei64_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei64_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei64_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei64_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei64_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei64_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei64_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei64_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei64_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei64_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei64_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u64m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei64_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei64_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u64m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei64_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei64_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + 
vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u64m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei64_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei64_v_f16mf4x2_tumu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei64_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei64_v_f16mf2x2_tumu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei64_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei64_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei64_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei64_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei64_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei64_v_f32mf2x2_tumu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei64_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei64_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei64_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei64_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei64_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei64_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei64_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei64_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei64_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei64_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei64_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, 
const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei64_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei64_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei64_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei64_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei64_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei64_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei64_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei64_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei64_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei64_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei64_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei64_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei64_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei64_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei64_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei64_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei64_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei64_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei64_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei64_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei64_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t 
test_vloxseg2ei64_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei64_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei64_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei64_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei64_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei64_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei64_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei64_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei64_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei64_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei64_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei64_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei64_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei64_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei64_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei64_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei64_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei64_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei64_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei64_v_u16mf4x2_tumu(vbool64_t vm, + vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei64_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei64_v_u16mf2x2_tumu(vbool32_t vm, + vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t 
vl) { return __riscv_vloxseg2ei64_v_u16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei64_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei64_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei64_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei64_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei64_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei64_v_u32mf2x2_tumu(vbool64_t vm, + vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei64_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei64_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei64_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei64_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei64_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei64_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei64_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei64_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei64_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei64_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei64_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei64_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei64_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei64_v_f16mf4x2_mu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei64_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t 
vl) { +vfloat16mf2x2_t test_vloxseg2ei64_v_f16mf2x2_mu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei64_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei64_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei64_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei64_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f16m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei64_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei64_v_f32mf2x2_mu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei64_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei64_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_f32m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei64_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei64_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_f32m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei64_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei64_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_f32m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei64_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei64_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei64_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei64_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei64_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei64_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_f64m4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei64_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei64_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei64_v_i8mf4x2_mu(vbool32_t vm, 
vint8mf4x2_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei64_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei64_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei64_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei64_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei64_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i8m1x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei64_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei64_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei64_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei64_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei64_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei64_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i16m1x2_mu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei64_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei64_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i16m2x2_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei64_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei64_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_i32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei64_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei64_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i32m1x2_mu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei64_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei64_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i32m2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei64_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei64_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i32m4x2_mu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei64_v_i64m1x2_mu(vbool64_t vm, 
vint64m1x2_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei64_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i64m1x2_mu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei64_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei64_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i64m2x2_mu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei64_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei64_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_i64m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei64_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei64_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei64_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei64_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei64_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei64_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei64_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei64_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg2ei64_v_u8m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei64_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei64_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei64_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei64_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei64_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei64_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei64_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei64_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg2ei64_v_u16m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t 
test_vloxseg2ei64_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x2_t test_vloxseg2ei64_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd,
+                                               const uint32_t *rs1,
+                                               vuint64m1_t rs2, size_t vl) {
   return __riscv_vloxseg2ei64_v_u32mf2x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m1x2_t test_vloxseg2ei64_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x2_t test_vloxseg2ei64_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd,
+                                             const uint32_t *rs1,
+                                             vuint64m2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei64_v_u32m1x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m2x2_t test_vloxseg2ei64_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint32m2x2_t test_vloxseg2ei64_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd,
+                                             const uint32_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vloxseg2ei64_v_u32m2x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m4x2_t test_vloxseg2ei64_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint32m4x2_t test_vloxseg2ei64_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd,
+                                             const uint32_t *rs1,
+                                             vuint64m8_t rs2, size_t vl) {
   return __riscv_vloxseg2ei64_v_u32m4x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x2_t test_vloxseg2ei64_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x2_t test_vloxseg2ei64_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd,
+                                             const uint64_t *rs1,
+                                             vuint64m1_t rs2, size_t vl) {
   return __riscv_vloxseg2ei64_v_u64m1x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m2x2_t test_vloxseg2ei64_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint64m2x2_t test_vloxseg2ei64_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd,
+                                             const uint64_t *rs1,
+                                             vuint64m2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei64_v_u64m2x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m4x2_t test_vloxseg2ei64_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint64m4x2_t test_vloxseg2ei64_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd,
+                                             const uint64_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vloxseg2ei64_v_u64m4x2_mu(vm, vd, rs1, rs2, vl);
 }
 
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei8.c
index c1924658b..f1b570df5 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei8.c
@@ -6,770 +6,1142 @@
 #include <riscv_vector.h>
 
-vfloat16mf4x2_t test_vloxseg2ei8_v_f16mf4x2_tu(vfloat16mf4x2_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x2_t test_vloxseg2ei8_v_f16mf4x2_tu(vfloat16mf4x2_t vd,
+                                               const _Float16 *rs1,
+                                               vuint8mf8_t rs2, size_t vl) {
   return __riscv_vloxseg2ei8_v_f16mf4x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x2_t test_vloxseg2ei8_v_f16mf2x2_tu(vfloat16mf2x2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x2_t test_vloxseg2ei8_v_f16mf2x2_tu(vfloat16mf2x2_t vd,
+                                               const _Float16 *rs1,
+                                               vuint8mf4_t rs2, size_t vl) {
   return __riscv_vloxseg2ei8_v_f16mf2x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1x2_t test_vloxseg2ei8_v_f16m1x2_tu(vfloat16m1x2_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x2_t test_vloxseg2ei8_v_f16m1x2_tu(vfloat16m1x2_t vd,
+                                             const _Float16 *rs1,
+                                             vuint8mf2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei8_v_f16m1x2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m2x2_t
test_vloxseg2ei8_v_f16m2x2_tu(vfloat16m2x2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei8_v_f16m2x2_tu(vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16m2x2_tu(vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vloxseg2ei8_v_f16m4x2_tu(vfloat16m4x2_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) { +vfloat16m4x2_t test_vloxseg2ei8_v_f16m4x2_tu(vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16m4x2_tu(vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei8_v_f32mf2x2_tu(vfloat32mf2x2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei8_v_f32mf2x2_tu(vfloat32mf2x2_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f32mf2x2_tu(vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei8_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei8_v_f32m1x2_tu(vfloat32m1x2_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f32m1x2_tu(vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei8_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei8_v_f32m2x2_tu(vfloat32m2x2_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f32m2x2_tu(vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei8_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei8_v_f32m4x2_tu(vfloat32m4x2_t vd, + const float *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f32m4x2_tu(vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei8_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei8_v_f64m1x2_tu(vfloat64m1x2_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f64m1x2_tu(vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei8_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei8_v_f64m2x2_tu(vfloat64m2x2_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f64m2x2_tu(vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei8_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei8_v_f64m4x2_tu(vfloat64m4x2_t vd, + const double *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f64m4x2_tu(vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei8_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei8_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i8mf8x2_tu(vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei8_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei8_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i8mf4x2_tu(vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei8_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei8_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i8mf2x2_tu(vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei8_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, 
vuint8m1_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei8_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i8m1x2_tu(vd, rs1, rs2, vl); } -vint8m2x2_t test_vloxseg2ei8_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x2_t test_vloxseg2ei8_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i8m2x2_tu(vd, rs1, rs2, vl); } -vint8m4x2_t test_vloxseg2ei8_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) { +vint8m4x2_t test_vloxseg2ei8_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1, + vuint8m4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i8m4x2_tu(vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei8_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei8_v_i16mf4x2_tu(vint16mf4x2_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i16mf4x2_tu(vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei8_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei8_v_i16mf2x2_tu(vint16mf2x2_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i16mf2x2_tu(vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei8_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei8_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i16m1x2_tu(vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei8_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei8_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i16m2x2_tu(vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei8_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei8_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i16m4x2_tu(vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei8_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei8_v_i32mf2x2_tu(vint32mf2x2_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i32mf2x2_tu(vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei8_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei8_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i32m1x2_tu(vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei8_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei8_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i32m2x2_tu(vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei8_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei8_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i32m4x2_tu(vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei8_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei8_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { 
return __riscv_vloxseg2ei8_v_i64m1x2_tu(vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei8_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei8_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i64m2x2_tu(vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei8_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei8_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i64m4x2_tu(vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei8_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei8_v_u8mf8x2_tu(vuint8mf8x2_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8mf8x2_tu(vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei8_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei8_v_u8mf4x2_tu(vuint8mf4x2_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8mf4x2_tu(vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei8_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei8_v_u8mf2x2_tu(vuint8mf2x2_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8mf2x2_tu(vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei8_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei8_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u8m1x2_tu(vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei8_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x2_t test_vloxseg2ei8_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u8m2x2_tu(vd, rs1, rs2, vl); } -vuint8m4x2_t test_vloxseg2ei8_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4x2_t test_vloxseg2ei8_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1, + vuint8m4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u8m4x2_tu(vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei8_v_u16mf4x2_tu(vuint16mf4x2_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei8_v_u16mf4x2_tu(vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16mf4x2_tu(vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei8_v_u16mf2x2_tu(vuint16mf2x2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei8_v_u16mf2x2_tu(vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16mf2x2_tu(vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei8_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei8_v_u16m1x2_tu(vuint16m1x2_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16m1x2_tu(vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei8_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei8_v_u16m2x2_tu(vuint16m2x2_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u16m2x2_tu(vd, rs1, rs2, vl); } -vuint16m4x2_t 
test_vloxseg2ei8_v_u16m4x2_tu(vuint16m4x2_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4x2_t test_vloxseg2ei8_v_u16m4x2_tu(vuint16m4x2_t vd, + const uint16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u16m4x2_tu(vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei8_v_u32mf2x2_tu(vuint32mf2x2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei8_v_u32mf2x2_tu(vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u32mf2x2_tu(vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei8_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei8_v_u32m1x2_tu(vuint32m1x2_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u32m1x2_tu(vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei8_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei8_v_u32m2x2_tu(vuint32m2x2_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u32m2x2_tu(vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei8_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei8_v_u32m4x2_tu(vuint32m4x2_t vd, + const uint32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u32m4x2_tu(vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei8_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei8_v_u64m1x2_tu(vuint64m1x2_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u64m1x2_tu(vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei8_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei8_v_u64m2x2_tu(vuint64m2x2_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u64m2x2_tu(vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei8_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei8_v_u64m4x2_tu(vuint64m4x2_t vd, + const uint64_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u64m4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei8_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei8_v_f16mf4x2_tum(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei8_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei8_v_f16mf2x2_tum(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei8_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei8_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei8_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei8_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, 
+ const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vloxseg2ei8_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) { +vfloat16m4x2_t test_vloxseg2ei8_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei8_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei8_v_f32mf2x2_tum(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei8_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei8_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f32m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei8_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei8_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f32m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei8_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei8_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f32m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei8_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei8_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f64m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei8_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei8_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f64m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei8_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei8_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f64m4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei8_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei8_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei8_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei8_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei8_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x2_t 
test_vloxseg2ei8_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei8_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei8_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8m1x2_tum(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vloxseg2ei8_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x2_t test_vloxseg2ei8_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8m2x2_tum(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vloxseg2ei8_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) { +vint8m4x2_t test_vloxseg2ei8_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8m4x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei8_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei8_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei8_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei8_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei8_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei8_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i16m1x2_tum(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei8_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei8_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i16m2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei8_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei8_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i16m4x2_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei8_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei8_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei8_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei8_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i32m1x2_tum(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei8_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x2_t 
test_vloxseg2ei8_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i32m2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei8_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei8_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i32m4x2_tum(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei8_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei8_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i64m1x2_tum(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei8_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei8_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i64m2x2_tum(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei8_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei8_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i64m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei8_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei8_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei8_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei8_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei8_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei8_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei8_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei8_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei8_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x2_t test_vloxseg2ei8_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vloxseg2ei8_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4x2_t test_vloxseg2ei8_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei8_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x2_t 
test_vloxseg2ei8_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei8_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei8_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei8_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei8_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei8_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei8_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vloxseg2ei8_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4x2_t test_vloxseg2ei8_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei8_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei8_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei8_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x2_t test_vloxseg2ei8_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u32m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei8_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei8_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u32m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei8_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei8_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u32m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei8_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei8_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u64m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei8_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei8_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u64m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei8_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, const 
uint64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei8_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u64m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei8_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei8_v_f16mf4x2_tumu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei8_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei8_v_f16mf2x2_tumu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei8_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei8_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei8_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei8_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vloxseg2ei8_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) { +vfloat16m4x2_t test_vloxseg2ei8_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei8_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei8_v_f32mf2x2_tumu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei8_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei8_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei8_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei8_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei8_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei8_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei8_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei8_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return 
__riscv_vloxseg2ei8_v_f64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei8_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei8_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei8_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei8_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei8_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei8_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei8_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei8_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei8_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei8_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei8_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei8_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vloxseg2ei8_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x2_t test_vloxseg2ei8_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vloxseg2ei8_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) { +vint8m4x2_t test_vloxseg2ei8_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei8_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei8_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei8_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei8_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei8_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei8_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { 
return __riscv_vloxseg2ei8_v_i16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei8_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei8_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei8_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei8_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei8_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei8_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei8_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei8_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei8_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei8_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei8_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei8_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei8_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x2_t test_vloxseg2ei8_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei8_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei8_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei8_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei8_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei8_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei8_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei8_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei8_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + 
vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei8_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei8_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei8_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei8_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei8_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x2_t test_vloxseg2ei8_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vloxseg2ei8_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4x2_t test_vloxseg2ei8_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei8_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei8_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei8_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei8_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei8_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x2_t test_vloxseg2ei8_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vloxseg2ei8_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x2_t test_vloxseg2ei8_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vloxseg2ei8_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4x2_t test_vloxseg2ei8_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vloxseg2ei8_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x2_t test_vloxseg2ei8_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vloxseg2ei8_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x2_t 
test_vloxseg2ei8_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vloxseg2ei8_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x2_t test_vloxseg2ei8_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vloxseg2ei8_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint32m4x2_t test_vloxseg2ei8_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vloxseg2ei8_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x2_t test_vloxseg2ei8_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vloxseg2ei8_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x2_t test_vloxseg2ei8_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vloxseg2ei8_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint64m4x2_t test_vloxseg2ei8_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vloxseg2ei8_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x2_t test_vloxseg2ei8_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vloxseg2ei8_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x2_t test_vloxseg2ei8_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vloxseg2ei8_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x2_t test_vloxseg2ei8_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vloxseg2ei8_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x2_t test_vloxseg2ei8_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vloxseg2ei8_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) { +vfloat16m4x2_t test_vloxseg2ei8_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f16m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vloxseg2ei8_v_f32mf2x2_mu(vbool64_t vm, 
vfloat32mf2x2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x2_t test_vloxseg2ei8_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_f32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vloxseg2ei8_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x2_t test_vloxseg2ei8_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f32m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vloxseg2ei8_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x2_t test_vloxseg2ei8_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f32m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vloxseg2ei8_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) { +vfloat32m4x2_t test_vloxseg2ei8_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f32m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vloxseg2ei8_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x2_t test_vloxseg2ei8_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f64m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vloxseg2ei8_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x2_t test_vloxseg2ei8_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f64m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vloxseg2ei8_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat64m4x2_t test_vloxseg2ei8_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_f64m4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vloxseg2ei8_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x2_t test_vloxseg2ei8_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vloxseg2ei8_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x2_t test_vloxseg2ei8_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vloxseg2ei8_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x2_t test_vloxseg2ei8_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vloxseg2ei8_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x2_t test_vloxseg2ei8_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8m1x2_mu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vloxseg2ei8_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, 
vuint8m2_t rs2, size_t vl) { +vint8m2x2_t test_vloxseg2ei8_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8m2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vloxseg2ei8_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) { +vint8m4x2_t test_vloxseg2ei8_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i8m4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vloxseg2ei8_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x2_t test_vloxseg2ei8_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vloxseg2ei8_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x2_t test_vloxseg2ei8_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vloxseg2ei8_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x2_t test_vloxseg2ei8_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i16m1x2_mu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vloxseg2ei8_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x2_t test_vloxseg2ei8_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i16m2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vloxseg2ei8_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4x2_t test_vloxseg2ei8_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i16m4x2_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vloxseg2ei8_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x2_t test_vloxseg2ei8_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_i32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vloxseg2ei8_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x2_t test_vloxseg2ei8_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i32m1x2_mu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vloxseg2ei8_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x2_t test_vloxseg2ei8_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i32m2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vloxseg2ei8_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4x2_t test_vloxseg2ei8_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i32m4x2_mu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vloxseg2ei8_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x2_t 
test_vloxseg2ei8_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i64m1x2_mu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vloxseg2ei8_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x2_t test_vloxseg2ei8_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i64m2x2_mu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vloxseg2ei8_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4x2_t test_vloxseg2ei8_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_i64m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vloxseg2ei8_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x2_t test_vloxseg2ei8_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vloxseg2ei8_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x2_t test_vloxseg2ei8_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vloxseg2ei8_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x2_t test_vloxseg2ei8_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vloxseg2ei8_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x2_t test_vloxseg2ei8_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vloxseg2ei8_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x2_t test_vloxseg2ei8_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vloxseg2ei8_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4x2_t test_vloxseg2ei8_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vloxseg2ei8_v_u8m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vloxseg2ei8_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x2_t test_vloxseg2ei8_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vloxseg2ei8_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x2_t test_vloxseg2ei8_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei8_v_u16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vloxseg2ei8_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x2_t 
test_vloxseg2ei8_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd,
+                                            const uint16_t *rs1,
+                                            vuint8mf2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei8_v_u16m1x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m2x2_t test_vloxseg2ei8_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint16m2x2_t test_vloxseg2ei8_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd,
+                                            const uint16_t *rs1, vuint8m1_t rs2,
+                                            size_t vl) {
   return __riscv_vloxseg2ei8_v_u16m2x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m4x2_t test_vloxseg2ei8_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) {
+vuint16m4x2_t test_vloxseg2ei8_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd,
+                                            const uint16_t *rs1, vuint8m2_t rs2,
+                                            size_t vl) {
   return __riscv_vloxseg2ei8_v_u16m4x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32mf2x2_t test_vloxseg2ei8_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x2_t test_vloxseg2ei8_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd,
+                                              const uint32_t *rs1,
+                                              vuint8mf8_t rs2, size_t vl) {
   return __riscv_vloxseg2ei8_v_u32mf2x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m1x2_t test_vloxseg2ei8_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x2_t test_vloxseg2ei8_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd,
+                                            const uint32_t *rs1,
+                                            vuint8mf4_t rs2, size_t vl) {
   return __riscv_vloxseg2ei8_v_u32m1x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m2x2_t test_vloxseg2ei8_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint32m2x2_t test_vloxseg2ei8_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd,
+                                            const uint32_t *rs1,
+                                            vuint8mf2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei8_v_u32m2x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m4x2_t test_vloxseg2ei8_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint32m4x2_t test_vloxseg2ei8_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd,
+                                            const uint32_t *rs1, vuint8m1_t rs2,
+                                            size_t vl) {
   return __riscv_vloxseg2ei8_v_u32m4x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x2_t test_vloxseg2ei8_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x2_t test_vloxseg2ei8_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf8_t rs2, size_t vl) {
   return __riscv_vloxseg2ei8_v_u64m1x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m2x2_t test_vloxseg2ei8_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint64m2x2_t test_vloxseg2ei8_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf4_t rs2, size_t vl) {
   return __riscv_vloxseg2ei8_v_u64m2x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m4x2_t test_vloxseg2ei8_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint64m4x2_t test_vloxseg2ei8_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf2_t rs2, size_t vl) {
   return __riscv_vloxseg2ei8_v_u64m4x2_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei16.c
index 737129ae5..65a3e75b9 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei16.c
@@ -6,594 +6,889 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4x3_t test_vloxseg3ei16_v_f16mf4x3_tu(vfloat16mf4x3_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x3_t test_vloxseg3ei16_v_f16mf4x3_tu(vfloat16mf4x3_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_f16mf4x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x3_t test_vloxseg3ei16_v_f16mf2x3_tu(vfloat16mf2x3_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x3_t test_vloxseg3ei16_v_f16mf2x3_tu(vfloat16mf2x3_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_f16mf2x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1x3_t test_vloxseg3ei16_v_f16m1x3_tu(vfloat16m1x3_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x3_t test_vloxseg3ei16_v_f16m1x3_tu(vfloat16m1x3_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_f16m1x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m2x3_t test_vloxseg3ei16_v_f16m2x3_tu(vfloat16m2x3_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) {
+vfloat16m2x3_t test_vloxseg3ei16_v_f16m2x3_tu(vfloat16m2x3_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m2_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_f16m2x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x3_t test_vloxseg3ei16_v_f32mf2x3_tu(vfloat32mf2x3_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x3_t test_vloxseg3ei16_v_f32mf2x3_tu(vfloat32mf2x3_t vd,
+                                                const float *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_f32mf2x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m1x3_t test_vloxseg3ei16_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x3_t test_vloxseg3ei16_v_f32m1x3_tu(vfloat32m1x3_t vd,
+                                              const float *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_f32m1x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m2x3_t test_vloxseg3ei16_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat32m2x3_t test_vloxseg3ei16_v_f32m2x3_tu(vfloat32m2x3_t vd,
+                                              const float *rs1, vuint16m1_t rs2,
+                                              size_t vl) {
   return __riscv_vloxseg3ei16_v_f32m2x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m1x3_t test_vloxseg3ei16_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x3_t test_vloxseg3ei16_v_f64m1x3_tu(vfloat64m1x3_t vd,
+                                              const double *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_f64m1x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m2x3_t test_vloxseg3ei16_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat64m2x3_t test_vloxseg3ei16_v_f64m2x3_tu(vfloat64m2x3_t vd,
+                                              const double *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_f64m2x3_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf8x3_t test_vloxseg3ei16_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x3_t test_vloxseg3ei16_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1,
+                                            vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_i8mf8x3_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf4x3_t test_vloxseg3ei16_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x3_t test_vloxseg3ei16_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1,
+                                            vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_i8mf4x3_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf2x3_t test_vloxseg3ei16_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2x3_t test_vloxseg3ei16_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1,
+                                            vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_i8mf2x3_tu(vd, rs1, rs2, vl);
 }
 
-vint8m1x3_t test_vloxseg3ei16_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl)
{ +vint8m1x3_t test_vloxseg3ei16_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i8m1x3_tu(vd, rs1, rs2, vl); } -vint8m2x3_t test_vloxseg3ei16_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x3_t test_vloxseg3ei16_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i8m2x3_tu(vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei16_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei16_v_i16mf4x3_tu(vint16mf4x3_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16mf4x3_tu(vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei16_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei16_v_i16mf2x3_tu(vint16mf2x3_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16mf2x3_tu(vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei16_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei16_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16m1x3_tu(vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei16_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei16_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16m2x3_tu(vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei16_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei16_v_i32mf2x3_tu(vint32mf2x3_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i32mf2x3_tu(vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei16_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei16_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i32m1x3_tu(vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei16_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei16_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i32m2x3_tu(vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei16_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei16_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i64m1x3_tu(vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei16_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei16_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i64m2x3_tu(vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei16_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei16_v_u8mf8x3_tu(vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8mf8x3_tu(vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei16_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei16_v_u8mf4x3_tu(vuint8mf4x3_t 
vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8mf4x3_tu(vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei16_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei16_v_u8mf2x3_tu(vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8mf2x3_tu(vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei16_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei16_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8m1x3_tu(vd, rs1, rs2, vl); } -vuint8m2x3_t test_vloxseg3ei16_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x3_t test_vloxseg3ei16_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8m2x3_tu(vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei16_v_u16mf4x3_tu(vuint16mf4x3_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei16_v_u16mf4x3_tu(vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16mf4x3_tu(vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei16_v_u16mf2x3_tu(vuint16mf2x3_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei16_v_u16mf2x3_tu(vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16mf2x3_tu(vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei16_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei16_v_u16m1x3_tu(vuint16m1x3_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16m1x3_tu(vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei16_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei16_v_u16m2x3_tu(vuint16m2x3_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16m2x3_tu(vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei16_v_u32mf2x3_tu(vuint32mf2x3_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei16_v_u32mf2x3_tu(vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u32mf2x3_tu(vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei16_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei16_v_u32m1x3_tu(vuint32m1x3_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u32m1x3_tu(vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei16_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei16_v_u32m2x3_tu(vuint32m2x3_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u32m2x3_tu(vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei16_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei16_v_u64m1x3_tu(vuint64m1x3_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u64m1x3_tu(vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei16_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x3_t 
test_vloxseg3ei16_v_u64m2x3_tu(vuint64m2x3_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u64m2x3_tu(vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vloxseg3ei16_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei16_v_f16mf4x3_tum(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei16_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei16_v_f16mf2x3_tum(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei16_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei16_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei16_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei16_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei16_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei16_v_f32mf2x3_tum(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei16_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei16_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f32m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei16_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei16_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f32m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei16_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei16_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f64m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei16_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei16_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f64m2x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei16_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei16_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t 
test_vloxseg3ei16_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei16_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei16_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei16_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei16_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei16_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i8m1x3_tum(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vloxseg3ei16_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x3_t test_vloxseg3ei16_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i8m2x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei16_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei16_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei16_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei16_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei16_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei16_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16m1x3_tum(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei16_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei16_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16m2x3_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei16_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei16_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei16_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei16_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i32m1x3_tum(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei16_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei16_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i32m2x3_tum(vm, vd, rs1, 
rs2, vl); } -vint64m1x3_t test_vloxseg3ei16_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei16_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i64m1x3_tum(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei16_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei16_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i64m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei16_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei16_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei16_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei16_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei16_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei16_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei16_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei16_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_u8m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vloxseg3ei16_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x3_t test_vloxseg3ei16_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_u8m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei16_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei16_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei16_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei16_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei16_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei16_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei16_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei16_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint16m2_t 
rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei16_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei16_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei16_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei16_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u32m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei16_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei16_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u32m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei16_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei16_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u64m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei16_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei16_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u64m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vloxseg3ei16_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei16_v_f16mf4x3_tumu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei16_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei16_v_f16mf2x3_tumu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei16_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei16_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei16_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei16_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei16_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei16_v_f32mf2x3_tumu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei16_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, 
const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei16_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei16_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei16_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei16_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei16_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei16_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei16_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei16_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei16_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei16_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei16_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei16_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei16_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei16_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei16_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vloxseg3ei16_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x3_t test_vloxseg3ei16_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei16_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei16_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei16_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei16_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16mf2x3_tumu(vm, vd, rs1, rs2, vl); } 
-vint16m1x3_t test_vloxseg3ei16_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint16m1x3_t test_vloxseg3ei16_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd,
+                                              const int16_t *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_i16m1x3_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint16m2x3_t test_vloxseg3ei16_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint16m2x3_t test_vloxseg3ei16_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd,
+                                              const int16_t *rs1,
+                                              vuint16m2_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_i16m2x3_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint32mf2x3_t test_vloxseg3ei16_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint32mf2x3_t test_vloxseg3ei16_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd,
+                                                const int32_t *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_i32mf2x3_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint32m1x3_t test_vloxseg3ei16_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint32m1x3_t test_vloxseg3ei16_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd,
+                                              const int32_t *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_i32m1x3_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint32m2x3_t test_vloxseg3ei16_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint32m2x3_t test_vloxseg3ei16_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd,
+                                              const int32_t *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_i32m2x3_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint64m1x3_t test_vloxseg3ei16_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint64m1x3_t test_vloxseg3ei16_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd,
+                                              const int64_t *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_i64m1x3_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint64m2x3_t test_vloxseg3ei16_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint64m2x3_t test_vloxseg3ei16_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd,
+                                              const int64_t *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_i64m2x3_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf8x3_t test_vloxseg3ei16_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint8mf8x3_t test_vloxseg3ei16_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd,
+                                               const uint8_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_u8mf8x3_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf4x3_t test_vloxseg3ei16_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint8mf4x3_t test_vloxseg3ei16_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd,
+                                               const uint8_t *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_u8mf4x3_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf2x3_t test_vloxseg3ei16_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2x3_t test_vloxseg3ei16_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd,
+                                               const uint8_t *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg3ei16_v_u8mf2x3_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8m1x3_t test_vloxseg3ei16_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1x3_t test_vloxseg3ei16_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd,
+                                             const uint8_t *rs1,
+                                             vuint16m2_t rs2,
size_t vl) { return __riscv_vloxseg3ei16_v_u8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vloxseg3ei16_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x3_t test_vloxseg3ei16_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei16_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei16_v_u16mf4x3_tumu(vbool64_t vm, + vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei16_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei16_v_u16mf2x3_tumu(vbool32_t vm, + vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei16_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei16_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei16_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei16_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei16_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei16_v_u32mf2x3_tumu(vbool64_t vm, + vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei16_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei16_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei16_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei16_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei16_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei16_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei16_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei16_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vloxseg3ei16_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, 
vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei16_v_f16mf4x3_mu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei16_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei16_v_f16mf2x3_mu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei16_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei16_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei16_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei16_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f16m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei16_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei16_v_f32mf2x3_mu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei16_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei16_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f32m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei16_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei16_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_f32m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei16_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei16_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f64m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei16_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei16_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_f64m2x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei16_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei16_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei16_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei16_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t 
test_vloxseg3ei16_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei16_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei16_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei16_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i8m1x3_mu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vloxseg3ei16_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x3_t test_vloxseg3ei16_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i8m2x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei16_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei16_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei16_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei16_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei16_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei16_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i16m1x3_mu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei16_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei16_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i16m2x3_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei16_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei16_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei16_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei16_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i32m1x3_mu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei16_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei16_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_i32m2x3_mu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei16_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei16_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i64m1x3_mu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t 
test_vloxseg3ei16_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei16_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_i64m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei16_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei16_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei16_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei16_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei16_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei16_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei16_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei16_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_u8m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vloxseg3ei16_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x3_t test_vloxseg3ei16_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_u8m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei16_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei16_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei16_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei16_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei16_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei16_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei16_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei16_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u16m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei16_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei16_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return 
__riscv_vloxseg3ei16_v_u32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei16_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei16_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u32m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei16_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei16_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u32m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei16_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei16_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u64m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei16_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei16_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_u64m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei32.c index 9d4cf8db8..a5a89b2a4 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei32.c @@ -6,594 +6,889 @@ #include -vfloat16mf4x3_t test_vloxseg3ei32_v_f16mf4x3_tu(vfloat16mf4x3_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei32_v_f16mf4x3_tu(vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16mf4x3_tu(vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei32_v_f16mf2x3_tu(vfloat16mf2x3_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei32_v_f16mf2x3_tu(vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16mf2x3_tu(vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei32_v_f16m1x3_tu(vfloat16m1x3_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei32_v_f16m1x3_tu(vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16m1x3_tu(vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei32_v_f16m2x3_tu(vfloat16m2x3_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei32_v_f16m2x3_tu(vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16m2x3_tu(vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei32_v_f32mf2x3_tu(vfloat32mf2x3_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei32_v_f32mf2x3_tu(vfloat32mf2x3_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f32mf2x3_tu(vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei32_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei32_v_f32m1x3_tu(vfloat32m1x3_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_f32m1x3_tu(vd, rs1, rs2, vl); } -vfloat32m2x3_t 
test_vloxseg3ei32_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei32_v_f32m2x3_tu(vfloat32m2x3_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_f32m2x3_tu(vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei32_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei32_v_f64m1x3_tu(vfloat64m1x3_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f64m1x3_tu(vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei32_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei32_v_f64m2x3_tu(vfloat64m2x3_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f64m2x3_tu(vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei32_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei32_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i8mf8x3_tu(vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei32_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei32_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i8mf4x3_tu(vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei32_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei32_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i8mf2x3_tu(vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei32_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei32_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i8m1x3_tu(vd, rs1, rs2, vl); } -vint8m2x3_t test_vloxseg3ei32_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x3_t test_vloxseg3ei32_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i8m2x3_tu(vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei32_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei32_v_i16mf4x3_tu(vint16mf4x3_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16mf4x3_tu(vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei32_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei32_v_i16mf2x3_tu(vint16mf2x3_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16mf2x3_tu(vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei32_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei32_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16m1x3_tu(vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei32_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei32_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16m2x3_tu(vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei32_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1, 
vuint32mf2_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei32_v_i32mf2x3_tu(vint32mf2x3_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i32mf2x3_tu(vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei32_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei32_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i32m1x3_tu(vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei32_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei32_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i32m2x3_tu(vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei32_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei32_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i64m1x3_tu(vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei32_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei32_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i64m2x3_tu(vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei32_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei32_v_u8mf8x3_tu(vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf8x3_tu(vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei32_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei32_v_u8mf4x3_tu(vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf4x3_tu(vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei32_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei32_v_u8mf2x3_tu(vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf2x3_tu(vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei32_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei32_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8m1x3_tu(vd, rs1, rs2, vl); } -vuint8m2x3_t test_vloxseg3ei32_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x3_t test_vloxseg3ei32_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8m2x3_tu(vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei32_v_u16mf4x3_tu(vuint16mf4x3_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei32_v_u16mf4x3_tu(vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16mf4x3_tu(vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei32_v_u16mf2x3_tu(vuint16mf2x3_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei32_v_u16mf2x3_tu(vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16mf2x3_tu(vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei32_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { 
+vuint16m1x3_t test_vloxseg3ei32_v_u16m1x3_tu(vuint16m1x3_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16m1x3_tu(vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei32_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei32_v_u16m2x3_tu(vuint16m2x3_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16m2x3_tu(vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei32_v_u32mf2x3_tu(vuint32mf2x3_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei32_v_u32mf2x3_tu(vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32mf2x3_tu(vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei32_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei32_v_u32m1x3_tu(vuint32m1x3_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32m1x3_tu(vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei32_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei32_v_u32m2x3_tu(vuint32m2x3_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32m2x3_tu(vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei32_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei32_v_u64m1x3_tu(vuint64m1x3_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u64m1x3_tu(vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei32_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei32_v_u64m2x3_tu(vuint64m2x3_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u64m2x3_tu(vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vloxseg3ei32_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei32_v_f16mf4x3_tum(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei32_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei32_v_f16mf2x3_tum(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei32_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei32_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei32_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei32_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei32_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei32_v_f32mf2x3_tum(vbool64_t vm, + vfloat32mf2x3_t vd, + const 
float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei32_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei32_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f32m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei32_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei32_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f32m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei32_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei32_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f64m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei32_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei32_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f64m2x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei32_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei32_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei32_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei32_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei32_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei32_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei32_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei32_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i8m1x3_tum(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vloxseg3ei32_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x3_t test_vloxseg3ei32_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i8m2x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei32_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei32_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei32_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x3_t 
test_vloxseg3ei32_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei32_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei32_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16m1x3_tum(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei32_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei32_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16m2x3_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei32_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei32_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei32_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei32_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i32m1x3_tum(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei32_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei32_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i32m2x3_tum(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei32_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei32_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i64m1x3_tum(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei32_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei32_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i64m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei32_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei32_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei32_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei32_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei32_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei32_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei32_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, 
vuint32m4_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei32_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_u8m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vloxseg3ei32_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x3_t test_vloxseg3ei32_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_u8m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei32_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei32_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei32_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei32_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei32_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei32_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei32_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei32_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei32_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei32_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei32_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei32_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei32_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei32_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei32_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei32_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u64m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei32_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei32_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u64m2x3_tum(vm, vd, rs1, rs2, vl); } 
-vfloat16mf4x3_t test_vloxseg3ei32_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei32_v_f16mf4x3_tumu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei32_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei32_v_f16mf2x3_tumu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei32_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei32_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei32_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei32_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei32_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei32_v_f32mf2x3_tumu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei32_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei32_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei32_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei32_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei32_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei32_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei32_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei32_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei32_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei32_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei32_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x3_t 
test_vloxseg3ei32_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei32_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei32_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei32_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei32_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vloxseg3ei32_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x3_t test_vloxseg3ei32_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei32_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei32_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei32_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei32_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei32_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei32_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei32_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei32_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei32_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei32_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei32_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei32_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei32_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei32_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei32_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, const 
int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei32_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei32_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei32_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei32_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei32_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei32_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei32_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei32_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei32_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei32_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei32_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vloxseg3ei32_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x3_t test_vloxseg3ei32_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei32_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei32_v_u16mf4x3_tumu(vbool64_t vm, + vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei32_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei32_v_u16mf2x3_tumu(vbool32_t vm, + vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei32_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei32_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei32_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei32_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16m2x3_tumu(vm, vd, 
rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei32_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei32_v_u32mf2x3_tumu(vbool64_t vm, + vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei32_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei32_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei32_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei32_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei32_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei32_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei32_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei32_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vloxseg3ei32_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei32_v_f16mf4x3_mu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei32_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei32_v_f16mf2x3_mu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei32_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei32_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei32_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei32_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f16m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei32_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei32_v_f32mf2x3_mu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei32_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x3_t 
test_vloxseg3ei32_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_f32m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei32_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei32_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_f32m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei32_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei32_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f64m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei32_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei32_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_f64m2x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei32_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei32_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei32_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei32_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei32_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei32_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei32_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei32_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i8m1x3_mu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vloxseg3ei32_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x3_t test_vloxseg3ei32_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i8m2x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei32_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei32_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei32_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei32_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei32_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x3_t 
test_vloxseg3ei32_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i16m1x3_mu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei32_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei32_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i16m2x3_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei32_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei32_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei32_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei32_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i32m1x3_mu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei32_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei32_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i32m2x3_mu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei32_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei32_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_i64m1x3_mu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei32_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei32_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_i64m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei32_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei32_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei32_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei32_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei32_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei32_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei32_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei32_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_u8m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vloxseg3ei32_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { 
+vuint8m2x3_t test_vloxseg3ei32_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg3ei32_v_u8m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei32_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei32_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei32_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei32_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei32_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei32_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei32_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei32_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u16m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei32_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei32_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei32_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei32_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei32_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei32_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u32m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei32_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei32_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u64m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei32_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei32_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg3ei32_v_u64m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei64.c index 5bcadc789..914d57f2f 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei64.c @@ -6,562 +6,843 @@ #include -vfloat16mf4x3_t 
test_vloxseg3ei64_v_f16mf4x3_tu(vfloat16mf4x3_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei64_v_f16mf4x3_tu(vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16mf4x3_tu(vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei64_v_f16mf2x3_tu(vfloat16mf2x3_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei64_v_f16mf2x3_tu(vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16mf2x3_tu(vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei64_v_f16m1x3_tu(vfloat16m1x3_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei64_v_f16m1x3_tu(vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16m1x3_tu(vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei64_v_f16m2x3_tu(vfloat16m2x3_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei64_v_f16m2x3_tu(vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16m2x3_tu(vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei64_v_f32mf2x3_tu(vfloat32mf2x3_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei64_v_f32mf2x3_tu(vfloat32mf2x3_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f32mf2x3_tu(vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei64_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei64_v_f32m1x3_tu(vfloat32m1x3_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_f32m1x3_tu(vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei64_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei64_v_f32m2x3_tu(vfloat32m2x3_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_f32m2x3_tu(vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei64_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei64_v_f64m1x3_tu(vfloat64m1x3_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f64m1x3_tu(vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei64_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei64_v_f64m2x3_tu(vfloat64m2x3_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f64m2x3_tu(vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei64_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei64_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i8mf8x3_tu(vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei64_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei64_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i8mf4x3_tu(vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei64_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei64_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i8mf2x3_tu(vd, rs1, rs2, vl); } -vint8m1x3_t 
test_vloxseg3ei64_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei64_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i8m1x3_tu(vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei64_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei64_v_i16mf4x3_tu(vint16mf4x3_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16mf4x3_tu(vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei64_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei64_v_i16mf2x3_tu(vint16mf2x3_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16mf2x3_tu(vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei64_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei64_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16m1x3_tu(vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei64_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei64_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16m2x3_tu(vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei64_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei64_v_i32mf2x3_tu(vint32mf2x3_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i32mf2x3_tu(vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei64_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei64_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i32m1x3_tu(vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei64_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei64_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i32m2x3_tu(vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei64_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei64_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i64m1x3_tu(vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei64_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei64_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i64m2x3_tu(vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei64_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei64_v_u8mf8x3_tu(vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf8x3_tu(vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei64_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei64_v_u8mf4x3_tu(vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf4x3_tu(vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei64_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t 
*rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei64_v_u8mf2x3_tu(vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf2x3_tu(vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei64_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei64_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8m1x3_tu(vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei64_v_u16mf4x3_tu(vuint16mf4x3_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei64_v_u16mf4x3_tu(vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16mf4x3_tu(vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei64_v_u16mf2x3_tu(vuint16mf2x3_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei64_v_u16mf2x3_tu(vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16mf2x3_tu(vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei64_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei64_v_u16m1x3_tu(vuint16m1x3_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16m1x3_tu(vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei64_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei64_v_u16m2x3_tu(vuint16m2x3_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16m2x3_tu(vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei64_v_u32mf2x3_tu(vuint32mf2x3_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei64_v_u32mf2x3_tu(vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32mf2x3_tu(vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei64_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei64_v_u32m1x3_tu(vuint32m1x3_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32m1x3_tu(vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei64_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei64_v_u32m2x3_tu(vuint32m2x3_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32m2x3_tu(vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei64_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei64_v_u64m1x3_tu(vuint64m1x3_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u64m1x3_tu(vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei64_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei64_v_u64m2x3_tu(vuint64m2x3_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u64m2x3_tu(vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vloxseg3ei64_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei64_v_f16mf4x3_tum(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t 
test_vloxseg3ei64_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei64_v_f16mf2x3_tum(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei64_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei64_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei64_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei64_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei64_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei64_v_f32mf2x3_tum(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei64_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei64_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f32m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei64_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei64_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f32m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei64_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei64_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f64m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei64_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei64_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f64m2x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei64_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei64_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei64_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei64_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei64_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei64_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { 
return __riscv_vloxseg3ei64_v_i8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei64_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei64_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i8m1x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei64_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei64_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei64_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei64_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei64_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei64_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16m1x3_tum(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei64_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei64_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16m2x3_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei64_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei64_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei64_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei64_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i32m1x3_tum(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei64_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei64_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i32m2x3_tum(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei64_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei64_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i64m1x3_tum(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei64_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei64_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i64m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei64_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei64_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, 
+ vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei64_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei64_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei64_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei64_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei64_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei64_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_u8m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei64_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei64_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei64_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei64_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei64_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei64_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei64_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei64_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei64_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei64_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei64_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei64_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei64_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei64_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei64_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { 
+vuint64m1x3_t test_vloxseg3ei64_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u64m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei64_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei64_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u64m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vloxseg3ei64_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei64_v_f16mf4x3_tumu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei64_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei64_v_f16mf2x3_tumu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei64_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei64_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei64_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei64_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei64_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei64_v_f32mf2x3_tumu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei64_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei64_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei64_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei64_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei64_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei64_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei64_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei64_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f64m2x3_tumu(vm, 
vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei64_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei64_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei64_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei64_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei64_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei64_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei64_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei64_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei64_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei64_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei64_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei64_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei64_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei64_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei64_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei64_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei64_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei64_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei64_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei64_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei64_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei64_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint64m4_t rs2, 
size_t vl) { return __riscv_vloxseg3ei64_v_i32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei64_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei64_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei64_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei64_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei64_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei64_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei64_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei64_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei64_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei64_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei64_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei64_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei64_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei64_v_u16mf4x3_tumu(vbool64_t vm, + vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei64_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei64_v_u16mf2x3_tumu(vbool32_t vm, + vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei64_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei64_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei64_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei64_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei64_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { 
+vuint32mf2x3_t test_vloxseg3ei64_v_u32mf2x3_tumu(vbool64_t vm, + vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei64_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei64_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei64_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei64_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei64_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei64_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei64_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei64_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vloxseg3ei64_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei64_v_f16mf4x3_mu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei64_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei64_v_f16mf2x3_mu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei64_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei64_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei64_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei64_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f16m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei64_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei64_v_f32mf2x3_mu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei64_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei64_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_f32m1x3_mu(vm, vd, rs1, rs2, vl); } 
-vfloat32m2x3_t test_vloxseg3ei64_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei64_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_f32m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei64_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei64_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f64m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei64_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei64_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_f64m2x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei64_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei64_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei64_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei64_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei64_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei64_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei64_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei64_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i8m1x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei64_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei64_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei64_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei64_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei64_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei64_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i16m1x3_mu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei64_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei64_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i16m2x3_mu(vm, vd, rs1, rs2, vl); } 
-vint32mf2x3_t test_vloxseg3ei64_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei64_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_i32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei64_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei64_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i32m1x3_mu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei64_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei64_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i32m2x3_mu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei64_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei64_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i64m1x3_mu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei64_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei64_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_i64m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei64_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei64_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei64_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei64_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei64_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei64_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei64_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei64_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg3ei64_v_u8m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei64_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei64_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei64_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei64_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return 
__riscv_vloxseg3ei64_v_u16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei64_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei64_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei64_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei64_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u16m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei64_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei64_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei64_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei64_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei64_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei64_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u32m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei64_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei64_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u64m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei64_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei64_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg3ei64_v_u64m2x3_mu(vm, vd, rs1, rs2, vl); }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei8.c
index fa35c4a65..a5cf8f583 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei8.c
@@ -6,594 +6,883 @@
 #include <riscv_vector.h>
-vfloat16mf4x3_t test_vloxseg3ei8_v_f16mf4x3_tu(vfloat16mf4x3_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei8_v_f16mf4x3_tu(vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16mf4x3_tu(vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei8_v_f16mf2x3_tu(vfloat16mf2x3_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei8_v_f16mf2x3_tu(vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16mf2x3_tu(vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei8_v_f16m1x3_tu(vfloat16m1x3_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei8_v_f16m1x3_tu(vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return
__riscv_vloxseg3ei8_v_f16m1x3_tu(vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei8_v_f16m2x3_tu(vfloat16m2x3_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei8_v_f16m2x3_tu(vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16m2x3_tu(vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei8_v_f32mf2x3_tu(vfloat32mf2x3_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei8_v_f32mf2x3_tu(vfloat32mf2x3_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f32mf2x3_tu(vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei8_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei8_v_f32m1x3_tu(vfloat32m1x3_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_f32m1x3_tu(vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei8_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei8_v_f32m2x3_tu(vfloat32m2x3_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_f32m2x3_tu(vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei8_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei8_v_f64m1x3_tu(vfloat64m1x3_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_f64m1x3_tu(vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei8_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei8_v_f64m2x3_tu(vfloat64m2x3_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_f64m2x3_tu(vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei8_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei8_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i8mf8x3_tu(vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei8_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei8_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i8mf4x3_tu(vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei8_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei8_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i8mf2x3_tu(vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei8_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei8_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i8m1x3_tu(vd, rs1, rs2, vl); } -vint8m2x3_t test_vloxseg3ei8_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x3_t test_vloxseg3ei8_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i8m2x3_tu(vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei8_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei8_v_i16mf4x3_tu(vint16mf4x3_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i16mf4x3_tu(vd, rs1, rs2, vl); } -vint16mf2x3_t 
test_vloxseg3ei8_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei8_v_i16mf2x3_tu(vint16mf2x3_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i16mf2x3_tu(vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei8_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei8_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i16m1x3_tu(vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei8_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei8_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i16m2x3_tu(vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei8_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei8_v_i32mf2x3_tu(vint32mf2x3_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i32mf2x3_tu(vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei8_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei8_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i32m1x3_tu(vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei8_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei8_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i32m2x3_tu(vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei8_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei8_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i64m1x3_tu(vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei8_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei8_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i64m2x3_tu(vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei8_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei8_v_u8mf8x3_tu(vuint8mf8x3_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u8mf8x3_tu(vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei8_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei8_v_u8mf4x3_tu(vuint8mf4x3_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u8mf4x3_tu(vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei8_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei8_v_u8mf2x3_tu(vuint8mf2x3_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u8mf2x3_tu(vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei8_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei8_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u8m1x3_tu(vd, rs1, rs2, vl); } -vuint8m2x3_t test_vloxseg3ei8_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { 
+vuint8m2x3_t test_vloxseg3ei8_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u8m2x3_tu(vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei8_v_u16mf4x3_tu(vuint16mf4x3_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei8_v_u16mf4x3_tu(vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u16mf4x3_tu(vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei8_v_u16mf2x3_tu(vuint16mf2x3_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei8_v_u16mf2x3_tu(vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u16mf2x3_tu(vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei8_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei8_v_u16m1x3_tu(vuint16m1x3_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u16m1x3_tu(vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei8_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei8_v_u16m2x3_tu(vuint16m2x3_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u16m2x3_tu(vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei8_v_u32mf2x3_tu(vuint32mf2x3_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei8_v_u32mf2x3_tu(vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u32mf2x3_tu(vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei8_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei8_v_u32m1x3_tu(vuint32m1x3_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u32m1x3_tu(vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei8_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei8_v_u32m2x3_tu(vuint32m2x3_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u32m2x3_tu(vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei8_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei8_v_u64m1x3_tu(vuint64m1x3_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u64m1x3_tu(vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei8_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei8_v_u64m2x3_tu(vuint64m2x3_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u64m2x3_tu(vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vloxseg3ei8_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei8_v_f16mf4x3_tum(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei8_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei8_v_f16mf2x3_tum(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t 
test_vloxseg3ei8_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei8_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei8_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei8_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei8_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei8_v_f32mf2x3_tum(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei8_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei8_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_f32m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei8_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei8_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_f32m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei8_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei8_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f64m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei8_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei8_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f64m2x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei8_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei8_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei8_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei8_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei8_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei8_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei8_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei8_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8m1x3_tum(vm, vd, rs1, rs2, vl); 
 }

-vint8m2x3_t test_vloxseg3ei8_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint8m2x3_t test_vloxseg3ei8_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd,
+                                          const int8_t *rs1, vuint8m2_t rs2,
+                                          size_t vl) {
   return __riscv_vloxseg3ei8_v_i8m2x3_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x3_t test_vloxseg3ei8_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x3_t test_vloxseg3ei8_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd,
+                                              const int16_t *rs1,
+                                              vuint8mf8_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_i16mf4x3_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x3_t test_vloxseg3ei8_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x3_t test_vloxseg3ei8_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd,
+                                              const int16_t *rs1,
+                                              vuint8mf4_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_i16mf2x3_tum(vm, vd, rs1, rs2, vl);
 }

-vint16m1x3_t test_vloxseg3ei8_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x3_t test_vloxseg3ei8_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd,
+                                            const int16_t *rs1, vuint8mf2_t rs2,
+                                            size_t vl) {
   return __riscv_vloxseg3ei8_v_i16m1x3_tum(vm, vd, rs1, rs2, vl);
 }

-vint16m2x3_t test_vloxseg3ei8_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint16m2x3_t test_vloxseg3ei8_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd,
+                                            const int16_t *rs1, vuint8m1_t rs2,
+                                            size_t vl) {
   return __riscv_vloxseg3ei8_v_i16m2x3_tum(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x3_t test_vloxseg3ei8_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x3_t test_vloxseg3ei8_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd,
+                                              const int32_t *rs1,
+                                              vuint8mf8_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_i32mf2x3_tum(vm, vd, rs1, rs2, vl);
 }

-vint32m1x3_t test_vloxseg3ei8_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x3_t test_vloxseg3ei8_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd,
+                                            const int32_t *rs1, vuint8mf4_t rs2,
+                                            size_t vl) {
   return __riscv_vloxseg3ei8_v_i32m1x3_tum(vm, vd, rs1, rs2, vl);
 }

-vint32m2x3_t test_vloxseg3ei8_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint32m2x3_t test_vloxseg3ei8_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd,
+                                            const int32_t *rs1, vuint8mf2_t rs2,
+                                            size_t vl) {
   return __riscv_vloxseg3ei8_v_i32m2x3_tum(vm, vd, rs1, rs2, vl);
 }

-vint64m1x3_t test_vloxseg3ei8_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x3_t test_vloxseg3ei8_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd,
+                                            const int64_t *rs1, vuint8mf8_t rs2,
+                                            size_t vl) {
   return __riscv_vloxseg3ei8_v_i64m1x3_tum(vm, vd, rs1, rs2, vl);
 }

-vint64m2x3_t test_vloxseg3ei8_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint64m2x3_t test_vloxseg3ei8_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd,
+                                            const int64_t *rs1, vuint8mf4_t rs2,
+                                            size_t vl) {
   return __riscv_vloxseg3ei8_v_i64m2x3_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x3_t test_vloxseg3ei8_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x3_t test_vloxseg3ei8_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd,
+                                             const uint8_t *rs1,
+                                             vuint8mf8_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_u8mf8x3_tum(vm, vd, rs1, rs2, vl);
 }
-vuint8mf4x3_t test_vloxseg3ei8_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x3_t test_vloxseg3ei8_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd,
+                                             const uint8_t *rs1,
+                                             vuint8mf4_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_u8mf4x3_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x3_t test_vloxseg3ei8_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x3_t test_vloxseg3ei8_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd,
+                                             const uint8_t *rs1,
+                                             vuint8mf2_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_u8mf2x3_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x3_t test_vloxseg3ei8_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x3_t test_vloxseg3ei8_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd,
+                                           const uint8_t *rs1, vuint8m1_t rs2,
+                                           size_t vl) {
   return __riscv_vloxseg3ei8_v_u8m1x3_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8m2x3_t test_vloxseg3ei8_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) {
+vuint8m2x3_t test_vloxseg3ei8_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd,
+                                           const uint8_t *rs1, vuint8m2_t rs2,
+                                           size_t vl) {
   return __riscv_vloxseg3ei8_v_u8m2x3_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x3_t test_vloxseg3ei8_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x3_t test_vloxseg3ei8_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd,
+                                               const uint16_t *rs1,
+                                               vuint8mf8_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_u16mf4x3_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x3_t test_vloxseg3ei8_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x3_t test_vloxseg3ei8_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd,
+                                               const uint16_t *rs1,
+                                               vuint8mf4_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_u16mf2x3_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x3_t test_vloxseg3ei8_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x3_t test_vloxseg3ei8_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd,
+                                             const uint16_t *rs1,
+                                             vuint8mf2_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_u16m1x3_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16m2x3_t test_vloxseg3ei8_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint16m2x3_t test_vloxseg3ei8_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd,
+                                             const uint16_t *rs1,
+                                             vuint8m1_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_u16m2x3_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x3_t test_vloxseg3ei8_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x3_t test_vloxseg3ei8_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd,
+                                               const uint32_t *rs1,
+                                               vuint8mf8_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_u32mf2x3_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x3_t test_vloxseg3ei8_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x3_t test_vloxseg3ei8_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd,
+                                             const uint32_t *rs1,
+                                             vuint8mf4_t rs2, size_t vl) {
   return __riscv_vloxseg3ei8_v_u32m1x3_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32m2x3_t test_vloxseg3ei8_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint32m2x3_t test_vloxseg3ei8_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd,
+                                             const uint32_t *rs1,
+                                             vuint8mf2_t rs2, size_t vl) {
   return
__riscv_vloxseg3ei8_v_u32m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei8_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei8_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u64m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei8_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei8_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u64m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vloxseg3ei8_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei8_v_f16mf4x3_tumu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei8_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei8_v_f16mf2x3_tumu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei8_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei8_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei8_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei8_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei8_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei8_v_f32mf2x3_tumu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei8_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei8_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei8_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei8_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei8_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei8_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei8_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x3_t 
test_vloxseg3ei8_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei8_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei8_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei8_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei8_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei8_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei8_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei8_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei8_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vloxseg3ei8_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x3_t test_vloxseg3ei8_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei8_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei8_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vloxseg3ei8_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei8_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei8_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei8_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei8_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei8_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei8_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei8_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei8_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { 
+vint32m1x3_t test_vloxseg3ei8_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei8_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei8_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei8_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei8_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei8_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei8_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei8_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei8_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei8_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei8_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei8_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei8_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vloxseg3ei8_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei8_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vloxseg3ei8_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x3_t test_vloxseg3ei8_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei8_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei8_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei8_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei8_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei8_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, 
const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei8_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei8_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei8_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei8_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei8_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei8_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei8_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei8_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei8_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei8_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei8_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei8_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei8_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vloxseg3ei8_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x3_t test_vloxseg3ei8_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vloxseg3ei8_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x3_t test_vloxseg3ei8_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vloxseg3ei8_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x3_t test_vloxseg3ei8_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vloxseg3ei8_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x3_t test_vloxseg3ei8_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f16m2x3_mu(vm, vd, rs1, 
rs2, vl); } -vfloat32mf2x3_t test_vloxseg3ei8_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x3_t test_vloxseg3ei8_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_f32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vloxseg3ei8_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x3_t test_vloxseg3ei8_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_f32m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vloxseg3ei8_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x3_t test_vloxseg3ei8_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_f32m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vloxseg3ei8_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x3_t test_vloxseg3ei8_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_f64m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vloxseg3ei8_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x3_t test_vloxseg3ei8_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_f64m2x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vloxseg3ei8_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x3_t test_vloxseg3ei8_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vloxseg3ei8_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x3_t test_vloxseg3ei8_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vloxseg3ei8_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x3_t test_vloxseg3ei8_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vloxseg3ei8_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x3_t test_vloxseg3ei8_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8m1x3_mu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vloxseg3ei8_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x3_t test_vloxseg3ei8_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i8m2x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vloxseg3ei8_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x3_t test_vloxseg3ei8_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t 
test_vloxseg3ei8_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x3_t test_vloxseg3ei8_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vloxseg3ei8_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x3_t test_vloxseg3ei8_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i16m1x3_mu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vloxseg3ei8_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x3_t test_vloxseg3ei8_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i16m2x3_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vloxseg3ei8_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x3_t test_vloxseg3ei8_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_i32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vloxseg3ei8_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x3_t test_vloxseg3ei8_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i32m1x3_mu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vloxseg3ei8_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x3_t test_vloxseg3ei8_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i32m2x3_mu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vloxseg3ei8_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x3_t test_vloxseg3ei8_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i64m1x3_mu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vloxseg3ei8_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x3_t test_vloxseg3ei8_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_i64m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vloxseg3ei8_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x3_t test_vloxseg3ei8_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vloxseg3ei8_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x3_t test_vloxseg3ei8_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vloxseg3ei8_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x3_t test_vloxseg3ei8_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t 
test_vloxseg3ei8_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x3_t test_vloxseg3ei8_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u8m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vloxseg3ei8_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x3_t test_vloxseg3ei8_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u8m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vloxseg3ei8_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x3_t test_vloxseg3ei8_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vloxseg3ei8_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x3_t test_vloxseg3ei8_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vloxseg3ei8_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x3_t test_vloxseg3ei8_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u16m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vloxseg3ei8_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x3_t test_vloxseg3ei8_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg3ei8_v_u16m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vloxseg3ei8_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x3_t test_vloxseg3ei8_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vloxseg3ei8_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x3_t test_vloxseg3ei8_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u32m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vloxseg3ei8_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x3_t test_vloxseg3ei8_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u32m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vloxseg3ei8_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x3_t test_vloxseg3ei8_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u64m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vloxseg3ei8_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x3_t test_vloxseg3ei8_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei8_v_u64m2x3_mu(vm, vd, rs1, rs2, vl); } 
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei16.c
index c5d3cb641..8ec8bd259 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei16.c
@@ -6,594 +6,889 @@
 #include <riscv_vector.h>

-vfloat16mf4x4_t test_vloxseg4ei16_v_f16mf4x4_tu(vfloat16mf4x4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x4_t test_vloxseg4ei16_v_f16mf4x4_tu(vfloat16mf4x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_f16mf4x4_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf2x4_t test_vloxseg4ei16_v_f16mf2x4_tu(vfloat16mf2x4_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x4_t test_vloxseg4ei16_v_f16mf2x4_tu(vfloat16mf2x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_f16mf2x4_tu(vd, rs1, rs2, vl);
 }

-vfloat16m1x4_t test_vloxseg4ei16_v_f16m1x4_tu(vfloat16m1x4_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x4_t test_vloxseg4ei16_v_f16m1x4_tu(vfloat16m1x4_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_f16m1x4_tu(vd, rs1, rs2, vl);
 }

-vfloat16m2x4_t test_vloxseg4ei16_v_f16m2x4_tu(vfloat16m2x4_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) {
+vfloat16m2x4_t test_vloxseg4ei16_v_f16m2x4_tu(vfloat16m2x4_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_f16m2x4_tu(vd, rs1, rs2, vl);
 }

-vfloat32mf2x4_t test_vloxseg4ei16_v_f32mf2x4_tu(vfloat32mf2x4_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x4_t test_vloxseg4ei16_v_f32mf2x4_tu(vfloat32mf2x4_t vd,
+                                                const float *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_f32mf2x4_tu(vd, rs1, rs2, vl);
 }

-vfloat32m1x4_t test_vloxseg4ei16_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x4_t test_vloxseg4ei16_v_f32m1x4_tu(vfloat32m1x4_t vd,
+                                              const float *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_f32m1x4_tu(vd, rs1, rs2, vl);
 }

-vfloat32m2x4_t test_vloxseg4ei16_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat32m2x4_t test_vloxseg4ei16_v_f32m2x4_tu(vfloat32m2x4_t vd,
+                                              const float *rs1, vuint16m1_t rs2,
+                                              size_t vl) {
   return __riscv_vloxseg4ei16_v_f32m2x4_tu(vd, rs1, rs2, vl);
 }

-vfloat64m1x4_t test_vloxseg4ei16_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x4_t test_vloxseg4ei16_v_f64m1x4_tu(vfloat64m1x4_t vd,
+                                              const double *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_f64m1x4_tu(vd, rs1, rs2, vl);
 }

-vfloat64m2x4_t test_vloxseg4ei16_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat64m2x4_t test_vloxseg4ei16_v_f64m2x4_tu(vfloat64m2x4_t vd,
+                                              const double *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_f64m2x4_tu(vd, rs1, rs2, vl);
 }

-vint8mf8x4_t test_vloxseg4ei16_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x4_t test_vloxseg4ei16_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1,
+                                            vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i8mf8x4_tu(vd, rs1, rs2, vl);
 }

-vint8mf4x4_t test_vloxseg4ei16_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x4_t test_vloxseg4ei16_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1,
+                                            vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i8mf4x4_tu(vd, rs1, rs2, vl);
 }

-vint8mf2x4_t test_vloxseg4ei16_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2x4_t test_vloxseg4ei16_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1,
+                                            vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i8mf2x4_tu(vd, rs1, rs2, vl);
 }

-vint8m1x4_t test_vloxseg4ei16_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1x4_t test_vloxseg4ei16_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1,
+                                          vuint16m2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i8m1x4_tu(vd, rs1, rs2, vl);
 }

-vint8m2x4_t test_vloxseg4ei16_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vint8m2x4_t test_vloxseg4ei16_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1,
+                                          vuint16m4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i8m2x4_tu(vd, rs1, rs2, vl);
 }

-vint16mf4x4_t test_vloxseg4ei16_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint16mf4x4_t test_vloxseg4ei16_v_i16mf4x4_tu(vint16mf4x4_t vd,
+                                              const int16_t *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i16mf4x4_tu(vd, rs1, rs2, vl);
 }

-vint16mf2x4_t test_vloxseg4ei16_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint16mf2x4_t test_vloxseg4ei16_v_i16mf2x4_tu(vint16mf2x4_t vd,
+                                              const int16_t *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i16mf2x4_tu(vd, rs1, rs2, vl);
 }

-vint16m1x4_t test_vloxseg4ei16_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint16m1x4_t test_vloxseg4ei16_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1,
+                                            vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i16m1x4_tu(vd, rs1, rs2, vl);
 }

-vint16m2x4_t test_vloxseg4ei16_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint16m2x4_t test_vloxseg4ei16_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1,
+                                            vuint16m2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i16m2x4_tu(vd, rs1, rs2, vl);
 }

-vint32mf2x4_t test_vloxseg4ei16_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint32mf2x4_t test_vloxseg4ei16_v_i32mf2x4_tu(vint32mf2x4_t vd,
+                                              const int32_t *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i32mf2x4_tu(vd, rs1, rs2, vl);
 }

-vint32m1x4_t test_vloxseg4ei16_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint32m1x4_t test_vloxseg4ei16_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1,
+                                            vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i32m1x4_tu(vd, rs1, rs2, vl);
 }

-vint32m2x4_t test_vloxseg4ei16_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint32m2x4_t test_vloxseg4ei16_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1,
+                                            vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i32m2x4_tu(vd, rs1, rs2, vl);
 }

-vint64m1x4_t test_vloxseg4ei16_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint64m1x4_t test_vloxseg4ei16_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1,
+                                            vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i64m1x4_tu(vd, rs1, rs2, vl);
 }

-vint64m2x4_t test_vloxseg4ei16_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint64m2x4_t test_vloxseg4ei16_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1,
+                                            vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_i64m2x4_tu(vd, rs1, rs2, vl);
 }

-vuint8mf8x4_t test_vloxseg4ei16_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint8mf8x4_t test_vloxseg4ei16_v_u8mf8x4_tu(vuint8mf8x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u8mf8x4_tu(vd, rs1, rs2, vl);
 }

-vuint8mf4x4_t test_vloxseg4ei16_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint8mf4x4_t test_vloxseg4ei16_v_u8mf4x4_tu(vuint8mf4x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u8mf4x4_tu(vd, rs1, rs2, vl);
 }

-vuint8mf2x4_t test_vloxseg4ei16_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2x4_t test_vloxseg4ei16_v_u8mf2x4_tu(vuint8mf2x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u8mf2x4_tu(vd, rs1, rs2, vl);
 }

-vuint8m1x4_t test_vloxseg4ei16_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1x4_t test_vloxseg4ei16_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1,
+                                           vuint16m2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u8m1x4_tu(vd, rs1, rs2, vl);
 }

-vuint8m2x4_t test_vloxseg4ei16_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vuint8m2x4_t test_vloxseg4ei16_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1,
+                                           vuint16m4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u8m2x4_tu(vd, rs1, rs2, vl);
 }

-vuint16mf4x4_t test_vloxseg4ei16_v_u16mf4x4_tu(vuint16mf4x4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4x4_t test_vloxseg4ei16_v_u16mf4x4_tu(vuint16mf4x4_t vd,
+                                               const uint16_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u16mf4x4_tu(vd, rs1, rs2, vl);
 }

-vuint16mf2x4_t test_vloxseg4ei16_v_u16mf2x4_tu(vuint16mf2x4_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2x4_t test_vloxseg4ei16_v_u16mf2x4_tu(vuint16mf2x4_t vd,
+                                               const uint16_t *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u16mf2x4_tu(vd, rs1, rs2, vl);
 }

-vuint16m1x4_t test_vloxseg4ei16_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1x4_t test_vloxseg4ei16_v_u16m1x4_tu(vuint16m1x4_t vd,
+                                             const uint16_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u16m1x4_tu(vd, rs1, rs2, vl);
 }

-vuint16m2x4_t test_vloxseg4ei16_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint16m2x4_t test_vloxseg4ei16_v_u16m2x4_tu(vuint16m2x4_t vd,
+                                             const uint16_t *rs1,
+                                             vuint16m2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u16m2x4_tu(vd, rs1, rs2, vl);
 }

-vuint32mf2x4_t test_vloxseg4ei16_v_u32mf2x4_tu(vuint32mf2x4_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2x4_t test_vloxseg4ei16_v_u32mf2x4_tu(vuint32mf2x4_t vd,
+                                               const uint32_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u32mf2x4_tu(vd, rs1, rs2, vl);
 }

-vuint32m1x4_t test_vloxseg4ei16_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1x4_t test_vloxseg4ei16_v_u32m1x4_tu(vuint32m1x4_t vd,
+                                             const uint32_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u32m1x4_tu(vd, rs1, rs2, vl);
 }

-vuint32m2x4_t test_vloxseg4ei16_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint32m2x4_t test_vloxseg4ei16_v_u32m2x4_tu(vuint32m2x4_t vd,
+                                             const uint32_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg4ei16_v_u32m2x4_tu(vd, rs1, rs2, vl);
 }
-vuint64m1x4_t test_vloxseg4ei16_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei16_v_u64m1x4_tu(vuint64m1x4_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u64m1x4_tu(vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei16_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei16_v_u64m2x4_tu(vuint64m2x4_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u64m2x4_tu(vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vloxseg4ei16_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei16_v_f16mf4x4_tum(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei16_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei16_v_f16mf2x4_tum(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei16_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei16_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei16_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei16_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei16_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei16_v_f32mf2x4_tum(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei16_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei16_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f32m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei16_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei16_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f32m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei16_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei16_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f64m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei16_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei16_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { 
return __riscv_vloxseg4ei16_v_f64m2x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei16_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei16_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei16_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei16_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei16_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei16_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei16_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei16_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i8m1x4_tum(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vloxseg4ei16_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x4_t test_vloxseg4ei16_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i8m2x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei16_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei16_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei16_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei16_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei16_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei16_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i16m1x4_tum(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei16_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei16_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i16m2x4_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei16_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei16_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei16_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei16_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, + 
vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i32m1x4_tum(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei16_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei16_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i32m2x4_tum(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei16_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei16_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i64m1x4_tum(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei16_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei16_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i64m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei16_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei16_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei16_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei16_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei16_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei16_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei16_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei16_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_u8m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vloxseg4ei16_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x4_t test_vloxseg4ei16_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_u8m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei16_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei16_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei16_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei16_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei16_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x4_t 
test_vloxseg4ei16_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei16_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei16_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei16_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei16_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei16_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei16_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u32m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei16_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei16_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u32m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei16_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei16_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u64m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei16_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei16_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u64m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vloxseg4ei16_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei16_v_f16mf4x4_tumu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei16_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei16_v_f16mf2x4_tumu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei16_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei16_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei16_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei16_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16m2x4_tumu(vm, vd, rs1, rs2, vl); 
} -vfloat32mf2x4_t test_vloxseg4ei16_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei16_v_f32mf2x4_tumu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei16_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei16_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei16_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei16_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei16_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei16_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei16_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei16_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei16_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei16_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei16_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei16_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei16_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei16_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei16_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei16_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vloxseg4ei16_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x4_t test_vloxseg4ei16_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i8m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei16_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei16_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, 
size_t vl) { return __riscv_vloxseg4ei16_v_i16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei16_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei16_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei16_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei16_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei16_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei16_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei16_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei16_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei16_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei16_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei16_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei16_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei16_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei16_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei16_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei16_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei16_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei16_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei16_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei16_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei16_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x4_t 
test_vloxseg4ei16_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei16_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei16_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vloxseg4ei16_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x4_t test_vloxseg4ei16_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u8m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei16_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei16_v_u16mf4x4_tumu(vbool64_t vm, + vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei16_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei16_v_u16mf2x4_tumu(vbool32_t vm, + vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei16_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei16_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei16_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei16_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei16_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei16_v_u32mf2x4_tumu(vbool64_t vm, + vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei16_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei16_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei16_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei16_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei16_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei16_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t 
test_vloxseg4ei16_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei16_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vloxseg4ei16_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei16_v_f16mf4x4_mu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei16_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei16_v_f16mf2x4_mu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei16_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei16_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei16_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei16_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f16m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei16_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei16_v_f32mf2x4_mu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei16_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei16_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f32m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei16_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei16_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_f32m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei16_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei16_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f64m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei16_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei16_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_f64m2x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei16_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei16_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, 
vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei16_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei16_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei16_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei16_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei16_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei16_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i8m1x4_mu(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vloxseg4ei16_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x4_t test_vloxseg4ei16_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i8m2x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei16_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei16_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei16_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei16_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei16_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei16_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i16m1x4_mu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei16_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei16_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i16m2x4_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei16_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei16_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei16_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei16_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i32m1x4_mu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei16_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei16_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, 
vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_i32m2x4_mu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei16_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei16_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i64m1x4_mu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei16_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei16_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_i64m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei16_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei16_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei16_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei16_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei16_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei16_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei16_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei16_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_u8m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vloxseg4ei16_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x4_t test_vloxseg4ei16_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_u8m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei16_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei16_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei16_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei16_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei16_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei16_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei16_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei16_v_u16m2x4_mu(vbool8_t vm, 
vuint16m2x4_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u16m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei16_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei16_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei16_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei16_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u32m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei16_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei16_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u32m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei16_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei16_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u64m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei16_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei16_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_u64m2x4_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei32.c index b8b35d830..bf1b4cc94 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei32.c @@ -6,594 +6,889 @@ #include <riscv_vector.h> -vfloat16mf4x4_t test_vloxseg4ei32_v_f16mf4x4_tu(vfloat16mf4x4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei32_v_f16mf4x4_tu(vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16mf4x4_tu(vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei32_v_f16mf2x4_tu(vfloat16mf2x4_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei32_v_f16mf2x4_tu(vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16mf2x4_tu(vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei32_v_f16m1x4_tu(vfloat16m1x4_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei32_v_f16m1x4_tu(vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16m1x4_tu(vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei32_v_f16m2x4_tu(vfloat16m2x4_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei32_v_f16m2x4_tu(vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16m2x4_tu(vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei32_v_f32mf2x4_tu(vfloat32mf2x4_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei32_v_f32mf2x4_tu(vfloat32mf2x4_t vd, + const float *rs1, + vuint32mf2_t
rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f32mf2x4_tu(vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei32_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei32_v_f32m1x4_tu(vfloat32m1x4_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_f32m1x4_tu(vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei32_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei32_v_f32m2x4_tu(vfloat32m2x4_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_f32m2x4_tu(vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei32_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei32_v_f64m1x4_tu(vfloat64m1x4_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f64m1x4_tu(vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei32_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei32_v_f64m2x4_tu(vfloat64m2x4_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f64m2x4_tu(vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei32_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei32_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i8mf8x4_tu(vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei32_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei32_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i8mf4x4_tu(vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei32_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei32_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i8mf2x4_tu(vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei32_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei32_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i8m1x4_tu(vd, rs1, rs2, vl); } -vint8m2x4_t test_vloxseg4ei32_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x4_t test_vloxseg4ei32_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i8m2x4_tu(vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei32_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei32_v_i16mf4x4_tu(vint16mf4x4_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16mf4x4_tu(vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei32_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei32_v_i16mf2x4_tu(vint16mf2x4_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16mf2x4_tu(vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei32_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei32_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16m1x4_tu(vd, rs1, 
rs2, vl); } -vint16m2x4_t test_vloxseg4ei32_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei32_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16m2x4_tu(vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei32_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei32_v_i32mf2x4_tu(vint32mf2x4_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i32mf2x4_tu(vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei32_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei32_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i32m1x4_tu(vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei32_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei32_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i32m2x4_tu(vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei32_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei32_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i64m1x4_tu(vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei32_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei32_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i64m2x4_tu(vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei32_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei32_v_u8mf8x4_tu(vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf8x4_tu(vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei32_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei32_v_u8mf4x4_tu(vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf4x4_tu(vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei32_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei32_v_u8mf2x4_tu(vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf2x4_tu(vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei32_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei32_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8m1x4_tu(vd, rs1, rs2, vl); } -vuint8m2x4_t test_vloxseg4ei32_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x4_t test_vloxseg4ei32_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8m2x4_tu(vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei32_v_u16mf4x4_tu(vuint16mf4x4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei32_v_u16mf4x4_tu(vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16mf4x4_tu(vd, rs1, rs2, vl); } -vuint16mf2x4_t 
test_vloxseg4ei32_v_u16mf2x4_tu(vuint16mf2x4_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei32_v_u16mf2x4_tu(vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16mf2x4_tu(vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei32_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei32_v_u16m1x4_tu(vuint16m1x4_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16m1x4_tu(vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei32_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei32_v_u16m2x4_tu(vuint16m2x4_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16m2x4_tu(vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei32_v_u32mf2x4_tu(vuint32mf2x4_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei32_v_u32mf2x4_tu(vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32mf2x4_tu(vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei32_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei32_v_u32m1x4_tu(vuint32m1x4_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32m1x4_tu(vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei32_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei32_v_u32m2x4_tu(vuint32m2x4_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32m2x4_tu(vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei32_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei32_v_u64m1x4_tu(vuint64m1x4_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u64m1x4_tu(vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei32_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei32_v_u64m2x4_tu(vuint64m2x4_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u64m2x4_tu(vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vloxseg4ei32_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei32_v_f16mf4x4_tum(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei32_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei32_v_f16mf2x4_tum(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei32_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei32_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei32_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x4_t 
test_vloxseg4ei32_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei32_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei32_v_f32mf2x4_tum(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei32_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei32_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f32m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei32_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei32_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f32m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei32_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei32_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f64m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei32_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei32_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f64m2x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei32_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei32_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei32_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei32_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei32_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei32_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei32_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei32_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i8m1x4_tum(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vloxseg4ei32_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x4_t test_vloxseg4ei32_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i8m2x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei32_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, 
vuint32mf2_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei32_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei32_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei32_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei32_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei32_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16m1x4_tum(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei32_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei32_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16m2x4_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei32_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei32_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei32_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei32_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i32m1x4_tum(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei32_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei32_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i32m2x4_tum(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei32_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei32_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i64m1x4_tum(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei32_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei32_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i64m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei32_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei32_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei32_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei32_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t 
test_vloxseg4ei32_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei32_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei32_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei32_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_u8m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vloxseg4ei32_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x4_t test_vloxseg4ei32_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_u8m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei32_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei32_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei32_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei32_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei32_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei32_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei32_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei32_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei32_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei32_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei32_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei32_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei32_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei32_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei32_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei32_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t 
vl) { return __riscv_vloxseg4ei32_v_u64m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei32_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei32_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u64m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vloxseg4ei32_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei32_v_f16mf4x4_tumu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei32_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei32_v_f16mf2x4_tumu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei32_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei32_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei32_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei32_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei32_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei32_v_f32mf2x4_tumu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei32_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei32_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei32_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei32_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei32_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei32_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei32_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei32_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei32_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, 
vuint32mf2_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei32_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei32_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei32_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei32_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei32_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei32_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei32_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vloxseg4ei32_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x4_t test_vloxseg4ei32_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i8m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei32_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei32_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei32_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei32_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei32_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei32_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei32_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei32_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei32_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei32_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei32_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei32_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t 
test_vloxseg4ei32_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei32_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei32_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei32_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei32_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei32_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei32_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei32_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei32_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei32_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei32_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei32_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei32_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei32_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vloxseg4ei32_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x4_t test_vloxseg4ei32_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei32_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei32_v_u16mf4x4_tumu(vbool64_t vm, + vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei32_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei32_v_u16mf2x4_tumu(vbool32_t vm, + vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei32_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei32_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint32m2_t 
rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei32_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei32_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei32_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei32_v_u32mf2x4_tumu(vbool64_t vm, + vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei32_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei32_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei32_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei32_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei32_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei32_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei32_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei32_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vloxseg4ei32_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei32_v_f16mf4x4_mu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei32_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei32_v_f16mf2x4_mu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei32_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei32_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei32_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei32_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f16m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei32_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, 
vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei32_v_f32mf2x4_mu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei32_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei32_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_f32m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei32_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei32_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_f32m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei32_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei32_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f64m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei32_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei32_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_f64m2x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei32_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei32_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei32_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei32_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei32_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei32_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei32_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei32_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i8m1x4_mu(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vloxseg4ei32_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x4_t test_vloxseg4ei32_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i8m2x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei32_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei32_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei32_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, const int16_t 
*rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei32_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei32_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei32_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i16m1x4_mu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei32_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei32_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i16m2x4_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei32_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei32_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei32_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei32_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i32m1x4_mu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei32_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei32_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i32m2x4_mu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei32_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei32_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_i64m1x4_mu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei32_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei32_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_i64m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei32_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei32_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei32_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei32_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei32_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei32_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei32_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, 
const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei32_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_u8m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vloxseg4ei32_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x4_t test_vloxseg4ei32_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vloxseg4ei32_v_u8m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei32_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei32_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei32_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei32_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei32_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei32_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei32_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei32_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u16m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei32_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei32_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei32_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei32_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei32_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei32_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u32m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei32_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei32_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u64m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei32_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei32_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg4ei32_v_u64m2x4_mu(vm, vd, rs1, rs2, vl); } diff --git 
a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei64.c index d7a56546a..323f4aed5 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei64.c @@ -6,562 +6,843 @@ #include -vfloat16mf4x4_t test_vloxseg4ei64_v_f16mf4x4_tu(vfloat16mf4x4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei64_v_f16mf4x4_tu(vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16mf4x4_tu(vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei64_v_f16mf2x4_tu(vfloat16mf2x4_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei64_v_f16mf2x4_tu(vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16mf2x4_tu(vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei64_v_f16m1x4_tu(vfloat16m1x4_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei64_v_f16m1x4_tu(vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16m1x4_tu(vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei64_v_f16m2x4_tu(vfloat16m2x4_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei64_v_f16m2x4_tu(vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16m2x4_tu(vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei64_v_f32mf2x4_tu(vfloat32mf2x4_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei64_v_f32mf2x4_tu(vfloat32mf2x4_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f32mf2x4_tu(vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei64_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei64_v_f32m1x4_tu(vfloat32m1x4_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_f32m1x4_tu(vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei64_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei64_v_f32m2x4_tu(vfloat32m2x4_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_f32m2x4_tu(vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei64_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei64_v_f64m1x4_tu(vfloat64m1x4_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f64m1x4_tu(vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei64_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei64_v_f64m2x4_tu(vfloat64m2x4_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f64m2x4_tu(vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei64_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei64_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i8mf8x4_tu(vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei64_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei64_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return 
__riscv_vloxseg4ei64_v_i8mf4x4_tu(vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei64_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei64_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i8mf2x4_tu(vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei64_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei64_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i8m1x4_tu(vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei64_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei64_v_i16mf4x4_tu(vint16mf4x4_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16mf4x4_tu(vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei64_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei64_v_i16mf2x4_tu(vint16mf2x4_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16mf2x4_tu(vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei64_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei64_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16m1x4_tu(vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei64_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei64_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16m2x4_tu(vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei64_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei64_v_i32mf2x4_tu(vint32mf2x4_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i32mf2x4_tu(vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei64_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei64_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i32m1x4_tu(vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei64_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei64_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i32m2x4_tu(vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei64_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei64_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i64m1x4_tu(vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei64_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei64_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i64m2x4_tu(vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei64_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei64_v_u8mf8x4_tu(vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf8x4_tu(vd, rs1, rs2, vl); } -vuint8mf4x4_t 
test_vloxseg4ei64_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei64_v_u8mf4x4_tu(vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf4x4_tu(vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei64_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei64_v_u8mf2x4_tu(vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf2x4_tu(vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei64_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei64_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8m1x4_tu(vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei64_v_u16mf4x4_tu(vuint16mf4x4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei64_v_u16mf4x4_tu(vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16mf4x4_tu(vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei64_v_u16mf2x4_tu(vuint16mf2x4_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei64_v_u16mf2x4_tu(vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16mf2x4_tu(vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei64_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei64_v_u16m1x4_tu(vuint16m1x4_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16m1x4_tu(vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei64_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei64_v_u16m2x4_tu(vuint16m2x4_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16m2x4_tu(vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei64_v_u32mf2x4_tu(vuint32mf2x4_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei64_v_u32mf2x4_tu(vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32mf2x4_tu(vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei64_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei64_v_u32m1x4_tu(vuint32m1x4_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32m1x4_tu(vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei64_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei64_v_u32m2x4_tu(vuint32m2x4_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32m2x4_tu(vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei64_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei64_v_u64m1x4_tu(vuint64m1x4_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u64m1x4_tu(vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei64_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei64_v_u64m2x4_tu(vuint64m2x4_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u64m2x4_tu(vd, rs1, rs2, vl); } 
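/*
 * Editorial sketch, not part of the auto-generated tests: the _tu
 * (tail-undisturbed) variants above take the destination tuple vd as the
 * first operand, so elements at indices >= vl keep their values from vd;
 * the masked _tum/_tumu/_mu variants below additionally prepend a vbool
 * mask vm. A minimal use of one intrinsic from this hunk might look as
 * follows (the wrapper name gather4_i32_tu is hypothetical; only the
 * __riscv_vloxseg4ei64_v_i32m1x4_tu call is taken from the tests above).
 */
#include <riscv_vector.h>

vint32m1x4_t gather4_i32_tu(vint32m1x4_t tail_src, const int32_t *base,
                            vuint64m2_t byte_offsets, size_t vl) {
  /* Each element i < vl loads a 4-field segment of int32 values from
     base + byte_offsets[i]; elements at indices >= vl stay as in tail_src. */
  return __riscv_vloxseg4ei64_v_i32m1x4_tu(tail_src, base, byte_offsets, vl);
}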
-vfloat16mf4x4_t test_vloxseg4ei64_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei64_v_f16mf4x4_tum(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei64_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei64_v_f16mf2x4_tum(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei64_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei64_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei64_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei64_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei64_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei64_v_f32mf2x4_tum(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei64_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei64_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f32m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei64_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei64_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f32m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei64_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei64_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f64m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei64_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei64_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f64m2x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei64_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei64_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei64_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei64_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, + const int8_t 
*rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei64_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei64_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei64_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei64_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i8m1x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei64_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei64_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei64_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei64_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei64_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei64_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16m1x4_tum(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei64_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei64_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16m2x4_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei64_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei64_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei64_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei64_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i32m1x4_tum(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei64_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei64_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i32m2x4_tum(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei64_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei64_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i64m1x4_tum(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei64_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei64_v_i64m2x4_tum(vbool32_t vm, 
vint64m2x4_t vd, + const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i64m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei64_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei64_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei64_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei64_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei64_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei64_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei64_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei64_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_u8m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei64_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei64_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei64_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei64_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei64_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei64_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei64_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei64_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei64_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei64_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei64_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei64_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei64_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, 
vuint64m4_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei64_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei64_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei64_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u64m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei64_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei64_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u64m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vloxseg4ei64_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei64_v_f16mf4x4_tumu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei64_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei64_v_f16mf2x4_tumu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei64_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei64_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei64_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei64_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei64_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei64_v_f32mf2x4_tumu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei64_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei64_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei64_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei64_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei64_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei64_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return 
__riscv_vloxseg4ei64_v_f64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei64_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei64_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei64_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei64_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei64_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei64_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei64_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei64_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei64_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei64_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei64_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei64_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei64_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei64_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei64_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei64_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei64_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei64_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei64_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei64_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei64_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei64_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, 
+ const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei64_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei64_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei64_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei64_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei64_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei64_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei64_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei64_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei64_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei64_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei64_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei64_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei64_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei64_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei64_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei64_v_u16mf4x4_tumu(vbool64_t vm, + vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei64_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei64_v_u16mf2x4_tumu(vbool32_t vm, + vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei64_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei64_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei64_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, 
vuint64m8_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei64_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei64_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei64_v_u32mf2x4_tumu(vbool64_t vm, + vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei64_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei64_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei64_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei64_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei64_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei64_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei64_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei64_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vloxseg4ei64_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei64_v_f16mf4x4_mu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei64_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei64_v_f16mf2x4_mu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei64_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei64_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei64_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei64_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f16m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei64_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei64_v_f32mf2x4_mu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return 
__riscv_vloxseg4ei64_v_f32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei64_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei64_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_f32m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei64_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei64_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_f32m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei64_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei64_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f64m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei64_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei64_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_f64m2x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei64_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei64_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei64_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei64_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei64_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei64_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei64_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei64_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i8m1x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei64_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei64_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei64_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei64_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei64_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei64_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { 
return __riscv_vloxseg4ei64_v_i16m1x4_mu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei64_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei64_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i16m2x4_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei64_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei64_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_i32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei64_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei64_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i32m1x4_mu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei64_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei64_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i32m2x4_mu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei64_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei64_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i64m1x4_mu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei64_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei64_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_i64m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei64_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei64_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei64_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei64_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei64_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei64_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei64_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei64_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg4ei64_v_u8m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei64_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei64_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, 
size_t vl) { return __riscv_vloxseg4ei64_v_u16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei64_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei64_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei64_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei64_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei64_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei64_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u16m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei64_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei64_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei64_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei64_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei64_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei64_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u32m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei64_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei64_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u64m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei64_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei64_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg4ei64_v_u64m2x4_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei8.c index 2d9ab3702..aa0132a1e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei8.c @@ -6,594 +6,883 @@ #include -vfloat16mf4x4_t test_vloxseg4ei8_v_f16mf4x4_tu(vfloat16mf4x4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei8_v_f16mf4x4_tu(vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16mf4x4_tu(vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei8_v_f16mf2x4_tu(vfloat16mf2x4_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei8_v_f16mf2x4_tu(vfloat16mf2x4_t vd, + const _Float16 *rs1, + 
vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16mf2x4_tu(vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei8_v_f16m1x4_tu(vfloat16m1x4_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei8_v_f16m1x4_tu(vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16m1x4_tu(vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei8_v_f16m2x4_tu(vfloat16m2x4_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei8_v_f16m2x4_tu(vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16m2x4_tu(vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei8_v_f32mf2x4_tu(vfloat32mf2x4_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei8_v_f32mf2x4_tu(vfloat32mf2x4_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f32mf2x4_tu(vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei8_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei8_v_f32m1x4_tu(vfloat32m1x4_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_f32m1x4_tu(vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei8_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei8_v_f32m2x4_tu(vfloat32m2x4_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_f32m2x4_tu(vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei8_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei8_v_f64m1x4_tu(vfloat64m1x4_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_f64m1x4_tu(vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei8_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei8_v_f64m2x4_tu(vfloat64m2x4_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_f64m2x4_tu(vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei8_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei8_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i8mf8x4_tu(vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei8_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei8_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i8mf4x4_tu(vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei8_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei8_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i8mf2x4_tu(vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei8_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei8_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i8m1x4_tu(vd, rs1, rs2, vl); } -vint8m2x4_t test_vloxseg4ei8_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x4_t test_vloxseg4ei8_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i8m2x4_tu(vd, rs1, rs2, vl); } 
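/*
 * Editorial sketch, not part of the auto-generated tests: vloxseg4ei8 takes
 * 8-bit unsigned indices, so every segment must start within 0..255 bytes
 * of the base pointer, which suits small packed tables. A hypothetical
 * masked, tail-undisturbed/mask-undisturbed use is sketched below; the
 * _tumu variant is assumed by symmetry with the _tumu hunks of the ei64
 * file above, and the wrapper name gather4_i16_tumu is illustrative only.
 */
#include <riscv_vector.h>

vint16m1x4_t gather4_i16_tumu(vbool16_t active, vint16m1x4_t prev,
                              const int16_t *base, vuint8mf2_t byte_offsets,
                              size_t vl) {
  /* Active elements load four consecutive int16 fields starting at
     base + byte_offsets[i]; inactive and tail elements keep prev. */
  return __riscv_vloxseg4ei8_v_i16m1x4_tumu(active, prev, base, byte_offsets,
                                            vl);
}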
-vint16mf4x4_t test_vloxseg4ei8_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei8_v_i16mf4x4_tu(vint16mf4x4_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i16mf4x4_tu(vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei8_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei8_v_i16mf2x4_tu(vint16mf2x4_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i16mf2x4_tu(vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei8_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei8_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i16m1x4_tu(vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei8_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei8_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i16m2x4_tu(vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei8_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei8_v_i32mf2x4_tu(vint32mf2x4_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i32mf2x4_tu(vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei8_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei8_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i32m1x4_tu(vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei8_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei8_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i32m2x4_tu(vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei8_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei8_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i64m1x4_tu(vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei8_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei8_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i64m2x4_tu(vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei8_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei8_v_u8mf8x4_tu(vuint8mf8x4_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8mf8x4_tu(vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei8_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei8_v_u8mf4x4_tu(vuint8mf4x4_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8mf4x4_tu(vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei8_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei8_v_u8mf2x4_tu(vuint8mf2x4_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8mf2x4_tu(vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei8_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, vuint8m1_t 
rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei8_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u8m1x4_tu(vd, rs1, rs2, vl); } -vuint8m2x4_t test_vloxseg4ei8_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x4_t test_vloxseg4ei8_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u8m2x4_tu(vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei8_v_u16mf4x4_tu(vuint16mf4x4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei8_v_u16mf4x4_tu(vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16mf4x4_tu(vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei8_v_u16mf2x4_tu(vuint16mf2x4_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei8_v_u16mf2x4_tu(vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16mf2x4_tu(vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei8_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei8_v_u16m1x4_tu(vuint16m1x4_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16m1x4_tu(vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei8_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei8_v_u16m2x4_tu(vuint16m2x4_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u16m2x4_tu(vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei8_v_u32mf2x4_tu(vuint32mf2x4_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei8_v_u32mf2x4_tu(vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u32mf2x4_tu(vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei8_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei8_v_u32m1x4_tu(vuint32m1x4_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u32m1x4_tu(vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei8_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei8_v_u32m2x4_tu(vuint32m2x4_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u32m2x4_tu(vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei8_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei8_v_u64m1x4_tu(vuint64m1x4_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u64m1x4_tu(vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei8_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei8_v_u64m2x4_tu(vuint64m2x4_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u64m2x4_tu(vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vloxseg4ei8_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei8_v_f16mf4x4_tum(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei8_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, const 
_Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei8_v_f16mf2x4_tum(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei8_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei8_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei8_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei8_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei8_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei8_v_f32mf2x4_tum(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei8_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei8_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_f32m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei8_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei8_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_f32m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei8_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei8_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f64m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei8_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei8_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f64m2x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei8_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei8_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei8_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei8_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei8_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei8_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint8m1x4_t 
test_vloxseg4ei8_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei8_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8m1x4_tum(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vloxseg4ei8_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x4_t test_vloxseg4ei8_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8m2x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei8_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei8_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei8_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei8_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei8_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei8_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i16m1x4_tum(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei8_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei8_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i16m2x4_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei8_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei8_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei8_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei8_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i32m1x4_tum(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei8_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei8_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i32m2x4_tum(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei8_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei8_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i64m1x4_tum(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei8_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei8_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i64m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t 
test_vloxseg4ei8_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei8_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei8_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei8_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei8_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei8_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei8_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei8_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vloxseg4ei8_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x4_t test_vloxseg4ei8_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei8_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei8_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei8_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei8_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei8_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei8_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei8_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei8_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei8_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei8_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei8_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei8_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return 
__riscv_vloxseg4ei8_v_u32m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei8_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei8_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u32m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei8_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei8_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u64m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei8_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei8_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u64m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vloxseg4ei8_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei8_v_f16mf4x4_tumu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei8_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei8_v_f16mf2x4_tumu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei8_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei8_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei8_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei8_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei8_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei8_v_f32mf2x4_tumu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei8_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei8_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei8_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei8_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei8_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x4_t 
test_vloxseg4ei8_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei8_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei8_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei8_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei8_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei8_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei8_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei8_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei8_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei8_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei8_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vloxseg4ei8_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x4_t test_vloxseg4ei8_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vloxseg4ei8_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei8_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei8_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei8_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei8_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei8_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei8_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei8_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei8_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { 
+vint32mf2x4_t test_vloxseg4ei8_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei8_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei8_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei8_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei8_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei8_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei8_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei8_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei8_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei8_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei8_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei8_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei8_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vloxseg4ei8_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei8_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei8_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei8_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vloxseg4ei8_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x4_t test_vloxseg4ei8_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei8_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei8_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei8_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, const 
uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei8_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei8_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei8_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei8_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei8_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei8_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei8_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei8_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei8_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei8_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei8_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei8_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei8_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vloxseg4ei8_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x4_t test_vloxseg4ei8_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vloxseg4ei8_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x4_t test_vloxseg4ei8_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vloxseg4ei8_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x4_t test_vloxseg4ei8_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vloxseg4ei8_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x4_t test_vloxseg4ei8_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16m1x4_mu(vm, vd, 
rs1, rs2, vl); } -vfloat16m2x4_t test_vloxseg4ei8_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x4_t test_vloxseg4ei8_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f16m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vloxseg4ei8_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x4_t test_vloxseg4ei8_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_f32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vloxseg4ei8_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x4_t test_vloxseg4ei8_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_f32m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vloxseg4ei8_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x4_t test_vloxseg4ei8_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_f32m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vloxseg4ei8_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x4_t test_vloxseg4ei8_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_f64m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vloxseg4ei8_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x4_t test_vloxseg4ei8_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_f64m2x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vloxseg4ei8_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x4_t test_vloxseg4ei8_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vloxseg4ei8_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x4_t test_vloxseg4ei8_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vloxseg4ei8_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x4_t test_vloxseg4ei8_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vloxseg4ei8_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x4_t test_vloxseg4ei8_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8m1x4_mu(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vloxseg4ei8_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x4_t test_vloxseg4ei8_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i8m2x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t 
test_vloxseg4ei8_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x4_t test_vloxseg4ei8_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vloxseg4ei8_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x4_t test_vloxseg4ei8_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vloxseg4ei8_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x4_t test_vloxseg4ei8_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i16m1x4_mu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vloxseg4ei8_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x4_t test_vloxseg4ei8_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i16m2x4_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vloxseg4ei8_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x4_t test_vloxseg4ei8_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_i32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vloxseg4ei8_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x4_t test_vloxseg4ei8_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i32m1x4_mu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vloxseg4ei8_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x4_t test_vloxseg4ei8_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i32m2x4_mu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vloxseg4ei8_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x4_t test_vloxseg4ei8_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i64m1x4_mu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vloxseg4ei8_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x4_t test_vloxseg4ei8_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_i64m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vloxseg4ei8_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x4_t test_vloxseg4ei8_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vloxseg4ei8_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x4_t test_vloxseg4ei8_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t 
test_vloxseg4ei8_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x4_t test_vloxseg4ei8_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vloxseg4ei8_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x4_t test_vloxseg4ei8_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vloxseg4ei8_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x4_t test_vloxseg4ei8_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u8m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vloxseg4ei8_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x4_t test_vloxseg4ei8_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vloxseg4ei8_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x4_t test_vloxseg4ei8_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vloxseg4ei8_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x4_t test_vloxseg4ei8_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u16m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vloxseg4ei8_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x4_t test_vloxseg4ei8_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg4ei8_v_u16m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vloxseg4ei8_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x4_t test_vloxseg4ei8_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vloxseg4ei8_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x4_t test_vloxseg4ei8_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u32m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vloxseg4ei8_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x4_t test_vloxseg4ei8_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u32m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vloxseg4ei8_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x4_t test_vloxseg4ei8_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg4ei8_v_u64m1x4_mu(vm, vd, rs1, rs2, vl); } 
-vuint64m2x4_t test_vloxseg4ei8_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint64m2x4_t test_vloxseg4ei8_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf4_t rs2, size_t vl) {
   return __riscv_vloxseg4ei8_v_u64m2x4_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei16.c
index fe3e4ab49..60bf7b383 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei16.c
@@ -6,418 +6,630 @@

 #include <riscv_vector.h>

-vfloat16mf4x5_t test_vloxseg5ei16_v_f16mf4x5_tu(vfloat16mf4x5_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x5_t test_vloxseg5ei16_v_f16mf4x5_tu(vfloat16mf4x5_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_f16mf4x5_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf2x5_t test_vloxseg5ei16_v_f16mf2x5_tu(vfloat16mf2x5_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x5_t test_vloxseg5ei16_v_f16mf2x5_tu(vfloat16mf2x5_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_f16mf2x5_tu(vd, rs1, rs2, vl);
 }

-vfloat16m1x5_t test_vloxseg5ei16_v_f16m1x5_tu(vfloat16m1x5_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x5_t test_vloxseg5ei16_v_f16m1x5_tu(vfloat16m1x5_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_f16m1x5_tu(vd, rs1, rs2, vl);
 }

-vfloat32mf2x5_t test_vloxseg5ei16_v_f32mf2x5_tu(vfloat32mf2x5_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x5_t test_vloxseg5ei16_v_f32mf2x5_tu(vfloat32mf2x5_t vd,
+                                                const float *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_f32mf2x5_tu(vd, rs1, rs2, vl);
 }

-vfloat32m1x5_t test_vloxseg5ei16_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x5_t test_vloxseg5ei16_v_f32m1x5_tu(vfloat32m1x5_t vd,
+                                              const float *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_f32m1x5_tu(vd, rs1, rs2, vl);
 }

-vfloat64m1x5_t test_vloxseg5ei16_v_f64m1x5_tu(vfloat64m1x5_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x5_t test_vloxseg5ei16_v_f64m1x5_tu(vfloat64m1x5_t vd,
+                                              const double *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_f64m1x5_tu(vd, rs1, rs2, vl);
 }

-vint8mf8x5_t test_vloxseg5ei16_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x5_t test_vloxseg5ei16_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1,
+                                            vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_i8mf8x5_tu(vd, rs1, rs2, vl);
 }

-vint8mf4x5_t test_vloxseg5ei16_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x5_t test_vloxseg5ei16_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1,
+                                            vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_i8mf4x5_tu(vd, rs1, rs2, vl);
 }

-vint8mf2x5_t test_vloxseg5ei16_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2x5_t test_vloxseg5ei16_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1,
+                                            vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_i8mf2x5_tu(vd, rs1, rs2, vl);
 }

-vint8m1x5_t test_vloxseg5ei16_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1x5_t test_vloxseg5ei16_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1,
+                                          vuint16m2_t rs2, size_t vl) {
   return
__riscv_vloxseg5ei16_v_i8m1x5_tu(vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei16_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei16_v_i16mf4x5_tu(vint16mf4x5_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i16mf4x5_tu(vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei16_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei16_v_i16mf2x5_tu(vint16mf2x5_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i16mf2x5_tu(vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei16_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei16_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i16m1x5_tu(vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei16_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei16_v_i32mf2x5_tu(vint32mf2x5_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i32mf2x5_tu(vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei16_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei16_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i32m1x5_tu(vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei16_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei16_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i64m1x5_tu(vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei16_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei16_v_u8mf8x5_tu(vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf8x5_tu(vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei16_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei16_v_u8mf4x5_tu(vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf4x5_tu(vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei16_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei16_v_u8mf2x5_tu(vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf2x5_tu(vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei16_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei16_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8m1x5_tu(vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei16_v_u16mf4x5_tu(vuint16mf4x5_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei16_v_u16mf4x5_tu(vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u16mf4x5_tu(vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei16_v_u16mf2x5_tu(vuint16mf2x5_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei16_v_u16mf2x5_tu(vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return 
__riscv_vloxseg5ei16_v_u16mf2x5_tu(vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei16_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei16_v_u16m1x5_tu(vuint16m1x5_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u16m1x5_tu(vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei16_v_u32mf2x5_tu(vuint32mf2x5_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei16_v_u32mf2x5_tu(vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u32mf2x5_tu(vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei16_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei16_v_u32m1x5_tu(vuint32m1x5_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u32m1x5_tu(vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei16_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei16_v_u64m1x5_tu(vuint64m1x5_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u64m1x5_tu(vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei16_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei16_v_f16mf4x5_tum(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei16_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei16_v_f16mf2x5_tum(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei16_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei16_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f16m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei16_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei16_v_f32mf2x5_tum(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei16_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei16_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f32m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei16_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei16_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f64m1x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei16_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei16_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return 
__riscv_vloxseg5ei16_v_i8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei16_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei16_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei16_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei16_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_i8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei16_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei16_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_i8m1x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei16_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei16_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei16_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei16_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei16_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei16_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i16m1x5_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei16_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei16_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei16_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei16_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i32m1x5_tum(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei16_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei16_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i64m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei16_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei16_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei16_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei16_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, + const 
uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei16_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei16_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei16_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei16_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_u8m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei16_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei16_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei16_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei16_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei16_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei16_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u16m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei16_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei16_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei16_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei16_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u32m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei16_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei16_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u64m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei16_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei16_v_f16mf4x5_tumu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei16_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei16_v_f16mf2x5_tumu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei16_v_f16m1x5_tumu(vbool16_t vm, 
vfloat16m1x5_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei16_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei16_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei16_v_f32mf2x5_tumu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei16_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei16_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei16_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei16_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei16_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei16_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei16_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei16_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei16_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei16_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei16_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei16_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_i8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei16_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei16_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei16_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei16_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei16_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei16_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return 
__riscv_vloxseg5ei16_v_i16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei16_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei16_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei16_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei16_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei16_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei16_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei16_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei16_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei16_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei16_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei16_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei16_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei16_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei16_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei16_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei16_v_u16mf4x5_tumu(vbool64_t vm, + vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei16_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei16_v_u16mf2x5_tumu(vbool32_t vm, + vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei16_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei16_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei16_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { 
+vuint32mf2x5_t test_vloxseg5ei16_v_u32mf2x5_tumu(vbool64_t vm, + vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei16_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei16_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei16_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei16_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei16_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei16_v_f16mf4x5_mu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei16_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei16_v_f16mf2x5_mu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei16_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei16_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f16m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei16_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei16_v_f32mf2x5_mu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei16_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei16_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f32m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei16_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei16_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_f64m1x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei16_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei16_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_i8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei16_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei16_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_i8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t 
test_vloxseg5ei16_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei16_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_i8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei16_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei16_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_i8m1x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei16_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei16_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei16_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei16_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei16_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei16_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_i16m1x5_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei16_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei16_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei16_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei16_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i32m1x5_mu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei16_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei16_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_i64m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei16_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei16_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei16_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei16_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei16_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei16_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_u8mf2x5_mu(vm, vd, rs1, rs2, 
vl);
 }

-vuint8m1x5_t test_vloxseg5ei16_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1x5_t test_vloxseg5ei16_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd,
+                                           const uint8_t *rs1, vuint16m2_t rs2,
+                                           size_t vl) {
   return __riscv_vloxseg5ei16_v_u8m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x5_t test_vloxseg5ei16_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4x5_t test_vloxseg5ei16_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd,
+                                               const uint16_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_u16mf4x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x5_t test_vloxseg5ei16_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2x5_t test_vloxseg5ei16_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd,
+                                               const uint16_t *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_u16mf2x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x5_t test_vloxseg5ei16_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1x5_t test_vloxseg5ei16_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd,
+                                             const uint16_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_u16m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x5_t test_vloxseg5ei16_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2x5_t test_vloxseg5ei16_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd,
+                                               const uint32_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_u32mf2x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x5_t test_vloxseg5ei16_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1x5_t test_vloxseg5ei16_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd,
+                                             const uint32_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_u32m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x5_t test_vloxseg5ei16_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1x5_t test_vloxseg5ei16_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd,
+                                             const uint64_t *rs1,
+                                             vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg5ei16_v_u64m1x5_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei32.c
index 6ed9764bb..7d8ea3433 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei32.c
@@ -6,418 +6,630 @@
 #include <riscv_vector.h>

-vfloat16mf4x5_t test_vloxseg5ei32_v_f16mf4x5_tu(vfloat16mf4x5_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4x5_t test_vloxseg5ei32_v_f16mf4x5_tu(vfloat16mf4x5_t vd,
+                                                const _Float16 *rs1,
+                                                vuint32mf2_t rs2, size_t vl) {
   return __riscv_vloxseg5ei32_v_f16mf4x5_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf2x5_t test_vloxseg5ei32_v_f16mf2x5_tu(vfloat16mf2x5_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2x5_t test_vloxseg5ei32_v_f16mf2x5_tu(vfloat16mf2x5_t vd,
+                                                const _Float16 *rs1,
+                                                vuint32m1_t rs2, size_t vl) {
   return __riscv_vloxseg5ei32_v_f16mf2x5_tu(vd, rs1, rs2, vl);
 }

-vfloat16m1x5_t test_vloxseg5ei32_v_f16m1x5_tu(vfloat16m1x5_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1x5_t test_vloxseg5ei32_v_f16m1x5_tu(vfloat16m1x5_t vd,
+                                              const _Float16 *rs1,
+                                              vuint32m2_t rs2, size_t vl) {
   return __riscv_vloxseg5ei32_v_f16m1x5_tu(vd, rs1, rs2,
vl); } -vfloat32mf2x5_t test_vloxseg5ei32_v_f32mf2x5_tu(vfloat32mf2x5_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei32_v_f32mf2x5_tu(vfloat32mf2x5_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f32mf2x5_tu(vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei32_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei32_v_f32m1x5_tu(vfloat32m1x5_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_f32m1x5_tu(vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei32_v_f64m1x5_tu(vfloat64m1x5_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei32_v_f64m1x5_tu(vfloat64m1x5_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f64m1x5_tu(vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei32_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei32_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i8mf8x5_tu(vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei32_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei32_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i8mf4x5_tu(vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei32_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei32_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i8mf2x5_tu(vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei32_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei32_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i8m1x5_tu(vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei32_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei32_v_i16mf4x5_tu(vint16mf4x5_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i16mf4x5_tu(vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei32_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei32_v_i16mf2x5_tu(vint16mf2x5_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i16mf2x5_tu(vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei32_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei32_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i16m1x5_tu(vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei32_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei32_v_i32mf2x5_tu(vint32mf2x5_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i32mf2x5_tu(vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei32_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei32_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i32m1x5_tu(vd, rs1, rs2, vl); } -vint64m1x5_t 
test_vloxseg5ei32_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei32_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i64m1x5_tu(vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei32_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei32_v_u8mf8x5_tu(vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf8x5_tu(vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei32_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei32_v_u8mf4x5_tu(vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf4x5_tu(vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei32_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei32_v_u8mf2x5_tu(vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf2x5_tu(vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei32_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei32_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8m1x5_tu(vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei32_v_u16mf4x5_tu(vuint16mf4x5_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei32_v_u16mf4x5_tu(vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16mf4x5_tu(vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei32_v_u16mf2x5_tu(vuint16mf2x5_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei32_v_u16mf2x5_tu(vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16mf2x5_tu(vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei32_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei32_v_u16m1x5_tu(vuint16m1x5_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16m1x5_tu(vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei32_v_u32mf2x5_tu(vuint32mf2x5_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei32_v_u32mf2x5_tu(vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u32mf2x5_tu(vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei32_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei32_v_u32m1x5_tu(vuint32m1x5_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u32m1x5_tu(vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei32_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei32_v_u64m1x5_tu(vuint64m1x5_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u64m1x5_tu(vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei32_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei32_v_f16mf4x5_tum(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return 
__riscv_vloxseg5ei32_v_f16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei32_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei32_v_f16mf2x5_tum(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei32_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei32_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f16m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei32_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei32_v_f32mf2x5_tum(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei32_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei32_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f32m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei32_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei32_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f64m1x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei32_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei32_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei32_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei32_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_i8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei32_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei32_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_i8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei32_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei32_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_i8m1x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei32_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei32_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei32_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei32_v_i16mf2x5_tum(vbool32_t vm, 
vint16mf2x5_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei32_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei32_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i16m1x5_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei32_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei32_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei32_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei32_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i32m1x5_tum(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei32_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei32_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i64m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei32_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei32_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei32_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei32_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei32_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei32_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei32_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei32_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_u8m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei32_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei32_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei32_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei32_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei32_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint32m2_t rs2, 
size_t vl) { +vuint16m1x5_t test_vloxseg5ei32_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei32_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei32_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei32_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei32_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u32m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei32_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei32_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u64m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei32_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei32_v_f16mf4x5_tumu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei32_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei32_v_f16mf2x5_tumu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei32_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei32_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei32_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei32_v_f32mf2x5_tumu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei32_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei32_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei32_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei32_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei32_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei32_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return 
__riscv_vloxseg5ei32_v_i8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei32_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei32_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei32_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei32_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei32_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei32_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_i8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei32_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei32_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei32_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei32_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei32_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei32_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei32_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei32_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei32_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei32_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei32_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei32_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei32_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei32_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei32_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei32_v_u8mf4x5_tumu(vbool32_t vm, 
vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei32_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei32_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei32_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei32_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei32_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei32_v_u16mf4x5_tumu(vbool64_t vm, + vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei32_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei32_v_u16mf2x5_tumu(vbool32_t vm, + vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei32_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei32_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei32_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei32_v_u32mf2x5_tumu(vbool64_t vm, + vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei32_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei32_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei32_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei32_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei32_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei32_v_f16mf4x5_mu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei32_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei32_v_f16mf2x5_mu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t 
test_vloxseg5ei32_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei32_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f16m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei32_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei32_v_f32mf2x5_mu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei32_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei32_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_f32m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei32_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei32_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_f64m1x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei32_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei32_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_i8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei32_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei32_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_i8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei32_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei32_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_i8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei32_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei32_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_i8m1x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei32_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei32_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei32_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei32_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei32_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei32_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_i16m1x5_mu(vm, vd, rs1, 
rs2, vl); } -vint32mf2x5_t test_vloxseg5ei32_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei32_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei32_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei32_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_i32m1x5_mu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei32_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei32_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_i64m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei32_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei32_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei32_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei32_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei32_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei32_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei32_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei32_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg5ei32_v_u8m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei32_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei32_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei32_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei32_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei32_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei32_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg5ei32_v_u16m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei32_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei32_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) 
{
   return __riscv_vloxseg5ei32_v_u32mf2x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x5_t test_vloxseg5ei32_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint32m1x5_t test_vloxseg5ei32_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd,
+                                             const uint32_t *rs1,
+                                             vuint32m1_t rs2, size_t vl) {
   return __riscv_vloxseg5ei32_v_u32m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x5_t test_vloxseg5ei32_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint64m1x5_t test_vloxseg5ei32_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd,
+                                             const uint64_t *rs1,
+                                             vuint32mf2_t rs2, size_t vl) {
   return __riscv_vloxseg5ei32_v_u64m1x5_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei64.c
index f01b1e491..7132d189e 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei64.c
@@ -6,418 +6,630 @@
 #include <riscv_vector.h>

-vfloat16mf4x5_t test_vloxseg5ei64_v_f16mf4x5_tu(vfloat16mf4x5_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4x5_t test_vloxseg5ei64_v_f16mf4x5_tu(vfloat16mf4x5_t vd,
+                                                const _Float16 *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vloxseg5ei64_v_f16mf4x5_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf2x5_t test_vloxseg5ei64_v_f16mf2x5_tu(vfloat16mf2x5_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2x5_t test_vloxseg5ei64_v_f16mf2x5_tu(vfloat16mf2x5_t vd,
+                                                const _Float16 *rs1,
+                                                vuint64m2_t rs2, size_t vl) {
   return __riscv_vloxseg5ei64_v_f16mf2x5_tu(vd, rs1, rs2, vl);
 }

-vfloat16m1x5_t test_vloxseg5ei64_v_f16m1x5_tu(vfloat16m1x5_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1x5_t test_vloxseg5ei64_v_f16m1x5_tu(vfloat16m1x5_t vd,
+                                              const _Float16 *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vloxseg5ei64_v_f16m1x5_tu(vd, rs1, rs2, vl);
 }

-vfloat32mf2x5_t test_vloxseg5ei64_v_f32mf2x5_tu(vfloat32mf2x5_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2x5_t test_vloxseg5ei64_v_f32mf2x5_tu(vfloat32mf2x5_t vd,
+                                                const float *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vloxseg5ei64_v_f32mf2x5_tu(vd, rs1, rs2, vl);
 }

-vfloat32m1x5_t test_vloxseg5ei64_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1x5_t test_vloxseg5ei64_v_f32m1x5_tu(vfloat32m1x5_t vd,
+                                              const float *rs1, vuint64m2_t rs2,
+                                              size_t vl) {
   return __riscv_vloxseg5ei64_v_f32m1x5_tu(vd, rs1, rs2, vl);
 }

-vfloat64m1x5_t test_vloxseg5ei64_v_f64m1x5_tu(vfloat64m1x5_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1x5_t test_vloxseg5ei64_v_f64m1x5_tu(vfloat64m1x5_t vd,
+                                              const double *rs1,
+                                              vuint64m1_t rs2, size_t vl) {
   return __riscv_vloxseg5ei64_v_f64m1x5_tu(vd, rs1, rs2, vl);
 }

-vint8mf8x5_t test_vloxseg5ei64_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint8mf8x5_t test_vloxseg5ei64_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1,
+                                            vuint64m1_t rs2, size_t vl) {
   return __riscv_vloxseg5ei64_v_i8mf8x5_tu(vd, rs1, rs2, vl);
 }

-vint8mf4x5_t test_vloxseg5ei64_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint8mf4x5_t test_vloxseg5ei64_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1,
+                                            vuint64m2_t rs2, size_t vl) {
   return __riscv_vloxseg5ei64_v_i8mf4x5_tu(vd, rs1, rs2, vl);
 }

-vint8mf2x5_t test_vloxseg5ei64_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint8mf2x5_t
test_vloxseg5ei64_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i8mf2x5_tu(vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei64_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei64_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i8m1x5_tu(vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei64_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei64_v_i16mf4x5_tu(vint16mf4x5_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i16mf4x5_tu(vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei64_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei64_v_i16mf2x5_tu(vint16mf2x5_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i16mf2x5_tu(vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei64_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei64_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i16m1x5_tu(vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei64_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei64_v_i32mf2x5_tu(vint32mf2x5_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i32mf2x5_tu(vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei64_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei64_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i32m1x5_tu(vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei64_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei64_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i64m1x5_tu(vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei64_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei64_v_u8mf8x5_tu(vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf8x5_tu(vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei64_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei64_v_u8mf4x5_tu(vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf4x5_tu(vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei64_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei64_v_u8mf2x5_tu(vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf2x5_tu(vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei64_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei64_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8m1x5_tu(vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei64_v_u16mf4x5_tu(vuint16mf4x5_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei64_v_u16mf4x5_tu(vuint16mf4x5_t vd, + const 
uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16mf4x5_tu(vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei64_v_u16mf2x5_tu(vuint16mf2x5_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei64_v_u16mf2x5_tu(vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16mf2x5_tu(vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei64_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei64_v_u16m1x5_tu(vuint16m1x5_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16m1x5_tu(vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei64_v_u32mf2x5_tu(vuint32mf2x5_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei64_v_u32mf2x5_tu(vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u32mf2x5_tu(vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei64_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei64_v_u32m1x5_tu(vuint32m1x5_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u32m1x5_tu(vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei64_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei64_v_u64m1x5_tu(vuint64m1x5_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u64m1x5_tu(vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei64_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei64_v_f16mf4x5_tum(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei64_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei64_v_f16mf2x5_tum(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei64_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei64_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f16m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei64_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei64_v_f32mf2x5_tum(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei64_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei64_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f32m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei64_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei64_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint64m1_t rs2, size_t 
vl) { return __riscv_vloxseg5ei64_v_f64m1x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei64_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei64_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei64_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei64_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei64_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei64_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei64_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei64_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i8m1x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei64_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei64_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei64_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei64_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei64_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei64_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i16m1x5_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei64_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei64_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei64_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei64_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i32m1x5_tum(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei64_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei64_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i64m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei64_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei64_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t 
*rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei64_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei64_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei64_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei64_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei64_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei64_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_u8m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei64_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei64_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei64_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei64_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei64_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei64_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei64_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei64_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei64_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei64_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u32m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei64_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei64_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u64m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei64_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei64_v_f16mf4x5_tumu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei64_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint64m2_t 
rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei64_v_f16mf2x5_tumu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei64_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei64_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei64_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei64_v_f32mf2x5_tumu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei64_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei64_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei64_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei64_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei64_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei64_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei64_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei64_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei64_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei64_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei64_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei64_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei64_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei64_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei64_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei64_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t 
test_vloxseg5ei64_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei64_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei64_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei64_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei64_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei64_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei64_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei64_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei64_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei64_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei64_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei64_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei64_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei64_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei64_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei64_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei64_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei64_v_u16mf4x5_tumu(vbool64_t vm, + vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei64_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei64_v_u16mf2x5_tumu(vbool32_t vm, + vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei64_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei64_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + 
vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei64_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei64_v_u32mf2x5_tumu(vbool64_t vm, + vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei64_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei64_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei64_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei64_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei64_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei64_v_f16mf4x5_mu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei64_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei64_v_f16mf2x5_mu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei64_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei64_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f16m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei64_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei64_v_f32mf2x5_mu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei64_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei64_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_f32m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei64_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei64_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_f64m1x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei64_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei64_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei64_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) 
{ +vint8mf4x5_t test_vloxseg5ei64_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei64_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei64_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei64_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei64_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i8m1x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei64_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei64_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei64_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei64_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei64_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei64_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i16m1x5_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei64_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei64_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_i32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei64_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei64_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i32m1x5_mu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei64_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei64_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_i64m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei64_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei64_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei64_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei64_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei64_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t 
vl) { +vuint8mf2x5_t test_vloxseg5ei64_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei64_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei64_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg5ei64_v_u8m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei64_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei64_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei64_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei64_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei64_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei64_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u16m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei64_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei64_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei64_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei64_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u32m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei64_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei64_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg5ei64_v_u64m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei8.c index ea5fca199..b1c2addb9 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei8.c @@ -6,418 +6,624 @@ #include -vfloat16mf4x5_t test_vloxseg5ei8_v_f16mf4x5_tu(vfloat16mf4x5_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei8_v_f16mf4x5_tu(vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16mf4x5_tu(vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei8_v_f16mf2x5_tu(vfloat16mf2x5_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei8_v_f16mf2x5_tu(vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16mf2x5_tu(vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei8_v_f16m1x5_tu(vfloat16m1x5_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { 
+vfloat16m1x5_t test_vloxseg5ei8_v_f16m1x5_tu(vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16m1x5_tu(vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei8_v_f32mf2x5_tu(vfloat32mf2x5_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei8_v_f32mf2x5_tu(vfloat32mf2x5_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f32mf2x5_tu(vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei8_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei8_v_f32m1x5_tu(vfloat32m1x5_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_f32m1x5_tu(vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei8_v_f64m1x5_tu(vfloat64m1x5_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei8_v_f64m1x5_tu(vfloat64m1x5_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_f64m1x5_tu(vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei8_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei8_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i8mf8x5_tu(vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei8_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei8_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i8mf4x5_tu(vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei8_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei8_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i8mf2x5_tu(vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei8_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei8_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i8m1x5_tu(vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei8_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei8_v_i16mf4x5_tu(vint16mf4x5_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i16mf4x5_tu(vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei8_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei8_v_i16mf2x5_tu(vint16mf2x5_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i16mf2x5_tu(vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei8_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei8_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i16m1x5_tu(vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei8_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei8_v_i32mf2x5_tu(vint32mf2x5_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i32mf2x5_tu(vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei8_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei8_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, + 
vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i32m1x5_tu(vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei8_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei8_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i64m1x5_tu(vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei8_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei8_v_u8mf8x5_tu(vuint8mf8x5_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_u8mf8x5_tu(vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei8_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei8_v_u8mf4x5_tu(vuint8mf4x5_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_u8mf4x5_tu(vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei8_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei8_v_u8mf2x5_tu(vuint8mf2x5_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_u8mf2x5_tu(vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei8_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei8_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u8m1x5_tu(vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei8_v_u16mf4x5_tu(vuint16mf4x5_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei8_v_u16mf4x5_tu(vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16mf4x5_tu(vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei8_v_u16mf2x5_tu(vuint16mf2x5_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei8_v_u16mf2x5_tu(vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16mf2x5_tu(vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei8_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei8_v_u16m1x5_tu(vuint16m1x5_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16m1x5_tu(vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei8_v_u32mf2x5_tu(vuint32mf2x5_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei8_v_u32mf2x5_tu(vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u32mf2x5_tu(vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei8_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei8_v_u32m1x5_tu(vuint32m1x5_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u32m1x5_tu(vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei8_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei8_v_u64m1x5_tu(vuint64m1x5_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u64m1x5_tu(vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei8_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei8_v_f16mf4x5_tum(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + 
vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei8_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei8_v_f16mf2x5_tum(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei8_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei8_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei8_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei8_v_f32mf2x5_tum(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei8_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei8_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_f32m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei8_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei8_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f64m1x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei8_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei8_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei8_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei8_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei8_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei8_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei8_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei8_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8m1x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei8_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei8_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei8_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei8_v_i16mf2x5_tum(vbool32_t vm, 
vint16mf2x5_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei8_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei8_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i16m1x5_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei8_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei8_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei8_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei8_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i32m1x5_tum(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei8_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei8_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i64m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei8_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei8_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei8_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei8_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei8_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei8_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei8_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei8_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_u8m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei8_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei8_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei8_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei8_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei8_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x5_t 
test_vloxseg5ei8_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei8_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei8_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei8_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei8_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u32m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei8_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei8_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u64m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei8_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei8_v_f16mf4x5_tumu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei8_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei8_v_f16mf2x5_tumu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei8_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei8_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei8_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei8_v_f32mf2x5_tumu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei8_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei8_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei8_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei8_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei8_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei8_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t 
test_vloxseg5ei8_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei8_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei8_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei8_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei8_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei8_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei8_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei8_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei8_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei8_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei8_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei8_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei8_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei8_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei8_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei8_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei8_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei8_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei8_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei8_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei8_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei8_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u8mf4x5_tumu(vm, 
vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei8_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei8_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei8_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei8_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_u8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei8_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei8_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei8_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei8_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei8_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei8_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei8_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei8_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei8_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei8_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei8_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei8_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vloxseg5ei8_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x5_t test_vloxseg5ei8_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vloxseg5ei8_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x5_t test_vloxseg5ei8_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vloxseg5ei8_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x5_t test_vloxseg5ei8_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t 
vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f16m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vloxseg5ei8_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x5_t test_vloxseg5ei8_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_f32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vloxseg5ei8_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x5_t test_vloxseg5ei8_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_f32m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vloxseg5ei8_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x5_t test_vloxseg5ei8_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_f64m1x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vloxseg5ei8_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x5_t test_vloxseg5ei8_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vloxseg5ei8_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x5_t test_vloxseg5ei8_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vloxseg5ei8_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x5_t test_vloxseg5ei8_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vloxseg5ei8_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x5_t test_vloxseg5ei8_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i8m1x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vloxseg5ei8_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x5_t test_vloxseg5ei8_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vloxseg5ei8_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x5_t test_vloxseg5ei8_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vloxseg5ei8_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x5_t test_vloxseg5ei8_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i16m1x5_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vloxseg5ei8_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x5_t test_vloxseg5ei8_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t 
*rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_i32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vloxseg5ei8_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x5_t test_vloxseg5ei8_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i32m1x5_mu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vloxseg5ei8_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x5_t test_vloxseg5ei8_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_i64m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vloxseg5ei8_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x5_t test_vloxseg5ei8_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_u8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vloxseg5ei8_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x5_t test_vloxseg5ei8_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_u8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vloxseg5ei8_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x5_t test_vloxseg5ei8_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_u8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vloxseg5ei8_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x5_t test_vloxseg5ei8_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg5ei8_v_u8m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vloxseg5ei8_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x5_t test_vloxseg5ei8_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vloxseg5ei8_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x5_t test_vloxseg5ei8_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vloxseg5ei8_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x5_t test_vloxseg5ei8_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u16m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vloxseg5ei8_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x5_t test_vloxseg5ei8_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vloxseg5ei8_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x5_t test_vloxseg5ei8_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, + 
const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u32m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vloxseg5ei8_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x5_t test_vloxseg5ei8_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg5ei8_v_u64m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei16.c index 4201a2c15..25627fb57 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei16.c @@ -6,418 +6,630 @@ #include -vfloat16mf4x6_t test_vloxseg6ei16_v_f16mf4x6_tu(vfloat16mf4x6_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x6_t test_vloxseg6ei16_v_f16mf4x6_tu(vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f16mf4x6_tu(vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vloxseg6ei16_v_f16mf2x6_tu(vfloat16mf2x6_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x6_t test_vloxseg6ei16_v_f16mf2x6_tu(vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f16mf2x6_tu(vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vloxseg6ei16_v_f16m1x6_tu(vfloat16m1x6_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x6_t test_vloxseg6ei16_v_f16m1x6_tu(vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f16m1x6_tu(vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vloxseg6ei16_v_f32mf2x6_tu(vfloat32mf2x6_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x6_t test_vloxseg6ei16_v_f32mf2x6_tu(vfloat32mf2x6_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f32mf2x6_tu(vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vloxseg6ei16_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x6_t test_vloxseg6ei16_v_f32m1x6_tu(vfloat32m1x6_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f32m1x6_tu(vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vloxseg6ei16_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x6_t test_vloxseg6ei16_v_f64m1x6_tu(vfloat64m1x6_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f64m1x6_tu(vd, rs1, rs2, vl); } -vint8mf8x6_t test_vloxseg6ei16_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x6_t test_vloxseg6ei16_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i8mf8x6_tu(vd, rs1, rs2, vl); } -vint8mf4x6_t test_vloxseg6ei16_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x6_t test_vloxseg6ei16_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i8mf4x6_tu(vd, rs1, rs2, vl); } -vint8mf2x6_t test_vloxseg6ei16_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x6_t test_vloxseg6ei16_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i8mf2x6_tu(vd, rs1, rs2, vl); } -vint8m1x6_t test_vloxseg6ei16_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { 
+vint8m1x6_t test_vloxseg6ei16_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i8m1x6_tu(vd, rs1, rs2, vl); } -vint16mf4x6_t test_vloxseg6ei16_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x6_t test_vloxseg6ei16_v_i16mf4x6_tu(vint16mf4x6_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i16mf4x6_tu(vd, rs1, rs2, vl); } -vint16mf2x6_t test_vloxseg6ei16_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x6_t test_vloxseg6ei16_v_i16mf2x6_tu(vint16mf2x6_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i16mf2x6_tu(vd, rs1, rs2, vl); } -vint16m1x6_t test_vloxseg6ei16_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x6_t test_vloxseg6ei16_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i16m1x6_tu(vd, rs1, rs2, vl); } -vint32mf2x6_t test_vloxseg6ei16_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x6_t test_vloxseg6ei16_v_i32mf2x6_tu(vint32mf2x6_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i32mf2x6_tu(vd, rs1, rs2, vl); } -vint32m1x6_t test_vloxseg6ei16_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x6_t test_vloxseg6ei16_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i32m1x6_tu(vd, rs1, rs2, vl); } -vint64m1x6_t test_vloxseg6ei16_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x6_t test_vloxseg6ei16_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i64m1x6_tu(vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vloxseg6ei16_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x6_t test_vloxseg6ei16_v_u8mf8x6_tu(vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf8x6_tu(vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vloxseg6ei16_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x6_t test_vloxseg6ei16_v_u8mf4x6_tu(vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf4x6_tu(vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vloxseg6ei16_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x6_t test_vloxseg6ei16_v_u8mf2x6_tu(vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf2x6_tu(vd, rs1, rs2, vl); } -vuint8m1x6_t test_vloxseg6ei16_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x6_t test_vloxseg6ei16_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8m1x6_tu(vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vloxseg6ei16_v_u16mf4x6_tu(vuint16mf4x6_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x6_t test_vloxseg6ei16_v_u16mf4x6_tu(vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16mf4x6_tu(vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vloxseg6ei16_v_u16mf2x6_tu(vuint16mf2x6_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x6_t 
test_vloxseg6ei16_v_u16mf2x6_tu(vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16mf2x6_tu(vd, rs1, rs2, vl); } -vuint16m1x6_t test_vloxseg6ei16_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x6_t test_vloxseg6ei16_v_u16m1x6_tu(vuint16m1x6_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16m1x6_tu(vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vloxseg6ei16_v_u32mf2x6_tu(vuint32mf2x6_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x6_t test_vloxseg6ei16_v_u32mf2x6_tu(vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u32mf2x6_tu(vd, rs1, rs2, vl); } -vuint32m1x6_t test_vloxseg6ei16_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x6_t test_vloxseg6ei16_v_u32m1x6_tu(vuint32m1x6_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u32m1x6_tu(vd, rs1, rs2, vl); } -vuint64m1x6_t test_vloxseg6ei16_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x6_t test_vloxseg6ei16_v_u64m1x6_tu(vuint64m1x6_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u64m1x6_tu(vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vloxseg6ei16_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x6_t test_vloxseg6ei16_v_f16mf4x6_tum(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vloxseg6ei16_v_f16mf2x6_tum(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x6_t test_vloxseg6ei16_v_f16mf2x6_tum(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vloxseg6ei16_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x6_t test_vloxseg6ei16_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f16m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vloxseg6ei16_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x6_t test_vloxseg6ei16_v_f32mf2x6_tum(vbool64_t vm, + vfloat32mf2x6_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vloxseg6ei16_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x6_t test_vloxseg6ei16_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f32m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vloxseg6ei16_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x6_t test_vloxseg6ei16_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f64m1x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vloxseg6ei16_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x6_t 
test_vloxseg6ei16_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i8mf8x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vloxseg6ei16_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x6_t test_vloxseg6ei16_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i8mf4x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vloxseg6ei16_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x6_t test_vloxseg6ei16_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_i8mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vloxseg6ei16_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x6_t test_vloxseg6ei16_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_i8m1x6_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vloxseg6ei16_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x6_t test_vloxseg6ei16_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vloxseg6ei16_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x6_t test_vloxseg6ei16_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vloxseg6ei16_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x6_t test_vloxseg6ei16_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i16m1x6_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vloxseg6ei16_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x6_t test_vloxseg6ei16_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vloxseg6ei16_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x6_t test_vloxseg6ei16_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i32m1x6_tum(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vloxseg6ei16_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x6_t test_vloxseg6ei16_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i64m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vloxseg6ei16_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x6_t test_vloxseg6ei16_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf8x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vloxseg6ei16_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t 
*rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x6_t test_vloxseg6ei16_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf4x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vloxseg6ei16_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x6_t test_vloxseg6ei16_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vloxseg6ei16_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x6_t test_vloxseg6ei16_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_u8m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vloxseg6ei16_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x6_t test_vloxseg6ei16_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vloxseg6ei16_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x6_t test_vloxseg6ei16_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vloxseg6ei16_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x6_t test_vloxseg6ei16_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vloxseg6ei16_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x6_t test_vloxseg6ei16_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vloxseg6ei16_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x6_t test_vloxseg6ei16_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u32m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vloxseg6ei16_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x6_t test_vloxseg6ei16_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u64m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vloxseg6ei16_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x6_t test_vloxseg6ei16_v_f16mf4x6_tumu(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vloxseg6ei16_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x6_t test_vloxseg6ei16_v_f16mf2x6_tumu(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return 
__riscv_vloxseg6ei16_v_f16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vloxseg6ei16_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x6_t test_vloxseg6ei16_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vloxseg6ei16_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x6_t test_vloxseg6ei16_v_f32mf2x6_tumu(vbool64_t vm, + vfloat32mf2x6_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vloxseg6ei16_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x6_t test_vloxseg6ei16_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vloxseg6ei16_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x6_t test_vloxseg6ei16_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vloxseg6ei16_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x6_t test_vloxseg6ei16_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i8mf8x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vloxseg6ei16_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x6_t test_vloxseg6ei16_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i8mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vloxseg6ei16_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x6_t test_vloxseg6ei16_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i8mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vloxseg6ei16_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x6_t test_vloxseg6ei16_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_i8m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vloxseg6ei16_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x6_t test_vloxseg6ei16_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vloxseg6ei16_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x6_t test_vloxseg6ei16_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vloxseg6ei16_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x6_t 
test_vloxseg6ei16_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vloxseg6ei16_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x6_t test_vloxseg6ei16_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vloxseg6ei16_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x6_t test_vloxseg6ei16_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vloxseg6ei16_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x6_t test_vloxseg6ei16_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vloxseg6ei16_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x6_t test_vloxseg6ei16_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf8x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vloxseg6ei16_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x6_t test_vloxseg6ei16_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vloxseg6ei16_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x6_t test_vloxseg6ei16_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vloxseg6ei16_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x6_t test_vloxseg6ei16_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vloxseg6ei16_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x6_t test_vloxseg6ei16_v_u16mf4x6_tumu(vbool64_t vm, + vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vloxseg6ei16_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x6_t test_vloxseg6ei16_v_u16mf2x6_tumu(vbool32_t vm, + vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vloxseg6ei16_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x6_t test_vloxseg6ei16_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t 
test_vloxseg6ei16_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x6_t test_vloxseg6ei16_v_u32mf2x6_tumu(vbool64_t vm, + vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vloxseg6ei16_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x6_t test_vloxseg6ei16_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vloxseg6ei16_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x6_t test_vloxseg6ei16_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vloxseg6ei16_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x6_t test_vloxseg6ei16_v_f16mf4x6_mu(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vloxseg6ei16_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x6_t test_vloxseg6ei16_v_f16mf2x6_mu(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vloxseg6ei16_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x6_t test_vloxseg6ei16_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f16m1x6_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vloxseg6ei16_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x6_t test_vloxseg6ei16_v_f32mf2x6_mu(vbool64_t vm, + vfloat32mf2x6_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vloxseg6ei16_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x6_t test_vloxseg6ei16_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f32m1x6_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vloxseg6ei16_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x6_t test_vloxseg6ei16_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_f64m1x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vloxseg6ei16_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x6_t test_vloxseg6ei16_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_i8mf8x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vloxseg6ei16_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x6_t test_vloxseg6ei16_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, + const int8_t 
*rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_i8mf4x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vloxseg6ei16_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x6_t test_vloxseg6ei16_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_i8mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vloxseg6ei16_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x6_t test_vloxseg6ei16_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_i8m1x6_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vloxseg6ei16_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x6_t test_vloxseg6ei16_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vloxseg6ei16_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x6_t test_vloxseg6ei16_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vloxseg6ei16_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x6_t test_vloxseg6ei16_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_i16m1x6_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vloxseg6ei16_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x6_t test_vloxseg6ei16_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vloxseg6ei16_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x6_t test_vloxseg6ei16_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i32m1x6_mu(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vloxseg6ei16_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x6_t test_vloxseg6ei16_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_i64m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vloxseg6ei16_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x6_t test_vloxseg6ei16_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf8x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vloxseg6ei16_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x6_t test_vloxseg6ei16_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf4x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vloxseg6ei16_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x6_t test_vloxseg6ei16_v_u8mf2x6_mu(vbool16_t vm, 
vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u8mf2x6_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vloxseg6ei16_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x6_t test_vloxseg6ei16_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_u8m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vloxseg6ei16_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x6_t test_vloxseg6ei16_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vloxseg6ei16_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x6_t test_vloxseg6ei16_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vloxseg6ei16_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x6_t test_vloxseg6ei16_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u16m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vloxseg6ei16_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x6_t test_vloxseg6ei16_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vloxseg6ei16_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x6_t test_vloxseg6ei16_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u32m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vloxseg6ei16_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x6_t test_vloxseg6ei16_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_u64m1x6_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei32.c index 49986992e..908fc5745 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei32.c @@ -6,418 +6,630 @@ #include -vfloat16mf4x6_t test_vloxseg6ei32_v_f16mf4x6_tu(vfloat16mf4x6_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x6_t test_vloxseg6ei32_v_f16mf4x6_tu(vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f16mf4x6_tu(vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vloxseg6ei32_v_f16mf2x6_tu(vfloat16mf2x6_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x6_t test_vloxseg6ei32_v_f16mf2x6_tu(vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f16mf2x6_tu(vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vloxseg6ei32_v_f16m1x6_tu(vfloat16m1x6_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x6_t 
test_vloxseg6ei32_v_f16m1x6_tu(vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f16m1x6_tu(vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vloxseg6ei32_v_f32mf2x6_tu(vfloat32mf2x6_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x6_t test_vloxseg6ei32_v_f32mf2x6_tu(vfloat32mf2x6_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f32mf2x6_tu(vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vloxseg6ei32_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x6_t test_vloxseg6ei32_v_f32m1x6_tu(vfloat32m1x6_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg6ei32_v_f32m1x6_tu(vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vloxseg6ei32_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x6_t test_vloxseg6ei32_v_f64m1x6_tu(vfloat64m1x6_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f64m1x6_tu(vd, rs1, rs2, vl); } -vint8mf8x6_t test_vloxseg6ei32_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x6_t test_vloxseg6ei32_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i8mf8x6_tu(vd, rs1, rs2, vl); } -vint8mf4x6_t test_vloxseg6ei32_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x6_t test_vloxseg6ei32_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i8mf4x6_tu(vd, rs1, rs2, vl); } -vint8mf2x6_t test_vloxseg6ei32_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x6_t test_vloxseg6ei32_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i8mf2x6_tu(vd, rs1, rs2, vl); } -vint8m1x6_t test_vloxseg6ei32_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x6_t test_vloxseg6ei32_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i8m1x6_tu(vd, rs1, rs2, vl); } -vint16mf4x6_t test_vloxseg6ei32_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x6_t test_vloxseg6ei32_v_i16mf4x6_tu(vint16mf4x6_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i16mf4x6_tu(vd, rs1, rs2, vl); } -vint16mf2x6_t test_vloxseg6ei32_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x6_t test_vloxseg6ei32_v_i16mf2x6_tu(vint16mf2x6_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i16mf2x6_tu(vd, rs1, rs2, vl); } -vint16m1x6_t test_vloxseg6ei32_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x6_t test_vloxseg6ei32_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i16m1x6_tu(vd, rs1, rs2, vl); } -vint32mf2x6_t test_vloxseg6ei32_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x6_t test_vloxseg6ei32_v_i32mf2x6_tu(vint32mf2x6_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i32mf2x6_tu(vd, rs1, rs2, vl); } -vint32m1x6_t test_vloxseg6ei32_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x6_t test_vloxseg6ei32_v_i32m1x6_tu(vint32m1x6_t 
vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i32m1x6_tu(vd, rs1, rs2, vl); } -vint64m1x6_t test_vloxseg6ei32_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x6_t test_vloxseg6ei32_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i64m1x6_tu(vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vloxseg6ei32_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x6_t test_vloxseg6ei32_v_u8mf8x6_tu(vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u8mf8x6_tu(vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vloxseg6ei32_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x6_t test_vloxseg6ei32_v_u8mf4x6_tu(vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u8mf4x6_tu(vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vloxseg6ei32_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x6_t test_vloxseg6ei32_v_u8mf2x6_tu(vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u8mf2x6_tu(vd, rs1, rs2, vl); } -vuint8m1x6_t test_vloxseg6ei32_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x6_t test_vloxseg6ei32_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u8m1x6_tu(vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vloxseg6ei32_v_u16mf4x6_tu(vuint16mf4x6_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x6_t test_vloxseg6ei32_v_u16mf4x6_tu(vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u16mf4x6_tu(vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vloxseg6ei32_v_u16mf2x6_tu(vuint16mf2x6_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x6_t test_vloxseg6ei32_v_u16mf2x6_tu(vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u16mf2x6_tu(vd, rs1, rs2, vl); } -vuint16m1x6_t test_vloxseg6ei32_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x6_t test_vloxseg6ei32_v_u16m1x6_tu(vuint16m1x6_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u16m1x6_tu(vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vloxseg6ei32_v_u32mf2x6_tu(vuint32mf2x6_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x6_t test_vloxseg6ei32_v_u32mf2x6_tu(vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u32mf2x6_tu(vd, rs1, rs2, vl); } -vuint32m1x6_t test_vloxseg6ei32_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x6_t test_vloxseg6ei32_v_u32m1x6_tu(vuint32m1x6_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u32m1x6_tu(vd, rs1, rs2, vl); } -vuint64m1x6_t test_vloxseg6ei32_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x6_t test_vloxseg6ei32_v_u64m1x6_tu(vuint64m1x6_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u64m1x6_tu(vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vloxseg6ei32_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x6_t 
test_vloxseg6ei32_v_f16mf4x6_tum(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vloxseg6ei32_v_f16mf2x6_tum(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x6_t test_vloxseg6ei32_v_f16mf2x6_tum(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vloxseg6ei32_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x6_t test_vloxseg6ei32_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f16m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vloxseg6ei32_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x6_t test_vloxseg6ei32_v_f32mf2x6_tum(vbool64_t vm, + vfloat32mf2x6_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vloxseg6ei32_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x6_t test_vloxseg6ei32_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f32m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vloxseg6ei32_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x6_t test_vloxseg6ei32_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f64m1x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vloxseg6ei32_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x6_t test_vloxseg6ei32_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i8mf8x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vloxseg6ei32_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x6_t test_vloxseg6ei32_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg6ei32_v_i8mf4x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vloxseg6ei32_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x6_t test_vloxseg6ei32_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg6ei32_v_i8mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vloxseg6ei32_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x6_t test_vloxseg6ei32_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg6ei32_v_i8m1x6_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vloxseg6ei32_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x6_t test_vloxseg6ei32_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vloxseg6ei32_v_i16mf2x6_tum(vbool32_t vm, 
vint16mf2x6_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x6_t test_vloxseg6ei32_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vloxseg6ei32_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x6_t test_vloxseg6ei32_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i16m1x6_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vloxseg6ei32_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x6_t test_vloxseg6ei32_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vloxseg6ei32_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x6_t test_vloxseg6ei32_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i32m1x6_tum(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vloxseg6ei32_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x6_t test_vloxseg6ei32_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i64m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vloxseg6ei32_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x6_t test_vloxseg6ei32_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u8mf8x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vloxseg6ei32_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x6_t test_vloxseg6ei32_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u8mf4x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vloxseg6ei32_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x6_t test_vloxseg6ei32_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u8mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vloxseg6ei32_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x6_t test_vloxseg6ei32_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg6ei32_v_u8m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vloxseg6ei32_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x6_t test_vloxseg6ei32_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vloxseg6ei32_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x6_t test_vloxseg6ei32_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u16mf2x6_tum(vm, vd, 
rs1, rs2, vl); } -vuint16m1x6_t test_vloxseg6ei32_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x6_t test_vloxseg6ei32_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u16m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vloxseg6ei32_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x6_t test_vloxseg6ei32_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vloxseg6ei32_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x6_t test_vloxseg6ei32_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u32m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vloxseg6ei32_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x6_t test_vloxseg6ei32_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u64m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vloxseg6ei32_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x6_t test_vloxseg6ei32_v_f16mf4x6_tumu(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vloxseg6ei32_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x6_t test_vloxseg6ei32_v_f16mf2x6_tumu(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vloxseg6ei32_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x6_t test_vloxseg6ei32_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vloxseg6ei32_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x6_t test_vloxseg6ei32_v_f32mf2x6_tumu(vbool64_t vm, + vfloat32mf2x6_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vloxseg6ei32_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x6_t test_vloxseg6ei32_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vloxseg6ei32_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x6_t test_vloxseg6ei32_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vloxseg6ei32_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x6_t 
test_vloxseg6ei32_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i8mf8x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vloxseg6ei32_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x6_t test_vloxseg6ei32_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i8mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vloxseg6ei32_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x6_t test_vloxseg6ei32_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i8mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vloxseg6ei32_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x6_t test_vloxseg6ei32_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg6ei32_v_i8m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vloxseg6ei32_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x6_t test_vloxseg6ei32_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vloxseg6ei32_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x6_t test_vloxseg6ei32_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vloxseg6ei32_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x6_t test_vloxseg6ei32_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vloxseg6ei32_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x6_t test_vloxseg6ei32_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vloxseg6ei32_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x6_t test_vloxseg6ei32_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vloxseg6ei32_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x6_t test_vloxseg6ei32_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_i64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vloxseg6ei32_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x6_t test_vloxseg6ei32_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u8mf8x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vloxseg6ei32_v_u8mf4x6_tumu(vbool32_t vm, 
vuint8mf4x6_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x6_t test_vloxseg6ei32_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u8mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vloxseg6ei32_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x6_t test_vloxseg6ei32_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u8mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vloxseg6ei32_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x6_t test_vloxseg6ei32_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u8m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vloxseg6ei32_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x6_t test_vloxseg6ei32_v_u16mf4x6_tumu(vbool64_t vm, + vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vloxseg6ei32_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x6_t test_vloxseg6ei32_v_u16mf2x6_tumu(vbool32_t vm, + vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vloxseg6ei32_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x6_t test_vloxseg6ei32_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vloxseg6ei32_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x6_t test_vloxseg6ei32_v_u32mf2x6_tumu(vbool64_t vm, + vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vloxseg6ei32_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x6_t test_vloxseg6ei32_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vloxseg6ei32_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x6_t test_vloxseg6ei32_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_u64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vloxseg6ei32_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x6_t test_vloxseg6ei32_v_f16mf4x6_mu(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei32_v_f16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vloxseg6ei32_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x6_t test_vloxseg6ei32_v_f16mf2x6_mu(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + 
vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_f16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vloxseg6ei32_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1x6_t test_vloxseg6ei32_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_f16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vloxseg6ei32_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vloxseg6ei32_v_f32mf2x6_mu(vbool64_t vm,
+ vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_f32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vloxseg6ei32_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat32m1x6_t test_vloxseg6ei32_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei32_v_f32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vloxseg6ei32_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat64m1x6_t test_vloxseg6ei32_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_f64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vloxseg6ei32_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint8mf8x6_t test_vloxseg6ei32_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1, vuint32mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei32_v_i8mf8x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vloxseg6ei32_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint8mf4x6_t test_vloxseg6ei32_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei32_v_i8mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vloxseg6ei32_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint8mf2x6_t test_vloxseg6ei32_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei32_v_i8mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vloxseg6ei32_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint8m1x6_t test_vloxseg6ei32_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei32_v_i8m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vloxseg6ei32_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint16mf4x6_t test_vloxseg6ei32_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_i16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vloxseg6ei32_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint16mf2x6_t test_vloxseg6ei32_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_i16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vloxseg6ei32_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint16m1x6_t test_vloxseg6ei32_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1, vuint32m2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei32_v_i16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vloxseg6ei32_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint32mf2x6_t test_vloxseg6ei32_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_i32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vloxseg6ei32_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint32m1x6_t test_vloxseg6ei32_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei32_v_i32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vloxseg6ei32_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint64m1x6_t test_vloxseg6ei32_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_i64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vloxseg6ei32_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint8mf8x6_t test_vloxseg6ei32_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_u8mf8x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vloxseg6ei32_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint8mf4x6_t test_vloxseg6ei32_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_u8mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vloxseg6ei32_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint8mf2x6_t test_vloxseg6ei32_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_u8mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vloxseg6ei32_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint8m1x6_t test_vloxseg6ei32_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei32_v_u8m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vloxseg6ei32_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint16mf4x6_t test_vloxseg6ei32_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_u16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vloxseg6ei32_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint16mf2x6_t test_vloxseg6ei32_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_u16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vloxseg6ei32_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint16m1x6_t test_vloxseg6ei32_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_u16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vloxseg6ei32_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint32mf2x6_t test_vloxseg6ei32_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_u32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vloxseg6ei32_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint32m1x6_t test_vloxseg6ei32_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_u32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vloxseg6ei32_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint64m1x6_t test_vloxseg6ei32_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei32_v_u64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei64.c
index d0c41237b..3ff9b7bfb 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei64.c
@@ -6,418 +6,630 @@

 #include <riscv_vector.h>

-vfloat16mf4x6_t test_vloxseg6ei64_v_f16mf4x6_tu(vfloat16mf4x6_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vloxseg6ei64_v_f16mf4x6_tu(vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vloxseg6ei64_v_f16mf2x6_tu(vfloat16mf2x6_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vloxseg6ei64_v_f16mf2x6_tu(vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vloxseg6ei64_v_f16m1x6_tu(vfloat16m1x6_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1x6_t test_vloxseg6ei64_v_f16m1x6_tu(vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vloxseg6ei64_v_f32mf2x6_tu(vfloat32mf2x6_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vloxseg6ei64_v_f32mf2x6_tu(vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vloxseg6ei64_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1x6_t test_vloxseg6ei64_v_f32m1x6_tu(vfloat32m1x6_t vd,
+ const float *rs1, vuint64m2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_f32m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vloxseg6ei64_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1x6_t test_vloxseg6ei64_v_f64m1x6_tu(vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f64m1x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vloxseg6ei64_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint8mf8x6_t test_vloxseg6ei64_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf8x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vloxseg6ei64_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint8mf4x6_t test_vloxseg6ei64_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf4x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vloxseg6ei64_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint8mf2x6_t test_vloxseg6ei64_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vloxseg6ei64_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint8m1x6_t test_vloxseg6ei64_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1,
+ vuint64m8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i8m1x6_tu(vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vloxseg6ei64_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint16mf4x6_t test_vloxseg6ei64_v_i16mf4x6_tu(vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vloxseg6ei64_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint16mf2x6_t test_vloxseg6ei64_v_i16mf2x6_tu(vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vloxseg6ei64_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint16m1x6_t test_vloxseg6ei64_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i16m1x6_tu(vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vloxseg6ei64_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint32mf2x6_t test_vloxseg6ei64_v_i32mf2x6_tu(vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vloxseg6ei64_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint32m1x6_t test_vloxseg6ei64_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i32m1x6_tu(vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vloxseg6ei64_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint64m1x6_t test_vloxseg6ei64_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i64m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vloxseg6ei64_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint8mf8x6_t test_vloxseg6ei64_v_u8mf8x6_tu(vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf8x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vloxseg6ei64_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint8mf4x6_t test_vloxseg6ei64_v_u8mf4x6_tu(vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf4x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vloxseg6ei64_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint8mf2x6_t test_vloxseg6ei64_v_u8mf2x6_tu(vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vloxseg6ei64_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1x6_t test_vloxseg6ei64_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1,
+ vuint64m8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vloxseg6ei64_v_u16mf4x6_tu(vuint16mf4x6_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4x6_t test_vloxseg6ei64_v_u16mf4x6_tu(vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vloxseg6ei64_v_u16mf2x6_tu(vuint16mf2x6_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2x6_t test_vloxseg6ei64_v_u16mf2x6_tu(vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vloxseg6ei64_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1x6_t test_vloxseg6ei64_v_u16m1x6_tu(vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vloxseg6ei64_v_u32mf2x6_tu(vuint32mf2x6_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x6_t test_vloxseg6ei64_v_u32mf2x6_tu(vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vloxseg6ei64_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x6_t test_vloxseg6ei64_v_u32m1x6_tu(vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u32m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vloxseg6ei64_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x6_t test_vloxseg6ei64_v_u64m1x6_tu(vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u64m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf4x6_t test_vloxseg6ei64_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vloxseg6ei64_v_f16mf4x6_tum(vbool64_t vm,
+ vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vloxseg6ei64_v_f16mf2x6_tum(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vloxseg6ei64_v_f16mf2x6_tum(vbool32_t vm,
+ vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vloxseg6ei64_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1x6_t test_vloxseg6ei64_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vloxseg6ei64_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vloxseg6ei64_v_f32mf2x6_tum(vbool64_t vm,
+ vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vloxseg6ei64_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1x6_t test_vloxseg6ei64_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vloxseg6ei64_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1x6_t test_vloxseg6ei64_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vloxseg6ei64_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint8mf8x6_t test_vloxseg6ei64_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf8x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vloxseg6ei64_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint8mf4x6_t test_vloxseg6ei64_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vloxseg6ei64_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint8mf2x6_t test_vloxseg6ei64_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vloxseg6ei64_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint8m1x6_t test_vloxseg6ei64_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i8m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vloxseg6ei64_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint16mf4x6_t test_vloxseg6ei64_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vloxseg6ei64_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint16mf2x6_t test_vloxseg6ei64_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vloxseg6ei64_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint16m1x6_t test_vloxseg6ei64_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vloxseg6ei64_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint32mf2x6_t test_vloxseg6ei64_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vloxseg6ei64_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint32m1x6_t test_vloxseg6ei64_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vloxseg6ei64_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint64m1x6_t test_vloxseg6ei64_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vloxseg6ei64_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint8mf8x6_t test_vloxseg6ei64_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf8x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vloxseg6ei64_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint8mf4x6_t test_vloxseg6ei64_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vloxseg6ei64_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint8mf2x6_t test_vloxseg6ei64_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vloxseg6ei64_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1x6_t test_vloxseg6ei64_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_u8m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vloxseg6ei64_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4x6_t test_vloxseg6ei64_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vloxseg6ei64_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2x6_t test_vloxseg6ei64_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vloxseg6ei64_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1x6_t test_vloxseg6ei64_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vloxseg6ei64_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x6_t test_vloxseg6ei64_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vloxseg6ei64_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x6_t test_vloxseg6ei64_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vloxseg6ei64_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x6_t test_vloxseg6ei64_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf4x6_t test_vloxseg6ei64_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vloxseg6ei64_v_f16mf4x6_tumu(vbool64_t vm,
+ vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vloxseg6ei64_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vloxseg6ei64_v_f16mf2x6_tumu(vbool32_t vm,
+ vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vloxseg6ei64_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1x6_t test_vloxseg6ei64_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vloxseg6ei64_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vloxseg6ei64_v_f32mf2x6_tumu(vbool64_t vm,
+ vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vloxseg6ei64_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1x6_t test_vloxseg6ei64_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f32m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vloxseg6ei64_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1x6_t test_vloxseg6ei64_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f64m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vloxseg6ei64_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint8mf8x6_t test_vloxseg6ei64_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf8x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vloxseg6ei64_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint8mf4x6_t test_vloxseg6ei64_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vloxseg6ei64_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint8mf2x6_t test_vloxseg6ei64_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vloxseg6ei64_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint8m1x6_t test_vloxseg6ei64_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i8m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vloxseg6ei64_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint16mf4x6_t test_vloxseg6ei64_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vloxseg6ei64_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint16mf2x6_t test_vloxseg6ei64_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vloxseg6ei64_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint16m1x6_t test_vloxseg6ei64_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vloxseg6ei64_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint32mf2x6_t test_vloxseg6ei64_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vloxseg6ei64_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint32m1x6_t test_vloxseg6ei64_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i32m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vloxseg6ei64_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint64m1x6_t test_vloxseg6ei64_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i64m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vloxseg6ei64_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint8mf8x6_t test_vloxseg6ei64_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf8x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vloxseg6ei64_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint8mf4x6_t test_vloxseg6ei64_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vloxseg6ei64_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint8mf2x6_t test_vloxseg6ei64_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vloxseg6ei64_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1x6_t test_vloxseg6ei64_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1,
+ vuint64m8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vloxseg6ei64_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4x6_t test_vloxseg6ei64_v_u16mf4x6_tumu(vbool64_t vm,
+ vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vloxseg6ei64_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2x6_t test_vloxseg6ei64_v_u16mf2x6_tumu(vbool32_t vm,
+ vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vloxseg6ei64_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1x6_t test_vloxseg6ei64_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vloxseg6ei64_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x6_t test_vloxseg6ei64_v_u32mf2x6_tumu(vbool64_t vm,
+ vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vloxseg6ei64_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x6_t test_vloxseg6ei64_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u32m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vloxseg6ei64_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x6_t test_vloxseg6ei64_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u64m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf4x6_t test_vloxseg6ei64_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vloxseg6ei64_v_f16mf4x6_mu(vbool64_t vm,
+ vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vloxseg6ei64_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vloxseg6ei64_v_f16mf2x6_mu(vbool32_t vm,
+ vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vloxseg6ei64_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1x6_t test_vloxseg6ei64_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vloxseg6ei64_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vloxseg6ei64_v_f32mf2x6_mu(vbool64_t vm,
+ vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vloxseg6ei64_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1x6_t test_vloxseg6ei64_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1, vuint64m2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_f32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vloxseg6ei64_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1x6_t test_vloxseg6ei64_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_f64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vloxseg6ei64_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint8mf8x6_t test_vloxseg6ei64_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf8x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vloxseg6ei64_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint8mf4x6_t test_vloxseg6ei64_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vloxseg6ei64_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint8mf2x6_t test_vloxseg6ei64_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i8mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vloxseg6ei64_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint8m1x6_t test_vloxseg6ei64_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i8m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vloxseg6ei64_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint16mf4x6_t test_vloxseg6ei64_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vloxseg6ei64_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint16mf2x6_t test_vloxseg6ei64_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vloxseg6ei64_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint16m1x6_t test_vloxseg6ei64_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vloxseg6ei64_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint32mf2x6_t test_vloxseg6ei64_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_i32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vloxseg6ei64_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint32m1x6_t test_vloxseg6ei64_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vloxseg6ei64_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint64m1x6_t test_vloxseg6ei64_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_i64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vloxseg6ei64_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint8mf8x6_t test_vloxseg6ei64_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf8x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vloxseg6ei64_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint8mf4x6_t test_vloxseg6ei64_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vloxseg6ei64_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint8mf2x6_t test_vloxseg6ei64_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u8mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vloxseg6ei64_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1x6_t test_vloxseg6ei64_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei64_v_u8m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vloxseg6ei64_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4x6_t test_vloxseg6ei64_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vloxseg6ei64_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2x6_t test_vloxseg6ei64_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vloxseg6ei64_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1x6_t test_vloxseg6ei64_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vloxseg6ei64_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x6_t test_vloxseg6ei64_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vloxseg6ei64_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x6_t test_vloxseg6ei64_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vloxseg6ei64_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x6_t test_vloxseg6ei64_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei64_v_u64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei8.c
index 4166f37ad..4504fd1a2 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei8.c
@@ -6,418 +6,624 @@

 #include <riscv_vector.h>

-vfloat16mf4x6_t test_vloxseg6ei8_v_f16mf4x6_tu(vfloat16mf4x6_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vloxseg6ei8_v_f16mf4x6_tu(vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vloxseg6ei8_v_f16mf2x6_tu(vfloat16mf2x6_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vloxseg6ei8_v_f16mf2x6_tu(vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vloxseg6ei8_v_f16m1x6_tu(vfloat16m1x6_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x6_t test_vloxseg6ei8_v_f16m1x6_tu(vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vloxseg6ei8_v_f32mf2x6_tu(vfloat32mf2x6_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vloxseg6ei8_v_f32mf2x6_tu(vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vloxseg6ei8_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x6_t test_vloxseg6ei8_v_f32m1x6_tu(vfloat32m1x6_t vd,
+ const float *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_f32m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vloxseg6ei8_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x6_t test_vloxseg6ei8_v_f64m1x6_tu(vfloat64m1x6_t vd,
+ const double *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_f64m1x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vloxseg6ei8_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x6_t test_vloxseg6ei8_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf8x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vloxseg6ei8_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x6_t test_vloxseg6ei8_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf4x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vloxseg6ei8_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x6_t test_vloxseg6ei8_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vloxseg6ei8_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x6_t test_vloxseg6ei8_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i8m1x6_tu(vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vloxseg6ei8_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x6_t test_vloxseg6ei8_v_i16mf4x6_tu(vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vloxseg6ei8_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x6_t test_vloxseg6ei8_v_i16mf2x6_tu(vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vloxseg6ei8_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x6_t test_vloxseg6ei8_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i16m1x6_tu(vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vloxseg6ei8_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x6_t test_vloxseg6ei8_v_i32mf2x6_tu(vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vloxseg6ei8_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x6_t test_vloxseg6ei8_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i32m1x6_tu(vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vloxseg6ei8_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x6_t test_vloxseg6ei8_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i64m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vloxseg6ei8_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x6_t test_vloxseg6ei8_v_u8mf8x6_tu(vuint8mf8x6_t vd,
+ const uint8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf8x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vloxseg6ei8_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x6_t test_vloxseg6ei8_v_u8mf4x6_tu(vuint8mf4x6_t vd,
+ const uint8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf4x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vloxseg6ei8_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x6_t test_vloxseg6ei8_v_u8mf2x6_tu(vuint8mf2x6_t vd,
+ const uint8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vloxseg6ei8_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x6_t test_vloxseg6ei8_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u8m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vloxseg6ei8_v_u16mf4x6_tu(vuint16mf4x6_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x6_t test_vloxseg6ei8_v_u16mf4x6_tu(vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vloxseg6ei8_v_u16mf2x6_tu(vuint16mf2x6_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x6_t test_vloxseg6ei8_v_u16mf2x6_tu(vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vloxseg6ei8_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x6_t test_vloxseg6ei8_v_u16m1x6_tu(vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vloxseg6ei8_v_u32mf2x6_tu(vuint32mf2x6_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x6_t test_vloxseg6ei8_v_u32mf2x6_tu(vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vloxseg6ei8_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x6_t test_vloxseg6ei8_v_u32m1x6_tu(vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u32m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vloxseg6ei8_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x6_t test_vloxseg6ei8_v_u64m1x6_tu(vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u64m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf4x6_t test_vloxseg6ei8_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vloxseg6ei8_v_f16mf4x6_tum(vbool64_t vm,
+ vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vloxseg6ei8_v_f16mf2x6_tum(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vloxseg6ei8_v_f16mf2x6_tum(vbool32_t vm,
+ vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vloxseg6ei8_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x6_t test_vloxseg6ei8_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vloxseg6ei8_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vloxseg6ei8_v_f32mf2x6_tum(vbool64_t vm,
+ vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vloxseg6ei8_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x6_t test_vloxseg6ei8_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_f32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vloxseg6ei8_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x6_t test_vloxseg6ei8_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vloxseg6ei8_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x6_t test_vloxseg6ei8_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf8x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vloxseg6ei8_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x6_t test_vloxseg6ei8_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vloxseg6ei8_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x6_t test_vloxseg6ei8_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vloxseg6ei8_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x6_t test_vloxseg6ei8_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vloxseg6ei8_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x6_t test_vloxseg6ei8_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vloxseg6ei8_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x6_t test_vloxseg6ei8_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vloxseg6ei8_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x6_t test_vloxseg6ei8_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vloxseg6ei8_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x6_t test_vloxseg6ei8_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vloxseg6ei8_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x6_t test_vloxseg6ei8_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vloxseg6ei8_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x6_t test_vloxseg6ei8_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vloxseg6ei8_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x6_t test_vloxseg6ei8_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf8x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vloxseg6ei8_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x6_t test_vloxseg6ei8_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vloxseg6ei8_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x6_t test_vloxseg6ei8_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vloxseg6ei8_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x6_t test_vloxseg6ei8_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_u8m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vloxseg6ei8_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x6_t test_vloxseg6ei8_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vloxseg6ei8_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x6_t test_vloxseg6ei8_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vloxseg6ei8_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x6_t test_vloxseg6ei8_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vloxseg6ei8_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x6_t test_vloxseg6ei8_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vloxseg6ei8_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x6_t test_vloxseg6ei8_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vloxseg6ei8_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x6_t test_vloxseg6ei8_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf4x6_t test_vloxseg6ei8_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vloxseg6ei8_v_f16mf4x6_tumu(vbool64_t vm,
+ vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vloxseg6ei8_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vloxseg6ei8_v_f16mf2x6_tumu(vbool32_t vm,
+ vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vloxseg6ei8_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x6_t test_vloxseg6ei8_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vloxseg6ei8_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vloxseg6ei8_v_f32mf2x6_tumu(vbool64_t vm,
+ vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vloxseg6ei8_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x6_t test_vloxseg6ei8_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f32m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vloxseg6ei8_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x6_t test_vloxseg6ei8_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f64m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vloxseg6ei8_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x6_t test_vloxseg6ei8_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf8x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vloxseg6ei8_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x6_t test_vloxseg6ei8_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vloxseg6ei8_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x6_t test_vloxseg6ei8_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vloxseg6ei8_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x6_t test_vloxseg6ei8_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vloxseg6ei8_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x6_t test_vloxseg6ei8_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vloxseg6ei8_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x6_t test_vloxseg6ei8_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vloxseg6ei8_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x6_t test_vloxseg6ei8_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vloxseg6ei8_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x6_t test_vloxseg6ei8_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vloxseg6ei8_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x6_t test_vloxseg6ei8_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i32m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vloxseg6ei8_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x6_t test_vloxseg6ei8_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i64m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vloxseg6ei8_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x6_t test_vloxseg6ei8_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf8x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vloxseg6ei8_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x6_t test_vloxseg6ei8_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vloxseg6ei8_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x6_t test_vloxseg6ei8_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vloxseg6ei8_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x6_t test_vloxseg6ei8_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_u8m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vloxseg6ei8_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x6_t test_vloxseg6ei8_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vloxseg6ei8_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x6_t test_vloxseg6ei8_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vloxseg6ei8_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x6_t test_vloxseg6ei8_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vloxseg6ei8_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x6_t test_vloxseg6ei8_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vloxseg6ei8_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x6_t test_vloxseg6ei8_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u32m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vloxseg6ei8_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x6_t test_vloxseg6ei8_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u64m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf4x6_t test_vloxseg6ei8_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vloxseg6ei8_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vloxseg6ei8_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vloxseg6ei8_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vloxseg6ei8_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x6_t test_vloxseg6ei8_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vloxseg6ei8_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vloxseg6ei8_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_f32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vloxseg6ei8_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x6_t test_vloxseg6ei8_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_f32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vloxseg6ei8_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x6_t test_vloxseg6ei8_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_f64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vloxseg6ei8_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x6_t test_vloxseg6ei8_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf8x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vloxseg6ei8_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x6_t test_vloxseg6ei8_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vloxseg6ei8_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x6_t test_vloxseg6ei8_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vloxseg6ei8_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x6_t test_vloxseg6ei8_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i8m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vloxseg6ei8_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x6_t test_vloxseg6ei8_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vloxseg6ei8_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x6_t test_vloxseg6ei8_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vloxseg6ei8_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x6_t test_vloxseg6ei8_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vloxseg6ei8_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x6_t test_vloxseg6ei8_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_i32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vloxseg6ei8_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x6_t test_vloxseg6ei8_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vloxseg6ei8_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x6_t test_vloxseg6ei8_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_i64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vloxseg6ei8_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x6_t test_vloxseg6ei8_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf8x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vloxseg6ei8_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x6_t test_vloxseg6ei8_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vloxseg6ei8_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x6_t test_vloxseg6ei8_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_u8mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vloxseg6ei8_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x6_t test_vloxseg6ei8_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vloxseg6ei8_v_u8m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vloxseg6ei8_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x6_t test_vloxseg6ei8_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vloxseg6ei8_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x6_t test_vloxseg6ei8_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vloxseg6ei8_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x6_t test_vloxseg6ei8_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vloxseg6ei8_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x6_t test_vloxseg6ei8_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vloxseg6ei8_v_u32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vloxseg6ei8_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, const
uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x6_t test_vloxseg6ei8_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei8_v_u32m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vloxseg6ei8_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x6_t test_vloxseg6ei8_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg6ei8_v_u64m1x6_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei16.c index f29377190..7b7d9eb09 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei16.c @@ -6,418 +6,630 @@ #include -vfloat16mf4x7_t test_vloxseg7ei16_v_f16mf4x7_tu(vfloat16mf4x7_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei16_v_f16mf4x7_tu(vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16mf4x7_tu(vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei16_v_f16mf2x7_tu(vfloat16mf2x7_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei16_v_f16mf2x7_tu(vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16mf2x7_tu(vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei16_v_f16m1x7_tu(vfloat16m1x7_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei16_v_f16m1x7_tu(vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16m1x7_tu(vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei16_v_f32mf2x7_tu(vfloat32mf2x7_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei16_v_f32mf2x7_tu(vfloat32mf2x7_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f32mf2x7_tu(vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei16_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei16_v_f32m1x7_tu(vfloat32m1x7_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f32m1x7_tu(vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei16_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei16_v_f64m1x7_tu(vfloat64m1x7_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f64m1x7_tu(vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei16_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei16_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i8mf8x7_tu(vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei16_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei16_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i8mf4x7_tu(vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei16_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei16_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i8mf2x7_tu(vd, rs1, 
rs2, vl); } -vint8m1x7_t test_vloxseg7ei16_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei16_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i8m1x7_tu(vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei16_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei16_v_i16mf4x7_tu(vint16mf4x7_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i16mf4x7_tu(vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei16_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei16_v_i16mf2x7_tu(vint16mf2x7_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i16mf2x7_tu(vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei16_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei16_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i16m1x7_tu(vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei16_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei16_v_i32mf2x7_tu(vint32mf2x7_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i32mf2x7_tu(vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei16_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei16_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i32m1x7_tu(vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei16_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei16_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i64m1x7_tu(vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei16_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei16_v_u8mf8x7_tu(vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8mf8x7_tu(vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei16_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei16_v_u8mf4x7_tu(vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8mf4x7_tu(vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei16_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei16_v_u8mf2x7_tu(vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8mf2x7_tu(vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei16_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei16_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8m1x7_tu(vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei16_v_u16mf4x7_tu(vuint16mf4x7_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei16_v_u16mf4x7_tu(vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16mf4x7_tu(vd, rs1, rs2, vl); } -vuint16mf2x7_t 
test_vloxseg7ei16_v_u16mf2x7_tu(vuint16mf2x7_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei16_v_u16mf2x7_tu(vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16mf2x7_tu(vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei16_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei16_v_u16m1x7_tu(vuint16m1x7_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16m1x7_tu(vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei16_v_u32mf2x7_tu(vuint32mf2x7_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei16_v_u32mf2x7_tu(vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u32mf2x7_tu(vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei16_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei16_v_u32m1x7_tu(vuint32m1x7_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u32m1x7_tu(vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei16_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei16_v_u64m1x7_tu(vuint64m1x7_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u64m1x7_tu(vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vloxseg7ei16_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei16_v_f16mf4x7_tum(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei16_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei16_v_f16mf2x7_tum(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei16_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei16_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei16_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei16_v_f32mf2x7_tum(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei16_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei16_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f32m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei16_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei16_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f64m1x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t 
test_vloxseg7ei16_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei16_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei16_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei16_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei16_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei16_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_i8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei16_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei16_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_i8m1x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei16_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei16_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei16_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei16_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei16_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei16_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i16m1x7_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei16_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei16_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei16_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei16_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i32m1x7_tum(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei16_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei16_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i64m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei16_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei16_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return 
__riscv_vloxseg7ei16_v_u8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei16_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei16_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei16_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei16_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei16_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei16_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_u8m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei16_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei16_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei16_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei16_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei16_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei16_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei16_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei16_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei16_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei16_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u32m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei16_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei16_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u64m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vloxseg7ei16_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei16_v_f16mf4x7_tumu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei16_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { 
+vfloat16mf2x7_t test_vloxseg7ei16_v_f16mf2x7_tumu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei16_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei16_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei16_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei16_v_f32mf2x7_tumu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei16_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei16_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei16_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei16_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei16_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei16_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei16_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei16_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei16_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei16_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei16_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei16_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_i8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei16_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei16_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei16_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei16_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t 
test_vloxseg7ei16_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei16_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei16_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei16_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei16_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei16_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei16_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei16_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei16_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei16_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei16_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei16_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei16_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei16_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei16_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei16_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei16_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei16_v_u16mf4x7_tumu(vbool64_t vm, + vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei16_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei16_v_u16mf2x7_tumu(vbool32_t vm, + vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei16_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei16_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t 
*rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei16_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei16_v_u32mf2x7_tumu(vbool64_t vm, + vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei16_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei16_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei16_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei16_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vloxseg7ei16_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei16_v_f16mf4x7_mu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei16_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei16_v_f16mf2x7_mu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei16_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei16_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f16m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei16_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei16_v_f32mf2x7_mu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei16_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei16_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f32m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei16_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei16_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_f64m1x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei16_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei16_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_i8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei16_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, 
vuint16mf2_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei16_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_i8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei16_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei16_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_i8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei16_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei16_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_i8m1x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei16_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei16_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei16_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei16_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei16_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei16_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_i16m1x7_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei16_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei16_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei16_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei16_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i32m1x7_mu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei16_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei16_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_i64m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei16_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei16_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei16_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei16_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei16_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, 
const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei16_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei16_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei16_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_u8m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei16_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei16_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei16_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei16_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei16_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei16_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u16m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei16_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei16_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei16_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei16_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u32m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei16_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei16_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_u64m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei32.c index cb724a9f7..ca3f42e67 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei32.c @@ -6,418 +6,630 @@ #include -vfloat16mf4x7_t test_vloxseg7ei32_v_f16mf4x7_tu(vfloat16mf4x7_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei32_v_f16mf4x7_tu(vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16mf4x7_tu(vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei32_v_f16mf2x7_tu(vfloat16mf2x7_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei32_v_f16mf2x7_tu(vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16mf2x7_tu(vd, rs1, rs2, vl); } -vfloat16m1x7_t 
test_vloxseg7ei32_v_f16m1x7_tu(vfloat16m1x7_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei32_v_f16m1x7_tu(vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16m1x7_tu(vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei32_v_f32mf2x7_tu(vfloat32mf2x7_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei32_v_f32mf2x7_tu(vfloat32mf2x7_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f32mf2x7_tu(vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei32_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei32_v_f32m1x7_tu(vfloat32m1x7_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_f32m1x7_tu(vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei32_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei32_v_f64m1x7_tu(vfloat64m1x7_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f64m1x7_tu(vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei32_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei32_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i8mf8x7_tu(vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei32_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei32_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i8mf4x7_tu(vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei32_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei32_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i8mf2x7_tu(vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei32_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei32_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i8m1x7_tu(vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei32_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei32_v_i16mf4x7_tu(vint16mf4x7_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i16mf4x7_tu(vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei32_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei32_v_i16mf2x7_tu(vint16mf2x7_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i16mf2x7_tu(vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei32_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei32_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i16m1x7_tu(vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei32_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei32_v_i32mf2x7_tu(vint32mf2x7_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i32mf2x7_tu(vd, rs1, rs2, vl); } -vint32m1x7_t 
test_vloxseg7ei32_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei32_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i32m1x7_tu(vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei32_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei32_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i64m1x7_tu(vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei32_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei32_v_u8mf8x7_tu(vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf8x7_tu(vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei32_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei32_v_u8mf4x7_tu(vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf4x7_tu(vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei32_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei32_v_u8mf2x7_tu(vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf2x7_tu(vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei32_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei32_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8m1x7_tu(vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei32_v_u16mf4x7_tu(vuint16mf4x7_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei32_v_u16mf4x7_tu(vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u16mf4x7_tu(vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei32_v_u16mf2x7_tu(vuint16mf2x7_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei32_v_u16mf2x7_tu(vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u16mf2x7_tu(vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei32_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei32_v_u16m1x7_tu(vuint16m1x7_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u16m1x7_tu(vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei32_v_u32mf2x7_tu(vuint32mf2x7_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei32_v_u32mf2x7_tu(vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u32mf2x7_tu(vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei32_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei32_v_u32m1x7_tu(vuint32m1x7_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u32m1x7_tu(vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei32_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei32_v_u64m1x7_tu(vuint64m1x7_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u64m1x7_tu(vd, rs1, rs2, vl); } -vfloat16mf4x7_t 
test_vloxseg7ei32_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei32_v_f16mf4x7_tum(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei32_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei32_v_f16mf2x7_tum(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei32_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei32_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei32_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei32_v_f32mf2x7_tum(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei32_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei32_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f32m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei32_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei32_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f64m1x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei32_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei32_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei32_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei32_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_i8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei32_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei32_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_i8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei32_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei32_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_i8m1x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei32_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei32_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t 
vl) { return __riscv_vloxseg7ei32_v_i16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei32_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei32_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei32_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei32_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i16m1x7_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei32_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei32_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei32_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei32_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i32m1x7_tum(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei32_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei32_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i64m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei32_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei32_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei32_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei32_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei32_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei32_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei32_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei32_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_u8m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei32_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei32_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei32_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x7_t 
test_vloxseg7ei32_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei32_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei32_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u16m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei32_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei32_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei32_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei32_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u32m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei32_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei32_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u64m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vloxseg7ei32_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei32_v_f16mf4x7_tumu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei32_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei32_v_f16mf2x7_tumu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei32_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei32_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei32_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei32_v_f32mf2x7_tumu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei32_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei32_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei32_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei32_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f64m1x7_tumu(vm, vd, 
rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei32_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei32_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei32_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei32_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei32_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei32_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei32_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei32_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_i8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei32_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei32_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei32_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei32_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei32_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei32_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei32_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei32_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei32_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei32_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei32_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei32_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei32_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei32_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + 
vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei32_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei32_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei32_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei32_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei32_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei32_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei32_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei32_v_u16mf4x7_tumu(vbool64_t vm, + vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei32_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei32_v_u16mf2x7_tumu(vbool32_t vm, + vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei32_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei32_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei32_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei32_v_u32mf2x7_tumu(vbool64_t vm, + vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei32_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei32_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei32_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei32_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vloxseg7ei32_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei32_v_f16mf4x7_mu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei32_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, const 
_Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei32_v_f16mf2x7_mu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei32_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei32_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f16m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei32_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei32_v_f32mf2x7_mu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei32_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei32_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_f32m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei32_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei32_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_f64m1x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei32_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei32_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_i8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei32_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei32_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_i8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei32_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei32_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_i8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei32_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei32_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_i8m1x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei32_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei32_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei32_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei32_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t 
test_vloxseg7ei32_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei32_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_i16m1x7_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei32_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei32_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei32_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei32_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_i32m1x7_mu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei32_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei32_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_i64m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei32_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei32_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei32_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei32_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei32_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei32_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei32_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei32_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg7ei32_v_u8m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei32_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei32_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei32_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei32_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei32_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei32_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return 
__riscv_vloxseg7ei32_v_u16m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei32_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei32_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei32_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei32_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u32m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei32_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei32_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei32_v_u64m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei64.c index 532c4085a..3e847e4ff 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei64.c @@ -6,418 +6,630 @@ #include <riscv_vector.h> -vfloat16mf4x7_t test_vloxseg7ei64_v_f16mf4x7_tu(vfloat16mf4x7_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei64_v_f16mf4x7_tu(vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16mf4x7_tu(vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei64_v_f16mf2x7_tu(vfloat16mf2x7_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei64_v_f16mf2x7_tu(vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16mf2x7_tu(vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei64_v_f16m1x7_tu(vfloat16m1x7_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei64_v_f16m1x7_tu(vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16m1x7_tu(vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei64_v_f32mf2x7_tu(vfloat32mf2x7_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei64_v_f32mf2x7_tu(vfloat32mf2x7_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f32mf2x7_tu(vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei64_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei64_v_f32m1x7_tu(vfloat32m1x7_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_f32m1x7_tu(vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei64_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei64_v_f64m1x7_tu(vfloat64m1x7_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f64m1x7_tu(vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei64_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei64_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i8mf8x7_tu(vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei64_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1,
vuint64m2_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei64_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i8mf4x7_tu(vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei64_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei64_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i8mf2x7_tu(vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei64_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei64_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i8m1x7_tu(vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei64_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei64_v_i16mf4x7_tu(vint16mf4x7_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i16mf4x7_tu(vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei64_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei64_v_i16mf2x7_tu(vint16mf2x7_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i16mf2x7_tu(vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei64_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei64_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i16m1x7_tu(vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei64_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei64_v_i32mf2x7_tu(vint32mf2x7_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i32mf2x7_tu(vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei64_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei64_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i32m1x7_tu(vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei64_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei64_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i64m1x7_tu(vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei64_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei64_v_u8mf8x7_tu(vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf8x7_tu(vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei64_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei64_v_u8mf4x7_tu(vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf4x7_tu(vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei64_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei64_v_u8mf2x7_tu(vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf2x7_tu(vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei64_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x7_t 
test_vloxseg7ei64_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8m1x7_tu(vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei64_v_u16mf4x7_tu(vuint16mf4x7_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei64_v_u16mf4x7_tu(vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16mf4x7_tu(vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei64_v_u16mf2x7_tu(vuint16mf2x7_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei64_v_u16mf2x7_tu(vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16mf2x7_tu(vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei64_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei64_v_u16m1x7_tu(vuint16m1x7_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16m1x7_tu(vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei64_v_u32mf2x7_tu(vuint32mf2x7_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei64_v_u32mf2x7_tu(vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u32mf2x7_tu(vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei64_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei64_v_u32m1x7_tu(vuint32m1x7_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u32m1x7_tu(vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei64_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei64_v_u64m1x7_tu(vuint64m1x7_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u64m1x7_tu(vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vloxseg7ei64_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei64_v_f16mf4x7_tum(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei64_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei64_v_f16mf2x7_tum(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei64_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei64_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei64_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei64_v_f32mf2x7_tum(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei64_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei64_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + 
vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f32m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei64_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei64_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f64m1x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei64_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei64_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei64_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei64_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei64_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei64_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei64_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei64_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i8m1x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei64_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei64_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei64_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei64_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei64_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei64_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i16m1x7_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei64_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei64_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei64_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei64_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i32m1x7_tum(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei64_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei64_v_i64m1x7_tum(vbool64_t vm, 
vint64m1x7_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i64m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei64_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei64_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei64_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei64_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei64_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei64_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei64_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei64_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_u8m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei64_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei64_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei64_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei64_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei64_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei64_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei64_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei64_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei64_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei64_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u32m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei64_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei64_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u64m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vloxseg7ei64_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 
*rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei64_v_f16mf4x7_tumu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei64_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei64_v_f16mf2x7_tumu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei64_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei64_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei64_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei64_v_f32mf2x7_tumu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei64_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei64_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei64_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei64_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei64_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei64_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei64_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei64_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei64_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei64_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei64_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei64_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei64_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei64_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i16mf4x7_tumu(vm, vd, 
rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei64_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei64_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei64_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei64_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei64_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei64_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei64_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei64_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei64_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei64_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei64_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei64_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei64_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei64_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei64_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei64_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei64_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei64_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei64_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei64_v_u16mf4x7_tumu(vbool64_t vm, + vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei64_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei64_v_u16mf2x7_tumu(vbool32_t vm, + vuint16mf2x7_t vd, + 
const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei64_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei64_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei64_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei64_v_u32mf2x7_tumu(vbool64_t vm, + vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei64_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei64_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei64_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei64_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vloxseg7ei64_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei64_v_f16mf4x7_mu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei64_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei64_v_f16mf2x7_mu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei64_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei64_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f16m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei64_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei64_v_f32mf2x7_mu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei64_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei64_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_f32m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei64_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei64_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_f64m1x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei64_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, const 
int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei64_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei64_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei64_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei64_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei64_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei64_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei64_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i8m1x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei64_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei64_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei64_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei64_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei64_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei64_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i16m1x7_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei64_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei64_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_i32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei64_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei64_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i32m1x7_mu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei64_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei64_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_i64m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei64_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei64_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei64_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, const 
uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei64_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei64_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei64_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei64_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei64_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg7ei64_v_u8m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei64_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei64_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei64_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei64_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei64_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei64_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u16m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei64_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei64_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei64_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei64_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u32m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei64_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei64_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg7ei64_v_u64m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei8.c index 6f569d4ff..d9bc1a88e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei8.c @@ -6,418 +6,624 @@ #include <riscv_vector.h> -vfloat16mf4x7_t test_vloxseg7ei8_v_f16mf4x7_tu(vfloat16mf4x7_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei8_v_f16mf4x7_tu(vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16mf4x7_tu(vd, rs1, rs2, vl); } -vfloat16mf2x7_t
test_vloxseg7ei8_v_f16mf2x7_tu(vfloat16mf2x7_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei8_v_f16mf2x7_tu(vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16mf2x7_tu(vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei8_v_f16m1x7_tu(vfloat16m1x7_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei8_v_f16m1x7_tu(vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16m1x7_tu(vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei8_v_f32mf2x7_tu(vfloat32mf2x7_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei8_v_f32mf2x7_tu(vfloat32mf2x7_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f32mf2x7_tu(vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei8_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei8_v_f32m1x7_tu(vfloat32m1x7_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_f32m1x7_tu(vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei8_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei8_v_f64m1x7_tu(vfloat64m1x7_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_f64m1x7_tu(vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei8_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei8_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i8mf8x7_tu(vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei8_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei8_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i8mf4x7_tu(vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei8_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei8_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i8mf2x7_tu(vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei8_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei8_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i8m1x7_tu(vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei8_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei8_v_i16mf4x7_tu(vint16mf4x7_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i16mf4x7_tu(vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei8_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei8_v_i16mf2x7_tu(vint16mf2x7_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i16mf2x7_tu(vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei8_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei8_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i16m1x7_tu(vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei8_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, 
vuint8mf8_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei8_v_i32mf2x7_tu(vint32mf2x7_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i32mf2x7_tu(vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei8_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei8_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i32m1x7_tu(vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei8_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei8_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i64m1x7_tu(vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei8_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei8_v_u8mf8x7_tu(vuint8mf8x7_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_u8mf8x7_tu(vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei8_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei8_v_u8mf4x7_tu(vuint8mf4x7_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_u8mf4x7_tu(vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei8_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei8_v_u8mf2x7_tu(vuint8mf2x7_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_u8mf2x7_tu(vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei8_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei8_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u8m1x7_tu(vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei8_v_u16mf4x7_tu(vuint16mf4x7_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei8_v_u16mf4x7_tu(vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16mf4x7_tu(vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei8_v_u16mf2x7_tu(vuint16mf2x7_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei8_v_u16mf2x7_tu(vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16mf2x7_tu(vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei8_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei8_v_u16m1x7_tu(vuint16m1x7_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16m1x7_tu(vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei8_v_u32mf2x7_tu(vuint32mf2x7_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei8_v_u32mf2x7_tu(vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u32mf2x7_tu(vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei8_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei8_v_u32m1x7_tu(vuint32m1x7_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u32m1x7_tu(vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei8_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x7_t 
test_vloxseg7ei8_v_u64m1x7_tu(vuint64m1x7_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u64m1x7_tu(vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vloxseg7ei8_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei8_v_f16mf4x7_tum(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei8_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei8_v_f16mf2x7_tum(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei8_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei8_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei8_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei8_v_f32mf2x7_tum(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei8_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei8_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_f32m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei8_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei8_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f64m1x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei8_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei8_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei8_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei8_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei8_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei8_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei8_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei8_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8m1x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei8_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint8mf8_t rs2, 
size_t vl) { +vint16mf4x7_t test_vloxseg7ei8_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei8_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei8_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei8_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei8_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i16m1x7_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei8_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei8_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei8_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei8_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i32m1x7_tum(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei8_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei8_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i64m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei8_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei8_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei8_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei8_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei8_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei8_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei8_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei8_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_u8m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei8_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei8_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei8_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, const 
uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei8_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei8_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei8_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei8_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei8_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei8_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei8_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u32m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei8_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei8_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u64m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vloxseg7ei8_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei8_v_f16mf4x7_tumu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei8_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x7_t test_vloxseg7ei8_v_f16mf2x7_tumu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei8_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei8_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei8_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei8_v_f32mf2x7_tumu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei8_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei8_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei8_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei8_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return 
__riscv_vloxseg7ei8_v_f64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei8_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei8_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei8_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei8_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei8_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei8_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei8_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei8_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei8_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei8_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei8_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei8_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei8_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x7_t test_vloxseg7ei8_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei8_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei8_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei8_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei8_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei8_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei8_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei8_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei8_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + 
vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei8_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei8_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei8_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei8_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei8_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei8_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_u8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei8_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei8_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei8_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei8_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei8_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei8_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei8_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x7_t test_vloxseg7ei8_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vloxseg7ei8_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x7_t test_vloxseg7ei8_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vloxseg7ei8_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x7_t test_vloxseg7ei8_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vloxseg7ei8_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x7_t test_vloxseg7ei8_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vloxseg7ei8_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { 
+vfloat16mf2x7_t test_vloxseg7ei8_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vloxseg7ei8_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x7_t test_vloxseg7ei8_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f16m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vloxseg7ei8_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x7_t test_vloxseg7ei8_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_f32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vloxseg7ei8_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x7_t test_vloxseg7ei8_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_f32m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vloxseg7ei8_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x7_t test_vloxseg7ei8_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_f64m1x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vloxseg7ei8_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x7_t test_vloxseg7ei8_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vloxseg7ei8_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x7_t test_vloxseg7ei8_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vloxseg7ei8_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x7_t test_vloxseg7ei8_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vloxseg7ei8_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x7_t test_vloxseg7ei8_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i8m1x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vloxseg7ei8_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x7_t test_vloxseg7ei8_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vloxseg7ei8_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x7_t test_vloxseg7ei8_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vloxseg7ei8_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { 
+vint16m1x7_t test_vloxseg7ei8_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i16m1x7_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vloxseg7ei8_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x7_t test_vloxseg7ei8_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_i32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vloxseg7ei8_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x7_t test_vloxseg7ei8_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i32m1x7_mu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vloxseg7ei8_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x7_t test_vloxseg7ei8_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_i64m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vloxseg7ei8_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x7_t test_vloxseg7ei8_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_u8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vloxseg7ei8_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x7_t test_vloxseg7ei8_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_u8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vloxseg7ei8_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x7_t test_vloxseg7ei8_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_u8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vloxseg7ei8_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x7_t test_vloxseg7ei8_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg7ei8_v_u8m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vloxseg7ei8_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x7_t test_vloxseg7ei8_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vloxseg7ei8_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x7_t test_vloxseg7ei8_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vloxseg7ei8_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x7_t test_vloxseg7ei8_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei8_v_u16m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vloxseg7ei8_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t 
vl) {
+vuint32mf2x7_t test_vloxseg7ei8_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd,
+                                              const uint32_t *rs1,
+                                              vuint8mf8_t rs2, size_t vl) {
   return __riscv_vloxseg7ei8_v_u32mf2x7_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m1x7_t test_vloxseg7ei8_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x7_t test_vloxseg7ei8_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd,
+                                            const uint32_t *rs1,
+                                            vuint8mf4_t rs2, size_t vl) {
   return __riscv_vloxseg7ei8_v_u32m1x7_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x7_t test_vloxseg7ei8_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x7_t test_vloxseg7ei8_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf8_t rs2, size_t vl) {
   return __riscv_vloxseg7ei8_v_u64m1x7_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei16.c
index dcd7631eb..c766ace0a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei16.c
@@ -6,418 +6,630 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4x8_t test_vloxseg8ei16_v_f16mf4x8_tu(vfloat16mf4x8_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x8_t test_vloxseg8ei16_v_f16mf4x8_tu(vfloat16mf4x8_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_f16mf4x8_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x8_t test_vloxseg8ei16_v_f16mf2x8_tu(vfloat16mf2x8_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x8_t test_vloxseg8ei16_v_f16mf2x8_tu(vfloat16mf2x8_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_f16mf2x8_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1x8_t test_vloxseg8ei16_v_f16m1x8_tu(vfloat16m1x8_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x8_t test_vloxseg8ei16_v_f16m1x8_tu(vfloat16m1x8_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_f16m1x8_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x8_t test_vloxseg8ei16_v_f32mf2x8_tu(vfloat32mf2x8_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x8_t test_vloxseg8ei16_v_f32mf2x8_tu(vfloat32mf2x8_t vd,
+                                                const float *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_f32mf2x8_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m1x8_t test_vloxseg8ei16_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x8_t test_vloxseg8ei16_v_f32m1x8_tu(vfloat32m1x8_t vd,
+                                              const float *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_f32m1x8_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m1x8_t test_vloxseg8ei16_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x8_t test_vloxseg8ei16_v_f64m1x8_tu(vfloat64m1x8_t vd,
+                                              const double *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_f64m1x8_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf8x8_t test_vloxseg8ei16_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x8_t test_vloxseg8ei16_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1,
+                                            vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_i8mf8x8_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf4x8_t test_vloxseg8ei16_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x8_t test_vloxseg8ei16_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1,
+                                            vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_i8mf4x8_tu(vd,
rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei16_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei16_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i8mf2x8_tu(vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei16_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei16_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i8m1x8_tu(vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei16_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei16_v_i16mf4x8_tu(vint16mf4x8_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i16mf4x8_tu(vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei16_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei16_v_i16mf2x8_tu(vint16mf2x8_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i16mf2x8_tu(vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei16_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei16_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i16m1x8_tu(vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei16_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei16_v_i32mf2x8_tu(vint32mf2x8_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i32mf2x8_tu(vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei16_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei16_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i32m1x8_tu(vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei16_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei16_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i64m1x8_tu(vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei16_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei16_v_u8mf8x8_tu(vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8mf8x8_tu(vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei16_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei16_v_u8mf4x8_tu(vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8mf4x8_tu(vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei16_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei16_v_u8mf2x8_tu(vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8mf2x8_tu(vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei16_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei16_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8m1x8_tu(vd, rs1, rs2, vl); } -vuint16mf4x8_t 
test_vloxseg8ei16_v_u16mf4x8_tu(vuint16mf4x8_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei16_v_u16mf4x8_tu(vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u16mf4x8_tu(vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei16_v_u16mf2x8_tu(vuint16mf2x8_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei16_v_u16mf2x8_tu(vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u16mf2x8_tu(vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei16_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei16_v_u16m1x8_tu(vuint16m1x8_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u16m1x8_tu(vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei16_v_u32mf2x8_tu(vuint32mf2x8_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei16_v_u32mf2x8_tu(vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u32mf2x8_tu(vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei16_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei16_v_u32m1x8_tu(vuint32m1x8_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u32m1x8_tu(vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei16_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei16_v_u64m1x8_tu(vuint64m1x8_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u64m1x8_tu(vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei16_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei16_v_f16mf4x8_tum(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei16_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei16_v_f16mf2x8_tum(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei16_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei16_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f16m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei16_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei16_v_f32mf2x8_tum(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei16_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei16_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f32m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei16_v_f64m1x8_tum(vbool64_t vm, 
vfloat64m1x8_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei16_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f64m1x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei16_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei16_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei16_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei16_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei16_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei16_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_i8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei16_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei16_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_i8m1x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei16_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei16_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei16_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei16_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei16_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei16_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i16m1x8_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei16_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei16_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei16_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei16_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i32m1x8_tum(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei16_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei16_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i64m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t 
test_vloxseg8ei16_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei16_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei16_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei16_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei16_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei16_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei16_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei16_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_u8m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei16_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei16_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei16_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei16_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei16_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei16_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u16m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei16_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei16_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei16_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei16_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u32m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei16_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei16_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u64m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei16_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei16_v_f16mf4x8_tumu(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 
*rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei16_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei16_v_f16mf2x8_tumu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei16_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei16_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei16_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei16_v_f32mf2x8_tumu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei16_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei16_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei16_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei16_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei16_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei16_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei16_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei16_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei16_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei16_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei16_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei16_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_i8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei16_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei16_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei16_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, 
vuint16mf2_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei16_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei16_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei16_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei16_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei16_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei16_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei16_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei16_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei16_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei16_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei16_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei16_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei16_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei16_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei16_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei16_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei16_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei16_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei16_v_u16mf4x8_tumu(vbool64_t vm, + vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei16_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei16_v_u16mf2x8_tumu(vbool32_t vm, + vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u16mf2x8_tumu(vm, 
vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei16_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei16_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei16_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei16_v_u32mf2x8_tumu(vbool64_t vm, + vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei16_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei16_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei16_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei16_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei16_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei16_v_f16mf4x8_mu(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei16_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei16_v_f16mf2x8_mu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei16_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei16_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f16m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei16_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei16_v_f32mf2x8_mu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei16_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei16_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f32m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei16_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei16_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_f64m1x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei16_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x8_t 
test_vloxseg8ei16_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_i8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei16_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei16_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_i8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei16_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei16_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_i8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei16_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei16_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_i8m1x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei16_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei16_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei16_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei16_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei16_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei16_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_i16m1x8_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei16_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei16_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei16_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei16_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i32m1x8_mu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei16_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei16_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_i64m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei16_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei16_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_u8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei16_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { 
+vuint8mf4x8_t test_vloxseg8ei16_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd,
+                                             const uint8_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_u8mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf2x8_t test_vloxseg8ei16_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2x8_t test_vloxseg8ei16_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd,
+                                             const uint8_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_u8mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8m1x8_t test_vloxseg8ei16_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1x8_t test_vloxseg8ei16_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd,
+                                           const uint8_t *rs1, vuint16m2_t rs2,
+                                           size_t vl) {
   return __riscv_vloxseg8ei16_v_u8m1x8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf4x8_t test_vloxseg8ei16_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4x8_t test_vloxseg8ei16_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd,
+                                               const uint16_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_u16mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf2x8_t test_vloxseg8ei16_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2x8_t test_vloxseg8ei16_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd,
+                                               const uint16_t *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_u16mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m1x8_t test_vloxseg8ei16_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1x8_t test_vloxseg8ei16_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd,
+                                             const uint16_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_u16m1x8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32mf2x8_t test_vloxseg8ei16_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2x8_t test_vloxseg8ei16_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd,
+                                               const uint32_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_u32mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m1x8_t test_vloxseg8ei16_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1x8_t test_vloxseg8ei16_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd,
+                                             const uint32_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_u32m1x8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x8_t test_vloxseg8ei16_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1x8_t test_vloxseg8ei16_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd,
+                                             const uint64_t *rs1,
+                                             vuint16mf4_t rs2, size_t vl) {
   return __riscv_vloxseg8ei16_v_u64m1x8_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei32.c
index 39af07120..cedbfadc3 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei32.c
@@ -6,418 +6,630 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4x8_t test_vloxseg8ei32_v_f16mf4x8_tu(vfloat16mf4x8_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4x8_t test_vloxseg8ei32_v_f16mf4x8_tu(vfloat16mf4x8_t vd,
+                                                const _Float16 *rs1,
+                                                vuint32mf2_t rs2, size_t vl) {
   return __riscv_vloxseg8ei32_v_f16mf4x8_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x8_t test_vloxseg8ei32_v_f16mf2x8_tu(vfloat16mf2x8_t vd, const
_Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei32_v_f16mf2x8_tu(vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f16mf2x8_tu(vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei32_v_f16m1x8_tu(vfloat16m1x8_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei32_v_f16m1x8_tu(vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f16m1x8_tu(vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei32_v_f32mf2x8_tu(vfloat32mf2x8_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei32_v_f32mf2x8_tu(vfloat32mf2x8_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f32mf2x8_tu(vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei32_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei32_v_f32m1x8_tu(vfloat32m1x8_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_f32m1x8_tu(vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei32_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei32_v_f64m1x8_tu(vfloat64m1x8_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f64m1x8_tu(vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei32_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei32_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i8mf8x8_tu(vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei32_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei32_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i8mf4x8_tu(vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei32_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei32_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i8mf2x8_tu(vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei32_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei32_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i8m1x8_tu(vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei32_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei32_v_i16mf4x8_tu(vint16mf4x8_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i16mf4x8_tu(vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei32_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei32_v_i16mf2x8_tu(vint16mf2x8_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i16mf2x8_tu(vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei32_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei32_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i16m1x8_tu(vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei32_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, vuint32mf2_t rs2, 
size_t vl) { +vint32mf2x8_t test_vloxseg8ei32_v_i32mf2x8_tu(vint32mf2x8_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i32mf2x8_tu(vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei32_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei32_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i32m1x8_tu(vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei32_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei32_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i64m1x8_tu(vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei32_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei32_v_u8mf8x8_tu(vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf8x8_tu(vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei32_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei32_v_u8mf4x8_tu(vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf4x8_tu(vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei32_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei32_v_u8mf2x8_tu(vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf2x8_tu(vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei32_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei32_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8m1x8_tu(vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei32_v_u16mf4x8_tu(vuint16mf4x8_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei32_v_u16mf4x8_tu(vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u16mf4x8_tu(vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei32_v_u16mf2x8_tu(vuint16mf2x8_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei32_v_u16mf2x8_tu(vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u16mf2x8_tu(vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei32_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei32_v_u16m1x8_tu(vuint16m1x8_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u16m1x8_tu(vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei32_v_u32mf2x8_tu(vuint32mf2x8_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei32_v_u32mf2x8_tu(vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u32mf2x8_tu(vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei32_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei32_v_u32m1x8_tu(vuint32m1x8_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u32m1x8_tu(vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei32_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, vuint32mf2_t rs2, 
size_t vl) { +vuint64m1x8_t test_vloxseg8ei32_v_u64m1x8_tu(vuint64m1x8_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u64m1x8_tu(vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei32_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei32_v_f16mf4x8_tum(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei32_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei32_v_f16mf2x8_tum(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei32_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei32_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f16m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei32_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei32_v_f32mf2x8_tum(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei32_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei32_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f32m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei32_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei32_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f64m1x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei32_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei32_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei32_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei32_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_i8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei32_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei32_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_i8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei32_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei32_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_i8m1x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t 
test_vloxseg8ei32_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei32_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei32_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei32_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei32_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei32_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i16m1x8_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei32_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei32_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei32_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei32_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i32m1x8_tum(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei32_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei32_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i64m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei32_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei32_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei32_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei32_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei32_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei32_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei32_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei32_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_u8m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei32_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei32_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return 
__riscv_vloxseg8ei32_v_u16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei32_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei32_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei32_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei32_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u16m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei32_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei32_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei32_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei32_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u32m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei32_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei32_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u64m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei32_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei32_v_f16mf4x8_tumu(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei32_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei32_v_f16mf2x8_tumu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei32_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei32_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei32_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei32_v_f32mf2x8_tumu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei32_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei32_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei32_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, 
vuint32mf2_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei32_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei32_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei32_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei32_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei32_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei32_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei32_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei32_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei32_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_i8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei32_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei32_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei32_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei32_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei32_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei32_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei32_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei32_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei32_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei32_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei32_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei32_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t 
test_vloxseg8ei32_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei32_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei32_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei32_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei32_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei32_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei32_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei32_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei32_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei32_v_u16mf4x8_tumu(vbool64_t vm, + vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei32_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei32_v_u16mf2x8_tumu(vbool32_t vm, + vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei32_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei32_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei32_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei32_v_u32mf2x8_tumu(vbool64_t vm, + vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei32_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei32_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei32_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei32_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei32_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei32_v_f16mf4x8_mu(vbool64_t vm, + 
vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei32_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei32_v_f16mf2x8_mu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei32_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei32_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f16m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei32_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei32_v_f32mf2x8_mu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei32_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei32_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_f32m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei32_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei32_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_f64m1x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei32_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei32_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_i8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei32_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei32_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_i8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei32_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei32_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_i8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei32_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei32_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_i8m1x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei32_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei32_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei32_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { 
+vint16mf2x8_t test_vloxseg8ei32_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei32_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei32_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_i16m1x8_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei32_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei32_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei32_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei32_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_i32m1x8_mu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei32_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei32_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_i64m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei32_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei32_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei32_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei32_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei32_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei32_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei32_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei32_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vloxseg8ei32_v_u8m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei32_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei32_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei32_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei32_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei32_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, const 
uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei32_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u16m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei32_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei32_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei32_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei32_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u32m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei32_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei32_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei32_v_u64m1x8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei64.c index e7f6030e1..2081c7820 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei64.c @@ -6,418 +6,630 @@ #include -vfloat16mf4x8_t test_vloxseg8ei64_v_f16mf4x8_tu(vfloat16mf4x8_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei64_v_f16mf4x8_tu(vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16mf4x8_tu(vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei64_v_f16mf2x8_tu(vfloat16mf2x8_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei64_v_f16mf2x8_tu(vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16mf2x8_tu(vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei64_v_f16m1x8_tu(vfloat16m1x8_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei64_v_f16m1x8_tu(vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16m1x8_tu(vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei64_v_f32mf2x8_tu(vfloat32mf2x8_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei64_v_f32mf2x8_tu(vfloat32mf2x8_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f32mf2x8_tu(vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei64_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei64_v_f32m1x8_tu(vfloat32m1x8_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_f32m1x8_tu(vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei64_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei64_v_f64m1x8_tu(vfloat64m1x8_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f64m1x8_tu(vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei64_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei64_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t 
*rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i8mf8x8_tu(vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei64_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei64_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i8mf4x8_tu(vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei64_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei64_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i8mf2x8_tu(vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei64_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei64_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i8m1x8_tu(vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei64_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei64_v_i16mf4x8_tu(vint16mf4x8_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i16mf4x8_tu(vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei64_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei64_v_i16mf2x8_tu(vint16mf2x8_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i16mf2x8_tu(vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei64_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei64_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i16m1x8_tu(vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei64_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei64_v_i32mf2x8_tu(vint32mf2x8_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i32mf2x8_tu(vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei64_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei64_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i32m1x8_tu(vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei64_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei64_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i64m1x8_tu(vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei64_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei64_v_u8mf8x8_tu(vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8mf8x8_tu(vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei64_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei64_v_u8mf4x8_tu(vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8mf4x8_tu(vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei64_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei64_v_u8mf2x8_tu(vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return 
__riscv_vloxseg8ei64_v_u8mf2x8_tu(vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei64_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei64_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8m1x8_tu(vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei64_v_u16mf4x8_tu(vuint16mf4x8_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei64_v_u16mf4x8_tu(vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16mf4x8_tu(vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei64_v_u16mf2x8_tu(vuint16mf2x8_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei64_v_u16mf2x8_tu(vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16mf2x8_tu(vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei64_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei64_v_u16m1x8_tu(vuint16m1x8_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16m1x8_tu(vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei64_v_u32mf2x8_tu(vuint32mf2x8_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei64_v_u32mf2x8_tu(vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u32mf2x8_tu(vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei64_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei64_v_u32m1x8_tu(vuint32m1x8_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u32m1x8_tu(vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei64_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei64_v_u64m1x8_tu(vuint64m1x8_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u64m1x8_tu(vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei64_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei64_v_f16mf4x8_tum(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei64_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei64_v_f16mf2x8_tum(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei64_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei64_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei64_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei64_v_f32mf2x8_tum(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t 
test_vloxseg8ei64_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei64_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f32m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei64_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei64_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f64m1x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei64_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei64_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei64_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei64_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei64_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei64_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei64_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei64_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i8m1x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei64_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei64_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei64_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei64_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei64_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei64_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i16m1x8_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei64_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei64_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei64_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei64_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i32m1x8_tum(vm, 
vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei64_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei64_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i64m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei64_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei64_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei64_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei64_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei64_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei64_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei64_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei64_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_u8m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei64_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei64_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei64_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei64_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei64_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei64_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei64_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei64_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei64_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei64_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u32m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei64_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei64_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t 
*rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u64m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei64_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei64_v_f16mf4x8_tumu(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei64_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei64_v_f16mf2x8_tumu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei64_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei64_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei64_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei64_v_f32mf2x8_tumu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei64_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei64_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei64_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei64_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei64_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei64_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei64_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei64_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei64_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei64_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei64_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei64_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei64_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint64m1_t 
rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei64_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei64_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei64_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei64_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei64_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei64_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei64_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei64_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei64_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei64_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei64_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei64_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei64_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei64_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei64_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei64_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei64_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei64_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei64_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei64_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei64_v_u16mf4x8_tumu(vbool64_t vm, + vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t 
test_vloxseg8ei64_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei64_v_u16mf2x8_tumu(vbool32_t vm, + vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei64_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei64_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei64_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei64_v_u32mf2x8_tumu(vbool64_t vm, + vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei64_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei64_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei64_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei64_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei64_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei64_v_f16mf4x8_mu(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei64_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei64_v_f16mf2x8_mu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei64_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei64_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f16m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei64_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei64_v_f32mf2x8_mu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei64_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei64_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_f32m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei64_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei64_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t 
vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_f64m1x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei64_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei64_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei64_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei64_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei64_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei64_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei64_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei64_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i8m1x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei64_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei64_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei64_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei64_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei64_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei64_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i16m1x8_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei64_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei64_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_i32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei64_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei64_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i32m1x8_mu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei64_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei64_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_i64m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei64_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei64_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, + 
const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei64_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei64_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei64_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei64_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei64_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei64_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vloxseg8ei64_v_u8m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei64_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei64_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei64_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei64_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei64_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei64_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u16m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei64_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei64_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei64_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei64_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u32m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei64_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei64_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vloxseg8ei64_v_u64m1x8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei8.c index ec6413c3c..a7851c56e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei8.c @@ -6,418 +6,624 @@ #include <riscv_vector.h> -vfloat16mf4x8_t test_vloxseg8ei8_v_f16mf4x8_tu(vfloat16mf4x8_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x8_t
test_vloxseg8ei8_v_f16mf4x8_tu(vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16mf4x8_tu(vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei8_v_f16mf2x8_tu(vfloat16mf2x8_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei8_v_f16mf2x8_tu(vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16mf2x8_tu(vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei8_v_f16m1x8_tu(vfloat16m1x8_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei8_v_f16m1x8_tu(vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16m1x8_tu(vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei8_v_f32mf2x8_tu(vfloat32mf2x8_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei8_v_f32mf2x8_tu(vfloat32mf2x8_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f32mf2x8_tu(vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei8_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei8_v_f32m1x8_tu(vfloat32m1x8_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_f32m1x8_tu(vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei8_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei8_v_f64m1x8_tu(vfloat64m1x8_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_f64m1x8_tu(vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei8_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei8_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i8mf8x8_tu(vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei8_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei8_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i8mf4x8_tu(vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei8_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei8_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i8mf2x8_tu(vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei8_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei8_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i8m1x8_tu(vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei8_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei8_v_i16mf4x8_tu(vint16mf4x8_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i16mf4x8_tu(vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei8_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei8_v_i16mf2x8_tu(vint16mf2x8_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i16mf2x8_tu(vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei8_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei8_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, 
+ vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i16m1x8_tu(vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei8_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei8_v_i32mf2x8_tu(vint32mf2x8_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i32mf2x8_tu(vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei8_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei8_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i32m1x8_tu(vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei8_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei8_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i64m1x8_tu(vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei8_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei8_v_u8mf8x8_tu(vuint8mf8x8_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_u8mf8x8_tu(vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei8_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei8_v_u8mf4x8_tu(vuint8mf4x8_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_u8mf4x8_tu(vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei8_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei8_v_u8mf2x8_tu(vuint8mf2x8_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_u8mf2x8_tu(vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei8_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei8_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u8m1x8_tu(vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei8_v_u16mf4x8_tu(vuint16mf4x8_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei8_v_u16mf4x8_tu(vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16mf4x8_tu(vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei8_v_u16mf2x8_tu(vuint16mf2x8_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei8_v_u16mf2x8_tu(vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16mf2x8_tu(vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei8_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei8_v_u16m1x8_tu(vuint16m1x8_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16m1x8_tu(vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei8_v_u32mf2x8_tu(vuint32mf2x8_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei8_v_u32mf2x8_tu(vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u32mf2x8_tu(vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei8_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei8_v_u32m1x8_tu(vuint32m1x8_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return 
__riscv_vloxseg8ei8_v_u32m1x8_tu(vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei8_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei8_v_u64m1x8_tu(vuint64m1x8_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u64m1x8_tu(vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei8_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei8_v_f16mf4x8_tum(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei8_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei8_v_f16mf2x8_tum(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei8_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei8_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei8_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei8_v_f32mf2x8_tum(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei8_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei8_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_f32m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei8_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei8_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f64m1x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei8_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei8_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei8_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei8_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei8_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei8_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei8_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei8_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { 
return __riscv_vloxseg8ei8_v_i8m1x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei8_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei8_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei8_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei8_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei8_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei8_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i16m1x8_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei8_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei8_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei8_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei8_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i32m1x8_tum(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei8_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei8_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i64m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei8_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei8_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei8_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei8_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei8_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei8_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei8_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei8_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_u8m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei8_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei8_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + 
vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei8_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei8_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei8_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei8_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei8_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei8_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei8_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei8_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u32m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei8_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei8_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u64m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei8_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei8_v_f16mf4x8_tumu(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei8_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei8_v_f16mf2x8_tumu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei8_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei8_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei8_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei8_v_f32mf2x8_tumu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei8_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei8_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei8_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint8mf8_t 
rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei8_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei8_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei8_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei8_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei8_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei8_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei8_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei8_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei8_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei8_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei8_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei8_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei8_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei8_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei8_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei8_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei8_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei8_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei8_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei8_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei8_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei8_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, 
const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei8_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei8_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei8_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei8_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei8_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei8_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei8_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_u8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei8_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei8_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei8_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei8_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei8_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei8_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei8_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei8_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei8_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei8_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei8_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei8_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vloxseg8ei8_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x8_t test_vloxseg8ei8_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16mf4x8_mu(vm, vd, 
rs1, rs2, vl); } -vfloat16mf2x8_t test_vloxseg8ei8_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x8_t test_vloxseg8ei8_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vloxseg8ei8_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x8_t test_vloxseg8ei8_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f16m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vloxseg8ei8_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x8_t test_vloxseg8ei8_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_f32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vloxseg8ei8_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x8_t test_vloxseg8ei8_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_f32m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vloxseg8ei8_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x8_t test_vloxseg8ei8_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_f64m1x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vloxseg8ei8_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x8_t test_vloxseg8ei8_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vloxseg8ei8_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x8_t test_vloxseg8ei8_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vloxseg8ei8_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x8_t test_vloxseg8ei8_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vloxseg8ei8_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x8_t test_vloxseg8ei8_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i8m1x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vloxseg8ei8_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x8_t test_vloxseg8ei8_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vloxseg8ei8_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x8_t test_vloxseg8ei8_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i16mf2x8_mu(vm, vd, 
rs1, rs2, vl); } -vint16m1x8_t test_vloxseg8ei8_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x8_t test_vloxseg8ei8_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i16m1x8_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vloxseg8ei8_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x8_t test_vloxseg8ei8_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_i32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vloxseg8ei8_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x8_t test_vloxseg8ei8_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i32m1x8_mu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vloxseg8ei8_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x8_t test_vloxseg8ei8_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_i64m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vloxseg8ei8_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x8_t test_vloxseg8ei8_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_u8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vloxseg8ei8_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x8_t test_vloxseg8ei8_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_u8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vloxseg8ei8_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x8_t test_vloxseg8ei8_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_u8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vloxseg8ei8_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x8_t test_vloxseg8ei8_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vloxseg8ei8_v_u8m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vloxseg8ei8_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x8_t test_vloxseg8ei8_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vloxseg8ei8_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x8_t test_vloxseg8ei8_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vloxseg8ei8_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x8_t test_vloxseg8ei8_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u16m1x8_mu(vm, vd, 
rs1, rs2, vl); } -vuint32mf2x8_t test_vloxseg8ei8_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x8_t test_vloxseg8ei8_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vloxseg8ei8_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x8_t test_vloxseg8ei8_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u32m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vloxseg8ei8_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x8_t test_vloxseg8ei8_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vloxseg8ei8_v_u64m1x8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlse16.c b/auto-generated/policy_funcs/llvm-api-tests/vlse16.c index fe849b9d8..0ed276748 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlse16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlse16.c @@ -6,290 +6,416 @@ #include <riscv_vector.h> -vfloat16mf4_t test_vlse16_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4_t test_vlse16_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_f16mf4_tu(vd, rs1, rs2, vl); } -vfloat16mf2_t test_vlse16_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2_t test_vlse16_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_f16mf2_tu(vd, rs1, rs2, vl); } -vfloat16m1_t test_vlse16_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1_t test_vlse16_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_f16m1_tu(vd, rs1, rs2, vl); } -vfloat16m2_t test_vlse16_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2_t test_vlse16_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_f16m2_tu(vd, rs1, rs2, vl); } -vfloat16m4_t test_vlse16_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m4_t test_vlse16_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_f16m4_tu(vd, rs1, rs2, vl); } -vfloat16m8_t test_vlse16_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m8_t test_vlse16_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_f16m8_tu(vd, rs1, rs2, vl); } -vint16mf4_t test_vlse16_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4_t test_vlse16_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_i16mf4_tu(vd, rs1, rs2, vl); } -vint16mf2_t test_vlse16_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2_t test_vlse16_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_i16mf2_tu(vd, rs1, rs2, vl); } -vint16m1_t test_vlse16_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1_t test_vlse16_v_i16m1_tu(vint16m1_t vd, const
int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_i16m1_tu(vd, rs1, rs2, vl); } -vint16m2_t test_vlse16_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2_t test_vlse16_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_i16m2_tu(vd, rs1, rs2, vl); } -vint16m4_t test_vlse16_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m4_t test_vlse16_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_i16m4_tu(vd, rs1, rs2, vl); } -vint16m8_t test_vlse16_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m8_t test_vlse16_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_i16m8_tu(vd, rs1, rs2, vl); } -vuint16mf4_t test_vlse16_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4_t test_vlse16_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_u16mf4_tu(vd, rs1, rs2, vl); } -vuint16mf2_t test_vlse16_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2_t test_vlse16_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_u16mf2_tu(vd, rs1, rs2, vl); } -vuint16m1_t test_vlse16_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1_t test_vlse16_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_u16m1_tu(vd, rs1, rs2, vl); } -vuint16m2_t test_vlse16_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2_t test_vlse16_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_u16m2_tu(vd, rs1, rs2, vl); } -vuint16m4_t test_vlse16_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m4_t test_vlse16_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_u16m4_tu(vd, rs1, rs2, vl); } -vuint16m8_t test_vlse16_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m8_t test_vlse16_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_u16m8_tu(vd, rs1, rs2, vl); } -vfloat16mf4_t test_vlse16_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4_t test_vlse16_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16mf4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vlse16_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2_t test_vlse16_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vlse16_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1_t test_vlse16_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16m1_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vlse16_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2_t test_vlse16_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + 
size_t vl) { return __riscv_vlse16_v_f16m2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vlse16_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m4_t test_vlse16_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16m4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vlse16_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m8_t test_vlse16_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16m8_tum(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vlse16_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4_t test_vlse16_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16mf4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vlse16_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2_t test_vlse16_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16mf2_tum(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vlse16_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1_t test_vlse16_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m1_tum(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vlse16_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2_t test_vlse16_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m2_tum(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vlse16_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m4_t test_vlse16_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m4_tum(vm, vd, rs1, rs2, vl); } -vint16m8_t test_vlse16_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m8_t test_vlse16_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vlse16_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4_t test_vlse16_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16mf4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vlse16_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2_t test_vlse16_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16mf2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vlse16_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1_t test_vlse16_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16m1_tum(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vlse16_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2_t test_vlse16_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlse16_v_u16m2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vlse16_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m4_t test_vlse16_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16m4_tum(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vlse16_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m8_t test_vlse16_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16m8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vlse16_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4_t test_vlse16_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16mf4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vlse16_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2_t test_vlse16_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vlse16_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1_t test_vlse16_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vlse16_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2_t test_vlse16_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vlse16_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m4_t test_vlse16_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vlse16_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m8_t test_vlse16_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16m8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vlse16_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4_t test_vlse16_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16mf4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vlse16_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2_t test_vlse16_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16mf2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vlse16_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1_t test_vlse16_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m1_tumu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vlse16_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2_t test_vlse16_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, 
ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m2_tumu(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vlse16_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m4_t test_vlse16_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m4_tumu(vm, vd, rs1, rs2, vl); } -vint16m8_t test_vlse16_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m8_t test_vlse16_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vlse16_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4_t test_vlse16_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vlse16_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2_t test_vlse16_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vlse16_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1_t test_vlse16_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16m1_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vlse16_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2_t test_vlse16_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16m2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vlse16_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m4_t test_vlse16_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16m4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vlse16_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m8_t test_vlse16_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vlse16_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4_t test_vlse16_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16mf4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vlse16_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2_t test_vlse16_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vlse16_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1_t test_vlse16_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16m1_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vlse16_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2_t test_vlse16_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + 
const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16m2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vlse16_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m4_t test_vlse16_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16m4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vlse16_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m8_t test_vlse16_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_f16m8_mu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vlse16_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4_t test_vlse16_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16mf4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vlse16_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2_t test_vlse16_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16mf2_mu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vlse16_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1_t test_vlse16_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m1_mu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vlse16_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2_t test_vlse16_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m2_mu(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vlse16_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m4_t test_vlse16_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m4_mu(vm, vd, rs1, rs2, vl); } -vint16m8_t test_vlse16_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m8_t test_vlse16_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_i16m8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vlse16_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4_t test_vlse16_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16mf4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vlse16_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2_t test_vlse16_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16mf2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vlse16_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1_t test_vlse16_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16m1_mu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vlse16_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2_t test_vlse16_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlse16_v_u16m2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vlse16_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m4_t test_vlse16_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16m4_mu(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vlse16_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m8_t test_vlse16_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_u16m8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlse32.c b/auto-generated/policy_funcs/llvm-api-tests/vlse32.c index 89a0360f8..92f436fec 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlse32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlse32.c @@ -6,242 +6,347 @@ #include <riscv_vector.h> -vfloat32mf2_t test_vlse32_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2_t test_vlse32_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_f32mf2_tu(vd, rs1, rs2, vl); } -vfloat32m1_t test_vlse32_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1_t test_vlse32_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_f32m1_tu(vd, rs1, rs2, vl); } -vfloat32m2_t test_vlse32_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2_t test_vlse32_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_f32m2_tu(vd, rs1, rs2, vl); } -vfloat32m4_t test_vlse32_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m4_t test_vlse32_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_f32m4_tu(vd, rs1, rs2, vl); } -vfloat32m8_t test_vlse32_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m8_t test_vlse32_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_f32m8_tu(vd, rs1, rs2, vl); } -vint32mf2_t test_vlse32_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2_t test_vlse32_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_i32mf2_tu(vd, rs1, rs2, vl); } -vint32m1_t test_vlse32_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1_t test_vlse32_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_i32m1_tu(vd, rs1, rs2, vl); } -vint32m2_t test_vlse32_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2_t test_vlse32_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_i32m2_tu(vd, rs1, rs2, vl); } -vint32m4_t test_vlse32_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m4_t test_vlse32_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_i32m4_tu(vd, rs1, rs2, vl); } -vint32m8_t test_vlse32_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m8_t test_vlse32_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_i32m8_tu(vd, rs1, rs2, vl); } -vuint32mf2_t test_vlse32_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t
*rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2_t test_vlse32_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_u32mf2_tu(vd, rs1, rs2, vl); } -vuint32m1_t test_vlse32_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1_t test_vlse32_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_u32m1_tu(vd, rs1, rs2, vl); } -vuint32m2_t test_vlse32_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2_t test_vlse32_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_u32m2_tu(vd, rs1, rs2, vl); } -vuint32m4_t test_vlse32_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m4_t test_vlse32_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_u32m4_tu(vd, rs1, rs2, vl); } -vuint32m8_t test_vlse32_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m8_t test_vlse32_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse32_v_u32m8_tu(vd, rs1, rs2, vl); } -vfloat32mf2_t test_vlse32_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2_t test_vlse32_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vlse32_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1_t test_vlse32_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m1_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vlse32_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2_t test_vlse32_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vlse32_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m4_t test_vlse32_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vlse32_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m8_t test_vlse32_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m8_tum(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vlse32_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2_t test_vlse32_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32mf2_tum(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vlse32_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1_t test_vlse32_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m1_tum(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vlse32_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2_t test_vlse32_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m2_tum(vm, vd, 
rs1, rs2, vl); } -vint32m4_t test_vlse32_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m4_t test_vlse32_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m4_tum(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vlse32_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m8_t test_vlse32_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m8_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vlse32_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2_t test_vlse32_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32mf2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vlse32_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1_t test_vlse32_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m1_tum(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vlse32_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2_t test_vlse32_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vlse32_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m4_t test_vlse32_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m4_tum(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vlse32_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m8_t test_vlse32_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vlse32_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2_t test_vlse32_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vlse32_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1_t test_vlse32_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vlse32_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2_t test_vlse32_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vlse32_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m4_t test_vlse32_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vlse32_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m8_t test_vlse32_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m8_tumu(vm, vd, rs1, rs2, vl); } 
-vint32mf2_t test_vlse32_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2_t test_vlse32_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32mf2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vlse32_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1_t test_vlse32_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m1_tumu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vlse32_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2_t test_vlse32_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vlse32_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m4_t test_vlse32_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m4_tumu(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vlse32_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m8_t test_vlse32_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m8_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vlse32_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2_t test_vlse32_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vlse32_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1_t test_vlse32_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m1_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vlse32_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2_t test_vlse32_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vlse32_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m4_t test_vlse32_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vlse32_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m8_t test_vlse32_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vlse32_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2_t test_vlse32_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vlse32_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1_t test_vlse32_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m1_mu(vm, vd, rs1, rs2, vl); } 
-vfloat32m2_t test_vlse32_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2_t test_vlse32_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vlse32_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m4_t test_vlse32_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vlse32_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m8_t test_vlse32_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_f32m8_mu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vlse32_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2_t test_vlse32_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32mf2_mu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vlse32_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1_t test_vlse32_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m1_mu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vlse32_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2_t test_vlse32_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m2_mu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vlse32_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m4_t test_vlse32_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m4_mu(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vlse32_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m8_t test_vlse32_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_i32m8_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vlse32_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2_t test_vlse32_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32mf2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vlse32_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1_t test_vlse32_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m1_mu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vlse32_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2_t test_vlse32_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m2_mu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vlse32_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m4_t test_vlse32_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m4_mu(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vlse32_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, const uint32_t 
*rs1, ptrdiff_t rs2, size_t vl) { +vuint32m8_t test_vlse32_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse32_v_u32m8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlse64.c b/auto-generated/policy_funcs/llvm-api-tests/vlse64.c index eb0089c38..9f84d0720 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlse64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlse64.c @@ -6,194 +6,278 @@ #include <riscv_vector.h> -vfloat64m1_t test_vlse64_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1_t test_vlse64_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse64_v_f64m1_tu(vd, rs1, rs2, vl); } -vfloat64m2_t test_vlse64_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2_t test_vlse64_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse64_v_f64m2_tu(vd, rs1, rs2, vl); } -vfloat64m4_t test_vlse64_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m4_t test_vlse64_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse64_v_f64m4_tu(vd, rs1, rs2, vl); } -vfloat64m8_t test_vlse64_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m8_t test_vlse64_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse64_v_f64m8_tu(vd, rs1, rs2, vl); } -vint64m1_t test_vlse64_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1_t test_vlse64_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse64_v_i64m1_tu(vd, rs1, rs2, vl); } -vint64m2_t test_vlse64_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2_t test_vlse64_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse64_v_i64m2_tu(vd, rs1, rs2, vl); } -vint64m4_t test_vlse64_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m4_t test_vlse64_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse64_v_i64m4_tu(vd, rs1, rs2, vl); } -vint64m8_t test_vlse64_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m8_t test_vlse64_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse64_v_i64m8_tu(vd, rs1, rs2, vl); } -vuint64m1_t test_vlse64_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1_t test_vlse64_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse64_v_u64m1_tu(vd, rs1, rs2, vl); } -vuint64m2_t test_vlse64_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2_t test_vlse64_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse64_v_u64m2_tu(vd, rs1, rs2, vl); } -vuint64m4_t test_vlse64_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m4_t test_vlse64_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse64_v_u64m4_tu(vd, rs1, rs2, vl); } -vuint64m8_t test_vlse64_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m8_t test_vlse64_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, + ptrdiff_t rs2, size_t vl) { return
__riscv_vlse64_v_u64m8_tu(vd, rs1, rs2, vl); } -vfloat64m1_t test_vlse64_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1_t test_vlse64_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m1_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vlse64_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2_t test_vlse64_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vlse64_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m4_t test_vlse64_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vlse64_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m8_t test_vlse64_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m8_tum(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vlse64_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1_t test_vlse64_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m1_tum(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vlse64_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2_t test_vlse64_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m2_tum(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vlse64_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m4_t test_vlse64_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m4_tum(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vlse64_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m8_t test_vlse64_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vlse64_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1_t test_vlse64_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m1_tum(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vlse64_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2_t test_vlse64_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vlse64_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m4_t test_vlse64_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m4_tum(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vlse64_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m8_t test_vlse64_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m8_tum(vm, vd, rs1, rs2, vl); 
} -vfloat64m1_t test_vlse64_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1_t test_vlse64_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vlse64_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2_t test_vlse64_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vlse64_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m4_t test_vlse64_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vlse64_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m8_t test_vlse64_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vlse64_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1_t test_vlse64_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m1_tumu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vlse64_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2_t test_vlse64_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vlse64_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m4_t test_vlse64_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m4_tumu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vlse64_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m8_t test_vlse64_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vlse64_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1_t test_vlse64_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m1_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vlse64_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2_t test_vlse64_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vlse64_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m4_t test_vlse64_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vlse64_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m8_t test_vlse64_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m8_tumu(vm, vd, rs1, rs2, vl); } 
-vfloat64m1_t test_vlse64_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1_t test_vlse64_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m1_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vlse64_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2_t test_vlse64_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vlse64_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m4_t test_vlse64_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vlse64_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m8_t test_vlse64_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_f64m8_mu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vlse64_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1_t test_vlse64_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m1_mu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vlse64_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2_t test_vlse64_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m2_mu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vlse64_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m4_t test_vlse64_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m4_mu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vlse64_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m8_t test_vlse64_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_i64m8_mu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vlse64_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1_t test_vlse64_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m1_mu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vlse64_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2_t test_vlse64_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m2_mu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vlse64_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m4_t test_vlse64_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m4_mu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vlse64_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m8_t test_vlse64_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse64_v_u64m8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlse8.c 
b/auto-generated/policy_funcs/llvm-api-tests/vlse8.c index 87f8dbe1e..3c5c98d3b 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlse8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlse8.c @@ -5,226 +5,298 @@ #include <riscv_vector.h> -vint8mf8_t test_vlse8_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8_t test_vlse8_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8mf8_tu(vd, rs1, rs2, vl); } -vint8mf4_t test_vlse8_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4_t test_vlse8_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8mf4_tu(vd, rs1, rs2, vl); } -vint8mf2_t test_vlse8_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2_t test_vlse8_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8mf2_tu(vd, rs1, rs2, vl); } -vint8m1_t test_vlse8_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1_t test_vlse8_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_i8m1_tu(vd, rs1, rs2, vl); } -vint8m2_t test_vlse8_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2_t test_vlse8_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_i8m2_tu(vd, rs1, rs2, vl); } -vint8m4_t test_vlse8_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m4_t test_vlse8_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_i8m4_tu(vd, rs1, rs2, vl); } -vint8m8_t test_vlse8_v_i8m8_tu(vint8m8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m8_t test_vlse8_v_i8m8_tu(vint8m8_t vd, const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_i8m8_tu(vd, rs1, rs2, vl); } -vuint8mf8_t test_vlse8_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8_t test_vlse8_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8mf8_tu(vd, rs1, rs2, vl); } -vuint8mf4_t test_vlse8_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4_t test_vlse8_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8mf4_tu(vd, rs1, rs2, vl); } -vuint8mf2_t test_vlse8_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2_t test_vlse8_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8mf2_tu(vd, rs1, rs2, vl); } -vuint8m1_t test_vlse8_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1_t test_vlse8_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m1_tu(vd, rs1, rs2, vl); } -vuint8m2_t test_vlse8_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2_t test_vlse8_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m2_tu(vd, rs1, rs2, vl); } -vuint8m4_t test_vlse8_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m4_t test_vlse8_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m4_tu(vd, rs1, rs2, vl); } -vuint8m8_t test_vlse8_v_u8m8_tu(vuint8m8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m8_t
test_vlse8_v_u8m8_tu(vuint8m8_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m8_tu(vd, rs1, rs2, vl); } -vint8mf8_t test_vlse8_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8_t test_vlse8_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8mf8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vlse8_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4_t test_vlse8_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8mf4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vlse8_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2_t test_vlse8_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8mf2_tum(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vlse8_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1_t test_vlse8_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m1_tum(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vlse8_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2_t test_vlse8_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m2_tum(vm, vd, rs1, rs2, vl); } -vint8m4_t test_vlse8_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m4_t test_vlse8_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m4_tum(vm, vd, rs1, rs2, vl); } -vint8m8_t test_vlse8_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m8_t test_vlse8_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vlse8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8_t test_vlse8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8mf8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vlse8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4_t test_vlse8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8mf4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vlse8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2_t test_vlse8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8mf2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vlse8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1_t test_vlse8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m1_tum(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vlse8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2_t test_vlse8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m2_tum(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vlse8_v_u8m4_tum(vbool2_t vm, 
vuint8m4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m4_t test_vlse8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m4_tum(vm, vd, rs1, rs2, vl); } -vuint8m8_t test_vlse8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m8_t test_vlse8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vlse8_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8_t test_vlse8_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_i8mf8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vlse8_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4_t test_vlse8_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_i8mf4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vlse8_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2_t test_vlse8_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_i8mf2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vlse8_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1_t test_vlse8_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m1_tumu(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vlse8_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2_t test_vlse8_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m2_tumu(vm, vd, rs1, rs2, vl); } -vint8m4_t test_vlse8_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m4_t test_vlse8_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m4_tumu(vm, vd, rs1, rs2, vl); } -vint8m8_t test_vlse8_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m8_t test_vlse8_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vlse8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8_t test_vlse8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8mf8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vlse8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4_t test_vlse8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vlse8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2_t test_vlse8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vlse8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1_t test_vlse8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, 
ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8m1_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vlse8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2_t test_vlse8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8m2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vlse8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m4_t test_vlse8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8m4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m8_t test_vlse8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m8_t test_vlse8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8m8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vlse8_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8_t test_vlse8_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8mf8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vlse8_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4_t test_vlse8_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8mf4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vlse8_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2_t test_vlse8_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8mf2_mu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vlse8_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1_t test_vlse8_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m1_mu(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vlse8_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2_t test_vlse8_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m2_mu(vm, vd, rs1, rs2, vl); } -vint8m4_t test_vlse8_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m4_t test_vlse8_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m4_mu(vm, vd, rs1, rs2, vl); } -vint8m8_t test_vlse8_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m8_t test_vlse8_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_i8m8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vlse8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8_t test_vlse8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8mf8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vlse8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4_t test_vlse8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8mf4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vlse8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { 
+vuint8mf2_t test_vlse8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse8_v_u8mf2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vlse8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1_t test_vlse8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m1_mu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vlse8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2_t test_vlse8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m2_mu(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vlse8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m4_t test_vlse8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m4_mu(vm, vd, rs1, rs2, vl); } -vuint8m8_t test_vlse8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m8_t test_vlse8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse8_v_u8m8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16.c index 7f800f965..ae4ef1668 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16.c @@ -6,242 +6,302 @@ #include <riscv_vector.h> -vfloat16mf4x2_t test_vlseg2e16_v_f16mf4x2_tu(vfloat16mf4x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4x2_t test_vlseg2e16_v_f16mf4x2_tu(vfloat16mf4x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16mf4x2_tu(vd, rs1, vl); } -vfloat16mf2x2_t test_vlseg2e16_v_f16mf2x2_tu(vfloat16mf2x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2x2_t test_vlseg2e16_v_f16mf2x2_tu(vfloat16mf2x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16mf2x2_tu(vd, rs1, vl); } -vfloat16m1x2_t test_vlseg2e16_v_f16m1x2_tu(vfloat16m1x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1x2_t test_vlseg2e16_v_f16m1x2_tu(vfloat16m1x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16m1x2_tu(vd, rs1, vl); } -vfloat16m2x2_t test_vlseg2e16_v_f16m2x2_tu(vfloat16m2x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m2x2_t test_vlseg2e16_v_f16m2x2_tu(vfloat16m2x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16m2x2_tu(vd, rs1, vl); } -vfloat16m4x2_t test_vlseg2e16_v_f16m4x2_tu(vfloat16m4x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m4x2_t test_vlseg2e16_v_f16m4x2_tu(vfloat16m4x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16m4x2_tu(vd, rs1, vl); } -vint16mf4x2_t test_vlseg2e16_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, size_t vl) { +vint16mf4x2_t test_vlseg2e16_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vlseg2e16_v_i16mf4x2_tu(vd, rs1, vl); } -vint16mf2x2_t test_vlseg2e16_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, size_t vl) { +vint16mf2x2_t test_vlseg2e16_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vlseg2e16_v_i16mf2x2_tu(vd, rs1, vl); } -vint16m1x2_t test_vlseg2e16_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, size_t vl) { +vint16m1x2_t test_vlseg2e16_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, + size_t vl) { return
__riscv_vlseg2e16_v_i16m1x2_tu(vd, rs1, vl); } -vint16m2x2_t test_vlseg2e16_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, size_t vl) { +vint16m2x2_t test_vlseg2e16_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vlseg2e16_v_i16m2x2_tu(vd, rs1, vl); } -vint16m4x2_t test_vlseg2e16_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, size_t vl) { +vint16m4x2_t test_vlseg2e16_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vlseg2e16_v_i16m4x2_tu(vd, rs1, vl); } -vuint16mf4x2_t test_vlseg2e16_v_u16mf4x2_tu(vuint16mf4x2_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4x2_t test_vlseg2e16_v_u16mf4x2_tu(vuint16mf4x2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_u16mf4x2_tu(vd, rs1, vl); } -vuint16mf2x2_t test_vlseg2e16_v_u16mf2x2_tu(vuint16mf2x2_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2x2_t test_vlseg2e16_v_u16mf2x2_tu(vuint16mf2x2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_u16mf2x2_tu(vd, rs1, vl); } -vuint16m1x2_t test_vlseg2e16_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1x2_t test_vlseg2e16_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, + size_t vl) { return __riscv_vlseg2e16_v_u16m1x2_tu(vd, rs1, vl); } -vuint16m2x2_t test_vlseg2e16_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, size_t vl) { +vuint16m2x2_t test_vlseg2e16_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, + size_t vl) { return __riscv_vlseg2e16_v_u16m2x2_tu(vd, rs1, vl); } -vuint16m4x2_t test_vlseg2e16_v_u16m4x2_tu(vuint16m4x2_t vd, const uint16_t *rs1, size_t vl) { +vuint16m4x2_t test_vlseg2e16_v_u16m4x2_tu(vuint16m4x2_t vd, const uint16_t *rs1, + size_t vl) { return __riscv_vlseg2e16_v_u16m4x2_tu(vd, rs1, vl); } -vfloat16mf4x2_t test_vlseg2e16_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4x2_t test_vlseg2e16_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16mf4x2_tum(vm, vd, rs1, vl); } -vfloat16mf2x2_t test_vlseg2e16_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2x2_t test_vlseg2e16_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16mf2x2_tum(vm, vd, rs1, vl); } -vfloat16m1x2_t test_vlseg2e16_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1x2_t test_vlseg2e16_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16m1x2_tum(vm, vd, rs1, vl); } -vfloat16m2x2_t test_vlseg2e16_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m2x2_t test_vlseg2e16_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16m2x2_tum(vm, vd, rs1, vl); } -vfloat16m4x2_t test_vlseg2e16_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m4x2_t test_vlseg2e16_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16m4x2_tum(vm, vd, rs1, vl); } -vint16mf4x2_t test_vlseg2e16_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, size_t vl) { +vint16mf4x2_t test_vlseg2e16_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_i16mf4x2_tum(vm, vd, rs1, vl); } -vint16mf2x2_t test_vlseg2e16_v_i16mf2x2_tum(vbool32_t vm, 
vint16mf2x2_t vd, const int16_t *rs1, size_t vl) { +vint16mf2x2_t test_vlseg2e16_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_i16mf2x2_tum(vm, vd, rs1, vl); } -vint16m1x2_t test_vlseg2e16_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, size_t vl) { +vint16m1x2_t test_vlseg2e16_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_i16m1x2_tum(vm, vd, rs1, vl); } -vint16m2x2_t test_vlseg2e16_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, size_t vl) { +vint16m2x2_t test_vlseg2e16_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_i16m2x2_tum(vm, vd, rs1, vl); } -vint16m4x2_t test_vlseg2e16_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, size_t vl) { +vint16m4x2_t test_vlseg2e16_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_i16m4x2_tum(vm, vd, rs1, vl); } -vuint16mf4x2_t test_vlseg2e16_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4x2_t test_vlseg2e16_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_u16mf4x2_tum(vm, vd, rs1, vl); } -vuint16mf2x2_t test_vlseg2e16_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2x2_t test_vlseg2e16_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_u16mf2x2_tum(vm, vd, rs1, vl); } -vuint16m1x2_t test_vlseg2e16_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1x2_t test_vlseg2e16_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_u16m1x2_tum(vm, vd, rs1, vl); } -vuint16m2x2_t test_vlseg2e16_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, size_t vl) { +vuint16m2x2_t test_vlseg2e16_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_u16m2x2_tum(vm, vd, rs1, vl); } -vuint16m4x2_t test_vlseg2e16_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, size_t vl) { +vuint16m4x2_t test_vlseg2e16_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg2e16_v_u16m4x2_tum(vm, vd, rs1, vl); } -vfloat16mf4x2_t test_vlseg2e16_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4x2_t test_vlseg2e16_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16mf4x2_tumu(vm, vd, rs1, vl); } -vfloat16mf2x2_t test_vlseg2e16_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2x2_t test_vlseg2e16_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16mf2x2_tumu(vm, vd, rs1, vl); } -vfloat16m1x2_t test_vlseg2e16_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1x2_t test_vlseg2e16_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_f16m1x2_tumu(vm, vd, rs1, vl); } -vfloat16m2x2_t test_vlseg2e16_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m2x2_t test_vlseg2e16_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, 
size_t vl) {
   return __riscv_vlseg2e16_v_f16m2x2_tumu(vm, vd, rs1, vl);
 }

-vfloat16m4x2_t test_vlseg2e16_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m4x2_t test_vlseg2e16_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_f16m4x2_tumu(vm, vd, rs1, vl);
 }

-vint16mf4x2_t test_vlseg2e16_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x2_t test_vlseg2e16_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd,
+                                             const int16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_i16mf4x2_tumu(vm, vd, rs1, vl);
 }

-vint16mf2x2_t test_vlseg2e16_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x2_t test_vlseg2e16_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd,
+                                             const int16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_i16mf2x2_tumu(vm, vd, rs1, vl);
 }

-vint16m1x2_t test_vlseg2e16_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x2_t test_vlseg2e16_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_i16m1x2_tumu(vm, vd, rs1, vl);
 }

-vint16m2x2_t test_vlseg2e16_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, size_t vl) {
+vint16m2x2_t test_vlseg2e16_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_i16m2x2_tumu(vm, vd, rs1, vl);
 }

-vint16m4x2_t test_vlseg2e16_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, size_t vl) {
+vint16m4x2_t test_vlseg2e16_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_i16m4x2_tumu(vm, vd, rs1, vl);
 }

-vuint16mf4x2_t test_vlseg2e16_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x2_t test_vlseg2e16_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd,
+                                              const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_u16mf4x2_tumu(vm, vd, rs1, vl);
 }

-vuint16mf2x2_t test_vlseg2e16_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x2_t test_vlseg2e16_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd,
+                                              const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_u16mf2x2_tumu(vm, vd, rs1, vl);
 }

-vuint16m1x2_t test_vlseg2e16_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x2_t test_vlseg2e16_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_u16m1x2_tumu(vm, vd, rs1, vl);
 }

-vuint16m2x2_t test_vlseg2e16_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m2x2_t test_vlseg2e16_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_u16m2x2_tumu(vm, vd, rs1, vl);
 }

-vuint16m4x2_t test_vlseg2e16_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m4x2_t test_vlseg2e16_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_u16m4x2_tumu(vm, vd, rs1, vl);
 }

-vfloat16mf4x2_t test_vlseg2e16_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x2_t test_vlseg2e16_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_f16mf4x2_mu(vm, vd, rs1, vl);
 }

-vfloat16mf2x2_t test_vlseg2e16_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x2_t test_vlseg2e16_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_f16mf2x2_mu(vm, vd, rs1, vl);
 }

-vfloat16m1x2_t test_vlseg2e16_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x2_t test_vlseg2e16_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd,
+                                           const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_f16m1x2_mu(vm, vd, rs1, vl);
 }

-vfloat16m2x2_t test_vlseg2e16_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m2x2_t test_vlseg2e16_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd,
+                                           const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_f16m2x2_mu(vm, vd, rs1, vl);
 }

-vfloat16m4x2_t test_vlseg2e16_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m4x2_t test_vlseg2e16_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd,
+                                           const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_f16m4x2_mu(vm, vd, rs1, vl);
 }

-vint16mf4x2_t test_vlseg2e16_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x2_t test_vlseg2e16_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_i16mf4x2_mu(vm, vd, rs1, vl);
 }

-vint16mf2x2_t test_vlseg2e16_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x2_t test_vlseg2e16_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_i16mf2x2_mu(vm, vd, rs1, vl);
 }

-vint16m1x2_t test_vlseg2e16_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x2_t test_vlseg2e16_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd,
+                                         const int16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_i16m1x2_mu(vm, vd, rs1, vl);
 }

-vint16m2x2_t test_vlseg2e16_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, size_t vl) {
+vint16m2x2_t test_vlseg2e16_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd,
+                                         const int16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_i16m2x2_mu(vm, vd, rs1, vl);
 }

-vint16m4x2_t test_vlseg2e16_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, size_t vl) {
+vint16m4x2_t test_vlseg2e16_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd,
+                                         const int16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_i16m4x2_mu(vm, vd, rs1, vl);
 }

-vuint16mf4x2_t test_vlseg2e16_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x2_t test_vlseg2e16_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_u16mf4x2_mu(vm, vd, rs1, vl);
 }

-vuint16mf2x2_t test_vlseg2e16_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x2_t test_vlseg2e16_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_u16mf2x2_mu(vm, vd, rs1, vl);
 }

-vuint16m1x2_t test_vlseg2e16_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x2_t test_vlseg2e16_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd,
+                                          const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_u16m1x2_mu(vm, vd, rs1, vl);
 }

-vuint16m2x2_t test_vlseg2e16_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m2x2_t test_vlseg2e16_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd,
+                                          const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_u16m2x2_mu(vm, vd, rs1, vl);
 }

-vuint16m4x2_t test_vlseg2e16_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m4x2_t test_vlseg2e16_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd,
+                                          const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg2e16_v_u16m4x2_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16ff.c
index 17faf1c10..c2fbe42a3 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16ff.c
@@ -6,242 +6,363 @@
 #include <riscv_vector.h>

-vfloat16mf4x2_t test_vlseg2e16ff_v_f16mf4x2_tu(vfloat16mf4x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x2_t test_vlseg2e16ff_v_f16mf4x2_tu(vfloat16mf4x2_t vd,
+                                               const _Float16 *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e16ff_v_f16mf4x2_tu(vd, rs1, new_vl, vl);
 }

-vfloat16mf2x2_t test_vlseg2e16ff_v_f16mf2x2_tu(vfloat16mf2x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x2_t test_vlseg2e16ff_v_f16mf2x2_tu(vfloat16mf2x2_t vd,
+                                               const _Float16 *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e16ff_v_f16mf2x2_tu(vd, rs1, new_vl, vl);
 }

-vfloat16m1x2_t test_vlseg2e16ff_v_f16m1x2_tu(vfloat16m1x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x2_t test_vlseg2e16ff_v_f16m1x2_tu(vfloat16m1x2_t vd,
+                                             const _Float16 *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e16ff_v_f16m1x2_tu(vd, rs1, new_vl, vl);
 }

-vfloat16m2x2_t test_vlseg2e16ff_v_f16m2x2_tu(vfloat16m2x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m2x2_t test_vlseg2e16ff_v_f16m2x2_tu(vfloat16m2x2_t vd,
+                                             const _Float16 *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e16ff_v_f16m2x2_tu(vd, rs1, new_vl, vl);
 }

-vfloat16m4x2_t test_vlseg2e16ff_v_f16m4x2_tu(vfloat16m4x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m4x2_t test_vlseg2e16ff_v_f16m4x2_tu(vfloat16m4x2_t vd,
+                                             const _Float16 *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e16ff_v_f16m4x2_tu(vd, rs1, new_vl, vl);
 }

-vint16mf4x2_t test_vlseg2e16ff_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x2_t test_vlseg2e16ff_v_i16mf4x2_tu(vint16mf4x2_t vd,
+                                             const int16_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg2e16ff_v_i16mf4x2_tu(vd, rs1, new_vl, vl);
 }

-vint16mf2x2_t test_vlseg2e16ff_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x2_t test_vlseg2e16ff_v_i16mf2x2_tu(vint16mf2x2_t vd,
+                                             const int16_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg2e16ff_v_i16mf2x2_tu(vd, rs1, new_vl, vl);
 }

-vint16m1x2_t test_vlseg2e16ff_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x2_t test_vlseg2e16ff_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e16ff_v_i16m1x2_tu(vd, rs1, new_vl, vl);
 }

-vint16m2x2_t test_vlseg2e16ff_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m2x2_t test_vlseg2e16ff_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e16ff_v_i16m2x2_tu(vd, rs1, new_vl, vl);
 }

-vint16m4x2_t test_vlseg2e16ff_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m4x2_t test_vlseg2e16ff_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e16ff_v_i16m4x2_tu(vd, rs1, new_vl, vl);
 }
-vuint16mf4x2_t test_vlseg2e16ff_v_u16mf4x2_tu(vuint16mf4x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x2_t test_vlseg2e16ff_v_u16mf4x2_tu(vuint16mf4x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16mf4x2_tu(vd, rs1, new_vl, vl); } -vuint16mf2x2_t test_vlseg2e16ff_v_u16mf2x2_tu(vuint16mf2x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x2_t test_vlseg2e16ff_v_u16mf2x2_tu(vuint16mf2x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16mf2x2_tu(vd, rs1, new_vl, vl); } -vuint16m1x2_t test_vlseg2e16ff_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x2_t test_vlseg2e16ff_v_u16m1x2_tu(vuint16m1x2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_u16m1x2_tu(vd, rs1, new_vl, vl); } -vuint16m2x2_t test_vlseg2e16ff_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m2x2_t test_vlseg2e16ff_v_u16m2x2_tu(vuint16m2x2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_u16m2x2_tu(vd, rs1, new_vl, vl); } -vuint16m4x2_t test_vlseg2e16ff_v_u16m4x2_tu(vuint16m4x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m4x2_t test_vlseg2e16ff_v_u16m4x2_tu(vuint16m4x2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_u16m4x2_tu(vd, rs1, new_vl, vl); } -vfloat16mf4x2_t test_vlseg2e16ff_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4x2_t test_vlseg2e16ff_v_f16mf4x2_tum(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16mf4x2_tum(vm, vd, rs1, new_vl, vl); } -vfloat16mf2x2_t test_vlseg2e16ff_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2x2_t test_vlseg2e16ff_v_f16mf2x2_tum(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16mf2x2_tum(vm, vd, rs1, new_vl, vl); } -vfloat16m1x2_t test_vlseg2e16ff_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1x2_t test_vlseg2e16ff_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16m1x2_tum(vm, vd, rs1, new_vl, vl); } -vfloat16m2x2_t test_vlseg2e16ff_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m2x2_t test_vlseg2e16ff_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16m2x2_tum(vm, vd, rs1, new_vl, vl); } -vfloat16m4x2_t test_vlseg2e16ff_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m4x2_t test_vlseg2e16ff_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16m4x2_tum(vm, vd, rs1, new_vl, vl); } -vint16mf4x2_t test_vlseg2e16ff_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x2_t test_vlseg2e16ff_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_i16mf4x2_tum(vm, vd, rs1, new_vl, vl); } -vint16mf2x2_t test_vlseg2e16ff_v_i16mf2x2_tum(vbool32_t vm, 
vint16mf2x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x2_t test_vlseg2e16ff_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_i16mf2x2_tum(vm, vd, rs1, new_vl, vl); } -vint16m1x2_t test_vlseg2e16ff_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x2_t test_vlseg2e16ff_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_i16m1x2_tum(vm, vd, rs1, new_vl, vl); } -vint16m2x2_t test_vlseg2e16ff_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m2x2_t test_vlseg2e16ff_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_i16m2x2_tum(vm, vd, rs1, new_vl, vl); } -vint16m4x2_t test_vlseg2e16ff_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m4x2_t test_vlseg2e16ff_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_i16m4x2_tum(vm, vd, rs1, new_vl, vl); } -vuint16mf4x2_t test_vlseg2e16ff_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x2_t test_vlseg2e16ff_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16mf4x2_tum(vm, vd, rs1, new_vl, vl); } -vuint16mf2x2_t test_vlseg2e16ff_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x2_t test_vlseg2e16ff_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16mf2x2_tum(vm, vd, rs1, new_vl, vl); } -vuint16m1x2_t test_vlseg2e16ff_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x2_t test_vlseg2e16ff_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16m1x2_tum(vm, vd, rs1, new_vl, vl); } -vuint16m2x2_t test_vlseg2e16ff_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m2x2_t test_vlseg2e16ff_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16m2x2_tum(vm, vd, rs1, new_vl, vl); } -vuint16m4x2_t test_vlseg2e16ff_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m4x2_t test_vlseg2e16ff_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16m4x2_tum(vm, vd, rs1, new_vl, vl); } -vfloat16mf4x2_t test_vlseg2e16ff_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4x2_t test_vlseg2e16ff_v_f16mf4x2_tumu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16mf4x2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16mf2x2_t test_vlseg2e16ff_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2x2_t test_vlseg2e16ff_v_f16mf2x2_tumu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16mf2x2_tumu(vm, 
vd, rs1, new_vl, vl); } -vfloat16m1x2_t test_vlseg2e16ff_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1x2_t test_vlseg2e16ff_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16m1x2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16m2x2_t test_vlseg2e16ff_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m2x2_t test_vlseg2e16ff_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16m2x2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16m4x2_t test_vlseg2e16ff_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m4x2_t test_vlseg2e16ff_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16m4x2_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf4x2_t test_vlseg2e16ff_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x2_t test_vlseg2e16ff_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_i16mf4x2_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf2x2_t test_vlseg2e16ff_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x2_t test_vlseg2e16ff_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_i16mf2x2_tumu(vm, vd, rs1, new_vl, vl); } -vint16m1x2_t test_vlseg2e16ff_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x2_t test_vlseg2e16ff_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_i16m1x2_tumu(vm, vd, rs1, new_vl, vl); } -vint16m2x2_t test_vlseg2e16ff_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m2x2_t test_vlseg2e16ff_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_i16m2x2_tumu(vm, vd, rs1, new_vl, vl); } -vint16m4x2_t test_vlseg2e16ff_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m4x2_t test_vlseg2e16ff_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_i16m4x2_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf4x2_t test_vlseg2e16ff_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x2_t test_vlseg2e16ff_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16mf4x2_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf2x2_t test_vlseg2e16ff_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x2_t test_vlseg2e16ff_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16mf2x2_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m1x2_t test_vlseg2e16ff_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x2_t test_vlseg2e16ff_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, 
+ const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16m1x2_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m2x2_t test_vlseg2e16ff_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m2x2_t test_vlseg2e16ff_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16m2x2_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m4x2_t test_vlseg2e16ff_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m4x2_t test_vlseg2e16ff_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16m4x2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16mf4x2_t test_vlseg2e16ff_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4x2_t test_vlseg2e16ff_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16mf4x2_mu(vm, vd, rs1, new_vl, vl); } -vfloat16mf2x2_t test_vlseg2e16ff_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2x2_t test_vlseg2e16ff_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16mf2x2_mu(vm, vd, rs1, new_vl, vl); } -vfloat16m1x2_t test_vlseg2e16ff_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1x2_t test_vlseg2e16ff_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16m1x2_mu(vm, vd, rs1, new_vl, vl); } -vfloat16m2x2_t test_vlseg2e16ff_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m2x2_t test_vlseg2e16ff_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16m2x2_mu(vm, vd, rs1, new_vl, vl); } -vfloat16m4x2_t test_vlseg2e16ff_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m4x2_t test_vlseg2e16ff_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_f16m4x2_mu(vm, vd, rs1, new_vl, vl); } -vint16mf4x2_t test_vlseg2e16ff_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x2_t test_vlseg2e16ff_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_i16mf4x2_mu(vm, vd, rs1, new_vl, vl); } -vint16mf2x2_t test_vlseg2e16ff_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x2_t test_vlseg2e16ff_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_i16mf2x2_mu(vm, vd, rs1, new_vl, vl); } -vint16m1x2_t test_vlseg2e16ff_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x2_t test_vlseg2e16ff_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_i16m1x2_mu(vm, vd, rs1, new_vl, vl); } -vint16m2x2_t test_vlseg2e16ff_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m2x2_t 
test_vlseg2e16ff_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_i16m2x2_mu(vm, vd, rs1, new_vl, vl); } -vint16m4x2_t test_vlseg2e16ff_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m4x2_t test_vlseg2e16ff_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_i16m4x2_mu(vm, vd, rs1, new_vl, vl); } -vuint16mf4x2_t test_vlseg2e16ff_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x2_t test_vlseg2e16ff_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16mf4x2_mu(vm, vd, rs1, new_vl, vl); } -vuint16mf2x2_t test_vlseg2e16ff_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x2_t test_vlseg2e16ff_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_u16mf2x2_mu(vm, vd, rs1, new_vl, vl); } -vuint16m1x2_t test_vlseg2e16ff_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x2_t test_vlseg2e16ff_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_u16m1x2_mu(vm, vd, rs1, new_vl, vl); } -vuint16m2x2_t test_vlseg2e16ff_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m2x2_t test_vlseg2e16ff_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_u16m2x2_mu(vm, vd, rs1, new_vl, vl); } -vuint16m4x2_t test_vlseg2e16ff_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m4x2_t test_vlseg2e16ff_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e16ff_v_u16m4x2_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32.c index cc56bd010..a7a18aeb4 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32.c @@ -6,194 +6,242 @@ #include -vfloat32mf2x2_t test_vlseg2e32_v_f32mf2x2_tu(vfloat32mf2x2_t vd, const float *rs1, size_t vl) { +vfloat32mf2x2_t test_vlseg2e32_v_f32mf2x2_tu(vfloat32mf2x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32mf2x2_tu(vd, rs1, vl); } -vfloat32m1x2_t test_vlseg2e32_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, size_t vl) { +vfloat32m1x2_t test_vlseg2e32_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, + size_t vl) { return __riscv_vlseg2e32_v_f32m1x2_tu(vd, rs1, vl); } -vfloat32m2x2_t test_vlseg2e32_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, size_t vl) { +vfloat32m2x2_t test_vlseg2e32_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, + size_t vl) { return __riscv_vlseg2e32_v_f32m2x2_tu(vd, rs1, vl); } -vfloat32m4x2_t test_vlseg2e32_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, size_t vl) { +vfloat32m4x2_t test_vlseg2e32_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, + size_t vl) { return __riscv_vlseg2e32_v_f32m4x2_tu(vd, rs1, vl); } -vint32mf2x2_t test_vlseg2e32_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x2_t 
test_vlseg2e32_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, + size_t vl) { return __riscv_vlseg2e32_v_i32mf2x2_tu(vd, rs1, vl); } -vint32m1x2_t test_vlseg2e32_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, size_t vl) { +vint32m1x2_t test_vlseg2e32_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, + size_t vl) { return __riscv_vlseg2e32_v_i32m1x2_tu(vd, rs1, vl); } -vint32m2x2_t test_vlseg2e32_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, size_t vl) { +vint32m2x2_t test_vlseg2e32_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, + size_t vl) { return __riscv_vlseg2e32_v_i32m2x2_tu(vd, rs1, vl); } -vint32m4x2_t test_vlseg2e32_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, size_t vl) { +vint32m4x2_t test_vlseg2e32_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, + size_t vl) { return __riscv_vlseg2e32_v_i32m4x2_tu(vd, rs1, vl); } -vuint32mf2x2_t test_vlseg2e32_v_u32mf2x2_tu(vuint32mf2x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x2_t test_vlseg2e32_v_u32mf2x2_tu(vuint32mf2x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32mf2x2_tu(vd, rs1, vl); } -vuint32m1x2_t test_vlseg2e32_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x2_t test_vlseg2e32_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, + size_t vl) { return __riscv_vlseg2e32_v_u32m1x2_tu(vd, rs1, vl); } -vuint32m2x2_t test_vlseg2e32_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m2x2_t test_vlseg2e32_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, + size_t vl) { return __riscv_vlseg2e32_v_u32m2x2_tu(vd, rs1, vl); } -vuint32m4x2_t test_vlseg2e32_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m4x2_t test_vlseg2e32_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, + size_t vl) { return __riscv_vlseg2e32_v_u32m4x2_tu(vd, rs1, vl); } -vfloat32mf2x2_t test_vlseg2e32_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, size_t vl) { +vfloat32mf2x2_t test_vlseg2e32_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32mf2x2_tum(vm, vd, rs1, vl); } -vfloat32m1x2_t test_vlseg2e32_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, size_t vl) { +vfloat32m1x2_t test_vlseg2e32_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32m1x2_tum(vm, vd, rs1, vl); } -vfloat32m2x2_t test_vlseg2e32_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, size_t vl) { +vfloat32m2x2_t test_vlseg2e32_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32m2x2_tum(vm, vd, rs1, vl); } -vfloat32m4x2_t test_vlseg2e32_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, size_t vl) { +vfloat32m4x2_t test_vlseg2e32_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32m4x2_tum(vm, vd, rs1, vl); } -vint32mf2x2_t test_vlseg2e32_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x2_t test_vlseg2e32_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32mf2x2_tum(vm, vd, rs1, vl); } -vint32m1x2_t test_vlseg2e32_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, size_t vl) { +vint32m1x2_t test_vlseg2e32_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32m1x2_tum(vm, vd, rs1, vl); } -vint32m2x2_t 
test_vlseg2e32_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, size_t vl) { +vint32m2x2_t test_vlseg2e32_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32m2x2_tum(vm, vd, rs1, vl); } -vint32m4x2_t test_vlseg2e32_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, size_t vl) { +vint32m4x2_t test_vlseg2e32_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32m4x2_tum(vm, vd, rs1, vl); } -vuint32mf2x2_t test_vlseg2e32_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x2_t test_vlseg2e32_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32mf2x2_tum(vm, vd, rs1, vl); } -vuint32m1x2_t test_vlseg2e32_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x2_t test_vlseg2e32_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32m1x2_tum(vm, vd, rs1, vl); } -vuint32m2x2_t test_vlseg2e32_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m2x2_t test_vlseg2e32_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32m2x2_tum(vm, vd, rs1, vl); } -vuint32m4x2_t test_vlseg2e32_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m4x2_t test_vlseg2e32_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32m4x2_tum(vm, vd, rs1, vl); } -vfloat32mf2x2_t test_vlseg2e32_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, size_t vl) { +vfloat32mf2x2_t test_vlseg2e32_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32mf2x2_tumu(vm, vd, rs1, vl); } -vfloat32m1x2_t test_vlseg2e32_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, size_t vl) { +vfloat32m1x2_t test_vlseg2e32_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32m1x2_tumu(vm, vd, rs1, vl); } -vfloat32m2x2_t test_vlseg2e32_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, size_t vl) { +vfloat32m2x2_t test_vlseg2e32_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32m2x2_tumu(vm, vd, rs1, vl); } -vfloat32m4x2_t test_vlseg2e32_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, size_t vl) { +vfloat32m4x2_t test_vlseg2e32_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32m4x2_tumu(vm, vd, rs1, vl); } -vint32mf2x2_t test_vlseg2e32_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x2_t test_vlseg2e32_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32mf2x2_tumu(vm, vd, rs1, vl); } -vint32m1x2_t test_vlseg2e32_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, size_t vl) { +vint32m1x2_t test_vlseg2e32_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32m1x2_tumu(vm, vd, rs1, vl); } -vint32m2x2_t test_vlseg2e32_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, size_t vl) { +vint32m2x2_t test_vlseg2e32_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, + const 
int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32m2x2_tumu(vm, vd, rs1, vl); } -vint32m4x2_t test_vlseg2e32_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, size_t vl) { +vint32m4x2_t test_vlseg2e32_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32m4x2_tumu(vm, vd, rs1, vl); } -vuint32mf2x2_t test_vlseg2e32_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x2_t test_vlseg2e32_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32mf2x2_tumu(vm, vd, rs1, vl); } -vuint32m1x2_t test_vlseg2e32_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x2_t test_vlseg2e32_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32m1x2_tumu(vm, vd, rs1, vl); } -vuint32m2x2_t test_vlseg2e32_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m2x2_t test_vlseg2e32_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32m2x2_tumu(vm, vd, rs1, vl); } -vuint32m4x2_t test_vlseg2e32_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m4x2_t test_vlseg2e32_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32m4x2_tumu(vm, vd, rs1, vl); } -vfloat32mf2x2_t test_vlseg2e32_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, size_t vl) { +vfloat32mf2x2_t test_vlseg2e32_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32mf2x2_mu(vm, vd, rs1, vl); } -vfloat32m1x2_t test_vlseg2e32_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, size_t vl) { +vfloat32m1x2_t test_vlseg2e32_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32m1x2_mu(vm, vd, rs1, vl); } -vfloat32m2x2_t test_vlseg2e32_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, size_t vl) { +vfloat32m2x2_t test_vlseg2e32_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32m2x2_mu(vm, vd, rs1, vl); } -vfloat32m4x2_t test_vlseg2e32_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, size_t vl) { +vfloat32m4x2_t test_vlseg2e32_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg2e32_v_f32m4x2_mu(vm, vd, rs1, vl); } -vint32mf2x2_t test_vlseg2e32_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x2_t test_vlseg2e32_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32mf2x2_mu(vm, vd, rs1, vl); } -vint32m1x2_t test_vlseg2e32_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, size_t vl) { +vint32m1x2_t test_vlseg2e32_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32m1x2_mu(vm, vd, rs1, vl); } -vint32m2x2_t test_vlseg2e32_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, size_t vl) { +vint32m2x2_t test_vlseg2e32_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32m2x2_mu(vm, vd, rs1, vl); } -vint32m4x2_t test_vlseg2e32_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, size_t vl) { +vint32m4x2_t 
test_vlseg2e32_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_i32m4x2_mu(vm, vd, rs1, vl); } -vuint32mf2x2_t test_vlseg2e32_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x2_t test_vlseg2e32_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32mf2x2_mu(vm, vd, rs1, vl); } -vuint32m1x2_t test_vlseg2e32_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x2_t test_vlseg2e32_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32m1x2_mu(vm, vd, rs1, vl); } -vuint32m2x2_t test_vlseg2e32_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m2x2_t test_vlseg2e32_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32m2x2_mu(vm, vd, rs1, vl); } -vuint32m4x2_t test_vlseg2e32_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, size_t vl) { +vuint32m4x2_t test_vlseg2e32_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg2e32_v_u32m4x2_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32ff.c index 1b7495dfc..e7ecbc469 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32ff.c @@ -6,194 +6,289 @@ #include -vfloat32mf2x2_t test_vlseg2e32ff_v_f32mf2x2_tu(vfloat32mf2x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x2_t test_vlseg2e32ff_v_f32mf2x2_tu(vfloat32mf2x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32mf2x2_tu(vd, rs1, new_vl, vl); } -vfloat32m1x2_t test_vlseg2e32ff_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x2_t test_vlseg2e32ff_v_f32m1x2_tu(vfloat32m1x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m1x2_tu(vd, rs1, new_vl, vl); } -vfloat32m2x2_t test_vlseg2e32ff_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2x2_t test_vlseg2e32ff_v_f32m2x2_tu(vfloat32m2x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m2x2_tu(vd, rs1, new_vl, vl); } -vfloat32m4x2_t test_vlseg2e32ff_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m4x2_t test_vlseg2e32ff_v_f32m4x2_tu(vfloat32m4x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m4x2_tu(vd, rs1, new_vl, vl); } -vint32mf2x2_t test_vlseg2e32ff_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x2_t test_vlseg2e32ff_v_i32mf2x2_tu(vint32mf2x2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_i32mf2x2_tu(vd, rs1, new_vl, vl); } -vint32m1x2_t test_vlseg2e32ff_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x2_t test_vlseg2e32ff_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_i32m1x2_tu(vd, rs1, new_vl, vl); } -vint32m2x2_t test_vlseg2e32ff_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2x2_t test_vlseg2e32ff_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, + size_t *new_vl, 
size_t vl) { return __riscv_vlseg2e32ff_v_i32m2x2_tu(vd, rs1, new_vl, vl); } -vint32m4x2_t test_vlseg2e32ff_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m4x2_t test_vlseg2e32ff_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_i32m4x2_tu(vd, rs1, new_vl, vl); } -vuint32mf2x2_t test_vlseg2e32ff_v_u32mf2x2_tu(vuint32mf2x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x2_t test_vlseg2e32ff_v_u32mf2x2_tu(vuint32mf2x2_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_u32mf2x2_tu(vd, rs1, new_vl, vl); } -vuint32m1x2_t test_vlseg2e32ff_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x2_t test_vlseg2e32ff_v_u32m1x2_tu(vuint32m1x2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_u32m1x2_tu(vd, rs1, new_vl, vl); } -vuint32m2x2_t test_vlseg2e32ff_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m2x2_t test_vlseg2e32ff_v_u32m2x2_tu(vuint32m2x2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_u32m2x2_tu(vd, rs1, new_vl, vl); } -vuint32m4x2_t test_vlseg2e32ff_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m4x2_t test_vlseg2e32ff_v_u32m4x2_tu(vuint32m4x2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_u32m4x2_tu(vd, rs1, new_vl, vl); } -vfloat32mf2x2_t test_vlseg2e32ff_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x2_t test_vlseg2e32ff_v_f32mf2x2_tum(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_f32mf2x2_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m1x2_t test_vlseg2e32ff_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x2_t test_vlseg2e32ff_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m1x2_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m2x2_t test_vlseg2e32ff_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2x2_t test_vlseg2e32ff_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m2x2_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m4x2_t test_vlseg2e32ff_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m4x2_t test_vlseg2e32ff_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m4x2_tum(vm, vd, rs1, new_vl, vl); } -vint32mf2x2_t test_vlseg2e32ff_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x2_t test_vlseg2e32ff_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_i32mf2x2_tum(vm, vd, rs1, new_vl, vl); } -vint32m1x2_t test_vlseg2e32ff_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x2_t test_vlseg2e32ff_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_i32m1x2_tum(vm, vd, rs1, new_vl, vl); } -vint32m2x2_t 
test_vlseg2e32ff_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2x2_t test_vlseg2e32ff_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_i32m2x2_tum(vm, vd, rs1, new_vl, vl); } -vint32m4x2_t test_vlseg2e32ff_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m4x2_t test_vlseg2e32ff_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_i32m4x2_tum(vm, vd, rs1, new_vl, vl); } -vuint32mf2x2_t test_vlseg2e32ff_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x2_t test_vlseg2e32ff_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_u32mf2x2_tum(vm, vd, rs1, new_vl, vl); } -vuint32m1x2_t test_vlseg2e32ff_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x2_t test_vlseg2e32ff_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_u32m1x2_tum(vm, vd, rs1, new_vl, vl); } -vuint32m2x2_t test_vlseg2e32ff_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m2x2_t test_vlseg2e32ff_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_u32m2x2_tum(vm, vd, rs1, new_vl, vl); } -vuint32m4x2_t test_vlseg2e32ff_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m4x2_t test_vlseg2e32ff_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_u32m4x2_tum(vm, vd, rs1, new_vl, vl); } -vfloat32mf2x2_t test_vlseg2e32ff_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x2_t test_vlseg2e32ff_v_f32mf2x2_tumu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_f32mf2x2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m1x2_t test_vlseg2e32ff_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x2_t test_vlseg2e32ff_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m1x2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m2x2_t test_vlseg2e32ff_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2x2_t test_vlseg2e32ff_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m2x2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m4x2_t test_vlseg2e32ff_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m4x2_t test_vlseg2e32ff_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m4x2_tumu(vm, vd, rs1, new_vl, vl); } -vint32mf2x2_t test_vlseg2e32ff_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x2_t test_vlseg2e32ff_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + size_t *new_vl, size_t vl) { return 
__riscv_vlseg2e32ff_v_i32mf2x2_tumu(vm, vd, rs1, new_vl, vl); } -vint32m1x2_t test_vlseg2e32ff_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x2_t test_vlseg2e32ff_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_i32m1x2_tumu(vm, vd, rs1, new_vl, vl); } -vint32m2x2_t test_vlseg2e32ff_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2x2_t test_vlseg2e32ff_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_i32m2x2_tumu(vm, vd, rs1, new_vl, vl); } -vint32m4x2_t test_vlseg2e32ff_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m4x2_t test_vlseg2e32ff_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_i32m4x2_tumu(vm, vd, rs1, new_vl, vl); } -vuint32mf2x2_t test_vlseg2e32ff_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x2_t test_vlseg2e32ff_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_u32mf2x2_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m1x2_t test_vlseg2e32ff_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x2_t test_vlseg2e32ff_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_u32m1x2_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m2x2_t test_vlseg2e32ff_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m2x2_t test_vlseg2e32ff_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_u32m2x2_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m4x2_t test_vlseg2e32ff_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m4x2_t test_vlseg2e32ff_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_u32m4x2_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32mf2x2_t test_vlseg2e32ff_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x2_t test_vlseg2e32ff_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32mf2x2_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m1x2_t test_vlseg2e32ff_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x2_t test_vlseg2e32ff_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m1x2_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m2x2_t test_vlseg2e32ff_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2x2_t test_vlseg2e32ff_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m2x2_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m4x2_t test_vlseg2e32ff_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m4x2_t test_vlseg2e32ff_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, 
+ const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_f32m4x2_mu(vm, vd, rs1, new_vl, vl); } -vint32mf2x2_t test_vlseg2e32ff_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x2_t test_vlseg2e32ff_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_i32mf2x2_mu(vm, vd, rs1, new_vl, vl); } -vint32m1x2_t test_vlseg2e32ff_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x2_t test_vlseg2e32ff_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_i32m1x2_mu(vm, vd, rs1, new_vl, vl); } -vint32m2x2_t test_vlseg2e32ff_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2x2_t test_vlseg2e32ff_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_i32m2x2_mu(vm, vd, rs1, new_vl, vl); } -vint32m4x2_t test_vlseg2e32ff_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m4x2_t test_vlseg2e32ff_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_i32m4x2_mu(vm, vd, rs1, new_vl, vl); } -vuint32mf2x2_t test_vlseg2e32ff_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x2_t test_vlseg2e32ff_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e32ff_v_u32mf2x2_mu(vm, vd, rs1, new_vl, vl); } -vuint32m1x2_t test_vlseg2e32ff_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x2_t test_vlseg2e32ff_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_u32m1x2_mu(vm, vd, rs1, new_vl, vl); } -vuint32m2x2_t test_vlseg2e32ff_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m2x2_t test_vlseg2e32ff_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_u32m2x2_mu(vm, vd, rs1, new_vl, vl); } -vuint32m4x2_t test_vlseg2e32ff_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m4x2_t test_vlseg2e32ff_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e32ff_v_u32m4x2_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64.c index 57bd14ce3..6516085ef 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64.c @@ -6,146 +6,182 @@ #include -vfloat64m1x2_t test_vlseg2e64_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, size_t vl) { +vfloat64m1x2_t test_vlseg2e64_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, + size_t vl) { return __riscv_vlseg2e64_v_f64m1x2_tu(vd, rs1, vl); } -vfloat64m2x2_t test_vlseg2e64_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, size_t vl) { +vfloat64m2x2_t test_vlseg2e64_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, + size_t vl) { return __riscv_vlseg2e64_v_f64m2x2_tu(vd, rs1, vl); } -vfloat64m4x2_t 
test_vlseg2e64_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, size_t vl) { +vfloat64m4x2_t test_vlseg2e64_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, + size_t vl) { return __riscv_vlseg2e64_v_f64m4x2_tu(vd, rs1, vl); } -vint64m1x2_t test_vlseg2e64_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, size_t vl) { +vint64m1x2_t test_vlseg2e64_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, + size_t vl) { return __riscv_vlseg2e64_v_i64m1x2_tu(vd, rs1, vl); } -vint64m2x2_t test_vlseg2e64_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, size_t vl) { +vint64m2x2_t test_vlseg2e64_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, + size_t vl) { return __riscv_vlseg2e64_v_i64m2x2_tu(vd, rs1, vl); } -vint64m4x2_t test_vlseg2e64_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, size_t vl) { +vint64m4x2_t test_vlseg2e64_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, + size_t vl) { return __riscv_vlseg2e64_v_i64m4x2_tu(vd, rs1, vl); } -vuint64m1x2_t test_vlseg2e64_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x2_t test_vlseg2e64_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, + size_t vl) { return __riscv_vlseg2e64_v_u64m1x2_tu(vd, rs1, vl); } -vuint64m2x2_t test_vlseg2e64_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2x2_t test_vlseg2e64_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, + size_t vl) { return __riscv_vlseg2e64_v_u64m2x2_tu(vd, rs1, vl); } -vuint64m4x2_t test_vlseg2e64_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m4x2_t test_vlseg2e64_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, + size_t vl) { return __riscv_vlseg2e64_v_u64m4x2_tu(vd, rs1, vl); } -vfloat64m1x2_t test_vlseg2e64_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, size_t vl) { +vfloat64m1x2_t test_vlseg2e64_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg2e64_v_f64m1x2_tum(vm, vd, rs1, vl); } -vfloat64m2x2_t test_vlseg2e64_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, size_t vl) { +vfloat64m2x2_t test_vlseg2e64_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg2e64_v_f64m2x2_tum(vm, vd, rs1, vl); } -vfloat64m4x2_t test_vlseg2e64_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, size_t vl) { +vfloat64m4x2_t test_vlseg2e64_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg2e64_v_f64m4x2_tum(vm, vd, rs1, vl); } -vint64m1x2_t test_vlseg2e64_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, size_t vl) { +vint64m1x2_t test_vlseg2e64_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_i64m1x2_tum(vm, vd, rs1, vl); } -vint64m2x2_t test_vlseg2e64_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, size_t vl) { +vint64m2x2_t test_vlseg2e64_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_i64m2x2_tum(vm, vd, rs1, vl); } -vint64m4x2_t test_vlseg2e64_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, size_t vl) { +vint64m4x2_t test_vlseg2e64_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_i64m4x2_tum(vm, vd, rs1, vl); } -vuint64m1x2_t test_vlseg2e64_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x2_t test_vlseg2e64_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, + const 
uint64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_u64m1x2_tum(vm, vd, rs1, vl); } -vuint64m2x2_t test_vlseg2e64_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2x2_t test_vlseg2e64_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_u64m2x2_tum(vm, vd, rs1, vl); } -vuint64m4x2_t test_vlseg2e64_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m4x2_t test_vlseg2e64_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_u64m4x2_tum(vm, vd, rs1, vl); } -vfloat64m1x2_t test_vlseg2e64_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, size_t vl) { +vfloat64m1x2_t test_vlseg2e64_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg2e64_v_f64m1x2_tumu(vm, vd, rs1, vl); } -vfloat64m2x2_t test_vlseg2e64_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, size_t vl) { +vfloat64m2x2_t test_vlseg2e64_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg2e64_v_f64m2x2_tumu(vm, vd, rs1, vl); } -vfloat64m4x2_t test_vlseg2e64_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, size_t vl) { +vfloat64m4x2_t test_vlseg2e64_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg2e64_v_f64m4x2_tumu(vm, vd, rs1, vl); } -vint64m1x2_t test_vlseg2e64_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, size_t vl) { +vint64m1x2_t test_vlseg2e64_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_i64m1x2_tumu(vm, vd, rs1, vl); } -vint64m2x2_t test_vlseg2e64_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, size_t vl) { +vint64m2x2_t test_vlseg2e64_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_i64m2x2_tumu(vm, vd, rs1, vl); } -vint64m4x2_t test_vlseg2e64_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, size_t vl) { +vint64m4x2_t test_vlseg2e64_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_i64m4x2_tumu(vm, vd, rs1, vl); } -vuint64m1x2_t test_vlseg2e64_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x2_t test_vlseg2e64_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_u64m1x2_tumu(vm, vd, rs1, vl); } -vuint64m2x2_t test_vlseg2e64_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2x2_t test_vlseg2e64_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_u64m2x2_tumu(vm, vd, rs1, vl); } -vuint64m4x2_t test_vlseg2e64_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m4x2_t test_vlseg2e64_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_u64m4x2_tumu(vm, vd, rs1, vl); } -vfloat64m1x2_t test_vlseg2e64_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, size_t vl) { +vfloat64m1x2_t test_vlseg2e64_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg2e64_v_f64m1x2_mu(vm, vd, rs1, vl); } -vfloat64m2x2_t test_vlseg2e64_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, const double 
*rs1, size_t vl) { +vfloat64m2x2_t test_vlseg2e64_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg2e64_v_f64m2x2_mu(vm, vd, rs1, vl); } -vfloat64m4x2_t test_vlseg2e64_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, size_t vl) { +vfloat64m4x2_t test_vlseg2e64_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg2e64_v_f64m4x2_mu(vm, vd, rs1, vl); } -vint64m1x2_t test_vlseg2e64_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, size_t vl) { +vint64m1x2_t test_vlseg2e64_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_i64m1x2_mu(vm, vd, rs1, vl); } -vint64m2x2_t test_vlseg2e64_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, size_t vl) { +vint64m2x2_t test_vlseg2e64_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_i64m2x2_mu(vm, vd, rs1, vl); } -vint64m4x2_t test_vlseg2e64_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, size_t vl) { +vint64m4x2_t test_vlseg2e64_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_i64m4x2_mu(vm, vd, rs1, vl); } -vuint64m1x2_t test_vlseg2e64_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x2_t test_vlseg2e64_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_u64m1x2_mu(vm, vd, rs1, vl); } -vuint64m2x2_t test_vlseg2e64_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2x2_t test_vlseg2e64_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_u64m2x2_mu(vm, vd, rs1, vl); } -vuint64m4x2_t test_vlseg2e64_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, size_t vl) { +vuint64m4x2_t test_vlseg2e64_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg2e64_v_u64m4x2_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64ff.c index e4b95ef13..a7218bef3 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64ff.c @@ -6,146 +6,215 @@ #include -vfloat64m1x2_t test_vlseg2e64ff_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x2_t test_vlseg2e64ff_v_f64m1x2_tu(vfloat64m1x2_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e64ff_v_f64m1x2_tu(vd, rs1, new_vl, vl); } -vfloat64m2x2_t test_vlseg2e64ff_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m2x2_t test_vlseg2e64ff_v_f64m2x2_tu(vfloat64m2x2_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e64ff_v_f64m2x2_tu(vd, rs1, new_vl, vl); } -vfloat64m4x2_t test_vlseg2e64ff_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m4x2_t test_vlseg2e64ff_v_f64m4x2_tu(vfloat64m4x2_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg2e64ff_v_f64m4x2_tu(vd, rs1, new_vl, vl); } -vint64m1x2_t test_vlseg2e64ff_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x2_t test_vlseg2e64ff_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, + size_t *new_vl, size_t vl) { return 
-vint64m2x2_t test_vlseg2e64ff_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m2x2_t test_vlseg2e64ff_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e64ff_v_i64m2x2_tu(vd, rs1, new_vl, vl);
 }
 
-vint64m4x2_t test_vlseg2e64ff_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m4x2_t test_vlseg2e64ff_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e64ff_v_i64m4x2_tu(vd, rs1, new_vl, vl);
 }
 
-vuint64m1x2_t test_vlseg2e64ff_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x2_t test_vlseg2e64ff_v_u64m1x2_tu(vuint64m1x2_t vd,
+                                            const uint64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m1x2_tu(vd, rs1, new_vl, vl);
 }
 
-vuint64m2x2_t test_vlseg2e64ff_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m2x2_t test_vlseg2e64ff_v_u64m2x2_tu(vuint64m2x2_t vd,
+                                            const uint64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m2x2_tu(vd, rs1, new_vl, vl);
 }
 
-vuint64m4x2_t test_vlseg2e64ff_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m4x2_t test_vlseg2e64ff_v_u64m4x2_tu(vuint64m4x2_t vd,
+                                            const uint64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m4x2_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat64m1x2_t test_vlseg2e64ff_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m1x2_t test_vlseg2e64ff_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd,
+                                              const double *rs1, size_t *new_vl,
+                                              size_t vl) {
   return __riscv_vlseg2e64ff_v_f64m1x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat64m2x2_t test_vlseg2e64ff_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m2x2_t test_vlseg2e64ff_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd,
+                                              const double *rs1, size_t *new_vl,
+                                              size_t vl) {
   return __riscv_vlseg2e64ff_v_f64m2x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat64m4x2_t test_vlseg2e64ff_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m4x2_t test_vlseg2e64ff_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd,
+                                              const double *rs1, size_t *new_vl,
+                                              size_t vl) {
   return __riscv_vlseg2e64ff_v_f64m4x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint64m1x2_t test_vlseg2e64ff_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x2_t test_vlseg2e64ff_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd,
+                                            const int64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e64ff_v_i64m1x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint64m2x2_t test_vlseg2e64ff_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m2x2_t test_vlseg2e64ff_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd,
+                                            const int64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e64ff_v_i64m2x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint64m4x2_t test_vlseg2e64ff_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m4x2_t test_vlseg2e64ff_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd,
+                                            const int64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e64ff_v_i64m4x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint64m1x2_t test_vlseg2e64ff_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x2_t test_vlseg2e64ff_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd,
+                                             const uint64_t *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m1x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint64m2x2_t test_vlseg2e64ff_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m2x2_t test_vlseg2e64ff_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd,
+                                             const uint64_t *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m2x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint64m4x2_t test_vlseg2e64ff_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m4x2_t test_vlseg2e64ff_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd,
+                                             const uint64_t *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m4x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat64m1x2_t test_vlseg2e64ff_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m1x2_t test_vlseg2e64ff_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd,
+                                               const double *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e64ff_v_f64m1x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat64m2x2_t test_vlseg2e64ff_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m2x2_t test_vlseg2e64ff_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd,
+                                               const double *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e64ff_v_f64m2x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat64m4x2_t test_vlseg2e64ff_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m4x2_t test_vlseg2e64ff_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd,
+                                               const double *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e64ff_v_f64m4x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint64m1x2_t test_vlseg2e64ff_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x2_t test_vlseg2e64ff_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd,
+                                             const int64_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg2e64ff_v_i64m1x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint64m2x2_t test_vlseg2e64ff_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m2x2_t test_vlseg2e64ff_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd,
+                                             const int64_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg2e64ff_v_i64m2x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint64m4x2_t test_vlseg2e64ff_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m4x2_t test_vlseg2e64ff_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd,
+                                             const int64_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg2e64ff_v_i64m4x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint64m1x2_t test_vlseg2e64ff_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x2_t test_vlseg2e64ff_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd,
+                                              const uint64_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m1x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint64m2x2_t test_vlseg2e64ff_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m2x2_t test_vlseg2e64ff_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd,
+                                              const uint64_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m2x2_tumu(vm, vd, rs1, new_vl, vl);
 }
-vuint64m4x2_t test_vlseg2e64ff_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m4x2_t test_vlseg2e64ff_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd,
+                                              const uint64_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m4x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat64m1x2_t test_vlseg2e64ff_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m1x2_t test_vlseg2e64ff_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd,
+                                             const double *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg2e64ff_v_f64m1x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat64m2x2_t test_vlseg2e64ff_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m2x2_t test_vlseg2e64ff_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd,
+                                             const double *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg2e64ff_v_f64m2x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat64m4x2_t test_vlseg2e64ff_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m4x2_t test_vlseg2e64ff_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd,
+                                             const double *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg2e64ff_v_f64m4x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint64m1x2_t test_vlseg2e64ff_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x2_t test_vlseg2e64ff_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd,
+                                           const int64_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e64ff_v_i64m1x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint64m2x2_t test_vlseg2e64ff_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m2x2_t test_vlseg2e64ff_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd,
+                                           const int64_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e64ff_v_i64m2x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint64m4x2_t test_vlseg2e64ff_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m4x2_t test_vlseg2e64ff_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd,
+                                           const int64_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e64ff_v_i64m4x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint64m1x2_t test_vlseg2e64ff_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x2_t test_vlseg2e64ff_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd,
+                                            const uint64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m1x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint64m2x2_t test_vlseg2e64ff_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m2x2_t test_vlseg2e64ff_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd,
+                                            const uint64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m2x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint64m4x2_t test_vlseg2e64ff_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m4x2_t test_vlseg2e64ff_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd,
+                                            const uint64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e64ff_v_u64m4x2_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8.c
index af8c68c14..7a01b8376 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8.c
@@ -5,194 +5,242 @@
 
 #include <riscv_vector.h>
-vint8mf8x2_t test_vlseg2e8_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x2_t test_vlseg2e8_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1,
+                                        size_t vl) {
   return __riscv_vlseg2e8_v_i8mf8x2_tu(vd, rs1, vl);
 }
 
-vint8mf4x2_t test_vlseg2e8_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x2_t test_vlseg2e8_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1,
+                                        size_t vl) {
   return __riscv_vlseg2e8_v_i8mf4x2_tu(vd, rs1, vl);
 }
 
-vint8mf2x2_t test_vlseg2e8_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x2_t test_vlseg2e8_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1,
+                                        size_t vl) {
   return __riscv_vlseg2e8_v_i8mf2x2_tu(vd, rs1, vl);
 }
 
-vint8m1x2_t test_vlseg2e8_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x2_t test_vlseg2e8_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1,
+                                      size_t vl) {
   return __riscv_vlseg2e8_v_i8m1x2_tu(vd, rs1, vl);
 }
 
-vint8m2x2_t test_vlseg2e8_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m2x2_t test_vlseg2e8_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1,
+                                      size_t vl) {
   return __riscv_vlseg2e8_v_i8m2x2_tu(vd, rs1, vl);
 }
 
-vint8m4x2_t test_vlseg2e8_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m4x2_t test_vlseg2e8_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1,
+                                      size_t vl) {
   return __riscv_vlseg2e8_v_i8m4x2_tu(vd, rs1, vl);
 }
 
-vuint8mf8x2_t test_vlseg2e8_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x2_t test_vlseg2e8_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg2e8_v_u8mf8x2_tu(vd, rs1, vl);
 }
 
-vuint8mf4x2_t test_vlseg2e8_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x2_t test_vlseg2e8_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg2e8_v_u8mf4x2_tu(vd, rs1, vl);
 }
 
-vuint8mf2x2_t test_vlseg2e8_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x2_t test_vlseg2e8_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg2e8_v_u8mf2x2_tu(vd, rs1, vl);
 }
 
-vuint8m1x2_t test_vlseg2e8_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x2_t test_vlseg2e8_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1,
+                                       size_t vl) {
   return __riscv_vlseg2e8_v_u8m1x2_tu(vd, rs1, vl);
 }
 
-vuint8m2x2_t test_vlseg2e8_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m2x2_t test_vlseg2e8_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1,
+                                       size_t vl) {
   return __riscv_vlseg2e8_v_u8m2x2_tu(vd, rs1, vl);
 }
 
-vuint8m4x2_t test_vlseg2e8_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m4x2_t test_vlseg2e8_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1,
+                                       size_t vl) {
   return __riscv_vlseg2e8_v_u8m4x2_tu(vd, rs1, vl);
 }
 
-vint8mf8x2_t test_vlseg2e8_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x2_t test_vlseg2e8_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd,
+                                         const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8mf8x2_tum(vm, vd, rs1, vl);
 }
 
-vint8mf4x2_t test_vlseg2e8_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x2_t test_vlseg2e8_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd,
+                                         const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8mf4x2_tum(vm, vd, rs1, vl);
 }
 
-vint8mf2x2_t test_vlseg2e8_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x2_t test_vlseg2e8_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd,
+                                         const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8mf2x2_tum(vm, vd, rs1, vl);
 }
 
-vint8m1x2_t test_vlseg2e8_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x2_t test_vlseg2e8_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd,
+                                       const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8m1x2_tum(vm, vd, rs1, vl);
 }
 
-vint8m2x2_t test_vlseg2e8_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m2x2_t test_vlseg2e8_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd,
+                                       const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8m2x2_tum(vm, vd, rs1, vl);
 }
 
-vint8m4x2_t test_vlseg2e8_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m4x2_t test_vlseg2e8_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd,
+                                       const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8m4x2_tum(vm, vd, rs1, vl);
 }
 
-vuint8mf8x2_t test_vlseg2e8_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x2_t test_vlseg2e8_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd,
+                                          const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8mf8x2_tum(vm, vd, rs1, vl);
 }
 
-vuint8mf4x2_t test_vlseg2e8_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x2_t test_vlseg2e8_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd,
+                                          const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8mf4x2_tum(vm, vd, rs1, vl);
 }
 
-vuint8mf2x2_t test_vlseg2e8_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x2_t test_vlseg2e8_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd,
+                                          const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8mf2x2_tum(vm, vd, rs1, vl);
 }
 
-vuint8m1x2_t test_vlseg2e8_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x2_t test_vlseg2e8_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd,
+                                        const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8m1x2_tum(vm, vd, rs1, vl);
 }
 
-vuint8m2x2_t test_vlseg2e8_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m2x2_t test_vlseg2e8_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd,
+                                        const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8m2x2_tum(vm, vd, rs1, vl);
 }
 
-vuint8m4x2_t test_vlseg2e8_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m4x2_t test_vlseg2e8_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd,
+                                        const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8m4x2_tum(vm, vd, rs1, vl);
 }
 
-vint8mf8x2_t test_vlseg2e8_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x2_t test_vlseg2e8_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd,
+                                          const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8mf8x2_tumu(vm, vd, rs1, vl);
 }
 
-vint8mf4x2_t test_vlseg2e8_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x2_t test_vlseg2e8_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd,
+                                          const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8mf4x2_tumu(vm, vd, rs1, vl);
 }
 
-vint8mf2x2_t test_vlseg2e8_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x2_t test_vlseg2e8_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd,
+                                          const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8mf2x2_tumu(vm, vd, rs1, vl);
 }
 
-vint8m1x2_t test_vlseg2e8_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x2_t test_vlseg2e8_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd,
+                                        const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8m1x2_tumu(vm, vd, rs1, vl);
 }
-vint8m2x2_t test_vlseg2e8_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m2x2_t test_vlseg2e8_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd,
+                                        const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8m2x2_tumu(vm, vd, rs1, vl);
 }
 
-vint8m4x2_t test_vlseg2e8_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m4x2_t test_vlseg2e8_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd,
+                                        const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8m4x2_tumu(vm, vd, rs1, vl);
 }
 
-vuint8mf8x2_t test_vlseg2e8_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x2_t test_vlseg2e8_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd,
+                                           const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8mf8x2_tumu(vm, vd, rs1, vl);
 }
 
-vuint8mf4x2_t test_vlseg2e8_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x2_t test_vlseg2e8_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd,
+                                           const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8mf4x2_tumu(vm, vd, rs1, vl);
 }
 
-vuint8mf2x2_t test_vlseg2e8_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x2_t test_vlseg2e8_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd,
+                                           const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8mf2x2_tumu(vm, vd, rs1, vl);
 }
 
-vuint8m1x2_t test_vlseg2e8_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x2_t test_vlseg2e8_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd,
+                                         const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8m1x2_tumu(vm, vd, rs1, vl);
 }
 
-vuint8m2x2_t test_vlseg2e8_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m2x2_t test_vlseg2e8_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd,
+                                         const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8m2x2_tumu(vm, vd, rs1, vl);
 }
 
-vuint8m4x2_t test_vlseg2e8_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m4x2_t test_vlseg2e8_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd,
+                                         const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8m4x2_tumu(vm, vd, rs1, vl);
 }
 
-vint8mf8x2_t test_vlseg2e8_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x2_t test_vlseg2e8_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd,
+                                        const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8mf8x2_mu(vm, vd, rs1, vl);
 }
 
-vint8mf4x2_t test_vlseg2e8_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x2_t test_vlseg2e8_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd,
+                                        const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8mf4x2_mu(vm, vd, rs1, vl);
 }
 
-vint8mf2x2_t test_vlseg2e8_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x2_t test_vlseg2e8_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd,
+                                        const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8mf2x2_mu(vm, vd, rs1, vl);
 }
 
-vint8m1x2_t test_vlseg2e8_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x2_t test_vlseg2e8_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd,
+                                      const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8m1x2_mu(vm, vd, rs1, vl);
 }
 
-vint8m2x2_t test_vlseg2e8_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m2x2_t test_vlseg2e8_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd,
+                                      const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8m2x2_mu(vm, vd, rs1, vl);
 }
 
-vint8m4x2_t test_vlseg2e8_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, size_t vl) {
+vint8m4x2_t test_vlseg2e8_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd,
+                                      const int8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_i8m4x2_mu(vm, vd, rs1, vl);
 }
 
-vuint8mf8x2_t test_vlseg2e8_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x2_t test_vlseg2e8_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd,
+                                         const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8mf8x2_mu(vm, vd, rs1, vl);
 }
 
-vuint8mf4x2_t test_vlseg2e8_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x2_t test_vlseg2e8_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd,
+                                         const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8mf4x2_mu(vm, vd, rs1, vl);
 }
 
-vuint8mf2x2_t test_vlseg2e8_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x2_t test_vlseg2e8_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd,
+                                         const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8mf2x2_mu(vm, vd, rs1, vl);
 }
 
-vuint8m1x2_t test_vlseg2e8_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x2_t test_vlseg2e8_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd,
+                                       const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8m1x2_mu(vm, vd, rs1, vl);
 }
 
-vuint8m2x2_t test_vlseg2e8_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m2x2_t test_vlseg2e8_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd,
+                                       const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8m2x2_mu(vm, vd, rs1, vl);
 }
 
-vuint8m4x2_t test_vlseg2e8_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m4x2_t test_vlseg2e8_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd,
+                                       const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg2e8_v_u8m4x2_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8ff.c
index 48a007333..5e22ef490 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8ff.c
@@ -6,194 +6,278 @@
 
 #include <riscv_vector.h>
 
-vint8mf8x2_t test_vlseg2e8ff_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x2_t test_vlseg2e8ff_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1,
+                                          size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_i8mf8x2_tu(vd, rs1, new_vl, vl);
 }
 
-vint8mf4x2_t test_vlseg2e8ff_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x2_t test_vlseg2e8ff_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1,
+                                          size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_i8mf4x2_tu(vd, rs1, new_vl, vl);
 }
 
-vint8mf2x2_t test_vlseg2e8ff_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x2_t test_vlseg2e8ff_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1,
+                                          size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_i8mf2x2_tu(vd, rs1, new_vl, vl);
 }
 
-vint8m1x2_t test_vlseg2e8ff_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x2_t test_vlseg2e8ff_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1,
+                                        size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m1x2_tu(vd, rs1, new_vl, vl);
 }
 
-vint8m2x2_t test_vlseg2e8ff_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m2x2_t test_vlseg2e8ff_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1,
+                                        size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m2x2_tu(vd, rs1, new_vl, vl);
 }
 
-vint8m4x2_t test_vlseg2e8ff_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m4x2_t test_vlseg2e8ff_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1,
+                                        size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m4x2_tu(vd, rs1, new_vl, vl);
 }
 
-vuint8mf8x2_t test_vlseg2e8ff_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x2_t test_vlseg2e8ff_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf8x2_tu(vd, rs1, new_vl, vl);
 }
 
-vuint8mf4x2_t test_vlseg2e8ff_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x2_t test_vlseg2e8ff_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf4x2_tu(vd, rs1, new_vl, vl);
 }
 
-vuint8mf2x2_t test_vlseg2e8ff_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x2_t test_vlseg2e8ff_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf2x2_tu(vd, rs1, new_vl, vl);
 }
 
-vuint8m1x2_t test_vlseg2e8ff_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x2_t test_vlseg2e8ff_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1,
+                                         size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m1x2_tu(vd, rs1, new_vl, vl);
 }
 
-vuint8m2x2_t test_vlseg2e8ff_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m2x2_t test_vlseg2e8ff_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1,
+                                         size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m2x2_tu(vd, rs1, new_vl, vl);
 }
 
-vuint8m4x2_t test_vlseg2e8ff_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m4x2_t test_vlseg2e8ff_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1,
+                                         size_t *new_vl, size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m4x2_tu(vd, rs1, new_vl, vl);
 }
 
-vint8mf8x2_t test_vlseg2e8ff_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x2_t test_vlseg2e8ff_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd,
+                                           const int8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e8ff_v_i8mf8x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf4x2_t test_vlseg2e8ff_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x2_t test_vlseg2e8ff_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd,
+                                           const int8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e8ff_v_i8mf4x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf2x2_t test_vlseg2e8ff_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x2_t test_vlseg2e8ff_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd,
+                                           const int8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e8ff_v_i8mf2x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m1x2_t test_vlseg2e8ff_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x2_t test_vlseg2e8ff_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd,
+                                         const int8_t *rs1, size_t *new_vl,
+                                         size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m1x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m2x2_t test_vlseg2e8ff_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m2x2_t test_vlseg2e8ff_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd,
+                                         const int8_t *rs1, size_t *new_vl,
+                                         size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m2x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m4x2_t test_vlseg2e8ff_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m4x2_t test_vlseg2e8ff_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd,
+                                         const int8_t *rs1, size_t *new_vl,
+                                         size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m4x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf8x2_t test_vlseg2e8ff_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x2_t test_vlseg2e8ff_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd,
+                                            const uint8_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf8x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf4x2_t test_vlseg2e8ff_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x2_t test_vlseg2e8ff_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd,
+                                            const uint8_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf4x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf2x2_t test_vlseg2e8ff_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x2_t test_vlseg2e8ff_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd,
+                                            const uint8_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf2x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m1x2_t test_vlseg2e8ff_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x2_t test_vlseg2e8ff_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd,
+                                          const uint8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m1x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m2x2_t test_vlseg2e8ff_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m2x2_t test_vlseg2e8ff_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd,
+                                          const uint8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m2x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m4x2_t test_vlseg2e8ff_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m4x2_t test_vlseg2e8ff_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd,
+                                          const uint8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m4x2_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf8x2_t test_vlseg2e8ff_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x2_t test_vlseg2e8ff_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd,
+                                            const int8_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e8ff_v_i8mf8x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf4x2_t test_vlseg2e8ff_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x2_t test_vlseg2e8ff_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd,
+                                            const int8_t *rs1, size_t *new_vl,
+                                            size_t vl) {
  
 return __riscv_vlseg2e8ff_v_i8mf4x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf2x2_t test_vlseg2e8ff_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x2_t test_vlseg2e8ff_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd,
+                                            const int8_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg2e8ff_v_i8mf2x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m1x2_t test_vlseg2e8ff_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x2_t test_vlseg2e8ff_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd,
+                                          const int8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m1x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m2x2_t test_vlseg2e8ff_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m2x2_t test_vlseg2e8ff_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd,
+                                          const int8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m2x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m4x2_t test_vlseg2e8ff_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m4x2_t test_vlseg2e8ff_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd,
+                                          const int8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m4x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf8x2_t test_vlseg2e8ff_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x2_t test_vlseg2e8ff_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd,
+                                             const uint8_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf8x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf4x2_t test_vlseg2e8ff_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x2_t test_vlseg2e8ff_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd,
+                                             const uint8_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf4x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf2x2_t test_vlseg2e8ff_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x2_t test_vlseg2e8ff_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd,
+                                             const uint8_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf2x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m1x2_t test_vlseg2e8ff_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x2_t test_vlseg2e8ff_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd,
+                                           const uint8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m1x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m2x2_t test_vlseg2e8ff_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m2x2_t test_vlseg2e8ff_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd,
+                                           const uint8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m2x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m4x2_t test_vlseg2e8ff_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m4x2_t test_vlseg2e8ff_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd,
+                                           const uint8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m4x2_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf8x2_t test_vlseg2e8ff_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x2_t test_vlseg2e8ff_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd,
+                                          const int8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg2e8ff_v_i8mf8x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf4x2_t test_vlseg2e8ff_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x2_t test_vlseg2e8ff_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd,
+                                          const int8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg2e8ff_v_i8mf4x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf2x2_t test_vlseg2e8ff_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x2_t test_vlseg2e8ff_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd,
+                                          const int8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg2e8ff_v_i8mf2x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m1x2_t test_vlseg2e8ff_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x2_t test_vlseg2e8ff_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd,
+                                        const int8_t *rs1, size_t *new_vl,
+                                        size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m1x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m2x2_t test_vlseg2e8ff_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m2x2_t test_vlseg2e8ff_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd,
+                                        const int8_t *rs1, size_t *new_vl,
+                                        size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m2x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m4x2_t test_vlseg2e8ff_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m4x2_t test_vlseg2e8ff_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd,
+                                        const int8_t *rs1, size_t *new_vl,
+                                        size_t vl) {
   return __riscv_vlseg2e8ff_v_i8m4x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf8x2_t test_vlseg2e8ff_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x2_t test_vlseg2e8ff_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd,
+                                           const uint8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf8x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf4x2_t test_vlseg2e8ff_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x2_t test_vlseg2e8ff_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd,
+                                           const uint8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf4x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf2x2_t test_vlseg2e8ff_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x2_t test_vlseg2e8ff_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd,
+                                           const uint8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg2e8ff_v_u8mf2x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m1x2_t test_vlseg2e8ff_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x2_t test_vlseg2e8ff_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd,
+                                         const uint8_t *rs1, size_t *new_vl,
+                                         size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m1x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m2x2_t test_vlseg2e8ff_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m2x2_t test_vlseg2e8ff_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd,
+                                         const uint8_t *rs1, size_t *new_vl,
+                                         size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m2x2_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m4x2_t test_vlseg2e8ff_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m4x2_t test_vlseg2e8ff_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd,
+                                         const uint8_t *rs1, size_t *new_vl,
+                                         size_t vl) {
   return __riscv_vlseg2e8ff_v_u8m4x2_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16.c
index b30bf5e72..646cd3b4d 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16.c
@@ -6,194 +6,242 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4x3_t test_vlseg3e16_v_f16mf4x3_tu(vfloat16mf4x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x3_t test_vlseg3e16_v_f16mf4x3_tu(vfloat16mf4x3_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16mf4x3_tu(vd, rs1, vl);
 }
 
-vfloat16mf2x3_t test_vlseg3e16_v_f16mf2x3_tu(vfloat16mf2x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x3_t test_vlseg3e16_v_f16mf2x3_tu(vfloat16mf2x3_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16mf2x3_tu(vd, rs1, vl);
 }
 
-vfloat16m1x3_t test_vlseg3e16_v_f16m1x3_tu(vfloat16m1x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x3_t test_vlseg3e16_v_f16m1x3_tu(vfloat16m1x3_t vd,
+                                           const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16m1x3_tu(vd, rs1, vl);
 }
 
-vfloat16m2x3_t test_vlseg3e16_v_f16m2x3_tu(vfloat16m2x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m2x3_t test_vlseg3e16_v_f16m2x3_tu(vfloat16m2x3_t vd,
+                                           const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16m2x3_tu(vd, rs1, vl);
 }
 
-vint16mf4x3_t test_vlseg3e16_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x3_t test_vlseg3e16_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1,
+                                           size_t vl) {
   return __riscv_vlseg3e16_v_i16mf4x3_tu(vd, rs1, vl);
 }
 
-vint16mf2x3_t test_vlseg3e16_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x3_t test_vlseg3e16_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1,
+                                           size_t vl) {
   return __riscv_vlseg3e16_v_i16mf2x3_tu(vd, rs1, vl);
 }
 
-vint16m1x3_t test_vlseg3e16_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x3_t test_vlseg3e16_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg3e16_v_i16m1x3_tu(vd, rs1, vl);
 }
 
-vint16m2x3_t test_vlseg3e16_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, size_t vl) {
+vint16m2x3_t test_vlseg3e16_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg3e16_v_i16m2x3_tu(vd, rs1, vl);
 }
 
-vuint16mf4x3_t test_vlseg3e16_v_u16mf4x3_tu(vuint16mf4x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x3_t test_vlseg3e16_v_u16mf4x3_tu(vuint16mf4x3_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16mf4x3_tu(vd, rs1, vl);
 }
 
-vuint16mf2x3_t test_vlseg3e16_v_u16mf2x3_tu(vuint16mf2x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x3_t test_vlseg3e16_v_u16mf2x3_tu(vuint16mf2x3_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16mf2x3_tu(vd, rs1, vl);
 }
 
-vuint16m1x3_t test_vlseg3e16_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x3_t test_vlseg3e16_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg3e16_v_u16m1x3_tu(vd, rs1, vl);
 }
 
-vuint16m2x3_t test_vlseg3e16_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m2x3_t test_vlseg3e16_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg3e16_v_u16m2x3_tu(vd, rs1, vl);
 }
 
-vfloat16mf4x3_t test_vlseg3e16_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x3_t test_vlseg3e16_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd,
+                                              const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16mf4x3_tum(vm, vd, rs1, vl);
 }
 
-vfloat16mf2x3_t test_vlseg3e16_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x3_t test_vlseg3e16_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd,
+                                              const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16mf2x3_tum(vm, vd, rs1, vl);
 }
 
-vfloat16m1x3_t test_vlseg3e16_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x3_t test_vlseg3e16_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd,
+                                            const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16m1x3_tum(vm, vd, rs1, vl);
 }
 
-vfloat16m2x3_t test_vlseg3e16_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m2x3_t test_vlseg3e16_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd,
+                                            const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16m2x3_tum(vm, vd, rs1, vl);
 }
 
-vint16mf4x3_t test_vlseg3e16_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x3_t test_vlseg3e16_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd,
+                                            const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16mf4x3_tum(vm, vd, rs1, vl);
 }
 
-vint16mf2x3_t test_vlseg3e16_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x3_t test_vlseg3e16_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd,
+                                            const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16mf2x3_tum(vm, vd, rs1, vl);
 }
 
-vint16m1x3_t test_vlseg3e16_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x3_t test_vlseg3e16_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd,
+                                          const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16m1x3_tum(vm, vd, rs1, vl);
 }
 
-vint16m2x3_t test_vlseg3e16_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, size_t vl) {
+vint16m2x3_t test_vlseg3e16_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd,
+                                          const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16m2x3_tum(vm, vd, rs1, vl);
 }
 
-vuint16mf4x3_t test_vlseg3e16_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x3_t test_vlseg3e16_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd,
+                                             const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16mf4x3_tum(vm, vd, rs1, vl);
 }
 
-vuint16mf2x3_t test_vlseg3e16_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x3_t test_vlseg3e16_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd,
+                                             const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16mf2x3_tum(vm, vd, rs1, vl);
 }
 
-vuint16m1x3_t test_vlseg3e16_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x3_t test_vlseg3e16_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd,
+                                           const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16m1x3_tum(vm, vd, rs1, vl);
 }
 
-vuint16m2x3_t test_vlseg3e16_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m2x3_t test_vlseg3e16_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd,
+                                           const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16m2x3_tum(vm, vd, rs1, vl);
 }
 
-vfloat16mf4x3_t test_vlseg3e16_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x3_t test_vlseg3e16_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd,
+                                               const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16mf4x3_tumu(vm, vd, rs1, vl);
 }
 
-vfloat16mf2x3_t test_vlseg3e16_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x3_t test_vlseg3e16_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd,
+                                               const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16mf2x3_tumu(vm, vd, rs1, vl);
 }
 
-vfloat16m1x3_t test_vlseg3e16_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x3_t test_vlseg3e16_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16m1x3_tumu(vm, vd, rs1, vl);
 }
 
-vfloat16m2x3_t test_vlseg3e16_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m2x3_t test_vlseg3e16_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16m2x3_tumu(vm, vd, rs1, vl);
 }
 
-vint16mf4x3_t test_vlseg3e16_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x3_t test_vlseg3e16_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd,
+                                             const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16mf4x3_tumu(vm, vd, rs1, vl);
 }
 
-vint16mf2x3_t test_vlseg3e16_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x3_t test_vlseg3e16_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd,
+                                             const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16mf2x3_tumu(vm, vd, rs1, vl);
 }
 
-vint16m1x3_t test_vlseg3e16_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x3_t test_vlseg3e16_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16m1x3_tumu(vm, vd, rs1, vl);
 }
 
-vint16m2x3_t test_vlseg3e16_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, size_t vl) {
+vint16m2x3_t test_vlseg3e16_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16m2x3_tumu(vm, vd, rs1, vl);
 }
 
-vuint16mf4x3_t test_vlseg3e16_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x3_t test_vlseg3e16_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd,
+                                              const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16mf4x3_tumu(vm, vd, rs1, vl);
 }
 
-vuint16mf2x3_t test_vlseg3e16_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x3_t test_vlseg3e16_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd,
+                                              const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16mf2x3_tumu(vm, vd, rs1, vl);
 }
 
-vuint16m1x3_t test_vlseg3e16_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x3_t test_vlseg3e16_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16m1x3_tumu(vm, vd, rs1, vl);
 }
 
-vuint16m2x3_t test_vlseg3e16_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m2x3_t test_vlseg3e16_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16m2x3_tumu(vm, vd, rs1, vl);
 }
 
-vfloat16mf4x3_t test_vlseg3e16_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x3_t test_vlseg3e16_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16mf4x3_mu(vm, vd, rs1, vl);
 }
 
-vfloat16mf2x3_t test_vlseg3e16_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x3_t test_vlseg3e16_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16mf2x3_mu(vm, vd, rs1, vl);
 }
 
-vfloat16m1x3_t test_vlseg3e16_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x3_t test_vlseg3e16_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd,
+                                           const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16m1x3_mu(vm, vd, rs1, vl);
 }
 
-vfloat16m2x3_t test_vlseg3e16_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m2x3_t test_vlseg3e16_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd,
+                                           const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_f16m2x3_mu(vm, vd, rs1, vl);
 }
 
-vint16mf4x3_t test_vlseg3e16_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x3_t test_vlseg3e16_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16mf4x3_mu(vm, vd, rs1, vl);
 }
 
-vint16mf2x3_t test_vlseg3e16_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x3_t test_vlseg3e16_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16mf2x3_mu(vm, vd, rs1, vl);
 }
 
-vint16m1x3_t test_vlseg3e16_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x3_t test_vlseg3e16_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd,
+                                         const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16m1x3_mu(vm, vd, rs1, vl);
 }
 
-vint16m2x3_t test_vlseg3e16_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, size_t vl) {
+vint16m2x3_t test_vlseg3e16_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd,
+                                         const int16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_i16m2x3_mu(vm, vd, rs1, vl);
 }
 
-vuint16mf4x3_t test_vlseg3e16_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x3_t test_vlseg3e16_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16mf4x3_mu(vm, vd, rs1, vl);
 }
 
-vuint16mf2x3_t test_vlseg3e16_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x3_t test_vlseg3e16_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16mf2x3_mu(vm, vd, rs1, vl);
 }
 
-vuint16m1x3_t test_vlseg3e16_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x3_t test_vlseg3e16_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd,
+                                          const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16m1x3_mu(vm, vd, rs1, vl);
 }
 
-vuint16m2x3_t test_vlseg3e16_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m2x3_t test_vlseg3e16_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd,
+                                          const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg3e16_v_u16m2x3_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16ff.c
index 574ecfc19..42fcaf2b5 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16ff.c
@@ -6,194 +6,292 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4x3_t test_vlseg3e16ff_v_f16mf4x3_tu(vfloat16mf4x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x3_t test_vlseg3e16ff_v_f16mf4x3_tu(vfloat16mf4x3_t vd,
+                                               const _Float16 *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_f16mf4x3_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat16mf2x3_t test_vlseg3e16ff_v_f16mf2x3_tu(vfloat16mf2x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x3_t test_vlseg3e16ff_v_f16mf2x3_tu(vfloat16mf2x3_t vd,
+                                               const _Float16 *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_f16mf2x3_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat16m1x3_t test_vlseg3e16ff_v_f16m1x3_tu(vfloat16m1x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x3_t test_vlseg3e16ff_v_f16m1x3_tu(vfloat16m1x3_t vd,
+                                             const _Float16 *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_f16m1x3_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat16m2x3_t test_vlseg3e16ff_v_f16m2x3_tu(vfloat16m2x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m2x3_t test_vlseg3e16ff_v_f16m2x3_tu(vfloat16m2x3_t vd,
+                                             const _Float16 *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_f16m2x3_tu(vd, rs1, new_vl, vl);
 }
 
-vint16mf4x3_t test_vlseg3e16ff_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x3_t test_vlseg3e16ff_v_i16mf4x3_tu(vint16mf4x3_t vd,
+                                             const int16_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg3e16ff_v_i16mf4x3_tu(vd, rs1, new_vl, vl);
 }
 
-vint16mf2x3_t test_vlseg3e16ff_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x3_t test_vlseg3e16ff_v_i16mf2x3_tu(vint16mf2x3_t vd,
+                                             const int16_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg3e16ff_v_i16mf2x3_tu(vd, rs1, new_vl, vl);
 }
 
-vint16m1x3_t test_vlseg3e16ff_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x3_t test_vlseg3e16ff_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_i16m1x3_tu(vd, rs1, new_vl, vl);
 }
 
-vint16m2x3_t test_vlseg3e16ff_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m2x3_t test_vlseg3e16ff_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_i16m2x3_tu(vd, rs1, new_vl, vl);
 }
 
-vuint16mf4x3_t test_vlseg3e16ff_v_u16mf4x3_tu(vuint16mf4x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x3_t test_vlseg3e16ff_v_u16mf4x3_tu(vuint16mf4x3_t vd,
+                                              const uint16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_u16mf4x3_tu(vd, rs1, new_vl, vl);
 }
 
-vuint16mf2x3_t test_vlseg3e16ff_v_u16mf2x3_tu(vuint16mf2x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x3_t test_vlseg3e16ff_v_u16mf2x3_tu(vuint16mf2x3_t vd,
+                                              const uint16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_u16mf2x3_tu(vd, rs1, new_vl, vl);
 }
 
-vuint16m1x3_t test_vlseg3e16ff_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x3_t test_vlseg3e16ff_v_u16m1x3_tu(vuint16m1x3_t vd,
+                                            const uint16_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e16ff_v_u16m1x3_tu(vd, rs1, new_vl, vl);
 }
 
-vuint16m2x3_t test_vlseg3e16ff_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m2x3_t test_vlseg3e16ff_v_u16m2x3_tu(vuint16m2x3_t vd,
+                                            const uint16_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e16ff_v_u16m2x3_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat16mf4x3_t test_vlseg3e16ff_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x3_t test_vlseg3e16ff_v_f16mf4x3_tum(vbool64_t vm,
+                                                vfloat16mf4x3_t vd,
+                                                const _Float16 *rs1,
+                                                size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_f16mf4x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16mf2x3_t test_vlseg3e16ff_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x3_t test_vlseg3e16ff_v_f16mf2x3_tum(vbool32_t vm,
+                                                vfloat16mf2x3_t vd,
+                                                const _Float16 *rs1,
+                                                size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_f16mf2x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16m1x3_t test_vlseg3e16ff_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x3_t test_vlseg3e16ff_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd,
+                                              const _Float16 *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_f16m1x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16m2x3_t test_vlseg3e16ff_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m2x3_t test_vlseg3e16ff_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd,
+                                              const _Float16 *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_f16m2x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint16mf4x3_t test_vlseg3e16ff_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x3_t test_vlseg3e16ff_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd,
+                                              const int16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_i16mf4x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint16mf2x3_t test_vlseg3e16ff_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x3_t test_vlseg3e16ff_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd,
+                                              const int16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_i16mf2x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint16m1x3_t test_vlseg3e16ff_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x3_t test_vlseg3e16ff_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd,
+                                            const int16_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e16ff_v_i16m1x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint16m2x3_t test_vlseg3e16ff_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m2x3_t test_vlseg3e16ff_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd,
+                                            const int16_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e16ff_v_i16m2x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16mf4x3_t test_vlseg3e16ff_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x3_t test_vlseg3e16ff_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd,
+                                               const uint16_t *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_u16mf4x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16mf2x3_t test_vlseg3e16ff_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x3_t test_vlseg3e16ff_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd,
+                                               const uint16_t *rs1,
+                                               size_t *new_vl, size_t vl) {
  
 return __riscv_vlseg3e16ff_v_u16mf2x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16m1x3_t test_vlseg3e16ff_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x3_t test_vlseg3e16ff_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd,
+                                             const uint16_t *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_u16m1x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16m2x3_t test_vlseg3e16ff_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m2x3_t test_vlseg3e16ff_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd,
+                                             const uint16_t *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_u16m2x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16mf4x3_t test_vlseg3e16ff_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x3_t test_vlseg3e16ff_v_f16mf4x3_tumu(vbool64_t vm,
+                                                 vfloat16mf4x3_t vd,
+                                                 const _Float16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_f16mf4x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16mf2x3_t test_vlseg3e16ff_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x3_t test_vlseg3e16ff_v_f16mf2x3_tumu(vbool32_t vm,
+                                                 vfloat16mf2x3_t vd,
+                                                 const _Float16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e16ff_v_f16mf2x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16m1x3_t test_vlseg3e16ff_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
_Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1x3_t test_vlseg3e16ff_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_f16m1x3_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16m2x3_t test_vlseg3e16ff_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m2x3_t test_vlseg3e16ff_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_f16m2x3_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf4x3_t test_vlseg3e16ff_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x3_t test_vlseg3e16ff_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_i16mf4x3_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf2x3_t test_vlseg3e16ff_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x3_t test_vlseg3e16ff_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_i16mf2x3_tumu(vm, vd, rs1, new_vl, vl); } -vint16m1x3_t test_vlseg3e16ff_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x3_t test_vlseg3e16ff_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e16ff_v_i16m1x3_tumu(vm, vd, rs1, new_vl, vl); } -vint16m2x3_t test_vlseg3e16ff_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m2x3_t test_vlseg3e16ff_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e16ff_v_i16m2x3_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf4x3_t test_vlseg3e16ff_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x3_t test_vlseg3e16ff_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_u16mf4x3_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf2x3_t test_vlseg3e16ff_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x3_t test_vlseg3e16ff_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_u16mf2x3_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m1x3_t test_vlseg3e16ff_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x3_t test_vlseg3e16ff_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_u16m1x3_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m2x3_t test_vlseg3e16ff_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m2x3_t test_vlseg3e16ff_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_u16m2x3_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16mf4x3_t test_vlseg3e16ff_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4x3_t test_vlseg3e16ff_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_f16mf4x3_mu(vm, 
vd, rs1, new_vl, vl); } -vfloat16mf2x3_t test_vlseg3e16ff_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2x3_t test_vlseg3e16ff_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_f16mf2x3_mu(vm, vd, rs1, new_vl, vl); } -vfloat16m1x3_t test_vlseg3e16ff_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1x3_t test_vlseg3e16ff_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_f16m1x3_mu(vm, vd, rs1, new_vl, vl); } -vfloat16m2x3_t test_vlseg3e16ff_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m2x3_t test_vlseg3e16ff_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_f16m2x3_mu(vm, vd, rs1, new_vl, vl); } -vint16mf4x3_t test_vlseg3e16ff_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x3_t test_vlseg3e16ff_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e16ff_v_i16mf4x3_mu(vm, vd, rs1, new_vl, vl); } -vint16mf2x3_t test_vlseg3e16ff_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x3_t test_vlseg3e16ff_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e16ff_v_i16mf2x3_mu(vm, vd, rs1, new_vl, vl); } -vint16m1x3_t test_vlseg3e16ff_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x3_t test_vlseg3e16ff_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e16ff_v_i16m1x3_mu(vm, vd, rs1, new_vl, vl); } -vint16m2x3_t test_vlseg3e16ff_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m2x3_t test_vlseg3e16ff_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e16ff_v_i16m2x3_mu(vm, vd, rs1, new_vl, vl); } -vuint16mf4x3_t test_vlseg3e16ff_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x3_t test_vlseg3e16ff_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_u16mf4x3_mu(vm, vd, rs1, new_vl, vl); } -vuint16mf2x3_t test_vlseg3e16ff_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x3_t test_vlseg3e16ff_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_u16mf2x3_mu(vm, vd, rs1, new_vl, vl); } -vuint16m1x3_t test_vlseg3e16ff_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x3_t test_vlseg3e16ff_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e16ff_v_u16m1x3_mu(vm, vd, rs1, new_vl, vl); } -vuint16m2x3_t test_vlseg3e16ff_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m2x3_t test_vlseg3e16ff_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t 
vl) {
   return __riscv_vlseg3e16ff_v_u16m2x3_mu(vm, vd, rs1, new_vl, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32.c
index 4ec355892..a2592e6db 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32.c
@@ -6,146 +6,182 @@
 #include <riscv_vector.h>

-vfloat32mf2x3_t test_vlseg3e32_v_f32mf2x3_tu(vfloat32mf2x3_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x3_t test_vlseg3e32_v_f32mf2x3_tu(vfloat32mf2x3_t vd,
+                                             const float *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_f32mf2x3_tu(vd, rs1, vl);
 }

-vfloat32m1x3_t test_vlseg3e32_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, size_t vl) {
+vfloat32m1x3_t test_vlseg3e32_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1,
+                                           size_t vl) {
   return __riscv_vlseg3e32_v_f32m1x3_tu(vd, rs1, vl);
 }

-vfloat32m2x3_t test_vlseg3e32_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, size_t vl) {
+vfloat32m2x3_t test_vlseg3e32_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1,
+                                           size_t vl) {
   return __riscv_vlseg3e32_v_f32m2x3_tu(vd, rs1, vl);
 }

-vint32mf2x3_t test_vlseg3e32_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x3_t test_vlseg3e32_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1,
+                                           size_t vl) {
   return __riscv_vlseg3e32_v_i32mf2x3_tu(vd, rs1, vl);
 }

-vint32m1x3_t test_vlseg3e32_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x3_t test_vlseg3e32_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg3e32_v_i32m1x3_tu(vd, rs1, vl);
 }

-vint32m2x3_t test_vlseg3e32_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, size_t vl) {
+vint32m2x3_t test_vlseg3e32_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg3e32_v_i32m2x3_tu(vd, rs1, vl);
 }

-vuint32mf2x3_t test_vlseg3e32_v_u32mf2x3_tu(vuint32mf2x3_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x3_t test_vlseg3e32_v_u32mf2x3_tu(vuint32mf2x3_t vd,
+                                            const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_u32mf2x3_tu(vd, rs1, vl);
 }

-vuint32m1x3_t test_vlseg3e32_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x3_t test_vlseg3e32_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg3e32_v_u32m1x3_tu(vd, rs1, vl);
 }

-vuint32m2x3_t test_vlseg3e32_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m2x3_t test_vlseg3e32_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg3e32_v_u32m2x3_tu(vd, rs1, vl);
 }

-vfloat32mf2x3_t test_vlseg3e32_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x3_t test_vlseg3e32_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd,
+                                              const float *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_f32mf2x3_tum(vm, vd, rs1, vl);
 }

-vfloat32m1x3_t test_vlseg3e32_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, size_t vl) {
+vfloat32m1x3_t test_vlseg3e32_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd,
+                                            const float *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_f32m1x3_tum(vm, vd, rs1, vl);
 }

-vfloat32m2x3_t test_vlseg3e32_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, size_t vl) {
+vfloat32m2x3_t test_vlseg3e32_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd,
+                                            const float *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_f32m2x3_tum(vm, vd, rs1, vl);
 }

-vint32mf2x3_t test_vlseg3e32_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x3_t test_vlseg3e32_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg3e32_v_i32mf2x3_tum(vm, vd, rs1, vl); } -vint32m1x3_t test_vlseg3e32_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, size_t vl) { +vint32m1x3_t test_vlseg3e32_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg3e32_v_i32m1x3_tum(vm, vd, rs1, vl); } -vint32m2x3_t test_vlseg3e32_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, size_t vl) { +vint32m2x3_t test_vlseg3e32_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg3e32_v_i32m2x3_tum(vm, vd, rs1, vl); } -vuint32mf2x3_t test_vlseg3e32_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x3_t test_vlseg3e32_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg3e32_v_u32mf2x3_tum(vm, vd, rs1, vl); } -vuint32m1x3_t test_vlseg3e32_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x3_t test_vlseg3e32_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg3e32_v_u32m1x3_tum(vm, vd, rs1, vl); } -vuint32m2x3_t test_vlseg3e32_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, size_t vl) { +vuint32m2x3_t test_vlseg3e32_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg3e32_v_u32m2x3_tum(vm, vd, rs1, vl); } -vfloat32mf2x3_t test_vlseg3e32_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, size_t vl) { +vfloat32mf2x3_t test_vlseg3e32_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg3e32_v_f32mf2x3_tumu(vm, vd, rs1, vl); } -vfloat32m1x3_t test_vlseg3e32_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, size_t vl) { +vfloat32m1x3_t test_vlseg3e32_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg3e32_v_f32m1x3_tumu(vm, vd, rs1, vl); } -vfloat32m2x3_t test_vlseg3e32_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, size_t vl) { +vfloat32m2x3_t test_vlseg3e32_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg3e32_v_f32m2x3_tumu(vm, vd, rs1, vl); } -vint32mf2x3_t test_vlseg3e32_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x3_t test_vlseg3e32_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg3e32_v_i32mf2x3_tumu(vm, vd, rs1, vl); } -vint32m1x3_t test_vlseg3e32_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, size_t vl) { +vint32m1x3_t test_vlseg3e32_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg3e32_v_i32m1x3_tumu(vm, vd, rs1, vl); } -vint32m2x3_t test_vlseg3e32_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, size_t vl) { +vint32m2x3_t test_vlseg3e32_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg3e32_v_i32m2x3_tumu(vm, vd, rs1, vl); } -vuint32mf2x3_t test_vlseg3e32_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x3_t test_vlseg3e32_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg3e32_v_u32mf2x3_tumu(vm, vd, rs1, vl); 
 }

-vuint32m1x3_t test_vlseg3e32_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x3_t test_vlseg3e32_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd,
+                                            const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_u32m1x3_tumu(vm, vd, rs1, vl);
 }

-vuint32m2x3_t test_vlseg3e32_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m2x3_t test_vlseg3e32_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd,
+                                            const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_u32m2x3_tumu(vm, vd, rs1, vl);
 }

-vfloat32mf2x3_t test_vlseg3e32_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x3_t test_vlseg3e32_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd,
+                                             const float *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_f32mf2x3_mu(vm, vd, rs1, vl);
 }

-vfloat32m1x3_t test_vlseg3e32_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, size_t vl) {
+vfloat32m1x3_t test_vlseg3e32_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd,
+                                           const float *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_f32m1x3_mu(vm, vd, rs1, vl);
 }

-vfloat32m2x3_t test_vlseg3e32_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, size_t vl) {
+vfloat32m2x3_t test_vlseg3e32_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd,
+                                           const float *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_f32m2x3_mu(vm, vd, rs1, vl);
 }

-vint32mf2x3_t test_vlseg3e32_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x3_t test_vlseg3e32_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd,
+                                           const int32_t *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_i32mf2x3_mu(vm, vd, rs1, vl);
 }

-vint32m1x3_t test_vlseg3e32_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x3_t test_vlseg3e32_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd,
+                                         const int32_t *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_i32m1x3_mu(vm, vd, rs1, vl);
 }

-vint32m2x3_t test_vlseg3e32_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, size_t vl) {
+vint32m2x3_t test_vlseg3e32_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd,
+                                         const int32_t *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_i32m2x3_mu(vm, vd, rs1, vl);
 }

-vuint32mf2x3_t test_vlseg3e32_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x3_t test_vlseg3e32_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd,
+                                            const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_u32mf2x3_mu(vm, vd, rs1, vl);
 }

-vuint32m1x3_t test_vlseg3e32_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x3_t test_vlseg3e32_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd,
+                                          const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_u32m1x3_mu(vm, vd, rs1, vl);
 }

-vuint32m2x3_t test_vlseg3e32_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m2x3_t test_vlseg3e32_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd,
+                                          const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg3e32_v_u32m2x3_mu(vm, vd, rs1, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32ff.c
index 05cbcb5ee..27f06ccc0 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32ff.c
@@ -6,146 +6,218 @@
 #include <riscv_vector.h>

-vfloat32mf2x3_t test_vlseg3e32ff_v_f32mf2x3_tu(vfloat32mf2x3_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32mf2x3_t
test_vlseg3e32ff_v_f32mf2x3_tu(vfloat32mf2x3_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_f32mf2x3_tu(vd, rs1, new_vl, vl); } -vfloat32m1x3_t test_vlseg3e32ff_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x3_t test_vlseg3e32ff_v_f32m1x3_tu(vfloat32m1x3_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_f32m1x3_tu(vd, rs1, new_vl, vl); } -vfloat32m2x3_t test_vlseg3e32ff_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2x3_t test_vlseg3e32ff_v_f32m2x3_tu(vfloat32m2x3_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_f32m2x3_tu(vd, rs1, new_vl, vl); } -vint32mf2x3_t test_vlseg3e32ff_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x3_t test_vlseg3e32ff_v_i32mf2x3_tu(vint32mf2x3_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_i32mf2x3_tu(vd, rs1, new_vl, vl); } -vint32m1x3_t test_vlseg3e32ff_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x3_t test_vlseg3e32ff_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_i32m1x3_tu(vd, rs1, new_vl, vl); } -vint32m2x3_t test_vlseg3e32ff_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2x3_t test_vlseg3e32ff_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_i32m2x3_tu(vd, rs1, new_vl, vl); } -vuint32mf2x3_t test_vlseg3e32ff_v_u32mf2x3_tu(vuint32mf2x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x3_t test_vlseg3e32ff_v_u32mf2x3_tu(vuint32mf2x3_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_u32mf2x3_tu(vd, rs1, new_vl, vl); } -vuint32m1x3_t test_vlseg3e32ff_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x3_t test_vlseg3e32ff_v_u32m1x3_tu(vuint32m1x3_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_u32m1x3_tu(vd, rs1, new_vl, vl); } -vuint32m2x3_t test_vlseg3e32ff_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m2x3_t test_vlseg3e32ff_v_u32m2x3_tu(vuint32m2x3_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_u32m2x3_tu(vd, rs1, new_vl, vl); } -vfloat32mf2x3_t test_vlseg3e32ff_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x3_t test_vlseg3e32ff_v_f32mf2x3_tum(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_f32mf2x3_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m1x3_t test_vlseg3e32ff_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x3_t test_vlseg3e32ff_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_f32m1x3_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m2x3_t test_vlseg3e32ff_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2x3_t test_vlseg3e32ff_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_f32m2x3_tum(vm, vd, rs1, new_vl, vl); } -vint32mf2x3_t test_vlseg3e32ff_v_i32mf2x3_tum(vbool64_t 
vm, vint32mf2x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x3_t test_vlseg3e32ff_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_i32mf2x3_tum(vm, vd, rs1, new_vl, vl); } -vint32m1x3_t test_vlseg3e32ff_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x3_t test_vlseg3e32ff_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_i32m1x3_tum(vm, vd, rs1, new_vl, vl); } -vint32m2x3_t test_vlseg3e32ff_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2x3_t test_vlseg3e32ff_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_i32m2x3_tum(vm, vd, rs1, new_vl, vl); } -vuint32mf2x3_t test_vlseg3e32ff_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x3_t test_vlseg3e32ff_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_u32mf2x3_tum(vm, vd, rs1, new_vl, vl); } -vuint32m1x3_t test_vlseg3e32ff_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x3_t test_vlseg3e32ff_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_u32m1x3_tum(vm, vd, rs1, new_vl, vl); } -vuint32m2x3_t test_vlseg3e32ff_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m2x3_t test_vlseg3e32ff_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_u32m2x3_tum(vm, vd, rs1, new_vl, vl); } -vfloat32mf2x3_t test_vlseg3e32ff_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x3_t test_vlseg3e32ff_v_f32mf2x3_tumu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_f32mf2x3_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m1x3_t test_vlseg3e32ff_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x3_t test_vlseg3e32ff_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_f32m1x3_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m2x3_t test_vlseg3e32ff_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2x3_t test_vlseg3e32ff_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_f32m2x3_tumu(vm, vd, rs1, new_vl, vl); } -vint32mf2x3_t test_vlseg3e32ff_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x3_t test_vlseg3e32ff_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_i32mf2x3_tumu(vm, vd, rs1, new_vl, vl); } -vint32m1x3_t test_vlseg3e32ff_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x3_t test_vlseg3e32ff_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_i32m1x3_tumu(vm, vd, 
rs1, new_vl, vl); } -vint32m2x3_t test_vlseg3e32ff_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2x3_t test_vlseg3e32ff_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_i32m2x3_tumu(vm, vd, rs1, new_vl, vl); } -vuint32mf2x3_t test_vlseg3e32ff_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x3_t test_vlseg3e32ff_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_u32mf2x3_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m1x3_t test_vlseg3e32ff_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x3_t test_vlseg3e32ff_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_u32m1x3_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m2x3_t test_vlseg3e32ff_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m2x3_t test_vlseg3e32ff_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e32ff_v_u32m2x3_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32mf2x3_t test_vlseg3e32ff_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x3_t test_vlseg3e32ff_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_f32mf2x3_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m1x3_t test_vlseg3e32ff_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x3_t test_vlseg3e32ff_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_f32m1x3_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m2x3_t test_vlseg3e32ff_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m2x3_t test_vlseg3e32ff_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_f32m2x3_mu(vm, vd, rs1, new_vl, vl); } -vint32mf2x3_t test_vlseg3e32ff_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x3_t test_vlseg3e32ff_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_i32mf2x3_mu(vm, vd, rs1, new_vl, vl); } -vint32m1x3_t test_vlseg3e32ff_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x3_t test_vlseg3e32ff_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_i32m1x3_mu(vm, vd, rs1, new_vl, vl); } -vint32m2x3_t test_vlseg3e32ff_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m2x3_t test_vlseg3e32ff_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e32ff_v_i32m2x3_mu(vm, vd, rs1, new_vl, vl); } -vuint32mf2x3_t test_vlseg3e32ff_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x3_t test_vlseg3e32ff_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t 
vl) {
   return __riscv_vlseg3e32ff_v_u32mf2x3_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint32m1x3_t test_vlseg3e32ff_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m1x3_t test_vlseg3e32ff_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd,
+                                            const uint32_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e32ff_v_u32m1x3_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint32m2x3_t test_vlseg3e32ff_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m2x3_t test_vlseg3e32ff_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd,
+                                            const uint32_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e32ff_v_u32m2x3_mu(vm, vd, rs1, new_vl, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64.c
index 84192a9e6..1c2894a1c 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64.c
@@ -6,98 +6,122 @@
 #include <riscv_vector.h>

-vfloat64m1x3_t test_vlseg3e64_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1, size_t vl) {
+vfloat64m1x3_t test_vlseg3e64_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1,
+                                           size_t vl) {
   return __riscv_vlseg3e64_v_f64m1x3_tu(vd, rs1, vl);
 }

-vfloat64m2x3_t test_vlseg3e64_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1, size_t vl) {
+vfloat64m2x3_t test_vlseg3e64_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1,
+                                           size_t vl) {
   return __riscv_vlseg3e64_v_f64m2x3_tu(vd, rs1, vl);
 }

-vint64m1x3_t test_vlseg3e64_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, size_t vl) {
+vint64m1x3_t test_vlseg3e64_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg3e64_v_i64m1x3_tu(vd, rs1, vl);
 }

-vint64m2x3_t test_vlseg3e64_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, size_t vl) {
+vint64m2x3_t test_vlseg3e64_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg3e64_v_i64m2x3_tu(vd, rs1, vl);
 }

-vuint64m1x3_t test_vlseg3e64_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1, size_t vl) {
+vuint64m1x3_t test_vlseg3e64_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg3e64_v_u64m1x3_tu(vd, rs1, vl);
 }

-vuint64m2x3_t test_vlseg3e64_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1, size_t vl) {
+vuint64m2x3_t test_vlseg3e64_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg3e64_v_u64m2x3_tu(vd, rs1, vl);
 }

-vfloat64m1x3_t test_vlseg3e64_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, size_t vl) {
+vfloat64m1x3_t test_vlseg3e64_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd,
+                                            const double *rs1, size_t vl) {
   return __riscv_vlseg3e64_v_f64m1x3_tum(vm, vd, rs1, vl);
 }

-vfloat64m2x3_t test_vlseg3e64_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, size_t vl) {
+vfloat64m2x3_t test_vlseg3e64_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd,
+                                            const double *rs1, size_t vl) {
   return __riscv_vlseg3e64_v_f64m2x3_tum(vm, vd, rs1, vl);
 }

-vint64m1x3_t test_vlseg3e64_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, size_t vl) {
+vint64m1x3_t test_vlseg3e64_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd,
+                                          const int64_t *rs1, size_t vl) {
   return __riscv_vlseg3e64_v_i64m1x3_tum(vm, vd, rs1, vl);
 }

-vint64m2x3_t test_vlseg3e64_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, size_t vl) {
+vint64m2x3_t test_vlseg3e64_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd,
+                                          const int64_t *rs1, size_t vl) {
return __riscv_vlseg3e64_v_i64m2x3_tum(vm, vd, rs1, vl); } -vuint64m1x3_t test_vlseg3e64_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x3_t test_vlseg3e64_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg3e64_v_u64m1x3_tum(vm, vd, rs1, vl); } -vuint64m2x3_t test_vlseg3e64_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2x3_t test_vlseg3e64_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg3e64_v_u64m2x3_tum(vm, vd, rs1, vl); } -vfloat64m1x3_t test_vlseg3e64_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, size_t vl) { +vfloat64m1x3_t test_vlseg3e64_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg3e64_v_f64m1x3_tumu(vm, vd, rs1, vl); } -vfloat64m2x3_t test_vlseg3e64_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, size_t vl) { +vfloat64m2x3_t test_vlseg3e64_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg3e64_v_f64m2x3_tumu(vm, vd, rs1, vl); } -vint64m1x3_t test_vlseg3e64_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, size_t vl) { +vint64m1x3_t test_vlseg3e64_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg3e64_v_i64m1x3_tumu(vm, vd, rs1, vl); } -vint64m2x3_t test_vlseg3e64_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, size_t vl) { +vint64m2x3_t test_vlseg3e64_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg3e64_v_i64m2x3_tumu(vm, vd, rs1, vl); } -vuint64m1x3_t test_vlseg3e64_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x3_t test_vlseg3e64_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg3e64_v_u64m1x3_tumu(vm, vd, rs1, vl); } -vuint64m2x3_t test_vlseg3e64_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2x3_t test_vlseg3e64_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg3e64_v_u64m2x3_tumu(vm, vd, rs1, vl); } -vfloat64m1x3_t test_vlseg3e64_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, size_t vl) { +vfloat64m1x3_t test_vlseg3e64_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg3e64_v_f64m1x3_mu(vm, vd, rs1, vl); } -vfloat64m2x3_t test_vlseg3e64_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, size_t vl) { +vfloat64m2x3_t test_vlseg3e64_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg3e64_v_f64m2x3_mu(vm, vd, rs1, vl); } -vint64m1x3_t test_vlseg3e64_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, size_t vl) { +vint64m1x3_t test_vlseg3e64_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg3e64_v_i64m1x3_mu(vm, vd, rs1, vl); } -vint64m2x3_t test_vlseg3e64_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, size_t vl) { +vint64m2x3_t test_vlseg3e64_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg3e64_v_i64m2x3_mu(vm, vd, rs1, vl); } -vuint64m1x3_t test_vlseg3e64_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x3_t 
test_vlseg3e64_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd,
+                                          const uint64_t *rs1, size_t vl) {
   return __riscv_vlseg3e64_v_u64m1x3_mu(vm, vd, rs1, vl);
 }

-vuint64m2x3_t test_vlseg3e64_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, size_t vl) {
+vuint64m2x3_t test_vlseg3e64_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd,
+                                          const uint64_t *rs1, size_t vl) {
   return __riscv_vlseg3e64_v_u64m2x3_mu(vm, vd, rs1, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64ff.c
index 21a681a53..a6643f8f5 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64ff.c
@@ -6,98 +6,144 @@
 #include <riscv_vector.h>

-vfloat64m1x3_t test_vlseg3e64ff_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m1x3_t test_vlseg3e64ff_v_f64m1x3_tu(vfloat64m1x3_t vd,
+                                             const double *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg3e64ff_v_f64m1x3_tu(vd, rs1, new_vl, vl);
 }

-vfloat64m2x3_t test_vlseg3e64ff_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m2x3_t test_vlseg3e64ff_v_f64m2x3_tu(vfloat64m2x3_t vd,
+                                             const double *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg3e64ff_v_f64m2x3_tu(vd, rs1, new_vl, vl);
 }

-vint64m1x3_t test_vlseg3e64ff_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x3_t test_vlseg3e64ff_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e64ff_v_i64m1x3_tu(vd, rs1, new_vl, vl);
 }

-vint64m2x3_t test_vlseg3e64ff_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m2x3_t test_vlseg3e64ff_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e64ff_v_i64m2x3_tu(vd, rs1, new_vl, vl);
 }

-vuint64m1x3_t test_vlseg3e64ff_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x3_t test_vlseg3e64ff_v_u64m1x3_tu(vuint64m1x3_t vd,
+                                            const uint64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e64ff_v_u64m1x3_tu(vd, rs1, new_vl, vl);
 }

-vuint64m2x3_t test_vlseg3e64ff_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m2x3_t test_vlseg3e64ff_v_u64m2x3_tu(vuint64m2x3_t vd,
+                                            const uint64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e64ff_v_u64m2x3_tu(vd, rs1, new_vl, vl);
 }

-vfloat64m1x3_t test_vlseg3e64ff_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m1x3_t test_vlseg3e64ff_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd,
+                                              const double *rs1, size_t *new_vl,
+                                              size_t vl) {
   return __riscv_vlseg3e64ff_v_f64m1x3_tum(vm, vd, rs1, new_vl, vl);
 }

-vfloat64m2x3_t test_vlseg3e64ff_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m2x3_t test_vlseg3e64ff_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd,
+                                              const double *rs1, size_t *new_vl,
+                                              size_t vl) {
   return __riscv_vlseg3e64ff_v_f64m2x3_tum(vm, vd, rs1, new_vl, vl);
 }

-vint64m1x3_t test_vlseg3e64ff_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x3_t test_vlseg3e64ff_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd,
+                                            const int64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e64ff_v_i64m1x3_tum(vm, vd, rs1, new_vl, vl);
 }

-vint64m2x3_t test_vlseg3e64ff_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd,
const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m2x3_t test_vlseg3e64ff_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e64ff_v_i64m2x3_tum(vm, vd, rs1, new_vl, vl); } -vuint64m1x3_t test_vlseg3e64ff_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x3_t test_vlseg3e64ff_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e64ff_v_u64m1x3_tum(vm, vd, rs1, new_vl, vl); } -vuint64m2x3_t test_vlseg3e64ff_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m2x3_t test_vlseg3e64ff_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e64ff_v_u64m2x3_tum(vm, vd, rs1, new_vl, vl); } -vfloat64m1x3_t test_vlseg3e64ff_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x3_t test_vlseg3e64ff_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e64ff_v_f64m1x3_tumu(vm, vd, rs1, new_vl, vl); } -vfloat64m2x3_t test_vlseg3e64ff_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m2x3_t test_vlseg3e64ff_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e64ff_v_f64m2x3_tumu(vm, vd, rs1, new_vl, vl); } -vint64m1x3_t test_vlseg3e64ff_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x3_t test_vlseg3e64ff_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e64ff_v_i64m1x3_tumu(vm, vd, rs1, new_vl, vl); } -vint64m2x3_t test_vlseg3e64ff_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m2x3_t test_vlseg3e64ff_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e64ff_v_i64m2x3_tumu(vm, vd, rs1, new_vl, vl); } -vuint64m1x3_t test_vlseg3e64ff_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x3_t test_vlseg3e64ff_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e64ff_v_u64m1x3_tumu(vm, vd, rs1, new_vl, vl); } -vuint64m2x3_t test_vlseg3e64ff_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m2x3_t test_vlseg3e64ff_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e64ff_v_u64m2x3_tumu(vm, vd, rs1, new_vl, vl); } -vfloat64m1x3_t test_vlseg3e64ff_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x3_t test_vlseg3e64ff_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e64ff_v_f64m1x3_mu(vm, vd, rs1, new_vl, vl); } -vfloat64m2x3_t test_vlseg3e64ff_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m2x3_t test_vlseg3e64ff_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e64ff_v_f64m2x3_mu(vm, vd, rs1, new_vl, vl); } -vint64m1x3_t 
test_vlseg3e64ff_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x3_t test_vlseg3e64ff_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd,
+                                           const int64_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg3e64ff_v_i64m1x3_mu(vm, vd, rs1, new_vl, vl);
 }

-vint64m2x3_t test_vlseg3e64ff_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m2x3_t test_vlseg3e64ff_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd,
+                                           const int64_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg3e64ff_v_i64m2x3_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint64m1x3_t test_vlseg3e64ff_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x3_t test_vlseg3e64ff_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd,
+                                            const uint64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e64ff_v_u64m1x3_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint64m2x3_t test_vlseg3e64ff_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m2x3_t test_vlseg3e64ff_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd,
+                                            const uint64_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e64ff_v_u64m2x3_mu(vm, vd, rs1, new_vl, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8.c
index bc38108c1..cd1a55c98 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8.c
@@ -5,162 +5,202 @@
 #include <riscv_vector.h>

-vint8mf8x3_t test_vlseg3e8_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x3_t test_vlseg3e8_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1,
+                                        size_t vl) {
   return __riscv_vlseg3e8_v_i8mf8x3_tu(vd, rs1, vl);
 }

-vint8mf4x3_t test_vlseg3e8_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x3_t test_vlseg3e8_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1,
+                                        size_t vl) {
   return __riscv_vlseg3e8_v_i8mf4x3_tu(vd, rs1, vl);
 }

-vint8mf2x3_t test_vlseg3e8_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x3_t test_vlseg3e8_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1,
+                                        size_t vl) {
   return __riscv_vlseg3e8_v_i8mf2x3_tu(vd, rs1, vl);
 }

-vint8m1x3_t test_vlseg3e8_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x3_t test_vlseg3e8_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1,
+                                      size_t vl) {
   return __riscv_vlseg3e8_v_i8m1x3_tu(vd, rs1, vl);
 }

-vint8m2x3_t test_vlseg3e8_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, size_t vl) {
+vint8m2x3_t test_vlseg3e8_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1,
+                                      size_t vl) {
   return __riscv_vlseg3e8_v_i8m2x3_tu(vd, rs1, vl);
 }

-vuint8mf8x3_t test_vlseg3e8_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x3_t test_vlseg3e8_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg3e8_v_u8mf8x3_tu(vd, rs1, vl);
 }

-vuint8mf4x3_t test_vlseg3e8_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x3_t test_vlseg3e8_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg3e8_v_u8mf4x3_tu(vd, rs1, vl);
 }

-vuint8mf2x3_t test_vlseg3e8_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x3_t test_vlseg3e8_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg3e8_v_u8mf2x3_tu(vd, rs1, vl);
 }

-vuint8m1x3_t test_vlseg3e8_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, size_t
vl) { +vuint8m1x3_t test_vlseg3e8_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg3e8_v_u8m1x3_tu(vd, rs1, vl); } -vuint8m2x3_t test_vlseg3e8_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, size_t vl) { +vuint8m2x3_t test_vlseg3e8_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg3e8_v_u8m2x3_tu(vd, rs1, vl); } -vint8mf8x3_t test_vlseg3e8_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x3_t test_vlseg3e8_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8mf8x3_tum(vm, vd, rs1, vl); } -vint8mf4x3_t test_vlseg3e8_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x3_t test_vlseg3e8_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8mf4x3_tum(vm, vd, rs1, vl); } -vint8mf2x3_t test_vlseg3e8_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x3_t test_vlseg3e8_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8mf2x3_tum(vm, vd, rs1, vl); } -vint8m1x3_t test_vlseg3e8_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, size_t vl) { +vint8m1x3_t test_vlseg3e8_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8m1x3_tum(vm, vd, rs1, vl); } -vint8m2x3_t test_vlseg3e8_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, size_t vl) { +vint8m2x3_t test_vlseg3e8_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8m2x3_tum(vm, vd, rs1, vl); } -vuint8mf8x3_t test_vlseg3e8_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x3_t test_vlseg3e8_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_u8mf8x3_tum(vm, vd, rs1, vl); } -vuint8mf4x3_t test_vlseg3e8_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x3_t test_vlseg3e8_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_u8mf4x3_tum(vm, vd, rs1, vl); } -vuint8mf2x3_t test_vlseg3e8_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x3_t test_vlseg3e8_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_u8mf2x3_tum(vm, vd, rs1, vl); } -vuint8m1x3_t test_vlseg3e8_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x3_t test_vlseg3e8_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_u8m1x3_tum(vm, vd, rs1, vl); } -vuint8m2x3_t test_vlseg3e8_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, size_t vl) { +vuint8m2x3_t test_vlseg3e8_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_u8m2x3_tum(vm, vd, rs1, vl); } -vint8mf8x3_t test_vlseg3e8_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x3_t test_vlseg3e8_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8mf8x3_tumu(vm, vd, rs1, vl); } -vint8mf4x3_t test_vlseg3e8_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x3_t test_vlseg3e8_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, 
size_t vl) { return __riscv_vlseg3e8_v_i8mf4x3_tumu(vm, vd, rs1, vl); } -vint8mf2x3_t test_vlseg3e8_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x3_t test_vlseg3e8_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8mf2x3_tumu(vm, vd, rs1, vl); } -vint8m1x3_t test_vlseg3e8_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, size_t vl) { +vint8m1x3_t test_vlseg3e8_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8m1x3_tumu(vm, vd, rs1, vl); } -vint8m2x3_t test_vlseg3e8_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, size_t vl) { +vint8m2x3_t test_vlseg3e8_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8m2x3_tumu(vm, vd, rs1, vl); } -vuint8mf8x3_t test_vlseg3e8_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x3_t test_vlseg3e8_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_u8mf8x3_tumu(vm, vd, rs1, vl); } -vuint8mf4x3_t test_vlseg3e8_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x3_t test_vlseg3e8_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_u8mf4x3_tumu(vm, vd, rs1, vl); } -vuint8mf2x3_t test_vlseg3e8_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x3_t test_vlseg3e8_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_u8mf2x3_tumu(vm, vd, rs1, vl); } -vuint8m1x3_t test_vlseg3e8_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x3_t test_vlseg3e8_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_u8m1x3_tumu(vm, vd, rs1, vl); } -vuint8m2x3_t test_vlseg3e8_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, size_t vl) { +vuint8m2x3_t test_vlseg3e8_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_u8m2x3_tumu(vm, vd, rs1, vl); } -vint8mf8x3_t test_vlseg3e8_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x3_t test_vlseg3e8_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8mf8x3_mu(vm, vd, rs1, vl); } -vint8mf4x3_t test_vlseg3e8_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x3_t test_vlseg3e8_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8mf4x3_mu(vm, vd, rs1, vl); } -vint8mf2x3_t test_vlseg3e8_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x3_t test_vlseg3e8_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8mf2x3_mu(vm, vd, rs1, vl); } -vint8m1x3_t test_vlseg3e8_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, size_t vl) { +vint8m1x3_t test_vlseg3e8_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8m1x3_mu(vm, vd, rs1, vl); } -vint8m2x3_t test_vlseg3e8_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, size_t vl) { +vint8m2x3_t test_vlseg3e8_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg3e8_v_i8m2x3_mu(vm, vd, 
rs1, vl);
 }

-vuint8mf8x3_t test_vlseg3e8_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x3_t test_vlseg3e8_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd,
+                                         const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg3e8_v_u8mf8x3_mu(vm, vd, rs1, vl);
 }

-vuint8mf4x3_t test_vlseg3e8_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x3_t test_vlseg3e8_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd,
+                                         const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg3e8_v_u8mf4x3_mu(vm, vd, rs1, vl);
 }

-vuint8mf2x3_t test_vlseg3e8_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x3_t test_vlseg3e8_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd,
+                                         const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg3e8_v_u8mf2x3_mu(vm, vd, rs1, vl);
 }

-vuint8m1x3_t test_vlseg3e8_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x3_t test_vlseg3e8_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd,
+                                       const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg3e8_v_u8m1x3_mu(vm, vd, rs1, vl);
 }

-vuint8m2x3_t test_vlseg3e8_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m2x3_t test_vlseg3e8_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd,
+                                       const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg3e8_v_u8m2x3_mu(vm, vd, rs1, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8ff.c
index fa952de63..2f50d75dc 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8ff.c
@@ -6,162 +6,232 @@
 #include <riscv_vector.h>

-vint8mf8x3_t test_vlseg3e8ff_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x3_t test_vlseg3e8ff_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1,
+                                          size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e8ff_v_i8mf8x3_tu(vd, rs1, new_vl, vl);
 }

-vint8mf4x3_t test_vlseg3e8ff_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x3_t test_vlseg3e8ff_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1,
+                                          size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e8ff_v_i8mf4x3_tu(vd, rs1, new_vl, vl);
 }

-vint8mf2x3_t test_vlseg3e8ff_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x3_t test_vlseg3e8ff_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1,
+                                          size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e8ff_v_i8mf2x3_tu(vd, rs1, new_vl, vl);
 }

-vint8m1x3_t test_vlseg3e8ff_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x3_t test_vlseg3e8ff_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1,
+                                        size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e8ff_v_i8m1x3_tu(vd, rs1, new_vl, vl);
 }

-vint8m2x3_t test_vlseg3e8ff_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m2x3_t test_vlseg3e8ff_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1,
+                                        size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e8ff_v_i8m2x3_tu(vd, rs1, new_vl, vl);
 }

-vuint8mf8x3_t test_vlseg3e8ff_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x3_t test_vlseg3e8ff_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg3e8ff_v_u8mf8x3_tu(vd, rs1, new_vl, vl);
 }

-vuint8mf4x3_t test_vlseg3e8ff_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x3_t test_vlseg3e8ff_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1,
+                                           size_t
*new_vl, size_t vl) { return __riscv_vlseg3e8ff_v_u8mf4x3_tu(vd, rs1, new_vl, vl); } -vuint8mf2x3_t test_vlseg3e8ff_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2x3_t test_vlseg3e8ff_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e8ff_v_u8mf2x3_tu(vd, rs1, new_vl, vl); } -vuint8m1x3_t test_vlseg3e8ff_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x3_t test_vlseg3e8ff_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e8ff_v_u8m1x3_tu(vd, rs1, new_vl, vl); } -vuint8m2x3_t test_vlseg3e8ff_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m2x3_t test_vlseg3e8ff_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e8ff_v_u8m2x3_tu(vd, rs1, new_vl, vl); } -vint8mf8x3_t test_vlseg3e8ff_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x3_t test_vlseg3e8ff_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e8ff_v_i8mf8x3_tum(vm, vd, rs1, new_vl, vl); } -vint8mf4x3_t test_vlseg3e8ff_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x3_t test_vlseg3e8ff_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e8ff_v_i8mf4x3_tum(vm, vd, rs1, new_vl, vl); } -vint8mf2x3_t test_vlseg3e8ff_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x3_t test_vlseg3e8ff_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e8ff_v_i8mf2x3_tum(vm, vd, rs1, new_vl, vl); } -vint8m1x3_t test_vlseg3e8ff_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x3_t test_vlseg3e8ff_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e8ff_v_i8m1x3_tum(vm, vd, rs1, new_vl, vl); } -vint8m2x3_t test_vlseg3e8ff_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m2x3_t test_vlseg3e8ff_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e8ff_v_i8m2x3_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf8x3_t test_vlseg3e8ff_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x3_t test_vlseg3e8ff_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e8ff_v_u8mf8x3_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf4x3_t test_vlseg3e8ff_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x3_t test_vlseg3e8ff_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e8ff_v_u8mf4x3_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf2x3_t test_vlseg3e8ff_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2x3_t test_vlseg3e8ff_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg3e8ff_v_u8mf2x3_tum(vm, vd, rs1, new_vl, vl); } -vuint8m1x3_t test_vlseg3e8ff_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, const 
+vuint8m1x3_t test_vlseg3e8ff_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd,
+                                          const uint8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg3e8ff_v_u8m1x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m2x3_t test_vlseg3e8ff_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m2x3_t test_vlseg3e8ff_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd,
+                                          const uint8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg3e8ff_v_u8m2x3_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf8x3_t test_vlseg3e8ff_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x3_t test_vlseg3e8ff_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd,
+                                            const int8_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e8ff_v_i8mf8x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf4x3_t test_vlseg3e8ff_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x3_t test_vlseg3e8ff_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd,
+                                            const int8_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e8ff_v_i8mf4x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf2x3_t test_vlseg3e8ff_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x3_t test_vlseg3e8ff_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd,
+                                            const int8_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg3e8ff_v_i8mf2x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m1x3_t test_vlseg3e8ff_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x3_t test_vlseg3e8ff_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd,
+                                          const int8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg3e8ff_v_i8m1x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m2x3_t test_vlseg3e8ff_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m2x3_t test_vlseg3e8ff_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd,
+                                          const int8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg3e8ff_v_i8m2x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf8x3_t test_vlseg3e8ff_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x3_t test_vlseg3e8ff_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd,
+                                             const uint8_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg3e8ff_v_u8mf8x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf4x3_t test_vlseg3e8ff_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x3_t test_vlseg3e8ff_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd,
+                                             const uint8_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg3e8ff_v_u8mf4x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf2x3_t test_vlseg3e8ff_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x3_t test_vlseg3e8ff_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd,
+                                             const uint8_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg3e8ff_v_u8mf2x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m1x3_t test_vlseg3e8ff_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x3_t test_vlseg3e8ff_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd,
+                                           const uint8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg3e8ff_v_u8m1x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m2x3_t test_vlseg3e8ff_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m2x3_t test_vlseg3e8ff_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd,
+                                           const uint8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg3e8ff_v_u8m2x3_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf8x3_t test_vlseg3e8ff_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x3_t test_vlseg3e8ff_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd,
+                                          const int8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg3e8ff_v_i8mf8x3_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf4x3_t test_vlseg3e8ff_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x3_t test_vlseg3e8ff_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd,
+                                          const int8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg3e8ff_v_i8mf4x3_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8mf2x3_t test_vlseg3e8ff_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x3_t test_vlseg3e8ff_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd,
+                                          const int8_t *rs1, size_t *new_vl,
+                                          size_t vl) {
   return __riscv_vlseg3e8ff_v_i8mf2x3_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m1x3_t test_vlseg3e8ff_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x3_t test_vlseg3e8ff_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd,
+                                        const int8_t *rs1, size_t *new_vl,
+                                        size_t vl) {
   return __riscv_vlseg3e8ff_v_i8m1x3_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint8m2x3_t test_vlseg3e8ff_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m2x3_t test_vlseg3e8ff_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd,
+                                        const int8_t *rs1, size_t *new_vl,
+                                        size_t vl) {
   return __riscv_vlseg3e8ff_v_i8m2x3_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf8x3_t test_vlseg3e8ff_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x3_t test_vlseg3e8ff_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd,
+                                           const uint8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg3e8ff_v_u8mf8x3_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf4x3_t test_vlseg3e8ff_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x3_t test_vlseg3e8ff_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd,
+                                           const uint8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg3e8ff_v_u8mf4x3_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8mf2x3_t test_vlseg3e8ff_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x3_t test_vlseg3e8ff_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd,
+                                           const uint8_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg3e8ff_v_u8mf2x3_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m1x3_t test_vlseg3e8ff_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x3_t test_vlseg3e8ff_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd,
+                                         const uint8_t *rs1, size_t *new_vl,
+                                         size_t vl) {
   return __riscv_vlseg3e8ff_v_u8m1x3_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint8m2x3_t test_vlseg3e8ff_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m2x3_t test_vlseg3e8ff_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd,
+                                         const uint8_t *rs1, size_t *new_vl,
+                                         size_t vl) {
   return __riscv_vlseg3e8ff_v_u8m2x3_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16.c
index 57c616186..6bd4e1e57 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16.c
@@ -6,194 +6,242 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4x4_t test_vlseg4e16_v_f16mf4x4_tu(vfloat16mf4x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x4_t test_vlseg4e16_v_f16mf4x4_tu(vfloat16mf4x4_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16mf4x4_tu(vd, rs1, vl);
 }
 
-vfloat16mf2x4_t test_vlseg4e16_v_f16mf2x4_tu(vfloat16mf2x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x4_t test_vlseg4e16_v_f16mf2x4_tu(vfloat16mf2x4_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16mf2x4_tu(vd, rs1, vl);
 }
 
-vfloat16m1x4_t test_vlseg4e16_v_f16m1x4_tu(vfloat16m1x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x4_t test_vlseg4e16_v_f16m1x4_tu(vfloat16m1x4_t vd,
+                                           const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16m1x4_tu(vd, rs1, vl);
 }
 
-vfloat16m2x4_t test_vlseg4e16_v_f16m2x4_tu(vfloat16m2x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m2x4_t test_vlseg4e16_v_f16m2x4_tu(vfloat16m2x4_t vd,
+                                           const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16m2x4_tu(vd, rs1, vl);
 }
 
-vint16mf4x4_t test_vlseg4e16_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x4_t test_vlseg4e16_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1,
+                                           size_t vl) {
   return __riscv_vlseg4e16_v_i16mf4x4_tu(vd, rs1, vl);
 }
 
-vint16mf2x4_t test_vlseg4e16_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x4_t test_vlseg4e16_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1,
+                                           size_t vl) {
   return __riscv_vlseg4e16_v_i16mf2x4_tu(vd, rs1, vl);
 }
 
-vint16m1x4_t test_vlseg4e16_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x4_t test_vlseg4e16_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg4e16_v_i16m1x4_tu(vd, rs1, vl);
 }
 
-vint16m2x4_t test_vlseg4e16_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, size_t vl) {
+vint16m2x4_t test_vlseg4e16_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg4e16_v_i16m2x4_tu(vd, rs1, vl);
 }
 
-vuint16mf4x4_t test_vlseg4e16_v_u16mf4x4_tu(vuint16mf4x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x4_t test_vlseg4e16_v_u16mf4x4_tu(vuint16mf4x4_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16mf4x4_tu(vd, rs1, vl);
 }
 
-vuint16mf2x4_t test_vlseg4e16_v_u16mf2x4_tu(vuint16mf2x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x4_t test_vlseg4e16_v_u16mf2x4_tu(vuint16mf2x4_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16mf2x4_tu(vd, rs1, vl);
 }
 
-vuint16m1x4_t test_vlseg4e16_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x4_t test_vlseg4e16_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg4e16_v_u16m1x4_tu(vd, rs1, vl);
 }
 
-vuint16m2x4_t test_vlseg4e16_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m2x4_t test_vlseg4e16_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg4e16_v_u16m2x4_tu(vd, rs1, vl);
 }
 
-vfloat16mf4x4_t test_vlseg4e16_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x4_t test_vlseg4e16_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd,
+                                              const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16mf4x4_tum(vm, vd, rs1, vl);
 }
 
-vfloat16mf2x4_t test_vlseg4e16_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x4_t test_vlseg4e16_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd,
+                                              const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16mf2x4_tum(vm, vd, rs1, vl);
 }
 
-vfloat16m1x4_t test_vlseg4e16_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x4_t test_vlseg4e16_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd,
+                                            const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16m1x4_tum(vm, vd, rs1, vl);
 }
 
-vfloat16m2x4_t test_vlseg4e16_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m2x4_t test_vlseg4e16_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd,
+                                            const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16m2x4_tum(vm, vd, rs1, vl);
 }
 
-vint16mf4x4_t test_vlseg4e16_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x4_t test_vlseg4e16_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd,
+                                            const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16mf4x4_tum(vm, vd, rs1, vl);
 }
 
-vint16mf2x4_t test_vlseg4e16_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x4_t test_vlseg4e16_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd,
+                                            const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16mf2x4_tum(vm, vd, rs1, vl);
 }
 
-vint16m1x4_t test_vlseg4e16_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x4_t test_vlseg4e16_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd,
+                                          const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16m1x4_tum(vm, vd, rs1, vl);
 }
 
-vint16m2x4_t test_vlseg4e16_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, size_t vl) {
+vint16m2x4_t test_vlseg4e16_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd,
+                                          const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16m2x4_tum(vm, vd, rs1, vl);
 }
 
-vuint16mf4x4_t test_vlseg4e16_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x4_t test_vlseg4e16_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd,
+                                             const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16mf4x4_tum(vm, vd, rs1, vl);
 }
 
-vuint16mf2x4_t test_vlseg4e16_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x4_t test_vlseg4e16_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd,
+                                             const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16mf2x4_tum(vm, vd, rs1, vl);
 }
 
-vuint16m1x4_t test_vlseg4e16_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x4_t test_vlseg4e16_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd,
+                                           const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16m1x4_tum(vm, vd, rs1, vl);
 }
 
-vuint16m2x4_t test_vlseg4e16_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m2x4_t test_vlseg4e16_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd,
+                                           const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16m2x4_tum(vm, vd, rs1, vl);
 }
 
-vfloat16mf4x4_t test_vlseg4e16_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x4_t test_vlseg4e16_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd,
+                                               const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16mf4x4_tumu(vm, vd, rs1, vl);
 }
 
-vfloat16mf2x4_t test_vlseg4e16_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x4_t test_vlseg4e16_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd,
+                                               const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16mf2x4_tumu(vm, vd, rs1, vl);
 }
 
-vfloat16m1x4_t test_vlseg4e16_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x4_t test_vlseg4e16_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16m1x4_tumu(vm, vd, rs1, vl);
 }
 
-vfloat16m2x4_t test_vlseg4e16_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m2x4_t test_vlseg4e16_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16m2x4_tumu(vm, vd, rs1, vl);
 }
 
-vint16mf4x4_t test_vlseg4e16_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x4_t test_vlseg4e16_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd,
+                                             const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16mf4x4_tumu(vm, vd, rs1, vl);
 }
 
-vint16mf2x4_t test_vlseg4e16_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x4_t test_vlseg4e16_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd,
+                                             const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16mf2x4_tumu(vm, vd, rs1, vl);
 }
 
-vint16m1x4_t test_vlseg4e16_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x4_t test_vlseg4e16_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16m1x4_tumu(vm, vd, rs1, vl);
 }
 
-vint16m2x4_t test_vlseg4e16_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, size_t vl) {
+vint16m2x4_t test_vlseg4e16_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16m2x4_tumu(vm, vd, rs1, vl);
 }
 
-vuint16mf4x4_t test_vlseg4e16_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x4_t test_vlseg4e16_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd,
+                                              const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16mf4x4_tumu(vm, vd, rs1, vl);
 }
 
-vuint16mf2x4_t test_vlseg4e16_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x4_t test_vlseg4e16_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd,
+                                              const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16mf2x4_tumu(vm, vd, rs1, vl);
 }
 
-vuint16m1x4_t test_vlseg4e16_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x4_t test_vlseg4e16_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16m1x4_tumu(vm, vd, rs1, vl);
 }
 
-vuint16m2x4_t test_vlseg4e16_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m2x4_t test_vlseg4e16_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16m2x4_tumu(vm, vd, rs1, vl);
 }
 
-vfloat16mf4x4_t test_vlseg4e16_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x4_t test_vlseg4e16_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16mf4x4_mu(vm, vd, rs1, vl);
 }
 
-vfloat16mf2x4_t test_vlseg4e16_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x4_t test_vlseg4e16_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd,
+                                             const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16mf2x4_mu(vm, vd, rs1, vl);
 }
 
-vfloat16m1x4_t test_vlseg4e16_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x4_t test_vlseg4e16_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd,
+                                           const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16m1x4_mu(vm, vd, rs1, vl);
 }
 
-vfloat16m2x4_t test_vlseg4e16_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m2x4_t test_vlseg4e16_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd,
+                                           const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_f16m2x4_mu(vm, vd, rs1, vl);
 }
 
-vint16mf4x4_t test_vlseg4e16_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x4_t test_vlseg4e16_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16mf4x4_mu(vm, vd, rs1, vl);
 }
 
-vint16mf2x4_t test_vlseg4e16_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x4_t test_vlseg4e16_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd,
+                                           const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16mf2x4_mu(vm, vd, rs1, vl);
 }
 
-vint16m1x4_t test_vlseg4e16_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x4_t test_vlseg4e16_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd,
+                                         const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16m1x4_mu(vm, vd, rs1, vl);
 }
 
-vint16m2x4_t test_vlseg4e16_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, size_t vl) {
+vint16m2x4_t test_vlseg4e16_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd,
+                                         const int16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_i16m2x4_mu(vm, vd, rs1, vl);
 }
 
-vuint16mf4x4_t test_vlseg4e16_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x4_t test_vlseg4e16_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16mf4x4_mu(vm, vd, rs1, vl);
 }
 
-vuint16mf2x4_t test_vlseg4e16_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x4_t test_vlseg4e16_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd,
+                                            const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16mf2x4_mu(vm, vd, rs1, vl);
 }
 
-vuint16m1x4_t test_vlseg4e16_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x4_t test_vlseg4e16_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd,
+                                          const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16m1x4_mu(vm, vd, rs1, vl);
 }
 
-vuint16m2x4_t test_vlseg4e16_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m2x4_t test_vlseg4e16_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd,
+                                          const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg4e16_v_u16m2x4_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16ff.c
index b144b32ad..42d4c3620 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16ff.c
@@ -6,194 +6,292 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4x4_t test_vlseg4e16ff_v_f16mf4x4_tu(vfloat16mf4x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x4_t test_vlseg4e16ff_v_f16mf4x4_tu(vfloat16mf4x4_t vd,
+                                               const _Float16 *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16mf4x4_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat16mf2x4_t test_vlseg4e16ff_v_f16mf2x4_tu(vfloat16mf2x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x4_t test_vlseg4e16ff_v_f16mf2x4_tu(vfloat16mf2x4_t vd,
+                                               const _Float16 *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16mf2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat16m1x4_t test_vlseg4e16ff_v_f16m1x4_tu(vfloat16m1x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x4_t test_vlseg4e16ff_v_f16m1x4_tu(vfloat16m1x4_t vd,
+                                             const _Float16 *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16m1x4_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat16m2x4_t test_vlseg4e16ff_v_f16m2x4_tu(vfloat16m2x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m2x4_t test_vlseg4e16ff_v_f16m2x4_tu(vfloat16m2x4_t vd,
+                                             const _Float16 *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16m2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vint16mf4x4_t test_vlseg4e16ff_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x4_t test_vlseg4e16ff_v_i16mf4x4_tu(vint16mf4x4_t vd,
+                                             const int16_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e16ff_v_i16mf4x4_tu(vd, rs1, new_vl, vl);
 }
 
-vint16mf2x4_t test_vlseg4e16ff_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x4_t test_vlseg4e16ff_v_i16mf2x4_tu(vint16mf2x4_t vd,
+                                             const int16_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e16ff_v_i16mf2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vint16m1x4_t test_vlseg4e16ff_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x4_t test_vlseg4e16ff_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_i16m1x4_tu(vd, rs1, new_vl, vl);
 }
 
-vint16m2x4_t test_vlseg4e16ff_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m2x4_t test_vlseg4e16ff_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_i16m2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vuint16mf4x4_t test_vlseg4e16ff_v_u16mf4x4_tu(vuint16mf4x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x4_t test_vlseg4e16ff_v_u16mf4x4_tu(vuint16mf4x4_t vd,
+                                              const uint16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16mf4x4_tu(vd, rs1, new_vl, vl);
 }
 
-vuint16mf2x4_t test_vlseg4e16ff_v_u16mf2x4_tu(vuint16mf2x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x4_t test_vlseg4e16ff_v_u16mf2x4_tu(vuint16mf2x4_t vd,
+                                              const uint16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16mf2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vuint16m1x4_t test_vlseg4e16ff_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x4_t test_vlseg4e16ff_v_u16m1x4_tu(vuint16m1x4_t vd,
+                                            const uint16_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e16ff_v_u16m1x4_tu(vd, rs1, new_vl, vl);
 }
 
-vuint16m2x4_t test_vlseg4e16ff_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m2x4_t test_vlseg4e16ff_v_u16m2x4_tu(vuint16m2x4_t vd,
+                                            const uint16_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e16ff_v_u16m2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat16mf4x4_t test_vlseg4e16ff_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x4_t test_vlseg4e16ff_v_f16mf4x4_tum(vbool64_t vm,
+                                                vfloat16mf4x4_t vd,
+                                                const _Float16 *rs1,
+                                                size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16mf4x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16mf2x4_t test_vlseg4e16ff_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x4_t
+test_vlseg4e16ff_v_f16mf2x4_tum(vbool32_t vm,
+                                vfloat16mf2x4_t vd,
+                                const _Float16 *rs1,
+                                size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16mf2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16m1x4_t test_vlseg4e16ff_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x4_t test_vlseg4e16ff_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd,
+                                              const _Float16 *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16m1x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16m2x4_t test_vlseg4e16ff_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m2x4_t test_vlseg4e16ff_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd,
+                                              const _Float16 *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16m2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint16mf4x4_t test_vlseg4e16ff_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x4_t test_vlseg4e16ff_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd,
+                                              const int16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_i16mf4x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint16mf2x4_t test_vlseg4e16ff_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x4_t test_vlseg4e16ff_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd,
+                                              const int16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_i16mf2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint16m1x4_t test_vlseg4e16ff_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x4_t test_vlseg4e16ff_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd,
+                                            const int16_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e16ff_v_i16m1x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint16m2x4_t test_vlseg4e16ff_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m2x4_t test_vlseg4e16ff_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd,
+                                            const int16_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e16ff_v_i16m2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16mf4x4_t test_vlseg4e16ff_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x4_t test_vlseg4e16ff_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd,
+                                               const uint16_t *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16mf4x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16mf2x4_t test_vlseg4e16ff_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x4_t test_vlseg4e16ff_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd,
+                                               const uint16_t *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16mf2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16m1x4_t test_vlseg4e16ff_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x4_t test_vlseg4e16ff_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd,
+                                             const uint16_t *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16m1x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16m2x4_t test_vlseg4e16ff_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m2x4_t test_vlseg4e16ff_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd,
+                                             const uint16_t *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16m2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16mf4x4_t test_vlseg4e16ff_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x4_t test_vlseg4e16ff_v_f16mf4x4_tumu(vbool64_t vm,
+                                                 vfloat16mf4x4_t vd,
+                                                 const _Float16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16mf4x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16mf2x4_t test_vlseg4e16ff_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x4_t test_vlseg4e16ff_v_f16mf2x4_tumu(vbool32_t vm,
+                                                 vfloat16mf2x4_t vd,
+                                                 const _Float16 *rs1,
+                                                 size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16mf2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16m1x4_t test_vlseg4e16ff_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x4_t test_vlseg4e16ff_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd,
+                                               const _Float16 *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16m1x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16m2x4_t test_vlseg4e16ff_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m2x4_t test_vlseg4e16ff_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd,
+                                               const _Float16 *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16m2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint16mf4x4_t test_vlseg4e16ff_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x4_t test_vlseg4e16ff_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd,
+                                               const int16_t *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_i16mf4x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint16mf2x4_t test_vlseg4e16ff_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x4_t test_vlseg4e16ff_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd,
+                                               const int16_t *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_i16mf2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint16m1x4_t test_vlseg4e16ff_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x4_t test_vlseg4e16ff_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd,
+                                             const int16_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e16ff_v_i16m1x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint16m2x4_t test_vlseg4e16ff_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m2x4_t test_vlseg4e16ff_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd,
+                                             const int16_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e16ff_v_i16m2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16mf4x4_t test_vlseg4e16ff_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x4_t test_vlseg4e16ff_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd,
+                                                const uint16_t *rs1,
+                                                size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16mf4x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16mf2x4_t test_vlseg4e16ff_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x4_t test_vlseg4e16ff_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd,
+                                                const uint16_t *rs1,
+                                                size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16mf2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16m1x4_t test_vlseg4e16ff_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x4_t test_vlseg4e16ff_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd,
+                                              const uint16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16m1x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16m2x4_t test_vlseg4e16ff_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m2x4_t test_vlseg4e16ff_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd,
+                                              const uint16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16m2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16mf4x4_t test_vlseg4e16ff_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x4_t test_vlseg4e16ff_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd,
+                                               const _Float16 *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16mf4x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16mf2x4_t test_vlseg4e16ff_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x4_t test_vlseg4e16ff_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd,
+                                               const _Float16 *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16mf2x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16m1x4_t test_vlseg4e16ff_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x4_t test_vlseg4e16ff_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd,
+                                             const _Float16 *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16m1x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16m2x4_t test_vlseg4e16ff_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m2x4_t test_vlseg4e16ff_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd,
+                                             const _Float16 *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_f16m2x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint16mf4x4_t test_vlseg4e16ff_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x4_t test_vlseg4e16ff_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd,
+                                             const int16_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e16ff_v_i16mf4x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint16mf2x4_t test_vlseg4e16ff_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x4_t test_vlseg4e16ff_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd,
+                                             const int16_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e16ff_v_i16mf2x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint16m1x4_t test_vlseg4e16ff_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x4_t test_vlseg4e16ff_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd,
+                                           const int16_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg4e16ff_v_i16m1x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint16m2x4_t test_vlseg4e16ff_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m2x4_t test_vlseg4e16ff_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd,
+                                           const int16_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg4e16ff_v_i16m2x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16mf4x4_t test_vlseg4e16ff_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x4_t test_vlseg4e16ff_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd,
+                                              const uint16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16mf4x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16mf2x4_t test_vlseg4e16ff_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x4_t test_vlseg4e16ff_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd,
+                                              const uint16_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e16ff_v_u16mf2x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16m1x4_t test_vlseg4e16ff_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x4_t test_vlseg4e16ff_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd,
+                                            const uint16_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e16ff_v_u16m1x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16m2x4_t test_vlseg4e16ff_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m2x4_t test_vlseg4e16ff_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd,
+                                            const uint16_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e16ff_v_u16m2x4_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32.c
index b2de84ad5..4e79801e2 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32.c
@@ -6,146 +6,182 @@
 
 #include <riscv_vector.h>
 
-vfloat32mf2x4_t test_vlseg4e32_v_f32mf2x4_tu(vfloat32mf2x4_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x4_t test_vlseg4e32_v_f32mf2x4_tu(vfloat32mf2x4_t vd,
+                                             const float *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_f32mf2x4_tu(vd, rs1, vl);
 }
 
-vfloat32m1x4_t test_vlseg4e32_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, size_t vl) {
+vfloat32m1x4_t test_vlseg4e32_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1,
+                                           size_t vl) {
   return __riscv_vlseg4e32_v_f32m1x4_tu(vd, rs1, vl);
 }
 
-vfloat32m2x4_t test_vlseg4e32_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, size_t vl) {
+vfloat32m2x4_t test_vlseg4e32_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1,
+                                           size_t vl) {
   return __riscv_vlseg4e32_v_f32m2x4_tu(vd, rs1, vl);
 }
 
-vint32mf2x4_t test_vlseg4e32_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x4_t test_vlseg4e32_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1,
+                                           size_t vl) {
   return __riscv_vlseg4e32_v_i32mf2x4_tu(vd, rs1, vl);
 }
 
-vint32m1x4_t test_vlseg4e32_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x4_t test_vlseg4e32_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg4e32_v_i32m1x4_tu(vd, rs1, vl);
 }
 
-vint32m2x4_t test_vlseg4e32_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, size_t vl) {
+vint32m2x4_t test_vlseg4e32_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg4e32_v_i32m2x4_tu(vd, rs1, vl);
 }
 
-vuint32mf2x4_t test_vlseg4e32_v_u32mf2x4_tu(vuint32mf2x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x4_t test_vlseg4e32_v_u32mf2x4_tu(vuint32mf2x4_t vd,
+                                            const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_u32mf2x4_tu(vd, rs1, vl);
 }
 
-vuint32m1x4_t test_vlseg4e32_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x4_t test_vlseg4e32_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg4e32_v_u32m1x4_tu(vd, rs1, vl);
 }
 
-vuint32m2x4_t test_vlseg4e32_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m2x4_t test_vlseg4e32_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg4e32_v_u32m2x4_tu(vd, rs1, vl);
 }
 
-vfloat32mf2x4_t test_vlseg4e32_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x4_t test_vlseg4e32_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd,
+                                              const float *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_f32mf2x4_tum(vm, vd, rs1, vl);
 }
 
-vfloat32m1x4_t test_vlseg4e32_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, size_t vl) {
+vfloat32m1x4_t test_vlseg4e32_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd,
+                                            const float *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_f32m1x4_tum(vm, vd, rs1, vl);
 }
 
-vfloat32m2x4_t test_vlseg4e32_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, size_t vl) {
+vfloat32m2x4_t test_vlseg4e32_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd,
+                                            const float *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_f32m2x4_tum(vm, vd, rs1, vl);
 }
 
-vint32mf2x4_t test_vlseg4e32_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x4_t test_vlseg4e32_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd,
+                                            const int32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_i32mf2x4_tum(vm, vd, rs1, vl);
 }
 
-vint32m1x4_t test_vlseg4e32_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x4_t test_vlseg4e32_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd,
+                                          const int32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_i32m1x4_tum(vm, vd, rs1, vl);
 }
 
-vint32m2x4_t test_vlseg4e32_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, size_t vl) {
+vint32m2x4_t test_vlseg4e32_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd,
+                                          const int32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_i32m2x4_tum(vm, vd, rs1, vl);
 }
 
-vuint32mf2x4_t test_vlseg4e32_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x4_t test_vlseg4e32_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd,
+                                             const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_u32mf2x4_tum(vm, vd, rs1, vl);
 }
 
-vuint32m1x4_t test_vlseg4e32_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x4_t test_vlseg4e32_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd,
+                                           const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_u32m1x4_tum(vm, vd, rs1, vl);
 }
 
-vuint32m2x4_t test_vlseg4e32_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m2x4_t test_vlseg4e32_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd,
+                                           const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_u32m2x4_tum(vm, vd, rs1, vl);
 }
 
-vfloat32mf2x4_t test_vlseg4e32_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x4_t test_vlseg4e32_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd,
+                                               const float *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_f32mf2x4_tumu(vm, vd, rs1, vl);
 }
 
-vfloat32m1x4_t test_vlseg4e32_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, size_t vl) {
+vfloat32m1x4_t test_vlseg4e32_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd,
+                                             const float *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_f32m1x4_tumu(vm, vd, rs1, vl);
 }
 
-vfloat32m2x4_t test_vlseg4e32_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, size_t vl) {
+vfloat32m2x4_t test_vlseg4e32_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd,
+                                             const float *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_f32m2x4_tumu(vm, vd, rs1, vl);
 }
 
-vint32mf2x4_t test_vlseg4e32_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x4_t test_vlseg4e32_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd,
+                                             const int32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_i32mf2x4_tumu(vm, vd, rs1, vl);
 }
 
-vint32m1x4_t test_vlseg4e32_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x4_t test_vlseg4e32_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd,
+                                           const int32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_i32m1x4_tumu(vm, vd, rs1, vl);
 }
 
-vint32m2x4_t test_vlseg4e32_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, size_t vl) {
+vint32m2x4_t test_vlseg4e32_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd,
+                                           const int32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_i32m2x4_tumu(vm, vd, rs1, vl);
 }
 
-vuint32mf2x4_t test_vlseg4e32_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x4_t test_vlseg4e32_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd,
+                                              const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_u32mf2x4_tumu(vm, vd, rs1, vl);
 }
 
-vuint32m1x4_t test_vlseg4e32_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x4_t test_vlseg4e32_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd,
+                                            const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_u32m1x4_tumu(vm, vd, rs1, vl);
 }
 
-vuint32m2x4_t test_vlseg4e32_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m2x4_t test_vlseg4e32_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd,
+                                            const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_u32m2x4_tumu(vm, vd, rs1, vl);
 }
 
-vfloat32mf2x4_t test_vlseg4e32_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x4_t test_vlseg4e32_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd,
+                                             const float *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_f32mf2x4_mu(vm, vd, rs1, vl);
 }
 
-vfloat32m1x4_t test_vlseg4e32_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, size_t vl) {
+vfloat32m1x4_t test_vlseg4e32_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd,
+                                           const float *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_f32m1x4_mu(vm, vd, rs1, vl);
 }
 
-vfloat32m2x4_t test_vlseg4e32_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, size_t vl) {
+vfloat32m2x4_t test_vlseg4e32_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd,
+                                           const float *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_f32m2x4_mu(vm, vd, rs1, vl);
 }
 
-vint32mf2x4_t test_vlseg4e32_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x4_t test_vlseg4e32_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd,
+                                           const int32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_i32mf2x4_mu(vm, vd, rs1, vl);
 }
 
-vint32m1x4_t test_vlseg4e32_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x4_t test_vlseg4e32_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd,
+                                         const int32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_i32m1x4_mu(vm, vd, rs1, vl);
 }
 
-vint32m2x4_t test_vlseg4e32_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, size_t vl) {
+vint32m2x4_t test_vlseg4e32_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd,
+                                         const int32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_i32m2x4_mu(vm, vd, rs1, vl);
 }
 
-vuint32mf2x4_t test_vlseg4e32_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x4_t test_vlseg4e32_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd,
+                                            const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_u32mf2x4_mu(vm, vd, rs1, vl);
 }
 
-vuint32m1x4_t test_vlseg4e32_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x4_t test_vlseg4e32_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd,
+                                          const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_u32m1x4_mu(vm, vd, rs1, vl);
 }
 
-vuint32m2x4_t test_vlseg4e32_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m2x4_t test_vlseg4e32_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd,
+                                          const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg4e32_v_u32m2x4_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32ff.c
index 49326337a..82e9c23df 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32ff.c
@@ -6,146 +6,218 @@
 
 #include <riscv_vector.h>
 
-vfloat32mf2x4_t test_vlseg4e32ff_v_f32mf2x4_tu(vfloat32mf2x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32mf2x4_t test_vlseg4e32ff_v_f32mf2x4_tu(vfloat32mf2x4_t vd,
+                                               const float *rs1, size_t *new_vl,
+                                               size_t vl) {
   return __riscv_vlseg4e32ff_v_f32mf2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat32m1x4_t test_vlseg4e32ff_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m1x4_t test_vlseg4e32ff_v_f32m1x4_tu(vfloat32m1x4_t vd,
+                                             const float *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e32ff_v_f32m1x4_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat32m2x4_t test_vlseg4e32ff_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m2x4_t test_vlseg4e32ff_v_f32m2x4_tu(vfloat32m2x4_t vd,
+                                             const float *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e32ff_v_f32m2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vint32mf2x4_t test_vlseg4e32ff_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32mf2x4_t test_vlseg4e32ff_v_i32mf2x4_tu(vint32mf2x4_t vd,
+                                             const int32_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e32ff_v_i32mf2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vint32m1x4_t test_vlseg4e32ff_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m1x4_t test_vlseg4e32ff_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_i32m1x4_tu(vd, rs1, new_vl, vl);
 }
 
-vint32m2x4_t test_vlseg4e32ff_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m2x4_t test_vlseg4e32ff_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1,
+                                           size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_i32m2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vuint32mf2x4_t test_vlseg4e32ff_v_u32mf2x4_tu(vuint32mf2x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32mf2x4_t test_vlseg4e32ff_v_u32mf2x4_tu(vuint32mf2x4_t vd,
+                                              const uint32_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_u32mf2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vuint32m1x4_t test_vlseg4e32ff_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m1x4_t test_vlseg4e32ff_v_u32m1x4_tu(vuint32m1x4_t vd,
+                                            const uint32_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e32ff_v_u32m1x4_tu(vd, rs1, new_vl, vl);
 }
 
-vuint32m2x4_t test_vlseg4e32ff_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m2x4_t test_vlseg4e32ff_v_u32m2x4_tu(vuint32m2x4_t vd,
+                                            const uint32_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e32ff_v_u32m2x4_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat32mf2x4_t test_vlseg4e32ff_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32mf2x4_t test_vlseg4e32ff_v_f32mf2x4_tum(vbool64_t vm,
+                                                vfloat32mf2x4_t vd,
+                                                const float *rs1,
+                                                size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_f32mf2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat32m1x4_t test_vlseg4e32ff_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m1x4_t test_vlseg4e32ff_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd,
+                                              const float *rs1, size_t *new_vl,
+                                              size_t vl) {
   return __riscv_vlseg4e32ff_v_f32m1x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat32m2x4_t test_vlseg4e32ff_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m2x4_t test_vlseg4e32ff_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd,
+                                              const float *rs1, size_t *new_vl,
+                                              size_t vl) {
   return __riscv_vlseg4e32ff_v_f32m2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint32mf2x4_t test_vlseg4e32ff_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32mf2x4_t test_vlseg4e32ff_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd,
+                                              const int32_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_i32mf2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint32m1x4_t test_vlseg4e32ff_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m1x4_t test_vlseg4e32ff_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd,
+                                            const int32_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e32ff_v_i32m1x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint32m2x4_t test_vlseg4e32ff_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m2x4_t test_vlseg4e32ff_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd,
+                                            const int32_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e32ff_v_i32m2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint32mf2x4_t test_vlseg4e32ff_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32mf2x4_t test_vlseg4e32ff_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd,
+                                               const uint32_t *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_u32mf2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint32m1x4_t test_vlseg4e32ff_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m1x4_t test_vlseg4e32ff_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd,
+                                             const uint32_t *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_u32m1x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint32m2x4_t test_vlseg4e32ff_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m2x4_t test_vlseg4e32ff_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd,
+                                             const uint32_t *rs1,
+                                             size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_u32m2x4_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat32mf2x4_t test_vlseg4e32ff_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32mf2x4_t test_vlseg4e32ff_v_f32mf2x4_tumu(vbool64_t vm,
+                                                 vfloat32mf2x4_t vd,
+                                                 const float *rs1,
+                                                 size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_f32mf2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat32m1x4_t test_vlseg4e32ff_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m1x4_t test_vlseg4e32ff_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd,
+                                               const float *rs1, size_t *new_vl,
+                                               size_t vl) {
   return __riscv_vlseg4e32ff_v_f32m1x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat32m2x4_t test_vlseg4e32ff_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m2x4_t test_vlseg4e32ff_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd,
+                                               const float *rs1, size_t *new_vl,
+                                               size_t vl) {
   return __riscv_vlseg4e32ff_v_f32m2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint32mf2x4_t test_vlseg4e32ff_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32mf2x4_t test_vlseg4e32ff_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd,
+                                               const int32_t *rs1,
+                                               size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_i32mf2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint32m1x4_t test_vlseg4e32ff_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m1x4_t test_vlseg4e32ff_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd,
+                                             const int32_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e32ff_v_i32m1x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vint32m2x4_t test_vlseg4e32ff_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m2x4_t test_vlseg4e32ff_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd,
+                                             const int32_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e32ff_v_i32m2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint32mf2x4_t test_vlseg4e32ff_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32mf2x4_t test_vlseg4e32ff_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd,
+                                                const uint32_t *rs1,
+                                                size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_u32mf2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint32m1x4_t test_vlseg4e32ff_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m1x4_t test_vlseg4e32ff_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd,
+                                              const uint32_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_u32m1x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint32m2x4_t test_vlseg4e32ff_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m2x4_t test_vlseg4e32ff_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd,
+                                              const uint32_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_u32m2x4_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat32mf2x4_t test_vlseg4e32ff_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32mf2x4_t test_vlseg4e32ff_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd,
+                                               const float *rs1, size_t *new_vl,
+                                               size_t vl) {
   return __riscv_vlseg4e32ff_v_f32mf2x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat32m1x4_t test_vlseg4e32ff_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m1x4_t test_vlseg4e32ff_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd,
+                                             const float *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e32ff_v_f32m1x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat32m2x4_t test_vlseg4e32ff_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m2x4_t test_vlseg4e32ff_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd,
+                                             const float *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e32ff_v_f32m2x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint32mf2x4_t test_vlseg4e32ff_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32mf2x4_t test_vlseg4e32ff_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd,
+                                             const int32_t *rs1, size_t *new_vl,
+                                             size_t vl) {
   return __riscv_vlseg4e32ff_v_i32mf2x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint32m1x4_t test_vlseg4e32ff_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m1x4_t test_vlseg4e32ff_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd,
+                                           const int32_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg4e32ff_v_i32m1x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vint32m2x4_t test_vlseg4e32ff_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m2x4_t test_vlseg4e32ff_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd,
+                                           const int32_t *rs1, size_t *new_vl,
+                                           size_t vl) {
   return __riscv_vlseg4e32ff_v_i32m2x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint32mf2x4_t test_vlseg4e32ff_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32mf2x4_t test_vlseg4e32ff_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd,
+                                              const uint32_t *rs1,
+                                              size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e32ff_v_u32mf2x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint32m1x4_t test_vlseg4e32ff_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m1x4_t test_vlseg4e32ff_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd,
+                                            const uint32_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e32ff_v_u32m1x4_mu(vm, vd, rs1, new_vl, vl);
 }
 
-vuint32m2x4_t test_vlseg4e32ff_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m2x4_t test_vlseg4e32ff_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd,
+                                            const uint32_t *rs1, size_t *new_vl,
+                                            size_t vl) {
   return __riscv_vlseg4e32ff_v_u32m2x4_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64.c
index 17525d0ba..c7ab055fb 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64.c
@@ -6,98 +6,122 @@
 
 #include <riscv_vector.h>
 
-vfloat64m1x4_t test_vlseg4e64_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1, size_t vl) {
+vfloat64m1x4_t test_vlseg4e64_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1,
+                                           size_t vl) {
   return __riscv_vlseg4e64_v_f64m1x4_tu(vd, rs1, vl);
 }
 
-vfloat64m2x4_t test_vlseg4e64_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1, size_t vl) {
+vfloat64m2x4_t test_vlseg4e64_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1,
+                                           size_t vl) {
   return __riscv_vlseg4e64_v_f64m2x4_tu(vd, rs1, vl);
 }
 
-vint64m1x4_t test_vlseg4e64_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, size_t vl) {
+vint64m1x4_t test_vlseg4e64_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1,
+                                         size_t vl) {
   return __riscv_vlseg4e64_v_i64m1x4_tu(vd, rs1, vl);
 }
 
-vint64m2x4_t test_vlseg4e64_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, size_t vl) {
+vint64m2x4_t test_vlseg4e64_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1,
+                                         size_t vl) {
  return __riscv_vlseg4e64_v_i64m2x4_tu(vd, rs1, vl);
 }
 
-vuint64m1x4_t test_vlseg4e64_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1, size_t vl) {
+vuint64m1x4_t test_vlseg4e64_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg4e64_v_u64m1x4_tu(vd, rs1, vl);
 }
 
-vuint64m2x4_t test_vlseg4e64_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1, size_t vl) {
+vuint64m2x4_t test_vlseg4e64_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1,
+                                          size_t vl) {
   return __riscv_vlseg4e64_v_u64m2x4_tu(vd, rs1, vl);
 }
 
-vfloat64m1x4_t test_vlseg4e64_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, size_t vl) {
+vfloat64m1x4_t test_vlseg4e64_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd,
+                                            const double *rs1, size_t vl) {
   return __riscv_vlseg4e64_v_f64m1x4_tum(vm, vd, rs1, vl);
 }
 
-vfloat64m2x4_t test_vlseg4e64_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, size_t vl) {
vm, vfloat64m2x4_t vd, const double *rs1, size_t vl) { +vfloat64m2x4_t test_vlseg4e64_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg4e64_v_f64m2x4_tum(vm, vd, rs1, vl); } -vint64m1x4_t test_vlseg4e64_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, size_t vl) { +vint64m1x4_t test_vlseg4e64_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_i64m1x4_tum(vm, vd, rs1, vl); } -vint64m2x4_t test_vlseg4e64_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, size_t vl) { +vint64m2x4_t test_vlseg4e64_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_i64m2x4_tum(vm, vd, rs1, vl); } -vuint64m1x4_t test_vlseg4e64_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x4_t test_vlseg4e64_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_u64m1x4_tum(vm, vd, rs1, vl); } -vuint64m2x4_t test_vlseg4e64_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2x4_t test_vlseg4e64_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_u64m2x4_tum(vm, vd, rs1, vl); } -vfloat64m1x4_t test_vlseg4e64_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, size_t vl) { +vfloat64m1x4_t test_vlseg4e64_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg4e64_v_f64m1x4_tumu(vm, vd, rs1, vl); } -vfloat64m2x4_t test_vlseg4e64_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, size_t vl) { +vfloat64m2x4_t test_vlseg4e64_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg4e64_v_f64m2x4_tumu(vm, vd, rs1, vl); } -vint64m1x4_t test_vlseg4e64_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, size_t vl) { +vint64m1x4_t test_vlseg4e64_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_i64m1x4_tumu(vm, vd, rs1, vl); } -vint64m2x4_t test_vlseg4e64_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, size_t vl) { +vint64m2x4_t test_vlseg4e64_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_i64m2x4_tumu(vm, vd, rs1, vl); } -vuint64m1x4_t test_vlseg4e64_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x4_t test_vlseg4e64_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_u64m1x4_tumu(vm, vd, rs1, vl); } -vuint64m2x4_t test_vlseg4e64_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2x4_t test_vlseg4e64_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_u64m2x4_tumu(vm, vd, rs1, vl); } -vfloat64m1x4_t test_vlseg4e64_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, size_t vl) { +vfloat64m1x4_t test_vlseg4e64_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg4e64_v_f64m1x4_mu(vm, vd, rs1, vl); } -vfloat64m2x4_t test_vlseg4e64_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, size_t vl) { +vfloat64m2x4_t test_vlseg4e64_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, size_t vl) { return 
__riscv_vlseg4e64_v_f64m2x4_mu(vm, vd, rs1, vl); } -vint64m1x4_t test_vlseg4e64_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, size_t vl) { +vint64m1x4_t test_vlseg4e64_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_i64m1x4_mu(vm, vd, rs1, vl); } -vint64m2x4_t test_vlseg4e64_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, size_t vl) { +vint64m2x4_t test_vlseg4e64_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_i64m2x4_mu(vm, vd, rs1, vl); } -vuint64m1x4_t test_vlseg4e64_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x4_t test_vlseg4e64_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_u64m1x4_mu(vm, vd, rs1, vl); } -vuint64m2x4_t test_vlseg4e64_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, size_t vl) { +vuint64m2x4_t test_vlseg4e64_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg4e64_v_u64m2x4_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64ff.c index 2008337ce..ab4d14078 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64ff.c @@ -6,98 +6,144 @@ #include -vfloat64m1x4_t test_vlseg4e64ff_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x4_t test_vlseg4e64ff_v_f64m1x4_tu(vfloat64m1x4_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg4e64ff_v_f64m1x4_tu(vd, rs1, new_vl, vl); } -vfloat64m2x4_t test_vlseg4e64ff_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m2x4_t test_vlseg4e64ff_v_f64m2x4_tu(vfloat64m2x4_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg4e64ff_v_f64m2x4_tu(vd, rs1, new_vl, vl); } -vint64m1x4_t test_vlseg4e64ff_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x4_t test_vlseg4e64ff_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e64ff_v_i64m1x4_tu(vd, rs1, new_vl, vl); } -vint64m2x4_t test_vlseg4e64ff_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m2x4_t test_vlseg4e64ff_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e64ff_v_i64m2x4_tu(vd, rs1, new_vl, vl); } -vuint64m1x4_t test_vlseg4e64ff_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x4_t test_vlseg4e64ff_v_u64m1x4_tu(vuint64m1x4_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg4e64ff_v_u64m1x4_tu(vd, rs1, new_vl, vl); } -vuint64m2x4_t test_vlseg4e64ff_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m2x4_t test_vlseg4e64ff_v_u64m2x4_tu(vuint64m2x4_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg4e64ff_v_u64m2x4_tu(vd, rs1, new_vl, vl); } -vfloat64m1x4_t test_vlseg4e64ff_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x4_t test_vlseg4e64ff_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg4e64ff_v_f64m1x4_tum(vm, vd, rs1, new_vl, vl); } 
-vfloat64m2x4_t test_vlseg4e64ff_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m2x4_t test_vlseg4e64ff_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd,
+    const double *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e64ff_v_f64m2x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vint64m1x4_t test_vlseg4e64ff_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x4_t test_vlseg4e64ff_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd,
+    const int64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e64ff_v_i64m1x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vint64m2x4_t test_vlseg4e64ff_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m2x4_t test_vlseg4e64ff_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd,
+    const int64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e64ff_v_i64m2x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint64m1x4_t test_vlseg4e64ff_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x4_t test_vlseg4e64ff_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd,
+    const uint64_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e64ff_v_u64m1x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint64m2x4_t test_vlseg4e64ff_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m2x4_t test_vlseg4e64ff_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd,
+    const uint64_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e64ff_v_u64m2x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vfloat64m1x4_t test_vlseg4e64ff_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m1x4_t test_vlseg4e64ff_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd,
+    const double *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e64ff_v_f64m1x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vfloat64m2x4_t test_vlseg4e64ff_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m2x4_t test_vlseg4e64ff_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd,
+    const double *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e64ff_v_f64m2x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint64m1x4_t test_vlseg4e64ff_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x4_t test_vlseg4e64ff_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd,
+    const int64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e64ff_v_i64m1x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint64m2x4_t test_vlseg4e64ff_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m2x4_t test_vlseg4e64ff_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd,
+    const int64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e64ff_v_i64m2x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint64m1x4_t test_vlseg4e64ff_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x4_t test_vlseg4e64ff_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd,
+    const uint64_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e64ff_v_u64m1x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint64m2x4_t test_vlseg4e64ff_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m2x4_t test_vlseg4e64ff_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd,
+    const uint64_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e64ff_v_u64m2x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vfloat64m1x4_t test_vlseg4e64ff_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m1x4_t test_vlseg4e64ff_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd,
+    const double *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e64ff_v_f64m1x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vfloat64m2x4_t test_vlseg4e64ff_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m2x4_t test_vlseg4e64ff_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd,
+    const double *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e64ff_v_f64m2x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vint64m1x4_t test_vlseg4e64ff_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x4_t test_vlseg4e64ff_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd,
+    const int64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e64ff_v_i64m1x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vint64m2x4_t test_vlseg4e64ff_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m2x4_t test_vlseg4e64ff_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd,
+    const int64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e64ff_v_i64m2x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint64m1x4_t test_vlseg4e64ff_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x4_t test_vlseg4e64ff_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd,
+    const uint64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e64ff_v_u64m1x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint64m2x4_t test_vlseg4e64ff_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m2x4_t test_vlseg4e64ff_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd,
+    const uint64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e64ff_v_u64m2x4_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8.c
index 30723cca6..9d61554c4 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8.c
@@ -5,162 +5,202 @@

 #include <riscv_vector.h>

-vint8mf8x4_t test_vlseg4e8_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x4_t test_vlseg4e8_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg4e8_v_i8mf8x4_tu(vd, rs1, vl);
 }

-vint8mf4x4_t test_vlseg4e8_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x4_t test_vlseg4e8_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg4e8_v_i8mf4x4_tu(vd, rs1, vl);
 }

-vint8mf2x4_t test_vlseg4e8_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x4_t test_vlseg4e8_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg4e8_v_i8mf2x4_tu(vd, rs1, vl);
 }

-vint8m1x4_t test_vlseg4e8_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x4_t test_vlseg4e8_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg4e8_v_i8m1x4_tu(vd, rs1, vl);
 }

-vint8m2x4_t test_vlseg4e8_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, size_t vl) {
+vint8m2x4_t test_vlseg4e8_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg4e8_v_i8m2x4_tu(vd, rs1, vl);
 }

-vuint8mf8x4_t test_vlseg4e8_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x4_t test_vlseg4e8_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg4e8_v_u8mf8x4_tu(vd, rs1, vl);
 }

-vuint8mf4x4_t test_vlseg4e8_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x4_t test_vlseg4e8_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg4e8_v_u8mf4x4_tu(vd, rs1, vl);
 }

-vuint8mf2x4_t test_vlseg4e8_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x4_t test_vlseg4e8_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg4e8_v_u8mf2x4_tu(vd, rs1, vl);
 }

-vuint8m1x4_t test_vlseg4e8_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x4_t test_vlseg4e8_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg4e8_v_u8m1x4_tu(vd, rs1, vl);
 }

-vuint8m2x4_t test_vlseg4e8_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m2x4_t test_vlseg4e8_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg4e8_v_u8m2x4_tu(vd, rs1, vl);
 }

-vint8mf8x4_t test_vlseg4e8_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x4_t test_vlseg4e8_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8mf8x4_tum(vm, vd, rs1, vl);
 }

-vint8mf4x4_t test_vlseg4e8_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x4_t test_vlseg4e8_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8mf4x4_tum(vm, vd, rs1, vl);
 }

-vint8mf2x4_t test_vlseg4e8_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x4_t test_vlseg4e8_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8mf2x4_tum(vm, vd, rs1, vl);
 }

-vint8m1x4_t test_vlseg4e8_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x4_t test_vlseg4e8_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8m1x4_tum(vm, vd, rs1, vl);
 }

-vint8m2x4_t test_vlseg4e8_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, size_t vl) {
+vint8m2x4_t test_vlseg4e8_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8m2x4_tum(vm, vd, rs1, vl);
 }

-vuint8mf8x4_t test_vlseg4e8_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x4_t test_vlseg4e8_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8mf8x4_tum(vm, vd, rs1, vl);
 }

-vuint8mf4x4_t test_vlseg4e8_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x4_t test_vlseg4e8_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8mf4x4_tum(vm, vd, rs1, vl);
 }

-vuint8mf2x4_t test_vlseg4e8_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x4_t test_vlseg4e8_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8mf2x4_tum(vm, vd, rs1, vl);
 }

-vuint8m1x4_t test_vlseg4e8_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x4_t test_vlseg4e8_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8m1x4_tum(vm, vd, rs1, vl);
 }

-vuint8m2x4_t test_vlseg4e8_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m2x4_t test_vlseg4e8_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8m2x4_tum(vm, vd, rs1, vl);
 }

-vint8mf8x4_t test_vlseg4e8_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x4_t test_vlseg4e8_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8mf8x4_tumu(vm, vd, rs1, vl);
 }

-vint8mf4x4_t test_vlseg4e8_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x4_t test_vlseg4e8_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8mf4x4_tumu(vm, vd, rs1, vl);
 }

-vint8mf2x4_t test_vlseg4e8_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x4_t test_vlseg4e8_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8mf2x4_tumu(vm, vd, rs1, vl);
 }

-vint8m1x4_t test_vlseg4e8_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x4_t test_vlseg4e8_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8m1x4_tumu(vm, vd, rs1, vl);
 }

-vint8m2x4_t test_vlseg4e8_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, size_t vl) {
+vint8m2x4_t test_vlseg4e8_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8m2x4_tumu(vm, vd, rs1, vl);
 }

-vuint8mf8x4_t test_vlseg4e8_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x4_t test_vlseg4e8_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8mf8x4_tumu(vm, vd, rs1, vl);
 }

-vuint8mf4x4_t test_vlseg4e8_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x4_t test_vlseg4e8_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8mf4x4_tumu(vm, vd, rs1, vl);
 }

-vuint8mf2x4_t test_vlseg4e8_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x4_t test_vlseg4e8_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8mf2x4_tumu(vm, vd, rs1, vl);
 }

-vuint8m1x4_t test_vlseg4e8_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x4_t test_vlseg4e8_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8m1x4_tumu(vm, vd, rs1, vl);
 }

-vuint8m2x4_t test_vlseg4e8_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m2x4_t test_vlseg4e8_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8m2x4_tumu(vm, vd, rs1, vl);
 }

-vint8mf8x4_t test_vlseg4e8_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x4_t test_vlseg4e8_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8mf8x4_mu(vm, vd, rs1, vl);
 }

-vint8mf4x4_t test_vlseg4e8_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x4_t test_vlseg4e8_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8mf4x4_mu(vm, vd, rs1, vl);
 }

-vint8mf2x4_t test_vlseg4e8_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x4_t test_vlseg4e8_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8mf2x4_mu(vm, vd, rs1, vl);
 }

-vint8m1x4_t test_vlseg4e8_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x4_t test_vlseg4e8_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8m1x4_mu(vm, vd, rs1, vl);
 }

-vint8m2x4_t test_vlseg4e8_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, size_t vl) {
+vint8m2x4_t test_vlseg4e8_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_i8m2x4_mu(vm, vd, rs1, vl);
 }

-vuint8mf8x4_t test_vlseg4e8_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x4_t test_vlseg4e8_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8mf8x4_mu(vm, vd, rs1, vl);
 }

-vuint8mf4x4_t test_vlseg4e8_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x4_t test_vlseg4e8_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8mf4x4_mu(vm, vd, rs1, vl);
 }

-vuint8mf2x4_t test_vlseg4e8_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x4_t test_vlseg4e8_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8mf2x4_mu(vm, vd, rs1, vl);
 }

-vuint8m1x4_t test_vlseg4e8_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x4_t test_vlseg4e8_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8m1x4_mu(vm, vd, rs1, vl);
 }

-vuint8m2x4_t test_vlseg4e8_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m2x4_t test_vlseg4e8_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg4e8_v_u8m2x4_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8ff.c
index d803d3f98..9adca2196 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8ff.c
@@ -6,162 +6,232 @@

 #include <riscv_vector.h>

-vint8mf8x4_t test_vlseg4e8ff_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x4_t test_vlseg4e8ff_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf8x4_tu(vd, rs1, new_vl, vl);
 }

-vint8mf4x4_t test_vlseg4e8ff_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x4_t test_vlseg4e8ff_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf4x4_tu(vd, rs1, new_vl, vl);
 }

-vint8mf2x4_t test_vlseg4e8ff_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x4_t test_vlseg4e8ff_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf2x4_tu(vd, rs1, new_vl, vl);
 }

-vint8m1x4_t test_vlseg4e8ff_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x4_t test_vlseg4e8ff_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e8ff_v_i8m1x4_tu(vd, rs1, new_vl, vl);
 }
-vint8m2x4_t test_vlseg4e8ff_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m2x4_t test_vlseg4e8ff_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e8ff_v_i8m2x4_tu(vd, rs1, new_vl, vl);
 }

-vuint8mf8x4_t test_vlseg4e8ff_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x4_t test_vlseg4e8ff_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf8x4_tu(vd, rs1, new_vl, vl);
 }

-vuint8mf4x4_t test_vlseg4e8ff_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x4_t test_vlseg4e8ff_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf4x4_tu(vd, rs1, new_vl, vl);
 }

-vuint8mf2x4_t test_vlseg4e8ff_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x4_t test_vlseg4e8ff_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf2x4_tu(vd, rs1, new_vl, vl);
 }

-vuint8m1x4_t test_vlseg4e8ff_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x4_t test_vlseg4e8ff_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e8ff_v_u8m1x4_tu(vd, rs1, new_vl, vl);
 }

-vuint8m2x4_t test_vlseg4e8ff_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m2x4_t test_vlseg4e8ff_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg4e8ff_v_u8m2x4_tu(vd, rs1, new_vl, vl);
 }

-vint8mf8x4_t test_vlseg4e8ff_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x4_t test_vlseg4e8ff_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf8x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vint8mf4x4_t test_vlseg4e8ff_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x4_t test_vlseg4e8ff_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf4x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vint8mf2x4_t test_vlseg4e8ff_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x4_t test_vlseg4e8ff_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf2x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vint8m1x4_t test_vlseg4e8ff_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x4_t test_vlseg4e8ff_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8m1x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vint8m2x4_t test_vlseg4e8ff_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m2x4_t test_vlseg4e8ff_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8m2x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf8x4_t test_vlseg4e8ff_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x4_t test_vlseg4e8ff_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf8x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf4x4_t test_vlseg4e8ff_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x4_t test_vlseg4e8ff_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf4x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf2x4_t test_vlseg4e8ff_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x4_t test_vlseg4e8ff_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf2x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint8m1x4_t test_vlseg4e8ff_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x4_t test_vlseg4e8ff_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8m1x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint8m2x4_t test_vlseg4e8ff_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m2x4_t test_vlseg4e8ff_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8m2x4_tum(vm, vd, rs1, new_vl, vl);
 }

-vint8mf8x4_t test_vlseg4e8ff_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x4_t test_vlseg4e8ff_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf8x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint8mf4x4_t test_vlseg4e8ff_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x4_t test_vlseg4e8ff_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf4x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint8mf2x4_t test_vlseg4e8ff_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x4_t test_vlseg4e8ff_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf2x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint8m1x4_t test_vlseg4e8ff_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x4_t test_vlseg4e8ff_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8m1x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint8m2x4_t test_vlseg4e8ff_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m2x4_t test_vlseg4e8ff_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8m2x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf8x4_t test_vlseg4e8ff_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x4_t test_vlseg4e8ff_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf8x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf4x4_t test_vlseg4e8ff_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x4_t test_vlseg4e8ff_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf4x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf2x4_t test_vlseg4e8ff_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x4_t test_vlseg4e8ff_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf2x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint8m1x4_t test_vlseg4e8ff_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x4_t test_vlseg4e8ff_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8m1x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint8m2x4_t test_vlseg4e8ff_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m2x4_t test_vlseg4e8ff_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8m2x4_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint8mf8x4_t test_vlseg4e8ff_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x4_t test_vlseg4e8ff_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf8x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vint8mf4x4_t test_vlseg4e8ff_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x4_t test_vlseg4e8ff_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf4x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vint8mf2x4_t test_vlseg4e8ff_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x4_t test_vlseg4e8ff_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8mf2x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vint8m1x4_t test_vlseg4e8ff_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x4_t test_vlseg4e8ff_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8m1x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vint8m2x4_t test_vlseg4e8ff_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m2x4_t test_vlseg4e8ff_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_i8m2x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf8x4_t test_vlseg4e8ff_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x4_t test_vlseg4e8ff_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf8x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf4x4_t test_vlseg4e8ff_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x4_t test_vlseg4e8ff_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf4x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf2x4_t test_vlseg4e8ff_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x4_t test_vlseg4e8ff_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8mf2x4_mu(vm, vd, rs1, new_vl, vl);
 }
-vuint8m1x4_t test_vlseg4e8ff_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x4_t test_vlseg4e8ff_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8m1x4_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint8m2x4_t test_vlseg4e8ff_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m2x4_t test_vlseg4e8ff_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg4e8ff_v_u8m2x4_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16.c
index e599e9716..0e69ae53a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16.c
@@ -6,146 +6,182 @@

 #include <riscv_vector.h>

-vfloat16mf4x5_t test_vlseg5e16_v_f16mf4x5_tu(vfloat16mf4x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x5_t test_vlseg5e16_v_f16mf4x5_tu(vfloat16mf4x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16mf4x5_tu(vd, rs1, vl);
 }

-vfloat16mf2x5_t test_vlseg5e16_v_f16mf2x5_tu(vfloat16mf2x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x5_t test_vlseg5e16_v_f16mf2x5_tu(vfloat16mf2x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16mf2x5_tu(vd, rs1, vl);
 }

-vfloat16m1x5_t test_vlseg5e16_v_f16m1x5_tu(vfloat16m1x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x5_t test_vlseg5e16_v_f16m1x5_tu(vfloat16m1x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16m1x5_tu(vd, rs1, vl);
 }

-vint16mf4x5_t test_vlseg5e16_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x5_t test_vlseg5e16_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1,
+    size_t vl) {
   return __riscv_vlseg5e16_v_i16mf4x5_tu(vd, rs1, vl);
 }

-vint16mf2x5_t test_vlseg5e16_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x5_t test_vlseg5e16_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1,
+    size_t vl) {
   return __riscv_vlseg5e16_v_i16mf2x5_tu(vd, rs1, vl);
 }

-vint16m1x5_t test_vlseg5e16_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x5_t test_vlseg5e16_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1,
+    size_t vl) {
   return __riscv_vlseg5e16_v_i16m1x5_tu(vd, rs1, vl);
 }

-vuint16mf4x5_t test_vlseg5e16_v_u16mf4x5_tu(vuint16mf4x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x5_t test_vlseg5e16_v_u16mf4x5_tu(vuint16mf4x5_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_u16mf4x5_tu(vd, rs1, vl);
 }

-vuint16mf2x5_t test_vlseg5e16_v_u16mf2x5_tu(vuint16mf2x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x5_t test_vlseg5e16_v_u16mf2x5_tu(vuint16mf2x5_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_u16mf2x5_tu(vd, rs1, vl);
 }

-vuint16m1x5_t test_vlseg5e16_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x5_t test_vlseg5e16_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1,
+    size_t vl) {
   return __riscv_vlseg5e16_v_u16m1x5_tu(vd, rs1, vl);
 }

-vfloat16mf4x5_t test_vlseg5e16_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x5_t test_vlseg5e16_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16mf4x5_tum(vm, vd, rs1, vl);
 }

-vfloat16mf2x5_t test_vlseg5e16_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x5_t test_vlseg5e16_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16mf2x5_tum(vm, vd, rs1, vl);
 }

-vfloat16m1x5_t test_vlseg5e16_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x5_t test_vlseg5e16_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16m1x5_tum(vm, vd, rs1, vl);
 }

-vint16mf4x5_t test_vlseg5e16_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x5_t test_vlseg5e16_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_i16mf4x5_tum(vm, vd, rs1, vl);
 }

-vint16mf2x5_t test_vlseg5e16_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x5_t test_vlseg5e16_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_i16mf2x5_tum(vm, vd, rs1, vl);
 }

-vint16m1x5_t test_vlseg5e16_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x5_t test_vlseg5e16_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_i16m1x5_tum(vm, vd, rs1, vl);
 }

-vuint16mf4x5_t test_vlseg5e16_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x5_t test_vlseg5e16_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_u16mf4x5_tum(vm, vd, rs1, vl);
 }

-vuint16mf2x5_t test_vlseg5e16_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x5_t test_vlseg5e16_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_u16mf2x5_tum(vm, vd, rs1, vl);
 }

-vuint16m1x5_t test_vlseg5e16_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x5_t test_vlseg5e16_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_u16m1x5_tum(vm, vd, rs1, vl);
 }

-vfloat16mf4x5_t test_vlseg5e16_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x5_t test_vlseg5e16_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16mf4x5_tumu(vm, vd, rs1, vl);
 }

-vfloat16mf2x5_t test_vlseg5e16_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x5_t test_vlseg5e16_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16mf2x5_tumu(vm, vd, rs1, vl);
 }

-vfloat16m1x5_t test_vlseg5e16_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x5_t test_vlseg5e16_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16m1x5_tumu(vm, vd, rs1, vl);
 }

-vint16mf4x5_t test_vlseg5e16_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x5_t test_vlseg5e16_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_i16mf4x5_tumu(vm, vd, rs1, vl);
 }

-vint16mf2x5_t test_vlseg5e16_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x5_t test_vlseg5e16_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_i16mf2x5_tumu(vm, vd, rs1, vl);
 }

-vint16m1x5_t test_vlseg5e16_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x5_t test_vlseg5e16_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_i16m1x5_tumu(vm, vd, rs1, vl);
 }

-vuint16mf4x5_t test_vlseg5e16_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x5_t test_vlseg5e16_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_u16mf4x5_tumu(vm, vd, rs1, vl);
 }

-vuint16mf2x5_t test_vlseg5e16_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x5_t test_vlseg5e16_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_u16mf2x5_tumu(vm, vd, rs1, vl);
 }

-vuint16m1x5_t test_vlseg5e16_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x5_t test_vlseg5e16_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_u16m1x5_tumu(vm, vd, rs1, vl);
 }

-vfloat16mf4x5_t test_vlseg5e16_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x5_t test_vlseg5e16_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16mf4x5_mu(vm, vd, rs1, vl);
 }

-vfloat16mf2x5_t test_vlseg5e16_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x5_t test_vlseg5e16_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16mf2x5_mu(vm, vd, rs1, vl);
 }

-vfloat16m1x5_t test_vlseg5e16_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x5_t test_vlseg5e16_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_f16m1x5_mu(vm, vd, rs1, vl);
 }

-vint16mf4x5_t test_vlseg5e16_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x5_t test_vlseg5e16_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_i16mf4x5_mu(vm, vd, rs1, vl);
 }

-vint16mf2x5_t test_vlseg5e16_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x5_t test_vlseg5e16_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_i16mf2x5_mu(vm, vd, rs1, vl);
 }

-vint16m1x5_t test_vlseg5e16_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x5_t test_vlseg5e16_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_i16m1x5_mu(vm, vd, rs1, vl);
 }

-vuint16mf4x5_t test_vlseg5e16_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x5_t test_vlseg5e16_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_u16mf4x5_mu(vm, vd, rs1, vl);
 }

-vuint16mf2x5_t test_vlseg5e16_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x5_t test_vlseg5e16_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_u16mf2x5_mu(vm, vd, rs1, vl);
 }

-vuint16m1x5_t test_vlseg5e16_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x5_t test_vlseg5e16_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg5e16_v_u16m1x5_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16ff.c
index 139cd019c..8ec1c2ebd 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16ff.c
@@ -6,146 +6,221 @@

 #include <riscv_vector.h>

-vfloat16mf4x5_t test_vlseg5e16ff_v_f16mf4x5_tu(vfloat16mf4x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x5_t test_vlseg5e16ff_v_f16mf4x5_tu(vfloat16mf4x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16mf4x5_tu(vd, rs1, new_vl, vl);
 }

-vfloat16mf2x5_t test_vlseg5e16ff_v_f16mf2x5_tu(vfloat16mf2x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x5_t test_vlseg5e16ff_v_f16mf2x5_tu(vfloat16mf2x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16mf2x5_tu(vd, rs1, new_vl, vl);
 }

-vfloat16m1x5_t test_vlseg5e16ff_v_f16m1x5_tu(vfloat16m1x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x5_t test_vlseg5e16ff_v_f16m1x5_tu(vfloat16m1x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16m1x5_tu(vd, rs1, new_vl, vl);
 }

-vint16mf4x5_t test_vlseg5e16ff_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x5_t test_vlseg5e16ff_v_i16mf4x5_tu(vint16mf4x5_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg5e16ff_v_i16mf4x5_tu(vd, rs1, new_vl, vl);
 }

-vint16mf2x5_t test_vlseg5e16ff_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x5_t test_vlseg5e16ff_v_i16mf2x5_tu(vint16mf2x5_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg5e16ff_v_i16mf2x5_tu(vd, rs1, new_vl, vl);
 }

-vint16m1x5_t test_vlseg5e16ff_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x5_t test_vlseg5e16ff_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_i16m1x5_tu(vd, rs1, new_vl, vl);
 }

-vuint16mf4x5_t test_vlseg5e16ff_v_u16mf4x5_tu(vuint16mf4x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x5_t test_vlseg5e16ff_v_u16mf4x5_tu(vuint16mf4x5_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_u16mf4x5_tu(vd, rs1, new_vl, vl);
 }

-vuint16mf2x5_t test_vlseg5e16ff_v_u16mf2x5_tu(vuint16mf2x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x5_t test_vlseg5e16ff_v_u16mf2x5_tu(vuint16mf2x5_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_u16mf2x5_tu(vd, rs1, new_vl, vl);
 }

-vuint16m1x5_t test_vlseg5e16ff_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x5_t test_vlseg5e16ff_v_u16m1x5_tu(vuint16m1x5_t vd,
+    const uint16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg5e16ff_v_u16m1x5_tu(vd, rs1, new_vl, vl);
 }

-vfloat16mf4x5_t test_vlseg5e16ff_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x5_t test_vlseg5e16ff_v_f16mf4x5_tum(vbool64_t vm,
+    vfloat16mf4x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16mf4x5_tum(vm, vd, rs1, new_vl, vl);
 }

-vfloat16mf2x5_t test_vlseg5e16ff_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x5_t test_vlseg5e16ff_v_f16mf2x5_tum(vbool32_t vm,
+    vfloat16mf2x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16mf2x5_tum(vm, vd, rs1, new_vl, vl);
 }

-vfloat16m1x5_t test_vlseg5e16ff_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x5_t test_vlseg5e16ff_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16m1x5_tum(vm, vd, rs1, new_vl, vl);
 }

-vint16mf4x5_t test_vlseg5e16ff_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x5_t test_vlseg5e16ff_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd,
+    const int16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_i16mf4x5_tum(vm, vd, rs1, new_vl, vl);
 }

-vint16mf2x5_t test_vlseg5e16ff_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x5_t test_vlseg5e16ff_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd,
+    const int16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_i16mf2x5_tum(vm, vd, rs1, new_vl, vl);
 }

-vint16m1x5_t test_vlseg5e16ff_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x5_t test_vlseg5e16ff_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg5e16ff_v_i16m1x5_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint16mf4x5_t test_vlseg5e16ff_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x5_t test_vlseg5e16ff_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_u16mf4x5_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint16mf2x5_t test_vlseg5e16ff_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x5_t test_vlseg5e16ff_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_u16mf2x5_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint16m1x5_t test_vlseg5e16ff_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x5_t test_vlseg5e16ff_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_u16m1x5_tum(vm, vd, rs1, new_vl, vl);
 }

-vfloat16mf4x5_t test_vlseg5e16ff_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x5_t test_vlseg5e16ff_v_f16mf4x5_tumu(vbool64_t vm,
+    vfloat16mf4x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16mf4x5_tumu(vm, vd, rs1, new_vl, vl);
 }

-vfloat16mf2x5_t test_vlseg5e16ff_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x5_t test_vlseg5e16ff_v_f16mf2x5_tumu(vbool32_t vm,
+    vfloat16mf2x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16mf2x5_tumu(vm, vd, rs1, new_vl, vl);
 }

-vfloat16m1x5_t test_vlseg5e16ff_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x5_t test_vlseg5e16ff_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16m1x5_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint16mf4x5_t test_vlseg5e16ff_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x5_t test_vlseg5e16ff_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd,
+    const int16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_i16mf4x5_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint16mf2x5_t test_vlseg5e16ff_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x5_t test_vlseg5e16ff_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd,
+    const int16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_i16mf2x5_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint16m1x5_t test_vlseg5e16ff_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x5_t test_vlseg5e16ff_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg5e16ff_v_i16m1x5_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint16mf4x5_t test_vlseg5e16ff_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x5_t test_vlseg5e16ff_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_u16mf4x5_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint16mf2x5_t test_vlseg5e16ff_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x5_t test_vlseg5e16ff_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_u16mf2x5_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint16m1x5_t test_vlseg5e16ff_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x5_t test_vlseg5e16ff_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_u16m1x5_tumu(vm, vd, rs1, new_vl, vl);
 }

-vfloat16mf4x5_t test_vlseg5e16ff_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x5_t test_vlseg5e16ff_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16mf4x5_mu(vm, vd, rs1, new_vl, vl);
 }

-vfloat16mf2x5_t test_vlseg5e16ff_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x5_t test_vlseg5e16ff_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16mf2x5_mu(vm, vd, rs1, new_vl, vl);
 }

-vfloat16m1x5_t test_vlseg5e16ff_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x5_t test_vlseg5e16ff_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_f16m1x5_mu(vm, vd, rs1, new_vl, vl);
 }

-vint16mf4x5_t test_vlseg5e16ff_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x5_t test_vlseg5e16ff_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg5e16ff_v_i16mf4x5_mu(vm, vd, rs1, new_vl, vl);
 }

-vint16mf2x5_t test_vlseg5e16ff_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x5_t test_vlseg5e16ff_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg5e16ff_v_i16mf2x5_mu(vm, vd, rs1, new_vl, vl);
 }

-vint16m1x5_t test_vlseg5e16ff_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x5_t test_vlseg5e16ff_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg5e16ff_v_i16m1x5_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint16mf4x5_t test_vlseg5e16ff_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x5_t test_vlseg5e16ff_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_u16mf4x5_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint16mf2x5_t test_vlseg5e16ff_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x5_t test_vlseg5e16ff_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg5e16ff_v_u16mf2x5_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint16m1x5_t test_vlseg5e16ff_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x5_t test_vlseg5e16ff_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd,
+    const uint16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg5e16ff_v_u16m1x5_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32.c
index 30bd69b28..c9285533e 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32.c
@@ -6,98 +6,122 @@

 #include <riscv_vector.h>

-vfloat32mf2x5_t test_vlseg5e32_v_f32mf2x5_tu(vfloat32mf2x5_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x5_t test_vlseg5e32_v_f32mf2x5_tu(vfloat32mf2x5_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_f32mf2x5_tu(vd, rs1, vl);
 }

-vfloat32m1x5_t test_vlseg5e32_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, size_t vl) {
+vfloat32m1x5_t test_vlseg5e32_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1,
+    size_t vl) {
   return __riscv_vlseg5e32_v_f32m1x5_tu(vd, rs1, vl);
 }

-vint32mf2x5_t test_vlseg5e32_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x5_t test_vlseg5e32_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1,
+    size_t vl) {
   return __riscv_vlseg5e32_v_i32mf2x5_tu(vd, rs1, vl);
 }

-vint32m1x5_t test_vlseg5e32_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x5_t test_vlseg5e32_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1,
+    size_t vl) {
   return __riscv_vlseg5e32_v_i32m1x5_tu(vd, rs1, vl);
 }

-vuint32mf2x5_t test_vlseg5e32_v_u32mf2x5_tu(vuint32mf2x5_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x5_t test_vlseg5e32_v_u32mf2x5_tu(vuint32mf2x5_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_u32mf2x5_tu(vd, rs1, vl);
 }

-vuint32m1x5_t test_vlseg5e32_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x5_t test_vlseg5e32_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1,
+    size_t vl) {
   return __riscv_vlseg5e32_v_u32m1x5_tu(vd, rs1, vl);
 }

-vfloat32mf2x5_t test_vlseg5e32_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x5_t test_vlseg5e32_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_f32mf2x5_tum(vm, vd, rs1, vl);
 }

-vfloat32m1x5_t test_vlseg5e32_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, size_t vl) {
+vfloat32m1x5_t test_vlseg5e32_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_f32m1x5_tum(vm, vd, rs1, vl);
 }

-vint32mf2x5_t test_vlseg5e32_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x5_t test_vlseg5e32_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd,
+    const int32_t *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_i32mf2x5_tum(vm, vd, rs1, vl);
 }

-vint32m1x5_t test_vlseg5e32_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x5_t test_vlseg5e32_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd,
+    const int32_t *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_i32m1x5_tum(vm, vd, rs1, vl);
 }

-vuint32mf2x5_t test_vlseg5e32_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x5_t test_vlseg5e32_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_u32mf2x5_tum(vm, vd, rs1, vl);
 }

-vuint32m1x5_t test_vlseg5e32_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x5_t test_vlseg5e32_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_u32m1x5_tum(vm, vd, rs1, vl);
 }

-vfloat32mf2x5_t test_vlseg5e32_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x5_t test_vlseg5e32_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_f32mf2x5_tumu(vm, vd, rs1, vl);
 }

-vfloat32m1x5_t test_vlseg5e32_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, size_t vl) {
+vfloat32m1x5_t test_vlseg5e32_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_f32m1x5_tumu(vm, vd, rs1, vl);
 }

-vint32mf2x5_t test_vlseg5e32_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x5_t test_vlseg5e32_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd,
+    const int32_t *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_i32mf2x5_tumu(vm, vd, rs1, vl);
 }

-vint32m1x5_t test_vlseg5e32_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x5_t test_vlseg5e32_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd,
+    const int32_t *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_i32m1x5_tumu(vm, vd, rs1, vl);
 }

-vuint32mf2x5_t test_vlseg5e32_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x5_t test_vlseg5e32_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_u32mf2x5_tumu(vm, vd, rs1, vl);
 }

-vuint32m1x5_t test_vlseg5e32_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x5_t test_vlseg5e32_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_u32m1x5_tumu(vm, vd, rs1, vl);
 }

-vfloat32mf2x5_t test_vlseg5e32_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x5_t test_vlseg5e32_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg5e32_v_f32mf2x5_mu(vm, vd, rs1, vl);
 }

-vfloat32m1x5_t test_vlseg5e32_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t 
vd, const float *rs1, size_t vl) { +vfloat32m1x5_t test_vlseg5e32_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg5e32_v_f32m1x5_mu(vm, vd, rs1, vl); } -vint32mf2x5_t test_vlseg5e32_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x5_t test_vlseg5e32_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg5e32_v_i32mf2x5_mu(vm, vd, rs1, vl); } -vint32m1x5_t test_vlseg5e32_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, size_t vl) { +vint32m1x5_t test_vlseg5e32_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg5e32_v_i32m1x5_mu(vm, vd, rs1, vl); } -vuint32mf2x5_t test_vlseg5e32_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x5_t test_vlseg5e32_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg5e32_v_u32mf2x5_mu(vm, vd, rs1, vl); } -vuint32m1x5_t test_vlseg5e32_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x5_t test_vlseg5e32_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg5e32_v_u32m1x5_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32ff.c index 6bc392ac7..64a0ffe92 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32ff.c @@ -6,98 +6,147 @@ #include -vfloat32mf2x5_t test_vlseg5e32ff_v_f32mf2x5_tu(vfloat32mf2x5_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x5_t test_vlseg5e32ff_v_f32mf2x5_tu(vfloat32mf2x5_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_f32mf2x5_tu(vd, rs1, new_vl, vl); } -vfloat32m1x5_t test_vlseg5e32ff_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x5_t test_vlseg5e32ff_v_f32m1x5_tu(vfloat32m1x5_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_f32m1x5_tu(vd, rs1, new_vl, vl); } -vint32mf2x5_t test_vlseg5e32ff_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x5_t test_vlseg5e32ff_v_i32mf2x5_tu(vint32mf2x5_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_i32mf2x5_tu(vd, rs1, new_vl, vl); } -vint32m1x5_t test_vlseg5e32ff_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x5_t test_vlseg5e32ff_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e32ff_v_i32m1x5_tu(vd, rs1, new_vl, vl); } -vuint32mf2x5_t test_vlseg5e32ff_v_u32mf2x5_tu(vuint32mf2x5_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x5_t test_vlseg5e32ff_v_u32mf2x5_tu(vuint32mf2x5_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e32ff_v_u32mf2x5_tu(vd, rs1, new_vl, vl); } -vuint32m1x5_t test_vlseg5e32ff_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x5_t test_vlseg5e32ff_v_u32m1x5_tu(vuint32m1x5_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_u32m1x5_tu(vd, rs1, new_vl, vl); } -vfloat32mf2x5_t test_vlseg5e32ff_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x5_t 
test_vlseg5e32ff_v_f32mf2x5_tum(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e32ff_v_f32mf2x5_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m1x5_t test_vlseg5e32ff_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x5_t test_vlseg5e32ff_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_f32m1x5_tum(vm, vd, rs1, new_vl, vl); } -vint32mf2x5_t test_vlseg5e32ff_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x5_t test_vlseg5e32ff_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e32ff_v_i32mf2x5_tum(vm, vd, rs1, new_vl, vl); } -vint32m1x5_t test_vlseg5e32ff_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x5_t test_vlseg5e32ff_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_i32m1x5_tum(vm, vd, rs1, new_vl, vl); } -vuint32mf2x5_t test_vlseg5e32ff_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x5_t test_vlseg5e32ff_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e32ff_v_u32mf2x5_tum(vm, vd, rs1, new_vl, vl); } -vuint32m1x5_t test_vlseg5e32ff_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x5_t test_vlseg5e32ff_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e32ff_v_u32m1x5_tum(vm, vd, rs1, new_vl, vl); } -vfloat32mf2x5_t test_vlseg5e32ff_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x5_t test_vlseg5e32ff_v_f32mf2x5_tumu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e32ff_v_f32mf2x5_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m1x5_t test_vlseg5e32ff_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x5_t test_vlseg5e32ff_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_f32m1x5_tumu(vm, vd, rs1, new_vl, vl); } -vint32mf2x5_t test_vlseg5e32ff_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x5_t test_vlseg5e32ff_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e32ff_v_i32mf2x5_tumu(vm, vd, rs1, new_vl, vl); } -vint32m1x5_t test_vlseg5e32ff_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x5_t test_vlseg5e32ff_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_i32m1x5_tumu(vm, vd, rs1, new_vl, vl); } -vuint32mf2x5_t test_vlseg5e32ff_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x5_t test_vlseg5e32ff_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e32ff_v_u32mf2x5_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m1x5_t 
test_vlseg5e32ff_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x5_t test_vlseg5e32ff_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e32ff_v_u32m1x5_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32mf2x5_t test_vlseg5e32ff_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x5_t test_vlseg5e32ff_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_f32mf2x5_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m1x5_t test_vlseg5e32ff_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x5_t test_vlseg5e32ff_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_f32m1x5_mu(vm, vd, rs1, new_vl, vl); } -vint32mf2x5_t test_vlseg5e32ff_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x5_t test_vlseg5e32ff_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_i32mf2x5_mu(vm, vd, rs1, new_vl, vl); } -vint32m1x5_t test_vlseg5e32ff_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x5_t test_vlseg5e32ff_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_i32m1x5_mu(vm, vd, rs1, new_vl, vl); } -vuint32mf2x5_t test_vlseg5e32ff_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x5_t test_vlseg5e32ff_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e32ff_v_u32mf2x5_mu(vm, vd, rs1, new_vl, vl); } -vuint32m1x5_t test_vlseg5e32ff_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x5_t test_vlseg5e32ff_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e32ff_v_u32m1x5_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64.c index 3b47cf95d..d73abc8c7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64.c @@ -6,50 +6,62 @@ #include -vfloat64m1x5_t test_vlseg5e64_v_f64m1x5_tu(vfloat64m1x5_t vd, const double *rs1, size_t vl) { +vfloat64m1x5_t test_vlseg5e64_v_f64m1x5_tu(vfloat64m1x5_t vd, const double *rs1, + size_t vl) { return __riscv_vlseg5e64_v_f64m1x5_tu(vd, rs1, vl); } -vint64m1x5_t test_vlseg5e64_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, size_t vl) { +vint64m1x5_t test_vlseg5e64_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, + size_t vl) { return __riscv_vlseg5e64_v_i64m1x5_tu(vd, rs1, vl); } -vuint64m1x5_t test_vlseg5e64_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x5_t test_vlseg5e64_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, + size_t vl) { return __riscv_vlseg5e64_v_u64m1x5_tu(vd, rs1, vl); } -vfloat64m1x5_t test_vlseg5e64_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, size_t vl) { +vfloat64m1x5_t test_vlseg5e64_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, size_t vl) { return 
__riscv_vlseg5e64_v_f64m1x5_tum(vm, vd, rs1, vl); } -vint64m1x5_t test_vlseg5e64_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, size_t vl) { +vint64m1x5_t test_vlseg5e64_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg5e64_v_i64m1x5_tum(vm, vd, rs1, vl); } -vuint64m1x5_t test_vlseg5e64_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x5_t test_vlseg5e64_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg5e64_v_u64m1x5_tum(vm, vd, rs1, vl); } -vfloat64m1x5_t test_vlseg5e64_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, size_t vl) { +vfloat64m1x5_t test_vlseg5e64_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg5e64_v_f64m1x5_tumu(vm, vd, rs1, vl); } -vint64m1x5_t test_vlseg5e64_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, size_t vl) { +vint64m1x5_t test_vlseg5e64_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg5e64_v_i64m1x5_tumu(vm, vd, rs1, vl); } -vuint64m1x5_t test_vlseg5e64_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x5_t test_vlseg5e64_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg5e64_v_u64m1x5_tumu(vm, vd, rs1, vl); } -vfloat64m1x5_t test_vlseg5e64_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, size_t vl) { +vfloat64m1x5_t test_vlseg5e64_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg5e64_v_f64m1x5_mu(vm, vd, rs1, vl); } -vint64m1x5_t test_vlseg5e64_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, size_t vl) { +vint64m1x5_t test_vlseg5e64_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg5e64_v_i64m1x5_mu(vm, vd, rs1, vl); } -vuint64m1x5_t test_vlseg5e64_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x5_t test_vlseg5e64_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg5e64_v_u64m1x5_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64ff.c index ffe2a4385..1f0463160 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64ff.c @@ -6,50 +6,73 @@ #include -vfloat64m1x5_t test_vlseg5e64ff_v_f64m1x5_tu(vfloat64m1x5_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x5_t test_vlseg5e64ff_v_f64m1x5_tu(vfloat64m1x5_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e64ff_v_f64m1x5_tu(vd, rs1, new_vl, vl); } -vint64m1x5_t test_vlseg5e64ff_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x5_t test_vlseg5e64ff_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e64ff_v_i64m1x5_tu(vd, rs1, new_vl, vl); } -vuint64m1x5_t test_vlseg5e64ff_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x5_t test_vlseg5e64ff_v_u64m1x5_tu(vuint64m1x5_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e64ff_v_u64m1x5_tu(vd, rs1, new_vl, vl); } -vfloat64m1x5_t test_vlseg5e64ff_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, 
const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x5_t test_vlseg5e64ff_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e64ff_v_f64m1x5_tum(vm, vd, rs1, new_vl, vl); } -vint64m1x5_t test_vlseg5e64ff_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x5_t test_vlseg5e64ff_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e64ff_v_i64m1x5_tum(vm, vd, rs1, new_vl, vl); } -vuint64m1x5_t test_vlseg5e64ff_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x5_t test_vlseg5e64ff_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e64ff_v_u64m1x5_tum(vm, vd, rs1, new_vl, vl); } -vfloat64m1x5_t test_vlseg5e64ff_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x5_t test_vlseg5e64ff_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e64ff_v_f64m1x5_tumu(vm, vd, rs1, new_vl, vl); } -vint64m1x5_t test_vlseg5e64ff_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x5_t test_vlseg5e64ff_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e64ff_v_i64m1x5_tumu(vm, vd, rs1, new_vl, vl); } -vuint64m1x5_t test_vlseg5e64ff_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x5_t test_vlseg5e64ff_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e64ff_v_u64m1x5_tumu(vm, vd, rs1, new_vl, vl); } -vfloat64m1x5_t test_vlseg5e64ff_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x5_t test_vlseg5e64ff_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e64ff_v_f64m1x5_mu(vm, vd, rs1, new_vl, vl); } -vint64m1x5_t test_vlseg5e64ff_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x5_t test_vlseg5e64ff_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e64ff_v_i64m1x5_mu(vm, vd, rs1, new_vl, vl); } -vuint64m1x5_t test_vlseg5e64ff_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x5_t test_vlseg5e64ff_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e64ff_v_u64m1x5_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8.c index df5401ab9..c1dd72f72 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8.c @@ -5,130 +5,162 @@ #include -vint8mf8x5_t test_vlseg5e8_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x5_t test_vlseg5e8_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vlseg5e8_v_i8mf8x5_tu(vd, rs1, vl); } -vint8mf4x5_t test_vlseg5e8_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x5_t test_vlseg5e8_v_i8mf4x5_tu(vint8mf4x5_t vd, 
const int8_t *rs1, + size_t vl) { return __riscv_vlseg5e8_v_i8mf4x5_tu(vd, rs1, vl); } -vint8mf2x5_t test_vlseg5e8_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x5_t test_vlseg5e8_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vlseg5e8_v_i8mf2x5_tu(vd, rs1, vl); } -vint8m1x5_t test_vlseg5e8_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, size_t vl) { +vint8m1x5_t test_vlseg5e8_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vlseg5e8_v_i8m1x5_tu(vd, rs1, vl); } -vuint8mf8x5_t test_vlseg5e8_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x5_t test_vlseg5e8_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg5e8_v_u8mf8x5_tu(vd, rs1, vl); } -vuint8mf4x5_t test_vlseg5e8_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x5_t test_vlseg5e8_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg5e8_v_u8mf4x5_tu(vd, rs1, vl); } -vuint8mf2x5_t test_vlseg5e8_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x5_t test_vlseg5e8_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg5e8_v_u8mf2x5_tu(vd, rs1, vl); } -vuint8m1x5_t test_vlseg5e8_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x5_t test_vlseg5e8_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg5e8_v_u8m1x5_tu(vd, rs1, vl); } -vint8mf8x5_t test_vlseg5e8_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x5_t test_vlseg5e8_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8mf8x5_tum(vm, vd, rs1, vl); } -vint8mf4x5_t test_vlseg5e8_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x5_t test_vlseg5e8_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8mf4x5_tum(vm, vd, rs1, vl); } -vint8mf2x5_t test_vlseg5e8_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x5_t test_vlseg5e8_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8mf2x5_tum(vm, vd, rs1, vl); } -vint8m1x5_t test_vlseg5e8_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, size_t vl) { +vint8m1x5_t test_vlseg5e8_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8m1x5_tum(vm, vd, rs1, vl); } -vuint8mf8x5_t test_vlseg5e8_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x5_t test_vlseg5e8_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8mf8x5_tum(vm, vd, rs1, vl); } -vuint8mf4x5_t test_vlseg5e8_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x5_t test_vlseg5e8_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8mf4x5_tum(vm, vd, rs1, vl); } -vuint8mf2x5_t test_vlseg5e8_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x5_t test_vlseg5e8_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8mf2x5_tum(vm, vd, rs1, vl); } -vuint8m1x5_t test_vlseg5e8_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x5_t 
test_vlseg5e8_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8m1x5_tum(vm, vd, rs1, vl); } -vint8mf8x5_t test_vlseg5e8_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x5_t test_vlseg5e8_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8mf8x5_tumu(vm, vd, rs1, vl); } -vint8mf4x5_t test_vlseg5e8_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x5_t test_vlseg5e8_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8mf4x5_tumu(vm, vd, rs1, vl); } -vint8mf2x5_t test_vlseg5e8_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x5_t test_vlseg5e8_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8mf2x5_tumu(vm, vd, rs1, vl); } -vint8m1x5_t test_vlseg5e8_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, size_t vl) { +vint8m1x5_t test_vlseg5e8_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8m1x5_tumu(vm, vd, rs1, vl); } -vuint8mf8x5_t test_vlseg5e8_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x5_t test_vlseg5e8_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8mf8x5_tumu(vm, vd, rs1, vl); } -vuint8mf4x5_t test_vlseg5e8_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x5_t test_vlseg5e8_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8mf4x5_tumu(vm, vd, rs1, vl); } -vuint8mf2x5_t test_vlseg5e8_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x5_t test_vlseg5e8_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8mf2x5_tumu(vm, vd, rs1, vl); } -vuint8m1x5_t test_vlseg5e8_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x5_t test_vlseg5e8_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8m1x5_tumu(vm, vd, rs1, vl); } -vint8mf8x5_t test_vlseg5e8_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x5_t test_vlseg5e8_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8mf8x5_mu(vm, vd, rs1, vl); } -vint8mf4x5_t test_vlseg5e8_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x5_t test_vlseg5e8_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8mf4x5_mu(vm, vd, rs1, vl); } -vint8mf2x5_t test_vlseg5e8_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x5_t test_vlseg5e8_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8mf2x5_mu(vm, vd, rs1, vl); } -vint8m1x5_t test_vlseg5e8_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, size_t vl) { +vint8m1x5_t test_vlseg5e8_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_i8m1x5_mu(vm, vd, rs1, vl); } -vuint8mf8x5_t test_vlseg5e8_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x5_t 
test_vlseg5e8_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8mf8x5_mu(vm, vd, rs1, vl); } -vuint8mf4x5_t test_vlseg5e8_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x5_t test_vlseg5e8_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8mf4x5_mu(vm, vd, rs1, vl); } -vuint8mf2x5_t test_vlseg5e8_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x5_t test_vlseg5e8_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8mf2x5_mu(vm, vd, rs1, vl); } -vuint8m1x5_t test_vlseg5e8_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x5_t test_vlseg5e8_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg5e8_v_u8m1x5_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8ff.c index 82795a14b..907ed8482 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8ff.c @@ -6,130 +6,186 @@ #include -vint8mf8x5_t test_vlseg5e8ff_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x5_t test_vlseg5e8ff_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e8ff_v_i8mf8x5_tu(vd, rs1, new_vl, vl); } -vint8mf4x5_t test_vlseg5e8ff_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x5_t test_vlseg5e8ff_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e8ff_v_i8mf4x5_tu(vd, rs1, new_vl, vl); } -vint8mf2x5_t test_vlseg5e8ff_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x5_t test_vlseg5e8ff_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e8ff_v_i8mf2x5_tu(vd, rs1, new_vl, vl); } -vint8m1x5_t test_vlseg5e8ff_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x5_t test_vlseg5e8ff_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e8ff_v_i8m1x5_tu(vd, rs1, new_vl, vl); } -vuint8mf8x5_t test_vlseg5e8ff_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x5_t test_vlseg5e8ff_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e8ff_v_u8mf8x5_tu(vd, rs1, new_vl, vl); } -vuint8mf4x5_t test_vlseg5e8ff_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x5_t test_vlseg5e8ff_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e8ff_v_u8mf4x5_tu(vd, rs1, new_vl, vl); } -vuint8mf2x5_t test_vlseg5e8ff_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2x5_t test_vlseg5e8ff_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e8ff_v_u8mf2x5_tu(vd, rs1, new_vl, vl); } -vuint8m1x5_t test_vlseg5e8ff_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x5_t test_vlseg5e8ff_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e8ff_v_u8m1x5_tu(vd, rs1, new_vl, vl); } -vint8mf8x5_t 
test_vlseg5e8ff_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x5_t test_vlseg5e8ff_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8mf8x5_tum(vm, vd, rs1, new_vl, vl); } -vint8mf4x5_t test_vlseg5e8ff_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x5_t test_vlseg5e8ff_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8mf4x5_tum(vm, vd, rs1, new_vl, vl); } -vint8mf2x5_t test_vlseg5e8ff_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x5_t test_vlseg5e8ff_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8mf2x5_tum(vm, vd, rs1, new_vl, vl); } -vint8m1x5_t test_vlseg5e8ff_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x5_t test_vlseg5e8ff_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8m1x5_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf8x5_t test_vlseg5e8ff_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x5_t test_vlseg5e8ff_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8mf8x5_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf4x5_t test_vlseg5e8ff_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x5_t test_vlseg5e8ff_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8mf4x5_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf2x5_t test_vlseg5e8ff_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2x5_t test_vlseg5e8ff_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8mf2x5_tum(vm, vd, rs1, new_vl, vl); } -vuint8m1x5_t test_vlseg5e8ff_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x5_t test_vlseg5e8ff_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8m1x5_tum(vm, vd, rs1, new_vl, vl); } -vint8mf8x5_t test_vlseg5e8ff_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x5_t test_vlseg5e8ff_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8mf8x5_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf4x5_t test_vlseg5e8ff_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x5_t test_vlseg5e8ff_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8mf4x5_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf2x5_t test_vlseg5e8ff_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x5_t test_vlseg5e8ff_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8mf2x5_tumu(vm, vd, rs1, new_vl, vl); } -vint8m1x5_t test_vlseg5e8ff_v_i8m1x5_tumu(vbool8_t vm, 
vint8m1x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x5_t test_vlseg5e8ff_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8m1x5_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf8x5_t test_vlseg5e8ff_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x5_t test_vlseg5e8ff_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8mf8x5_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf4x5_t test_vlseg5e8ff_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x5_t test_vlseg5e8ff_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8mf4x5_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf2x5_t test_vlseg5e8ff_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2x5_t test_vlseg5e8ff_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8mf2x5_tumu(vm, vd, rs1, new_vl, vl); } -vuint8m1x5_t test_vlseg5e8ff_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x5_t test_vlseg5e8ff_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8m1x5_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf8x5_t test_vlseg5e8ff_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x5_t test_vlseg5e8ff_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8mf8x5_mu(vm, vd, rs1, new_vl, vl); } -vint8mf4x5_t test_vlseg5e8ff_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x5_t test_vlseg5e8ff_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8mf4x5_mu(vm, vd, rs1, new_vl, vl); } -vint8mf2x5_t test_vlseg5e8ff_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x5_t test_vlseg5e8ff_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8mf2x5_mu(vm, vd, rs1, new_vl, vl); } -vint8m1x5_t test_vlseg5e8ff_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x5_t test_vlseg5e8ff_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_i8m1x5_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf8x5_t test_vlseg5e8ff_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x5_t test_vlseg5e8ff_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8mf8x5_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf4x5_t test_vlseg5e8ff_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x5_t test_vlseg5e8ff_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8mf4x5_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf2x5_t test_vlseg5e8ff_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, size_t 
*new_vl, size_t vl) { +vuint8mf2x5_t test_vlseg5e8ff_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8mf2x5_mu(vm, vd, rs1, new_vl, vl); } -vuint8m1x5_t test_vlseg5e8ff_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x5_t test_vlseg5e8ff_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg5e8ff_v_u8m1x5_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16.c index 83a8e381b..6fb72e1a7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16.c @@ -6,146 +6,182 @@ #include -vfloat16mf4x6_t test_vlseg6e16_v_f16mf4x6_tu(vfloat16mf4x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4x6_t test_vlseg6e16_v_f16mf4x6_tu(vfloat16mf4x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16mf4x6_tu(vd, rs1, vl); } -vfloat16mf2x6_t test_vlseg6e16_v_f16mf2x6_tu(vfloat16mf2x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2x6_t test_vlseg6e16_v_f16mf2x6_tu(vfloat16mf2x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16mf2x6_tu(vd, rs1, vl); } -vfloat16m1x6_t test_vlseg6e16_v_f16m1x6_tu(vfloat16m1x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1x6_t test_vlseg6e16_v_f16m1x6_tu(vfloat16m1x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16m1x6_tu(vd, rs1, vl); } -vint16mf4x6_t test_vlseg6e16_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, size_t vl) { +vint16mf4x6_t test_vlseg6e16_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vlseg6e16_v_i16mf4x6_tu(vd, rs1, vl); } -vint16mf2x6_t test_vlseg6e16_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, size_t vl) { +vint16mf2x6_t test_vlseg6e16_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vlseg6e16_v_i16mf2x6_tu(vd, rs1, vl); } -vint16m1x6_t test_vlseg6e16_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, size_t vl) { +vint16m1x6_t test_vlseg6e16_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vlseg6e16_v_i16m1x6_tu(vd, rs1, vl); } -vuint16mf4x6_t test_vlseg6e16_v_u16mf4x6_tu(vuint16mf4x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4x6_t test_vlseg6e16_v_u16mf4x6_tu(vuint16mf4x6_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_u16mf4x6_tu(vd, rs1, vl); } -vuint16mf2x6_t test_vlseg6e16_v_u16mf2x6_tu(vuint16mf2x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2x6_t test_vlseg6e16_v_u16mf2x6_tu(vuint16mf2x6_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_u16mf2x6_tu(vd, rs1, vl); } -vuint16m1x6_t test_vlseg6e16_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1x6_t test_vlseg6e16_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, + size_t vl) { return __riscv_vlseg6e16_v_u16m1x6_tu(vd, rs1, vl); } -vfloat16mf4x6_t test_vlseg6e16_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4x6_t test_vlseg6e16_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16mf4x6_tum(vm, vd, rs1, vl); } -vfloat16mf2x6_t test_vlseg6e16_v_f16mf2x6_tum(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2x6_t test_vlseg6e16_v_f16mf2x6_tum(vbool32_t 
vm, vfloat16mf2x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16mf2x6_tum(vm, vd, rs1, vl); } -vfloat16m1x6_t test_vlseg6e16_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1x6_t test_vlseg6e16_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16m1x6_tum(vm, vd, rs1, vl); } -vint16mf4x6_t test_vlseg6e16_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, size_t vl) { +vint16mf4x6_t test_vlseg6e16_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_i16mf4x6_tum(vm, vd, rs1, vl); } -vint16mf2x6_t test_vlseg6e16_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, size_t vl) { +vint16mf2x6_t test_vlseg6e16_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_i16mf2x6_tum(vm, vd, rs1, vl); } -vint16m1x6_t test_vlseg6e16_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, size_t vl) { +vint16m1x6_t test_vlseg6e16_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_i16m1x6_tum(vm, vd, rs1, vl); } -vuint16mf4x6_t test_vlseg6e16_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4x6_t test_vlseg6e16_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_u16mf4x6_tum(vm, vd, rs1, vl); } -vuint16mf2x6_t test_vlseg6e16_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2x6_t test_vlseg6e16_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_u16mf2x6_tum(vm, vd, rs1, vl); } -vuint16m1x6_t test_vlseg6e16_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1x6_t test_vlseg6e16_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_u16m1x6_tum(vm, vd, rs1, vl); } -vfloat16mf4x6_t test_vlseg6e16_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4x6_t test_vlseg6e16_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16mf4x6_tumu(vm, vd, rs1, vl); } -vfloat16mf2x6_t test_vlseg6e16_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2x6_t test_vlseg6e16_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16mf2x6_tumu(vm, vd, rs1, vl); } -vfloat16m1x6_t test_vlseg6e16_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1x6_t test_vlseg6e16_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16m1x6_tumu(vm, vd, rs1, vl); } -vint16mf4x6_t test_vlseg6e16_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, size_t vl) { +vint16mf4x6_t test_vlseg6e16_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_i16mf4x6_tumu(vm, vd, rs1, vl); } -vint16mf2x6_t test_vlseg6e16_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, size_t vl) { +vint16mf2x6_t test_vlseg6e16_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_i16mf2x6_tumu(vm, vd, rs1, vl); } 
-vint16m1x6_t test_vlseg6e16_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, size_t vl) { +vint16m1x6_t test_vlseg6e16_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_i16m1x6_tumu(vm, vd, rs1, vl); } -vuint16mf4x6_t test_vlseg6e16_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4x6_t test_vlseg6e16_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_u16mf4x6_tumu(vm, vd, rs1, vl); } -vuint16mf2x6_t test_vlseg6e16_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2x6_t test_vlseg6e16_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_u16mf2x6_tumu(vm, vd, rs1, vl); } -vuint16m1x6_t test_vlseg6e16_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1x6_t test_vlseg6e16_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_u16m1x6_tumu(vm, vd, rs1, vl); } -vfloat16mf4x6_t test_vlseg6e16_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4x6_t test_vlseg6e16_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16mf4x6_mu(vm, vd, rs1, vl); } -vfloat16mf2x6_t test_vlseg6e16_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2x6_t test_vlseg6e16_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16mf2x6_mu(vm, vd, rs1, vl); } -vfloat16m1x6_t test_vlseg6e16_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1x6_t test_vlseg6e16_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_f16m1x6_mu(vm, vd, rs1, vl); } -vint16mf4x6_t test_vlseg6e16_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, size_t vl) { +vint16mf4x6_t test_vlseg6e16_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_i16mf4x6_mu(vm, vd, rs1, vl); } -vint16mf2x6_t test_vlseg6e16_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, size_t vl) { +vint16mf2x6_t test_vlseg6e16_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_i16mf2x6_mu(vm, vd, rs1, vl); } -vint16m1x6_t test_vlseg6e16_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, size_t vl) { +vint16m1x6_t test_vlseg6e16_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_i16m1x6_mu(vm, vd, rs1, vl); } -vuint16mf4x6_t test_vlseg6e16_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4x6_t test_vlseg6e16_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_u16mf4x6_mu(vm, vd, rs1, vl); } -vuint16mf2x6_t test_vlseg6e16_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2x6_t test_vlseg6e16_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_u16mf2x6_mu(vm, vd, rs1, vl); } -vuint16m1x6_t test_vlseg6e16_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1x6_t 
test_vlseg6e16_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg6e16_v_u16m1x6_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16ff.c index e67387d5b..fb9222849 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16ff.c @@ -6,146 +6,221 @@ #include -vfloat16mf4x6_t test_vlseg6e16ff_v_f16mf4x6_tu(vfloat16mf4x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4x6_t test_vlseg6e16ff_v_f16mf4x6_tu(vfloat16mf4x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_f16mf4x6_tu(vd, rs1, new_vl, vl); } -vfloat16mf2x6_t test_vlseg6e16ff_v_f16mf2x6_tu(vfloat16mf2x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2x6_t test_vlseg6e16ff_v_f16mf2x6_tu(vfloat16mf2x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_f16mf2x6_tu(vd, rs1, new_vl, vl); } -vfloat16m1x6_t test_vlseg6e16ff_v_f16m1x6_tu(vfloat16m1x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1x6_t test_vlseg6e16ff_v_f16m1x6_tu(vfloat16m1x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_f16m1x6_tu(vd, rs1, new_vl, vl); } -vint16mf4x6_t test_vlseg6e16ff_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x6_t test_vlseg6e16ff_v_i16mf4x6_tu(vint16mf4x6_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg6e16ff_v_i16mf4x6_tu(vd, rs1, new_vl, vl); } -vint16mf2x6_t test_vlseg6e16ff_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x6_t test_vlseg6e16ff_v_i16mf2x6_tu(vint16mf2x6_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg6e16ff_v_i16mf2x6_tu(vd, rs1, new_vl, vl); } -vint16m1x6_t test_vlseg6e16ff_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x6_t test_vlseg6e16ff_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_i16m1x6_tu(vd, rs1, new_vl, vl); } -vuint16mf4x6_t test_vlseg6e16ff_v_u16mf4x6_tu(vuint16mf4x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x6_t test_vlseg6e16ff_v_u16mf4x6_tu(vuint16mf4x6_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_u16mf4x6_tu(vd, rs1, new_vl, vl); } -vuint16mf2x6_t test_vlseg6e16ff_v_u16mf2x6_tu(vuint16mf2x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x6_t test_vlseg6e16ff_v_u16mf2x6_tu(vuint16mf2x6_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_u16mf2x6_tu(vd, rs1, new_vl, vl); } -vuint16m1x6_t test_vlseg6e16ff_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x6_t test_vlseg6e16ff_v_u16m1x6_tu(vuint16m1x6_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg6e16ff_v_u16m1x6_tu(vd, rs1, new_vl, vl); } -vfloat16mf4x6_t test_vlseg6e16ff_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4x6_t test_vlseg6e16ff_v_f16mf4x6_tum(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_f16mf4x6_tum(vm, vd, rs1, new_vl, vl); } -vfloat16mf2x6_t test_vlseg6e16ff_v_f16mf2x6_tum(vbool32_t 
vm, vfloat16mf2x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2x6_t test_vlseg6e16ff_v_f16mf2x6_tum(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_f16mf2x6_tum(vm, vd, rs1, new_vl, vl); } -vfloat16m1x6_t test_vlseg6e16ff_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1x6_t test_vlseg6e16ff_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_f16m1x6_tum(vm, vd, rs1, new_vl, vl); } -vint16mf4x6_t test_vlseg6e16ff_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x6_t test_vlseg6e16ff_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_i16mf4x6_tum(vm, vd, rs1, new_vl, vl); } -vint16mf2x6_t test_vlseg6e16ff_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x6_t test_vlseg6e16ff_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_i16mf2x6_tum(vm, vd, rs1, new_vl, vl); } -vint16m1x6_t test_vlseg6e16ff_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x6_t test_vlseg6e16ff_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg6e16ff_v_i16m1x6_tum(vm, vd, rs1, new_vl, vl); } -vuint16mf4x6_t test_vlseg6e16ff_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x6_t test_vlseg6e16ff_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_u16mf4x6_tum(vm, vd, rs1, new_vl, vl); } -vuint16mf2x6_t test_vlseg6e16ff_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x6_t test_vlseg6e16ff_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_u16mf2x6_tum(vm, vd, rs1, new_vl, vl); } -vuint16m1x6_t test_vlseg6e16ff_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x6_t test_vlseg6e16ff_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_u16m1x6_tum(vm, vd, rs1, new_vl, vl); } -vfloat16mf4x6_t test_vlseg6e16ff_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4x6_t test_vlseg6e16ff_v_f16mf4x6_tumu(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_f16mf4x6_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16mf2x6_t test_vlseg6e16ff_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2x6_t test_vlseg6e16ff_v_f16mf2x6_tumu(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_f16mf2x6_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16m1x6_t test_vlseg6e16ff_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1x6_t test_vlseg6e16ff_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) 
{ return __riscv_vlseg6e16ff_v_f16m1x6_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf4x6_t test_vlseg6e16ff_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x6_t test_vlseg6e16ff_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_i16mf4x6_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf2x6_t test_vlseg6e16ff_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x6_t test_vlseg6e16ff_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_i16mf2x6_tumu(vm, vd, rs1, new_vl, vl); } -vint16m1x6_t test_vlseg6e16ff_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x6_t test_vlseg6e16ff_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg6e16ff_v_i16m1x6_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf4x6_t test_vlseg6e16ff_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x6_t test_vlseg6e16ff_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_u16mf4x6_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf2x6_t test_vlseg6e16ff_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x6_t test_vlseg6e16ff_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_u16mf2x6_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m1x6_t test_vlseg6e16ff_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x6_t test_vlseg6e16ff_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_u16m1x6_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16mf4x6_t test_vlseg6e16ff_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4x6_t test_vlseg6e16ff_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_f16mf4x6_mu(vm, vd, rs1, new_vl, vl); } -vfloat16mf2x6_t test_vlseg6e16ff_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2x6_t test_vlseg6e16ff_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_f16mf2x6_mu(vm, vd, rs1, new_vl, vl); } -vfloat16m1x6_t test_vlseg6e16ff_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1x6_t test_vlseg6e16ff_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_f16m1x6_mu(vm, vd, rs1, new_vl, vl); } -vint16mf4x6_t test_vlseg6e16ff_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x6_t test_vlseg6e16ff_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg6e16ff_v_i16mf4x6_mu(vm, vd, rs1, new_vl, vl); } -vint16mf2x6_t test_vlseg6e16ff_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x6_t 
+vint16mf2x6_t test_vlseg6e16ff_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e16ff_v_i16mf2x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vint16m1x6_t test_vlseg6e16ff_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x6_t test_vlseg6e16ff_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e16ff_v_i16m1x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint16mf4x6_t test_vlseg6e16ff_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x6_t test_vlseg6e16ff_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e16ff_v_u16mf4x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint16mf2x6_t test_vlseg6e16ff_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x6_t test_vlseg6e16ff_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e16ff_v_u16mf2x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint16m1x6_t test_vlseg6e16ff_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x6_t test_vlseg6e16ff_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd,
+    const uint16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e16ff_v_u16m1x6_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32.c
index 1f30114f5..573299c38 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32.c
@@ -6,98 +6,122 @@

 #include <riscv_vector.h>

-vfloat32mf2x6_t test_vlseg6e32_v_f32mf2x6_tu(vfloat32mf2x6_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x6_t test_vlseg6e32_v_f32mf2x6_tu(vfloat32mf2x6_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_f32mf2x6_tu(vd, rs1, vl);
 }

-vfloat32m1x6_t test_vlseg6e32_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, size_t vl) {
+vfloat32m1x6_t test_vlseg6e32_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1,
+    size_t vl) {
   return __riscv_vlseg6e32_v_f32m1x6_tu(vd, rs1, vl);
 }

-vint32mf2x6_t test_vlseg6e32_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x6_t test_vlseg6e32_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e32_v_i32mf2x6_tu(vd, rs1, vl);
 }

-vint32m1x6_t test_vlseg6e32_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x6_t test_vlseg6e32_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e32_v_i32m1x6_tu(vd, rs1, vl);
 }

-vuint32mf2x6_t test_vlseg6e32_v_u32mf2x6_tu(vuint32mf2x6_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x6_t test_vlseg6e32_v_u32mf2x6_tu(vuint32mf2x6_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_u32mf2x6_tu(vd, rs1, vl);
 }

-vuint32m1x6_t test_vlseg6e32_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x6_t test_vlseg6e32_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e32_v_u32m1x6_tu(vd, rs1, vl);
 }

-vfloat32mf2x6_t test_vlseg6e32_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x6_t test_vlseg6e32_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_f32mf2x6_tum(vm, vd, rs1, vl);
 }

-vfloat32m1x6_t test_vlseg6e32_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, size_t vl) {
+vfloat32m1x6_t test_vlseg6e32_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_f32m1x6_tum(vm, vd, rs1, vl);
 }

-vint32mf2x6_t test_vlseg6e32_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x6_t test_vlseg6e32_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd,
+    const int32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_i32mf2x6_tum(vm, vd, rs1, vl);
 }

-vint32m1x6_t test_vlseg6e32_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x6_t test_vlseg6e32_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd,
+    const int32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_i32m1x6_tum(vm, vd, rs1, vl);
 }

-vuint32mf2x6_t test_vlseg6e32_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x6_t test_vlseg6e32_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_u32mf2x6_tum(vm, vd, rs1, vl);
 }

-vuint32m1x6_t test_vlseg6e32_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x6_t test_vlseg6e32_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_u32m1x6_tum(vm, vd, rs1, vl);
 }

-vfloat32mf2x6_t test_vlseg6e32_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x6_t test_vlseg6e32_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_f32mf2x6_tumu(vm, vd, rs1, vl);
 }

-vfloat32m1x6_t test_vlseg6e32_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, size_t vl) {
+vfloat32m1x6_t test_vlseg6e32_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_f32m1x6_tumu(vm, vd, rs1, vl);
 }

-vint32mf2x6_t test_vlseg6e32_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x6_t test_vlseg6e32_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd,
+    const int32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_i32mf2x6_tumu(vm, vd, rs1, vl);
 }

-vint32m1x6_t test_vlseg6e32_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x6_t test_vlseg6e32_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd,
+    const int32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_i32m1x6_tumu(vm, vd, rs1, vl);
 }

-vuint32mf2x6_t test_vlseg6e32_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x6_t test_vlseg6e32_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_u32mf2x6_tumu(vm, vd, rs1, vl);
 }

-vuint32m1x6_t test_vlseg6e32_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x6_t test_vlseg6e32_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_u32m1x6_tumu(vm, vd, rs1, vl);
 }

-vfloat32mf2x6_t test_vlseg6e32_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, size_t vl) {
+vfloat32mf2x6_t test_vlseg6e32_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_f32mf2x6_mu(vm, vd, rs1, vl);
 }

-vfloat32m1x6_t test_vlseg6e32_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, size_t vl) {
+vfloat32m1x6_t test_vlseg6e32_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd,
+    const float *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_f32m1x6_mu(vm, vd, rs1, vl);
 }

-vint32mf2x6_t test_vlseg6e32_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, size_t vl) {
+vint32mf2x6_t test_vlseg6e32_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd,
+    const int32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_i32mf2x6_mu(vm, vd, rs1, vl);
 }

-vint32m1x6_t test_vlseg6e32_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, size_t vl) {
+vint32m1x6_t test_vlseg6e32_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd,
+    const int32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_i32m1x6_mu(vm, vd, rs1, vl);
 }

-vuint32mf2x6_t test_vlseg6e32_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, size_t vl) {
+vuint32mf2x6_t test_vlseg6e32_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_u32mf2x6_mu(vm, vd, rs1, vl);
 }

-vuint32m1x6_t test_vlseg6e32_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, size_t vl) {
+vuint32m1x6_t test_vlseg6e32_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd,
+    const uint32_t *rs1, size_t vl) {
   return __riscv_vlseg6e32_v_u32m1x6_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32ff.c
index 3a6e0ba3e..7579ff78f 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32ff.c
@@ -6,98 +6,147 @@

 #include <riscv_vector.h>

-vfloat32mf2x6_t test_vlseg6e32ff_v_f32mf2x6_tu(vfloat32mf2x6_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32mf2x6_t test_vlseg6e32ff_v_f32mf2x6_tu(vfloat32mf2x6_t vd,
+    const float *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_f32mf2x6_tu(vd, rs1, new_vl, vl);
 }

-vfloat32m1x6_t test_vlseg6e32ff_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m1x6_t test_vlseg6e32ff_v_f32m1x6_tu(vfloat32m1x6_t vd,
+    const float *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_f32m1x6_tu(vd, rs1, new_vl, vl);
 }

-vint32mf2x6_t test_vlseg6e32ff_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32mf2x6_t test_vlseg6e32ff_v_i32mf2x6_tu(vint32mf2x6_t vd,
+    const int32_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_i32mf2x6_tu(vd, rs1, new_vl, vl);
 }

-vint32m1x6_t test_vlseg6e32ff_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m1x6_t test_vlseg6e32ff_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e32ff_v_i32m1x6_tu(vd, rs1, new_vl, vl);
 }

-vuint32mf2x6_t test_vlseg6e32ff_v_u32mf2x6_tu(vuint32mf2x6_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32mf2x6_t test_vlseg6e32ff_v_u32mf2x6_tu(vuint32mf2x6_t vd,
+    const uint32_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e32ff_v_u32mf2x6_tu(vd, rs1, new_vl, vl);
 }

-vuint32m1x6_t test_vlseg6e32ff_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m1x6_t test_vlseg6e32ff_v_u32m1x6_tu(vuint32m1x6_t vd,
+    const uint32_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_u32m1x6_tu(vd, rs1, new_vl, vl);
 }

-vfloat32mf2x6_t test_vlseg6e32ff_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32mf2x6_t test_vlseg6e32ff_v_f32mf2x6_tum(vbool64_t vm,
+    vfloat32mf2x6_t vd,
+    const float *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e32ff_v_f32mf2x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vfloat32m1x6_t test_vlseg6e32ff_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m1x6_t test_vlseg6e32ff_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd,
+    const float *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_f32m1x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vint32mf2x6_t test_vlseg6e32ff_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32mf2x6_t test_vlseg6e32ff_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd,
+    const int32_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e32ff_v_i32mf2x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vint32m1x6_t test_vlseg6e32ff_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m1x6_t test_vlseg6e32ff_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd,
+    const int32_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_i32m1x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint32mf2x6_t test_vlseg6e32ff_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32mf2x6_t test_vlseg6e32ff_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd,
+    const uint32_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e32ff_v_u32mf2x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint32m1x6_t test_vlseg6e32ff_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m1x6_t test_vlseg6e32ff_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd,
+    const uint32_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e32ff_v_u32m1x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vfloat32mf2x6_t test_vlseg6e32ff_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32mf2x6_t test_vlseg6e32ff_v_f32mf2x6_tumu(vbool64_t vm,
+    vfloat32mf2x6_t vd,
+    const float *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e32ff_v_f32mf2x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vfloat32m1x6_t test_vlseg6e32ff_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m1x6_t test_vlseg6e32ff_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd,
+    const float *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_f32m1x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint32mf2x6_t test_vlseg6e32ff_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32mf2x6_t test_vlseg6e32ff_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd,
+    const int32_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e32ff_v_i32mf2x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint32m1x6_t test_vlseg6e32ff_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m1x6_t test_vlseg6e32ff_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd,
+    const int32_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_i32m1x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint32mf2x6_t test_vlseg6e32ff_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32mf2x6_t test_vlseg6e32ff_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd,
+    const uint32_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e32ff_v_u32mf2x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint32m1x6_t
 test_vlseg6e32ff_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m1x6_t test_vlseg6e32ff_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd,
+    const uint32_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e32ff_v_u32m1x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vfloat32mf2x6_t test_vlseg6e32ff_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32mf2x6_t test_vlseg6e32ff_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd,
+    const float *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_f32mf2x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vfloat32m1x6_t test_vlseg6e32ff_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, size_t *new_vl, size_t vl) {
+vfloat32m1x6_t test_vlseg6e32ff_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd,
+    const float *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_f32m1x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vint32mf2x6_t test_vlseg6e32ff_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32mf2x6_t test_vlseg6e32ff_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd,
+    const int32_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_i32mf2x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vint32m1x6_t test_vlseg6e32ff_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) {
+vint32m1x6_t test_vlseg6e32ff_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd,
+    const int32_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_i32m1x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint32mf2x6_t test_vlseg6e32ff_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32mf2x6_t test_vlseg6e32ff_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd,
+    const uint32_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e32ff_v_u32mf2x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint32m1x6_t test_vlseg6e32ff_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) {
+vuint32m1x6_t test_vlseg6e32ff_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd,
+    const uint32_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e32ff_v_u32m1x6_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64.c
index 54e1ab8df..f4f88cb18 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64.c
@@ -6,50 +6,62 @@

 #include <riscv_vector.h>

-vfloat64m1x6_t test_vlseg6e64_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1, size_t vl) {
+vfloat64m1x6_t test_vlseg6e64_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1,
+    size_t vl) {
   return __riscv_vlseg6e64_v_f64m1x6_tu(vd, rs1, vl);
 }

-vint64m1x6_t test_vlseg6e64_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, size_t vl) {
+vint64m1x6_t test_vlseg6e64_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e64_v_i64m1x6_tu(vd, rs1, vl);
 }

-vuint64m1x6_t test_vlseg6e64_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1, size_t vl) {
+vuint64m1x6_t test_vlseg6e64_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e64_v_u64m1x6_tu(vd, rs1, vl);
 }

-vfloat64m1x6_t test_vlseg6e64_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, size_t vl) {
+vfloat64m1x6_t test_vlseg6e64_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd,
+    const double *rs1, size_t vl) {
   return __riscv_vlseg6e64_v_f64m1x6_tum(vm, vd, rs1, vl);
 }

-vint64m1x6_t test_vlseg6e64_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, size_t vl) {
+vint64m1x6_t test_vlseg6e64_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd,
+    const int64_t *rs1, size_t vl) {
   return __riscv_vlseg6e64_v_i64m1x6_tum(vm, vd, rs1, vl);
 }

-vuint64m1x6_t test_vlseg6e64_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, size_t vl) {
+vuint64m1x6_t test_vlseg6e64_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd,
+    const uint64_t *rs1, size_t vl) {
   return __riscv_vlseg6e64_v_u64m1x6_tum(vm, vd, rs1, vl);
 }

-vfloat64m1x6_t test_vlseg6e64_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, size_t vl) {
+vfloat64m1x6_t test_vlseg6e64_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd,
+    const double *rs1, size_t vl) {
   return __riscv_vlseg6e64_v_f64m1x6_tumu(vm, vd, rs1, vl);
 }

-vint64m1x6_t test_vlseg6e64_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, size_t vl) {
+vint64m1x6_t test_vlseg6e64_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd,
+    const int64_t *rs1, size_t vl) {
   return __riscv_vlseg6e64_v_i64m1x6_tumu(vm, vd, rs1, vl);
 }

-vuint64m1x6_t test_vlseg6e64_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, size_t vl) {
+vuint64m1x6_t test_vlseg6e64_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd,
+    const uint64_t *rs1, size_t vl) {
   return __riscv_vlseg6e64_v_u64m1x6_tumu(vm, vd, rs1, vl);
 }

-vfloat64m1x6_t test_vlseg6e64_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, size_t vl) {
+vfloat64m1x6_t test_vlseg6e64_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd,
+    const double *rs1, size_t vl) {
   return __riscv_vlseg6e64_v_f64m1x6_mu(vm, vd, rs1, vl);
 }

-vint64m1x6_t test_vlseg6e64_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, size_t vl) {
+vint64m1x6_t test_vlseg6e64_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd,
+    const int64_t *rs1, size_t vl) {
   return __riscv_vlseg6e64_v_i64m1x6_mu(vm, vd, rs1, vl);
 }

-vuint64m1x6_t test_vlseg6e64_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, size_t vl) {
+vuint64m1x6_t test_vlseg6e64_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd,
+    const uint64_t *rs1, size_t vl) {
   return __riscv_vlseg6e64_v_u64m1x6_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64ff.c
index 35092e288..4a919074d 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64ff.c
@@ -6,50 +6,73 @@

 #include <riscv_vector.h>

-vfloat64m1x6_t test_vlseg6e64ff_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m1x6_t test_vlseg6e64ff_v_f64m1x6_tu(vfloat64m1x6_t vd,
+    const double *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e64ff_v_f64m1x6_tu(vd, rs1, new_vl, vl);
 }

-vint64m1x6_t test_vlseg6e64ff_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x6_t test_vlseg6e64ff_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e64ff_v_i64m1x6_tu(vd, rs1, new_vl, vl);
 }

-vuint64m1x6_t test_vlseg6e64ff_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x6_t test_vlseg6e64ff_v_u64m1x6_tu(vuint64m1x6_t vd,
+    const uint64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e64ff_v_u64m1x6_tu(vd, rs1, new_vl, vl);
 }

-vfloat64m1x6_t test_vlseg6e64ff_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd,
 const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m1x6_t test_vlseg6e64ff_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd,
+    const double *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e64ff_v_f64m1x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vint64m1x6_t test_vlseg6e64ff_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x6_t test_vlseg6e64ff_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd,
+    const int64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e64ff_v_i64m1x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint64m1x6_t test_vlseg6e64ff_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x6_t test_vlseg6e64ff_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd,
+    const uint64_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e64ff_v_u64m1x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vfloat64m1x6_t test_vlseg6e64ff_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m1x6_t test_vlseg6e64ff_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd,
+    const double *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e64ff_v_f64m1x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint64m1x6_t test_vlseg6e64ff_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x6_t test_vlseg6e64ff_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd,
+    const int64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e64ff_v_i64m1x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint64m1x6_t test_vlseg6e64ff_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x6_t test_vlseg6e64ff_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd,
+    const uint64_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e64ff_v_u64m1x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vfloat64m1x6_t test_vlseg6e64ff_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, size_t *new_vl, size_t vl) {
+vfloat64m1x6_t test_vlseg6e64ff_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd,
+    const double *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e64ff_v_f64m1x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vint64m1x6_t test_vlseg6e64ff_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) {
+vint64m1x6_t test_vlseg6e64ff_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd,
+    const int64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e64ff_v_i64m1x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint64m1x6_t test_vlseg6e64ff_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) {
+vuint64m1x6_t test_vlseg6e64ff_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd,
+    const uint64_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e64ff_v_u64m1x6_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8.c
index 3185f1f23..5f892dc0a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8.c
@@ -5,130 +5,162 @@

 #include <riscv_vector.h>

-vint8mf8x6_t test_vlseg6e8_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x6_t test_vlseg6e8_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e8_v_i8mf8x6_tu(vd, rs1, vl);
 }

-vint8mf4x6_t test_vlseg6e8_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x6_t test_vlseg6e8_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e8_v_i8mf4x6_tu(vd, rs1, vl);
 }

-vint8mf2x6_t test_vlseg6e8_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x6_t test_vlseg6e8_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e8_v_i8mf2x6_tu(vd, rs1, vl);
 }

-vint8m1x6_t test_vlseg6e8_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x6_t test_vlseg6e8_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e8_v_i8m1x6_tu(vd, rs1, vl);
 }

-vuint8mf8x6_t test_vlseg6e8_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x6_t test_vlseg6e8_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e8_v_u8mf8x6_tu(vd, rs1, vl);
 }

-vuint8mf4x6_t test_vlseg6e8_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x6_t test_vlseg6e8_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e8_v_u8mf4x6_tu(vd, rs1, vl);
 }

-vuint8mf2x6_t test_vlseg6e8_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x6_t test_vlseg6e8_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e8_v_u8mf2x6_tu(vd, rs1, vl);
 }

-vuint8m1x6_t test_vlseg6e8_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x6_t test_vlseg6e8_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1,
+    size_t vl) {
   return __riscv_vlseg6e8_v_u8m1x6_tu(vd, rs1, vl);
 }

-vint8mf8x6_t test_vlseg6e8_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x6_t test_vlseg6e8_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8mf8x6_tum(vm, vd, rs1, vl);
 }

-vint8mf4x6_t test_vlseg6e8_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x6_t test_vlseg6e8_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8mf4x6_tum(vm, vd, rs1, vl);
 }

-vint8mf2x6_t test_vlseg6e8_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x6_t test_vlseg6e8_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8mf2x6_tum(vm, vd, rs1, vl);
 }

-vint8m1x6_t test_vlseg6e8_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x6_t test_vlseg6e8_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8m1x6_tum(vm, vd, rs1, vl);
 }

-vuint8mf8x6_t test_vlseg6e8_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x6_t test_vlseg6e8_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8mf8x6_tum(vm, vd, rs1, vl);
 }

-vuint8mf4x6_t test_vlseg6e8_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x6_t test_vlseg6e8_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8mf4x6_tum(vm, vd, rs1, vl);
 }

-vuint8mf2x6_t test_vlseg6e8_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x6_t test_vlseg6e8_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8mf2x6_tum(vm, vd, rs1, vl);
 }

-vuint8m1x6_t test_vlseg6e8_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x6_t test_vlseg6e8_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8m1x6_tum(vm, vd, rs1, vl);
 }

-vint8mf8x6_t test_vlseg6e8_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x6_t test_vlseg6e8_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8mf8x6_tumu(vm, vd, rs1, vl);
 }

-vint8mf4x6_t test_vlseg6e8_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x6_t test_vlseg6e8_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8mf4x6_tumu(vm, vd, rs1, vl);
 }

-vint8mf2x6_t test_vlseg6e8_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x6_t test_vlseg6e8_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8mf2x6_tumu(vm, vd, rs1, vl);
 }

-vint8m1x6_t test_vlseg6e8_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x6_t test_vlseg6e8_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8m1x6_tumu(vm, vd, rs1, vl);
 }

-vuint8mf8x6_t test_vlseg6e8_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x6_t test_vlseg6e8_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8mf8x6_tumu(vm, vd, rs1, vl);
 }

-vuint8mf4x6_t test_vlseg6e8_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x6_t test_vlseg6e8_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8mf4x6_tumu(vm, vd, rs1, vl);
 }

-vuint8mf2x6_t test_vlseg6e8_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x6_t test_vlseg6e8_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8mf2x6_tumu(vm, vd, rs1, vl);
 }

-vuint8m1x6_t test_vlseg6e8_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x6_t test_vlseg6e8_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8m1x6_tumu(vm, vd, rs1, vl);
 }

-vint8mf8x6_t test_vlseg6e8_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf8x6_t test_vlseg6e8_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8mf8x6_mu(vm, vd, rs1, vl);
 }

-vint8mf4x6_t test_vlseg6e8_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf4x6_t test_vlseg6e8_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8mf4x6_mu(vm, vd, rs1, vl);
 }

-vint8mf2x6_t test_vlseg6e8_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, size_t vl) {
+vint8mf2x6_t test_vlseg6e8_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8mf2x6_mu(vm, vd, rs1, vl);
 }

-vint8m1x6_t test_vlseg6e8_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, size_t vl) {
+vint8m1x6_t test_vlseg6e8_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd,
+    const int8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_i8m1x6_mu(vm, vd, rs1, vl);
 }

-vuint8mf8x6_t test_vlseg6e8_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf8x6_t test_vlseg6e8_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8mf8x6_mu(vm, vd, rs1, vl);
 }

-vuint8mf4x6_t test_vlseg6e8_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf4x6_t test_vlseg6e8_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8mf4x6_mu(vm, vd, rs1, vl);
 }

-vuint8mf2x6_t test_vlseg6e8_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8mf2x6_t test_vlseg6e8_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8mf2x6_mu(vm, vd, rs1, vl);
 }

-vuint8m1x6_t test_vlseg6e8_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, size_t vl) {
+vuint8m1x6_t test_vlseg6e8_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd,
+    const uint8_t *rs1, size_t vl) {
   return __riscv_vlseg6e8_v_u8m1x6_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8ff.c
index 38a87f33e..f9f74e740 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8ff.c
@@ -6,130 +6,186 @@

 #include <riscv_vector.h>

-vint8mf8x6_t test_vlseg6e8ff_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x6_t test_vlseg6e8ff_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf8x6_tu(vd, rs1, new_vl, vl);
 }

-vint8mf4x6_t test_vlseg6e8ff_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x6_t test_vlseg6e8ff_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf4x6_tu(vd, rs1, new_vl, vl);
 }

-vint8mf2x6_t test_vlseg6e8ff_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x6_t test_vlseg6e8ff_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf2x6_tu(vd, rs1, new_vl, vl);
 }

-vint8m1x6_t test_vlseg6e8ff_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x6_t test_vlseg6e8ff_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e8ff_v_i8m1x6_tu(vd, rs1, new_vl, vl);
 }

-vuint8mf8x6_t test_vlseg6e8ff_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x6_t test_vlseg6e8ff_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf8x6_tu(vd, rs1, new_vl, vl);
 }

-vuint8mf4x6_t test_vlseg6e8ff_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x6_t test_vlseg6e8ff_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf4x6_tu(vd, rs1, new_vl, vl);
 }

-vuint8mf2x6_t test_vlseg6e8ff_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x6_t test_vlseg6e8ff_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf2x6_tu(vd, rs1, new_vl, vl);
 }

-vuint8m1x6_t test_vlseg6e8ff_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x6_t test_vlseg6e8ff_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg6e8ff_v_u8m1x6_tu(vd, rs1, new_vl, vl);
 }

-vint8mf8x6_t
 test_vlseg6e8ff_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x6_t test_vlseg6e8ff_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf8x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vint8mf4x6_t test_vlseg6e8ff_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x6_t test_vlseg6e8ff_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf4x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vint8mf2x6_t test_vlseg6e8ff_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x6_t test_vlseg6e8ff_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf2x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vint8m1x6_t test_vlseg6e8ff_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x6_t test_vlseg6e8ff_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8m1x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf8x6_t test_vlseg6e8ff_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x6_t test_vlseg6e8ff_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf8x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf4x6_t test_vlseg6e8ff_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x6_t test_vlseg6e8ff_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf4x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf2x6_t test_vlseg6e8ff_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x6_t test_vlseg6e8ff_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf2x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint8m1x6_t test_vlseg6e8ff_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x6_t test_vlseg6e8ff_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8m1x6_tum(vm, vd, rs1, new_vl, vl);
 }

-vint8mf8x6_t test_vlseg6e8ff_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x6_t test_vlseg6e8ff_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf8x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint8mf4x6_t test_vlseg6e8ff_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x6_t test_vlseg6e8ff_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf4x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint8mf2x6_t test_vlseg6e8ff_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x6_t test_vlseg6e8ff_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf2x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint8m1x6_t test_vlseg6e8ff_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x6_t test_vlseg6e8ff_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8m1x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf8x6_t test_vlseg6e8ff_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x6_t test_vlseg6e8ff_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf8x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf4x6_t test_vlseg6e8ff_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x6_t test_vlseg6e8ff_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf4x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf2x6_t test_vlseg6e8ff_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x6_t test_vlseg6e8ff_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf2x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vuint8m1x6_t test_vlseg6e8ff_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x6_t test_vlseg6e8ff_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8m1x6_tumu(vm, vd, rs1, new_vl, vl);
 }

-vint8mf8x6_t test_vlseg6e8ff_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf8x6_t test_vlseg6e8ff_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf8x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vint8mf4x6_t test_vlseg6e8ff_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf4x6_t test_vlseg6e8ff_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf4x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vint8mf2x6_t test_vlseg6e8ff_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8mf2x6_t test_vlseg6e8ff_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8mf2x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vint8m1x6_t test_vlseg6e8ff_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) {
+vint8m1x6_t test_vlseg6e8ff_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd,
+    const int8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_i8m1x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf8x6_t test_vlseg6e8ff_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf8x6_t test_vlseg6e8ff_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf8x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf4x6_t test_vlseg6e8ff_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf4x6_t test_vlseg6e8ff_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf4x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint8mf2x6_t test_vlseg6e8ff_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8mf2x6_t test_vlseg6e8ff_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8mf2x6_mu(vm, vd, rs1, new_vl, vl);
 }

-vuint8m1x6_t test_vlseg6e8ff_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) {
+vuint8m1x6_t test_vlseg6e8ff_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd,
+    const uint8_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg6e8ff_v_u8m1x6_mu(vm, vd, rs1, new_vl, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16.c
index 6a44076f3..204a30dd7 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16.c
@@ -6,146 +6,182 @@

 #include <riscv_vector.h>

-vfloat16mf4x7_t test_vlseg7e16_v_f16mf4x7_tu(vfloat16mf4x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x7_t test_vlseg7e16_v_f16mf4x7_tu(vfloat16mf4x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16mf4x7_tu(vd, rs1, vl);
 }

-vfloat16mf2x7_t test_vlseg7e16_v_f16mf2x7_tu(vfloat16mf2x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x7_t test_vlseg7e16_v_f16mf2x7_tu(vfloat16mf2x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16mf2x7_tu(vd, rs1, vl);
 }

-vfloat16m1x7_t test_vlseg7e16_v_f16m1x7_tu(vfloat16m1x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x7_t test_vlseg7e16_v_f16m1x7_tu(vfloat16m1x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16m1x7_tu(vd, rs1, vl);
 }

-vint16mf4x7_t test_vlseg7e16_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x7_t test_vlseg7e16_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1,
+    size_t vl) {
   return __riscv_vlseg7e16_v_i16mf4x7_tu(vd, rs1, vl);
 }

-vint16mf2x7_t test_vlseg7e16_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x7_t test_vlseg7e16_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1,
+    size_t vl) {
   return __riscv_vlseg7e16_v_i16mf2x7_tu(vd, rs1, vl);
 }

-vint16m1x7_t test_vlseg7e16_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x7_t test_vlseg7e16_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1,
+    size_t vl) {
   return __riscv_vlseg7e16_v_i16m1x7_tu(vd, rs1, vl);
 }

-vuint16mf4x7_t test_vlseg7e16_v_u16mf4x7_tu(vuint16mf4x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x7_t test_vlseg7e16_v_u16mf4x7_tu(vuint16mf4x7_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_u16mf4x7_tu(vd, rs1, vl);
 }

-vuint16mf2x7_t test_vlseg7e16_v_u16mf2x7_tu(vuint16mf2x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x7_t test_vlseg7e16_v_u16mf2x7_tu(vuint16mf2x7_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_u16mf2x7_tu(vd, rs1, vl);
 }

-vuint16m1x7_t test_vlseg7e16_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x7_t test_vlseg7e16_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1,
+    size_t vl) {
   return __riscv_vlseg7e16_v_u16m1x7_tu(vd, rs1, vl);
 }

-vfloat16mf4x7_t test_vlseg7e16_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x7_t test_vlseg7e16_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16mf4x7_tum(vm, vd, rs1, vl);
 }

-vfloat16mf2x7_t test_vlseg7e16_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x7_t test_vlseg7e16_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16mf2x7_tum(vm, vd, rs1, vl);
 }

-vfloat16m1x7_t test_vlseg7e16_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x7_t test_vlseg7e16_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16m1x7_tum(vm, vd, rs1, vl);
 }

-vint16mf4x7_t test_vlseg7e16_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x7_t test_vlseg7e16_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_i16mf4x7_tum(vm, vd, rs1, vl);
 }

-vint16mf2x7_t test_vlseg7e16_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x7_t test_vlseg7e16_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_i16mf2x7_tum(vm, vd, rs1, vl);
 }

-vint16m1x7_t test_vlseg7e16_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x7_t test_vlseg7e16_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_i16m1x7_tum(vm, vd, rs1, vl);
 }

-vuint16mf4x7_t test_vlseg7e16_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x7_t test_vlseg7e16_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_u16mf4x7_tum(vm, vd, rs1, vl);
 }

-vuint16mf2x7_t test_vlseg7e16_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x7_t test_vlseg7e16_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_u16mf2x7_tum(vm, vd, rs1, vl);
 }

-vuint16m1x7_t test_vlseg7e16_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x7_t test_vlseg7e16_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_u16m1x7_tum(vm, vd, rs1, vl);
 }

-vfloat16mf4x7_t test_vlseg7e16_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x7_t test_vlseg7e16_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16mf4x7_tumu(vm, vd, rs1, vl);
 }

-vfloat16mf2x7_t test_vlseg7e16_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x7_t test_vlseg7e16_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16mf2x7_tumu(vm, vd, rs1, vl);
 }

-vfloat16m1x7_t test_vlseg7e16_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x7_t test_vlseg7e16_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16m1x7_tumu(vm, vd, rs1, vl);
 }

-vint16mf4x7_t test_vlseg7e16_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x7_t test_vlseg7e16_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_i16mf4x7_tumu(vm, vd, rs1, vl);
 }

-vint16mf2x7_t test_vlseg7e16_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x7_t test_vlseg7e16_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_i16mf2x7_tumu(vm, vd, rs1, vl);
 }
-vint16m1x7_t test_vlseg7e16_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x7_t test_vlseg7e16_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_i16m1x7_tumu(vm, vd, rs1, vl);
 }

-vuint16mf4x7_t test_vlseg7e16_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x7_t test_vlseg7e16_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_u16mf4x7_tumu(vm, vd, rs1, vl);
 }

-vuint16mf2x7_t test_vlseg7e16_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x7_t test_vlseg7e16_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_u16mf2x7_tumu(vm, vd, rs1, vl);
 }

-vuint16m1x7_t test_vlseg7e16_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x7_t test_vlseg7e16_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_u16m1x7_tumu(vm, vd, rs1, vl);
 }

-vfloat16mf4x7_t test_vlseg7e16_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x7_t test_vlseg7e16_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16mf4x7_mu(vm, vd, rs1, vl);
 }

-vfloat16mf2x7_t test_vlseg7e16_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x7_t test_vlseg7e16_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16mf2x7_mu(vm, vd, rs1, vl);
 }

-vfloat16m1x7_t test_vlseg7e16_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x7_t test_vlseg7e16_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd,
+    const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_f16m1x7_mu(vm, vd, rs1, vl);
 }

-vint16mf4x7_t test_vlseg7e16_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x7_t test_vlseg7e16_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_i16mf4x7_mu(vm, vd, rs1, vl);
 }

-vint16mf2x7_t test_vlseg7e16_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x7_t test_vlseg7e16_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_i16mf2x7_mu(vm, vd, rs1, vl);
 }

-vint16m1x7_t test_vlseg7e16_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x7_t test_vlseg7e16_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd,
+    const int16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_i16m1x7_mu(vm, vd, rs1, vl);
 }

-vuint16mf4x7_t test_vlseg7e16_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x7_t test_vlseg7e16_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_u16mf4x7_mu(vm, vd, rs1, vl);
 }

-vuint16mf2x7_t test_vlseg7e16_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x7_t test_vlseg7e16_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_u16mf2x7_mu(vm, vd, rs1, vl);
 }

-vuint16m1x7_t test_vlseg7e16_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x7_t test_vlseg7e16_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd,
+    const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg7e16_v_u16m1x7_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16ff.c
index 5a3fba33d..28920e5be 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16ff.c
@@ -6,146 +6,221 @@

 #include <riscv_vector.h>

-vfloat16mf4x7_t test_vlseg7e16ff_v_f16mf4x7_tu(vfloat16mf4x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x7_t test_vlseg7e16ff_v_f16mf4x7_tu(vfloat16mf4x7_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_f16mf4x7_tu(vd, rs1, new_vl, vl);
 }

-vfloat16mf2x7_t test_vlseg7e16ff_v_f16mf2x7_tu(vfloat16mf2x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x7_t test_vlseg7e16ff_v_f16mf2x7_tu(vfloat16mf2x7_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_f16mf2x7_tu(vd, rs1, new_vl, vl);
 }

-vfloat16m1x7_t test_vlseg7e16ff_v_f16m1x7_tu(vfloat16m1x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x7_t test_vlseg7e16ff_v_f16m1x7_tu(vfloat16m1x7_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_f16m1x7_tu(vd, rs1, new_vl, vl);
 }

-vint16mf4x7_t test_vlseg7e16ff_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x7_t test_vlseg7e16ff_v_i16mf4x7_tu(vint16mf4x7_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg7e16ff_v_i16mf4x7_tu(vd, rs1, new_vl, vl);
 }

-vint16mf2x7_t test_vlseg7e16ff_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x7_t test_vlseg7e16ff_v_i16mf2x7_tu(vint16mf2x7_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg7e16ff_v_i16mf2x7_tu(vd, rs1, new_vl, vl);
 }

-vint16m1x7_t test_vlseg7e16ff_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x7_t test_vlseg7e16ff_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_i16m1x7_tu(vd, rs1, new_vl, vl);
 }

-vuint16mf4x7_t test_vlseg7e16ff_v_u16mf4x7_tu(vuint16mf4x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x7_t test_vlseg7e16ff_v_u16mf4x7_tu(vuint16mf4x7_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_u16mf4x7_tu(vd, rs1, new_vl, vl);
 }

-vuint16mf2x7_t test_vlseg7e16ff_v_u16mf2x7_tu(vuint16mf2x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x7_t test_vlseg7e16ff_v_u16mf2x7_tu(vuint16mf2x7_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_u16mf2x7_tu(vd, rs1, new_vl, vl);
 }

-vuint16m1x7_t test_vlseg7e16ff_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x7_t test_vlseg7e16ff_v_u16m1x7_tu(vuint16m1x7_t vd,
+    const uint16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg7e16ff_v_u16m1x7_tu(vd, rs1, new_vl, vl);
 }

-vfloat16mf4x7_t test_vlseg7e16ff_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x7_t test_vlseg7e16ff_v_f16mf4x7_tum(vbool64_t vm,
+    vfloat16mf4x7_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_f16mf4x7_tum(vm, vd, rs1, new_vl, vl);
 }

-vfloat16mf2x7_t test_vlseg7e16ff_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x7_t test_vlseg7e16ff_v_f16mf2x7_tum(vbool32_t vm,
+    vfloat16mf2x7_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_f16mf2x7_tum(vm, vd, rs1, new_vl, vl);
 }

-vfloat16m1x7_t test_vlseg7e16ff_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x7_t test_vlseg7e16ff_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_f16m1x7_tum(vm, vd, rs1, new_vl, vl);
 }

-vint16mf4x7_t test_vlseg7e16ff_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x7_t test_vlseg7e16ff_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd,
+    const int16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_i16mf4x7_tum(vm, vd, rs1, new_vl, vl);
 }

-vint16mf2x7_t test_vlseg7e16ff_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x7_t test_vlseg7e16ff_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd,
+    const int16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_i16mf2x7_tum(vm, vd, rs1, new_vl, vl);
 }

-vint16m1x7_t test_vlseg7e16ff_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x7_t test_vlseg7e16ff_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd,
+    const int16_t *rs1, size_t *new_vl,
+    size_t vl) {
   return __riscv_vlseg7e16ff_v_i16m1x7_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint16mf4x7_t test_vlseg7e16ff_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x7_t test_vlseg7e16ff_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_u16mf4x7_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint16mf2x7_t test_vlseg7e16ff_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x7_t test_vlseg7e16ff_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_u16mf2x7_tum(vm, vd, rs1, new_vl, vl);
 }

-vuint16m1x7_t test_vlseg7e16ff_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x7_t test_vlseg7e16ff_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd,
+    const uint16_t *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_u16m1x7_tum(vm, vd, rs1, new_vl, vl);
 }

-vfloat16mf4x7_t test_vlseg7e16ff_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x7_t test_vlseg7e16ff_v_f16mf4x7_tumu(vbool64_t vm,
+    vfloat16mf4x7_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_f16mf4x7_tumu(vm, vd, rs1, new_vl, vl);
 }

-vfloat16mf2x7_t test_vlseg7e16ff_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x7_t test_vlseg7e16ff_v_f16mf2x7_tumu(vbool32_t vm,
+    vfloat16mf2x7_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
   return __riscv_vlseg7e16ff_v_f16mf2x7_tumu(vm, vd, rs1, new_vl, vl);
 }

-vfloat16m1x7_t test_vlseg7e16ff_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x7_t test_vlseg7e16ff_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd,
+    const _Float16 *rs1,
+    size_t *new_vl, size_t vl) {
{ return __riscv_vlseg7e16ff_v_f16m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf4x7_t test_vlseg7e16ff_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x7_t test_vlseg7e16ff_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_i16mf4x7_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf2x7_t test_vlseg7e16ff_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x7_t test_vlseg7e16ff_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_i16mf2x7_tumu(vm, vd, rs1, new_vl, vl); } -vint16m1x7_t test_vlseg7e16ff_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x7_t test_vlseg7e16ff_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e16ff_v_i16m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf4x7_t test_vlseg7e16ff_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x7_t test_vlseg7e16ff_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_u16mf4x7_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf2x7_t test_vlseg7e16ff_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x7_t test_vlseg7e16ff_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_u16mf2x7_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m1x7_t test_vlseg7e16ff_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x7_t test_vlseg7e16ff_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_u16m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16mf4x7_t test_vlseg7e16ff_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4x7_t test_vlseg7e16ff_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_f16mf4x7_mu(vm, vd, rs1, new_vl, vl); } -vfloat16mf2x7_t test_vlseg7e16ff_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2x7_t test_vlseg7e16ff_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_f16mf2x7_mu(vm, vd, rs1, new_vl, vl); } -vfloat16m1x7_t test_vlseg7e16ff_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1x7_t test_vlseg7e16ff_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_f16m1x7_mu(vm, vd, rs1, new_vl, vl); } -vint16mf4x7_t test_vlseg7e16ff_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x7_t test_vlseg7e16ff_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e16ff_v_i16mf4x7_mu(vm, vd, rs1, new_vl, vl); } -vint16mf2x7_t test_vlseg7e16ff_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x7_t 
test_vlseg7e16ff_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e16ff_v_i16mf2x7_mu(vm, vd, rs1, new_vl, vl); } -vint16m1x7_t test_vlseg7e16ff_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x7_t test_vlseg7e16ff_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e16ff_v_i16m1x7_mu(vm, vd, rs1, new_vl, vl); } -vuint16mf4x7_t test_vlseg7e16ff_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x7_t test_vlseg7e16ff_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_u16mf4x7_mu(vm, vd, rs1, new_vl, vl); } -vuint16mf2x7_t test_vlseg7e16ff_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x7_t test_vlseg7e16ff_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_u16mf2x7_mu(vm, vd, rs1, new_vl, vl); } -vuint16m1x7_t test_vlseg7e16ff_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x7_t test_vlseg7e16ff_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e16ff_v_u16m1x7_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32.c index c7fe4b3ff..81031cf79 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32.c @@ -6,98 +6,122 @@ #include -vfloat32mf2x7_t test_vlseg7e32_v_f32mf2x7_tu(vfloat32mf2x7_t vd, const float *rs1, size_t vl) { +vfloat32mf2x7_t test_vlseg7e32_v_f32mf2x7_tu(vfloat32mf2x7_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg7e32_v_f32mf2x7_tu(vd, rs1, vl); } -vfloat32m1x7_t test_vlseg7e32_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, size_t vl) { +vfloat32m1x7_t test_vlseg7e32_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, + size_t vl) { return __riscv_vlseg7e32_v_f32m1x7_tu(vd, rs1, vl); } -vint32mf2x7_t test_vlseg7e32_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x7_t test_vlseg7e32_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, + size_t vl) { return __riscv_vlseg7e32_v_i32mf2x7_tu(vd, rs1, vl); } -vint32m1x7_t test_vlseg7e32_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, size_t vl) { +vint32m1x7_t test_vlseg7e32_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, + size_t vl) { return __riscv_vlseg7e32_v_i32m1x7_tu(vd, rs1, vl); } -vuint32mf2x7_t test_vlseg7e32_v_u32mf2x7_tu(vuint32mf2x7_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x7_t test_vlseg7e32_v_u32mf2x7_tu(vuint32mf2x7_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_u32mf2x7_tu(vd, rs1, vl); } -vuint32m1x7_t test_vlseg7e32_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x7_t test_vlseg7e32_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, + size_t vl) { return __riscv_vlseg7e32_v_u32m1x7_tu(vd, rs1, vl); } -vfloat32mf2x7_t test_vlseg7e32_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, size_t vl) { +vfloat32mf2x7_t test_vlseg7e32_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, + const float *rs1, size_t vl) { return 
__riscv_vlseg7e32_v_f32mf2x7_tum(vm, vd, rs1, vl); } -vfloat32m1x7_t test_vlseg7e32_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, size_t vl) { +vfloat32m1x7_t test_vlseg7e32_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg7e32_v_f32m1x7_tum(vm, vd, rs1, vl); } -vint32mf2x7_t test_vlseg7e32_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x7_t test_vlseg7e32_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_i32mf2x7_tum(vm, vd, rs1, vl); } -vint32m1x7_t test_vlseg7e32_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, size_t vl) { +vint32m1x7_t test_vlseg7e32_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_i32m1x7_tum(vm, vd, rs1, vl); } -vuint32mf2x7_t test_vlseg7e32_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x7_t test_vlseg7e32_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_u32mf2x7_tum(vm, vd, rs1, vl); } -vuint32m1x7_t test_vlseg7e32_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x7_t test_vlseg7e32_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_u32m1x7_tum(vm, vd, rs1, vl); } -vfloat32mf2x7_t test_vlseg7e32_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, size_t vl) { +vfloat32mf2x7_t test_vlseg7e32_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg7e32_v_f32mf2x7_tumu(vm, vd, rs1, vl); } -vfloat32m1x7_t test_vlseg7e32_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, size_t vl) { +vfloat32m1x7_t test_vlseg7e32_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg7e32_v_f32m1x7_tumu(vm, vd, rs1, vl); } -vint32mf2x7_t test_vlseg7e32_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x7_t test_vlseg7e32_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_i32mf2x7_tumu(vm, vd, rs1, vl); } -vint32m1x7_t test_vlseg7e32_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, size_t vl) { +vint32m1x7_t test_vlseg7e32_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_i32m1x7_tumu(vm, vd, rs1, vl); } -vuint32mf2x7_t test_vlseg7e32_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x7_t test_vlseg7e32_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_u32mf2x7_tumu(vm, vd, rs1, vl); } -vuint32m1x7_t test_vlseg7e32_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x7_t test_vlseg7e32_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_u32m1x7_tumu(vm, vd, rs1, vl); } -vfloat32mf2x7_t test_vlseg7e32_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, size_t vl) { +vfloat32mf2x7_t test_vlseg7e32_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg7e32_v_f32mf2x7_mu(vm, vd, rs1, vl); } -vfloat32m1x7_t test_vlseg7e32_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, size_t 
vl) { +vfloat32m1x7_t test_vlseg7e32_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg7e32_v_f32m1x7_mu(vm, vd, rs1, vl); } -vint32mf2x7_t test_vlseg7e32_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x7_t test_vlseg7e32_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_i32mf2x7_mu(vm, vd, rs1, vl); } -vint32m1x7_t test_vlseg7e32_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, size_t vl) { +vint32m1x7_t test_vlseg7e32_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_i32m1x7_mu(vm, vd, rs1, vl); } -vuint32mf2x7_t test_vlseg7e32_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x7_t test_vlseg7e32_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_u32mf2x7_mu(vm, vd, rs1, vl); } -vuint32m1x7_t test_vlseg7e32_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x7_t test_vlseg7e32_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg7e32_v_u32m1x7_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32ff.c index cf30aa4d9..dee7b7717 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32ff.c @@ -6,98 +6,147 @@ #include -vfloat32mf2x7_t test_vlseg7e32ff_v_f32mf2x7_tu(vfloat32mf2x7_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x7_t test_vlseg7e32ff_v_f32mf2x7_tu(vfloat32mf2x7_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_f32mf2x7_tu(vd, rs1, new_vl, vl); } -vfloat32m1x7_t test_vlseg7e32ff_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x7_t test_vlseg7e32ff_v_f32m1x7_tu(vfloat32m1x7_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_f32m1x7_tu(vd, rs1, new_vl, vl); } -vint32mf2x7_t test_vlseg7e32ff_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x7_t test_vlseg7e32ff_v_i32mf2x7_tu(vint32mf2x7_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_i32mf2x7_tu(vd, rs1, new_vl, vl); } -vint32m1x7_t test_vlseg7e32ff_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x7_t test_vlseg7e32ff_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e32ff_v_i32m1x7_tu(vd, rs1, new_vl, vl); } -vuint32mf2x7_t test_vlseg7e32ff_v_u32mf2x7_tu(vuint32mf2x7_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x7_t test_vlseg7e32ff_v_u32mf2x7_tu(vuint32mf2x7_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e32ff_v_u32mf2x7_tu(vd, rs1, new_vl, vl); } -vuint32m1x7_t test_vlseg7e32ff_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x7_t test_vlseg7e32ff_v_u32m1x7_tu(vuint32m1x7_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_u32m1x7_tu(vd, rs1, new_vl, vl); } -vfloat32mf2x7_t test_vlseg7e32ff_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x7_t 
test_vlseg7e32ff_v_f32mf2x7_tum(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e32ff_v_f32mf2x7_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m1x7_t test_vlseg7e32ff_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x7_t test_vlseg7e32ff_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_f32m1x7_tum(vm, vd, rs1, new_vl, vl); } -vint32mf2x7_t test_vlseg7e32ff_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x7_t test_vlseg7e32ff_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e32ff_v_i32mf2x7_tum(vm, vd, rs1, new_vl, vl); } -vint32m1x7_t test_vlseg7e32ff_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x7_t test_vlseg7e32ff_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_i32m1x7_tum(vm, vd, rs1, new_vl, vl); } -vuint32mf2x7_t test_vlseg7e32ff_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x7_t test_vlseg7e32ff_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e32ff_v_u32mf2x7_tum(vm, vd, rs1, new_vl, vl); } -vuint32m1x7_t test_vlseg7e32ff_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x7_t test_vlseg7e32ff_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e32ff_v_u32m1x7_tum(vm, vd, rs1, new_vl, vl); } -vfloat32mf2x7_t test_vlseg7e32ff_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x7_t test_vlseg7e32ff_v_f32mf2x7_tumu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e32ff_v_f32mf2x7_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m1x7_t test_vlseg7e32ff_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x7_t test_vlseg7e32ff_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_f32m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vint32mf2x7_t test_vlseg7e32ff_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x7_t test_vlseg7e32ff_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e32ff_v_i32mf2x7_tumu(vm, vd, rs1, new_vl, vl); } -vint32m1x7_t test_vlseg7e32ff_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x7_t test_vlseg7e32ff_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_i32m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vuint32mf2x7_t test_vlseg7e32ff_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x7_t test_vlseg7e32ff_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e32ff_v_u32mf2x7_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m1x7_t 
test_vlseg7e32ff_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x7_t test_vlseg7e32ff_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e32ff_v_u32m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32mf2x7_t test_vlseg7e32ff_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x7_t test_vlseg7e32ff_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_f32mf2x7_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m1x7_t test_vlseg7e32ff_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x7_t test_vlseg7e32ff_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_f32m1x7_mu(vm, vd, rs1, new_vl, vl); } -vint32mf2x7_t test_vlseg7e32ff_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x7_t test_vlseg7e32ff_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_i32mf2x7_mu(vm, vd, rs1, new_vl, vl); } -vint32m1x7_t test_vlseg7e32ff_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x7_t test_vlseg7e32ff_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_i32m1x7_mu(vm, vd, rs1, new_vl, vl); } -vuint32mf2x7_t test_vlseg7e32ff_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x7_t test_vlseg7e32ff_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e32ff_v_u32mf2x7_mu(vm, vd, rs1, new_vl, vl); } -vuint32m1x7_t test_vlseg7e32ff_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x7_t test_vlseg7e32ff_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e32ff_v_u32m1x7_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64.c index 40092b7c6..9cbc46ff3 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64.c @@ -6,50 +6,62 @@ #include -vfloat64m1x7_t test_vlseg7e64_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, size_t vl) { +vfloat64m1x7_t test_vlseg7e64_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, + size_t vl) { return __riscv_vlseg7e64_v_f64m1x7_tu(vd, rs1, vl); } -vint64m1x7_t test_vlseg7e64_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, size_t vl) { +vint64m1x7_t test_vlseg7e64_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, + size_t vl) { return __riscv_vlseg7e64_v_i64m1x7_tu(vd, rs1, vl); } -vuint64m1x7_t test_vlseg7e64_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x7_t test_vlseg7e64_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, + size_t vl) { return __riscv_vlseg7e64_v_u64m1x7_tu(vd, rs1, vl); } -vfloat64m1x7_t test_vlseg7e64_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, size_t vl) { +vfloat64m1x7_t test_vlseg7e64_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, size_t vl) { return 
__riscv_vlseg7e64_v_f64m1x7_tum(vm, vd, rs1, vl); } -vint64m1x7_t test_vlseg7e64_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, size_t vl) { +vint64m1x7_t test_vlseg7e64_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg7e64_v_i64m1x7_tum(vm, vd, rs1, vl); } -vuint64m1x7_t test_vlseg7e64_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x7_t test_vlseg7e64_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg7e64_v_u64m1x7_tum(vm, vd, rs1, vl); } -vfloat64m1x7_t test_vlseg7e64_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, size_t vl) { +vfloat64m1x7_t test_vlseg7e64_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg7e64_v_f64m1x7_tumu(vm, vd, rs1, vl); } -vint64m1x7_t test_vlseg7e64_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, size_t vl) { +vint64m1x7_t test_vlseg7e64_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg7e64_v_i64m1x7_tumu(vm, vd, rs1, vl); } -vuint64m1x7_t test_vlseg7e64_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x7_t test_vlseg7e64_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg7e64_v_u64m1x7_tumu(vm, vd, rs1, vl); } -vfloat64m1x7_t test_vlseg7e64_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, size_t vl) { +vfloat64m1x7_t test_vlseg7e64_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg7e64_v_f64m1x7_mu(vm, vd, rs1, vl); } -vint64m1x7_t test_vlseg7e64_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, size_t vl) { +vint64m1x7_t test_vlseg7e64_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg7e64_v_i64m1x7_mu(vm, vd, rs1, vl); } -vuint64m1x7_t test_vlseg7e64_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x7_t test_vlseg7e64_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg7e64_v_u64m1x7_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64ff.c index b4332349a..a2e28f5a7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64ff.c @@ -6,50 +6,73 @@ #include -vfloat64m1x7_t test_vlseg7e64ff_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x7_t test_vlseg7e64ff_v_f64m1x7_tu(vfloat64m1x7_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e64ff_v_f64m1x7_tu(vd, rs1, new_vl, vl); } -vint64m1x7_t test_vlseg7e64ff_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x7_t test_vlseg7e64ff_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e64ff_v_i64m1x7_tu(vd, rs1, new_vl, vl); } -vuint64m1x7_t test_vlseg7e64ff_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x7_t test_vlseg7e64ff_v_u64m1x7_tu(vuint64m1x7_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e64ff_v_u64m1x7_tu(vd, rs1, new_vl, vl); } -vfloat64m1x7_t test_vlseg7e64ff_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, 
const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x7_t test_vlseg7e64ff_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e64ff_v_f64m1x7_tum(vm, vd, rs1, new_vl, vl); } -vint64m1x7_t test_vlseg7e64ff_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x7_t test_vlseg7e64ff_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e64ff_v_i64m1x7_tum(vm, vd, rs1, new_vl, vl); } -vuint64m1x7_t test_vlseg7e64ff_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x7_t test_vlseg7e64ff_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e64ff_v_u64m1x7_tum(vm, vd, rs1, new_vl, vl); } -vfloat64m1x7_t test_vlseg7e64ff_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x7_t test_vlseg7e64ff_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e64ff_v_f64m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vint64m1x7_t test_vlseg7e64ff_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x7_t test_vlseg7e64ff_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e64ff_v_i64m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vuint64m1x7_t test_vlseg7e64ff_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x7_t test_vlseg7e64ff_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e64ff_v_u64m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vfloat64m1x7_t test_vlseg7e64ff_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x7_t test_vlseg7e64ff_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e64ff_v_f64m1x7_mu(vm, vd, rs1, new_vl, vl); } -vint64m1x7_t test_vlseg7e64ff_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x7_t test_vlseg7e64ff_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e64ff_v_i64m1x7_mu(vm, vd, rs1, new_vl, vl); } -vuint64m1x7_t test_vlseg7e64ff_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x7_t test_vlseg7e64ff_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e64ff_v_u64m1x7_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8.c index e4f81dd92..d302d5d6b 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8.c @@ -5,130 +5,162 @@ #include -vint8mf8x7_t test_vlseg7e8_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x7_t test_vlseg7e8_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vlseg7e8_v_i8mf8x7_tu(vd, rs1, vl); } -vint8mf4x7_t test_vlseg7e8_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x7_t test_vlseg7e8_v_i8mf4x7_tu(vint8mf4x7_t vd, 
const int8_t *rs1, + size_t vl) { return __riscv_vlseg7e8_v_i8mf4x7_tu(vd, rs1, vl); } -vint8mf2x7_t test_vlseg7e8_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x7_t test_vlseg7e8_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vlseg7e8_v_i8mf2x7_tu(vd, rs1, vl); } -vint8m1x7_t test_vlseg7e8_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, size_t vl) { +vint8m1x7_t test_vlseg7e8_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vlseg7e8_v_i8m1x7_tu(vd, rs1, vl); } -vuint8mf8x7_t test_vlseg7e8_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x7_t test_vlseg7e8_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg7e8_v_u8mf8x7_tu(vd, rs1, vl); } -vuint8mf4x7_t test_vlseg7e8_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x7_t test_vlseg7e8_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg7e8_v_u8mf4x7_tu(vd, rs1, vl); } -vuint8mf2x7_t test_vlseg7e8_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x7_t test_vlseg7e8_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg7e8_v_u8mf2x7_tu(vd, rs1, vl); } -vuint8m1x7_t test_vlseg7e8_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x7_t test_vlseg7e8_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg7e8_v_u8m1x7_tu(vd, rs1, vl); } -vint8mf8x7_t test_vlseg7e8_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x7_t test_vlseg7e8_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8mf8x7_tum(vm, vd, rs1, vl); } -vint8mf4x7_t test_vlseg7e8_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x7_t test_vlseg7e8_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8mf4x7_tum(vm, vd, rs1, vl); } -vint8mf2x7_t test_vlseg7e8_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x7_t test_vlseg7e8_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8mf2x7_tum(vm, vd, rs1, vl); } -vint8m1x7_t test_vlseg7e8_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, size_t vl) { +vint8m1x7_t test_vlseg7e8_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8m1x7_tum(vm, vd, rs1, vl); } -vuint8mf8x7_t test_vlseg7e8_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x7_t test_vlseg7e8_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8mf8x7_tum(vm, vd, rs1, vl); } -vuint8mf4x7_t test_vlseg7e8_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x7_t test_vlseg7e8_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8mf4x7_tum(vm, vd, rs1, vl); } -vuint8mf2x7_t test_vlseg7e8_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x7_t test_vlseg7e8_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8mf2x7_tum(vm, vd, rs1, vl); } -vuint8m1x7_t test_vlseg7e8_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x7_t 
test_vlseg7e8_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8m1x7_tum(vm, vd, rs1, vl); } -vint8mf8x7_t test_vlseg7e8_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x7_t test_vlseg7e8_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8mf8x7_tumu(vm, vd, rs1, vl); } -vint8mf4x7_t test_vlseg7e8_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x7_t test_vlseg7e8_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8mf4x7_tumu(vm, vd, rs1, vl); } -vint8mf2x7_t test_vlseg7e8_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x7_t test_vlseg7e8_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8mf2x7_tumu(vm, vd, rs1, vl); } -vint8m1x7_t test_vlseg7e8_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, size_t vl) { +vint8m1x7_t test_vlseg7e8_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8m1x7_tumu(vm, vd, rs1, vl); } -vuint8mf8x7_t test_vlseg7e8_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x7_t test_vlseg7e8_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8mf8x7_tumu(vm, vd, rs1, vl); } -vuint8mf4x7_t test_vlseg7e8_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x7_t test_vlseg7e8_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8mf4x7_tumu(vm, vd, rs1, vl); } -vuint8mf2x7_t test_vlseg7e8_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x7_t test_vlseg7e8_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8mf2x7_tumu(vm, vd, rs1, vl); } -vuint8m1x7_t test_vlseg7e8_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x7_t test_vlseg7e8_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8m1x7_tumu(vm, vd, rs1, vl); } -vint8mf8x7_t test_vlseg7e8_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x7_t test_vlseg7e8_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8mf8x7_mu(vm, vd, rs1, vl); } -vint8mf4x7_t test_vlseg7e8_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x7_t test_vlseg7e8_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8mf4x7_mu(vm, vd, rs1, vl); } -vint8mf2x7_t test_vlseg7e8_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x7_t test_vlseg7e8_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8mf2x7_mu(vm, vd, rs1, vl); } -vint8m1x7_t test_vlseg7e8_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, size_t vl) { +vint8m1x7_t test_vlseg7e8_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_i8m1x7_mu(vm, vd, rs1, vl); } -vuint8mf8x7_t test_vlseg7e8_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x7_t 
test_vlseg7e8_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8mf8x7_mu(vm, vd, rs1, vl); } -vuint8mf4x7_t test_vlseg7e8_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x7_t test_vlseg7e8_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8mf4x7_mu(vm, vd, rs1, vl); } -vuint8mf2x7_t test_vlseg7e8_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x7_t test_vlseg7e8_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8mf2x7_mu(vm, vd, rs1, vl); } -vuint8m1x7_t test_vlseg7e8_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x7_t test_vlseg7e8_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg7e8_v_u8m1x7_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8ff.c index 0087f5be3..b962005fc 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8ff.c @@ -6,130 +6,186 @@ #include -vint8mf8x7_t test_vlseg7e8ff_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x7_t test_vlseg7e8ff_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e8ff_v_i8mf8x7_tu(vd, rs1, new_vl, vl); } -vint8mf4x7_t test_vlseg7e8ff_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x7_t test_vlseg7e8ff_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e8ff_v_i8mf4x7_tu(vd, rs1, new_vl, vl); } -vint8mf2x7_t test_vlseg7e8ff_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x7_t test_vlseg7e8ff_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e8ff_v_i8mf2x7_tu(vd, rs1, new_vl, vl); } -vint8m1x7_t test_vlseg7e8ff_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x7_t test_vlseg7e8ff_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e8ff_v_i8m1x7_tu(vd, rs1, new_vl, vl); } -vuint8mf8x7_t test_vlseg7e8ff_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x7_t test_vlseg7e8ff_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e8ff_v_u8mf8x7_tu(vd, rs1, new_vl, vl); } -vuint8mf4x7_t test_vlseg7e8ff_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x7_t test_vlseg7e8ff_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e8ff_v_u8mf4x7_tu(vd, rs1, new_vl, vl); } -vuint8mf2x7_t test_vlseg7e8ff_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2x7_t test_vlseg7e8ff_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e8ff_v_u8mf2x7_tu(vd, rs1, new_vl, vl); } -vuint8m1x7_t test_vlseg7e8ff_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x7_t test_vlseg7e8ff_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e8ff_v_u8m1x7_tu(vd, rs1, new_vl, vl); } -vint8mf8x7_t 
test_vlseg7e8ff_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x7_t test_vlseg7e8ff_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8mf8x7_tum(vm, vd, rs1, new_vl, vl); } -vint8mf4x7_t test_vlseg7e8ff_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x7_t test_vlseg7e8ff_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8mf4x7_tum(vm, vd, rs1, new_vl, vl); } -vint8mf2x7_t test_vlseg7e8ff_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x7_t test_vlseg7e8ff_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8mf2x7_tum(vm, vd, rs1, new_vl, vl); } -vint8m1x7_t test_vlseg7e8ff_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x7_t test_vlseg7e8ff_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8m1x7_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf8x7_t test_vlseg7e8ff_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x7_t test_vlseg7e8ff_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8mf8x7_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf4x7_t test_vlseg7e8ff_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x7_t test_vlseg7e8ff_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8mf4x7_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf2x7_t test_vlseg7e8ff_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2x7_t test_vlseg7e8ff_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8mf2x7_tum(vm, vd, rs1, new_vl, vl); } -vuint8m1x7_t test_vlseg7e8ff_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x7_t test_vlseg7e8ff_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8m1x7_tum(vm, vd, rs1, new_vl, vl); } -vint8mf8x7_t test_vlseg7e8ff_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x7_t test_vlseg7e8ff_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8mf8x7_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf4x7_t test_vlseg7e8ff_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x7_t test_vlseg7e8ff_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8mf4x7_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf2x7_t test_vlseg7e8ff_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x7_t test_vlseg7e8ff_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8mf2x7_tumu(vm, vd, rs1, new_vl, vl); } -vint8m1x7_t test_vlseg7e8ff_v_i8m1x7_tumu(vbool8_t vm, 
vint8m1x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x7_t test_vlseg7e8ff_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf8x7_t test_vlseg7e8ff_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x7_t test_vlseg7e8ff_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8mf8x7_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf4x7_t test_vlseg7e8ff_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x7_t test_vlseg7e8ff_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8mf4x7_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf2x7_t test_vlseg7e8ff_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2x7_t test_vlseg7e8ff_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8mf2x7_tumu(vm, vd, rs1, new_vl, vl); } -vuint8m1x7_t test_vlseg7e8ff_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x7_t test_vlseg7e8ff_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf8x7_t test_vlseg7e8ff_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x7_t test_vlseg7e8ff_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8mf8x7_mu(vm, vd, rs1, new_vl, vl); } -vint8mf4x7_t test_vlseg7e8ff_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x7_t test_vlseg7e8ff_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8mf4x7_mu(vm, vd, rs1, new_vl, vl); } -vint8mf2x7_t test_vlseg7e8ff_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x7_t test_vlseg7e8ff_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8mf2x7_mu(vm, vd, rs1, new_vl, vl); } -vint8m1x7_t test_vlseg7e8ff_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x7_t test_vlseg7e8ff_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_i8m1x7_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf8x7_t test_vlseg7e8ff_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x7_t test_vlseg7e8ff_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8mf8x7_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf4x7_t test_vlseg7e8ff_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x7_t test_vlseg7e8ff_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8mf4x7_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf2x7_t test_vlseg7e8ff_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, size_t 
*new_vl, size_t vl) { +vuint8mf2x7_t test_vlseg7e8ff_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8mf2x7_mu(vm, vd, rs1, new_vl, vl); } -vuint8m1x7_t test_vlseg7e8ff_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x7_t test_vlseg7e8ff_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg7e8ff_v_u8m1x7_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16.c index 02e8e5974..20668ac56 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16.c @@ -6,146 +6,182 @@ #include -vfloat16mf4x8_t test_vlseg8e16_v_f16mf4x8_tu(vfloat16mf4x8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4x8_t test_vlseg8e16_v_f16mf4x8_tu(vfloat16mf4x8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_f16mf4x8_tu(vd, rs1, vl); } -vfloat16mf2x8_t test_vlseg8e16_v_f16mf2x8_tu(vfloat16mf2x8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2x8_t test_vlseg8e16_v_f16mf2x8_tu(vfloat16mf2x8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_f16mf2x8_tu(vd, rs1, vl); } -vfloat16m1x8_t test_vlseg8e16_v_f16m1x8_tu(vfloat16m1x8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1x8_t test_vlseg8e16_v_f16m1x8_tu(vfloat16m1x8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_f16m1x8_tu(vd, rs1, vl); } -vint16mf4x8_t test_vlseg8e16_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, size_t vl) { +vint16mf4x8_t test_vlseg8e16_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vlseg8e16_v_i16mf4x8_tu(vd, rs1, vl); } -vint16mf2x8_t test_vlseg8e16_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, size_t vl) { +vint16mf2x8_t test_vlseg8e16_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vlseg8e16_v_i16mf2x8_tu(vd, rs1, vl); } -vint16m1x8_t test_vlseg8e16_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, size_t vl) { +vint16m1x8_t test_vlseg8e16_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, + size_t vl) { return __riscv_vlseg8e16_v_i16m1x8_tu(vd, rs1, vl); } -vuint16mf4x8_t test_vlseg8e16_v_u16mf4x8_tu(vuint16mf4x8_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4x8_t test_vlseg8e16_v_u16mf4x8_tu(vuint16mf4x8_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg8e16_v_u16mf4x8_tu(vd, rs1, vl); } -vuint16mf2x8_t test_vlseg8e16_v_u16mf2x8_tu(vuint16mf2x8_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2x8_t test_vlseg8e16_v_u16mf2x8_tu(vuint16mf2x8_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg8e16_v_u16mf2x8_tu(vd, rs1, vl); } -vuint16m1x8_t test_vlseg8e16_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1x8_t test_vlseg8e16_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, + size_t vl) { return __riscv_vlseg8e16_v_u16m1x8_tu(vd, rs1, vl); } -vfloat16mf4x8_t test_vlseg8e16_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4x8_t test_vlseg8e16_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_f16mf4x8_tum(vm, vd, rs1, vl); } -vfloat16mf2x8_t test_vlseg8e16_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2x8_t test_vlseg8e16_v_f16mf2x8_tum(vbool32_t 
vm, vfloat16mf2x8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_f16mf2x8_tum(vm, vd, rs1, vl); } -vfloat16m1x8_t test_vlseg8e16_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1x8_t test_vlseg8e16_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_f16m1x8_tum(vm, vd, rs1, vl); } -vint16mf4x8_t test_vlseg8e16_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, size_t vl) { +vint16mf4x8_t test_vlseg8e16_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg8e16_v_i16mf4x8_tum(vm, vd, rs1, vl); } -vint16mf2x8_t test_vlseg8e16_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, size_t vl) { +vint16mf2x8_t test_vlseg8e16_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg8e16_v_i16mf2x8_tum(vm, vd, rs1, vl); } -vint16m1x8_t test_vlseg8e16_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, size_t vl) { +vint16m1x8_t test_vlseg8e16_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg8e16_v_i16m1x8_tum(vm, vd, rs1, vl); } -vuint16mf4x8_t test_vlseg8e16_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf4x8_t test_vlseg8e16_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg8e16_v_u16mf4x8_tum(vm, vd, rs1, vl); } -vuint16mf2x8_t test_vlseg8e16_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, size_t vl) { +vuint16mf2x8_t test_vlseg8e16_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg8e16_v_u16mf2x8_tum(vm, vd, rs1, vl); } -vuint16m1x8_t test_vlseg8e16_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, size_t vl) { +vuint16m1x8_t test_vlseg8e16_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, size_t vl) { return __riscv_vlseg8e16_v_u16m1x8_tum(vm, vd, rs1, vl); } -vfloat16mf4x8_t test_vlseg8e16_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf4x8_t test_vlseg8e16_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_f16mf4x8_tumu(vm, vd, rs1, vl); } -vfloat16mf2x8_t test_vlseg8e16_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16mf2x8_t test_vlseg8e16_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_f16mf2x8_tumu(vm, vd, rs1, vl); } -vfloat16m1x8_t test_vlseg8e16_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, size_t vl) { +vfloat16m1x8_t test_vlseg8e16_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_f16m1x8_tumu(vm, vd, rs1, vl); } -vint16mf4x8_t test_vlseg8e16_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, size_t vl) { +vint16mf4x8_t test_vlseg8e16_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg8e16_v_i16mf4x8_tumu(vm, vd, rs1, vl); } -vint16mf2x8_t test_vlseg8e16_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, size_t vl) { +vint16mf2x8_t test_vlseg8e16_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, size_t vl) { return __riscv_vlseg8e16_v_i16mf2x8_tumu(vm, vd, rs1, vl); } 
-vint16m1x8_t test_vlseg8e16_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x8_t test_vlseg8e16_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd,
+ const int16_t *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_i16m1x8_tumu(vm, vd, rs1, vl);
 }
 
-vuint16mf4x8_t test_vlseg8e16_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x8_t test_vlseg8e16_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd,
+ const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_u16mf4x8_tumu(vm, vd, rs1, vl);
 }
 
-vuint16mf2x8_t test_vlseg8e16_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x8_t test_vlseg8e16_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd,
+ const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_u16mf2x8_tumu(vm, vd, rs1, vl);
 }
 
-vuint16m1x8_t test_vlseg8e16_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x8_t test_vlseg8e16_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd,
+ const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_u16m1x8_tumu(vm, vd, rs1, vl);
 }
 
-vfloat16mf4x8_t test_vlseg8e16_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf4x8_t test_vlseg8e16_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd,
+ const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_f16mf4x8_mu(vm, vd, rs1, vl);
 }
 
-vfloat16mf2x8_t test_vlseg8e16_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16mf2x8_t test_vlseg8e16_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd,
+ const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_f16mf2x8_mu(vm, vd, rs1, vl);
 }
 
-vfloat16m1x8_t test_vlseg8e16_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, size_t vl) {
+vfloat16m1x8_t test_vlseg8e16_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd,
+ const _Float16 *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_f16m1x8_mu(vm, vd, rs1, vl);
 }
 
-vint16mf4x8_t test_vlseg8e16_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, size_t vl) {
+vint16mf4x8_t test_vlseg8e16_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd,
+ const int16_t *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_i16mf4x8_mu(vm, vd, rs1, vl);
 }
 
-vint16mf2x8_t test_vlseg8e16_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, size_t vl) {
+vint16mf2x8_t test_vlseg8e16_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd,
+ const int16_t *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_i16mf2x8_mu(vm, vd, rs1, vl);
 }
 
-vint16m1x8_t test_vlseg8e16_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, size_t vl) {
+vint16m1x8_t test_vlseg8e16_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd,
+ const int16_t *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_i16m1x8_mu(vm, vd, rs1, vl);
 }
 
-vuint16mf4x8_t test_vlseg8e16_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf4x8_t test_vlseg8e16_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd,
+ const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_u16mf4x8_mu(vm, vd, rs1, vl);
 }
 
-vuint16mf2x8_t test_vlseg8e16_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, size_t vl) {
+vuint16mf2x8_t test_vlseg8e16_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd,
+ const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_u16mf2x8_mu(vm, vd, rs1, vl);
 }
 
-vuint16m1x8_t test_vlseg8e16_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, size_t vl) {
+vuint16m1x8_t test_vlseg8e16_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd,
+ const uint16_t *rs1, size_t vl) {
   return __riscv_vlseg8e16_v_u16m1x8_mu(vm, vd, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16ff.c
index a0153b112..eb3098659 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16ff.c
@@ -6,146 +6,221 @@
 #include <riscv_vector.h>
 
-vfloat16mf4x8_t test_vlseg8e16ff_v_f16mf4x8_tu(vfloat16mf4x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x8_t test_vlseg8e16ff_v_f16mf4x8_tu(vfloat16mf4x8_t vd,
+ const _Float16 *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_f16mf4x8_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat16mf2x8_t test_vlseg8e16ff_v_f16mf2x8_tu(vfloat16mf2x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x8_t test_vlseg8e16ff_v_f16mf2x8_tu(vfloat16mf2x8_t vd,
+ const _Float16 *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_f16mf2x8_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat16m1x8_t test_vlseg8e16ff_v_f16m1x8_tu(vfloat16m1x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x8_t test_vlseg8e16ff_v_f16m1x8_tu(vfloat16m1x8_t vd,
+ const _Float16 *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_f16m1x8_tu(vd, rs1, new_vl, vl);
 }
 
-vint16mf4x8_t test_vlseg8e16ff_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x8_t test_vlseg8e16ff_v_i16mf4x8_tu(vint16mf4x8_t vd,
+ const int16_t *rs1, size_t *new_vl,
+ size_t vl) {
   return __riscv_vlseg8e16ff_v_i16mf4x8_tu(vd, rs1, new_vl, vl);
 }
 
-vint16mf2x8_t test_vlseg8e16ff_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x8_t test_vlseg8e16ff_v_i16mf2x8_tu(vint16mf2x8_t vd,
+ const int16_t *rs1, size_t *new_vl,
+ size_t vl) {
   return __riscv_vlseg8e16ff_v_i16mf2x8_tu(vd, rs1, new_vl, vl);
 }
 
-vint16m1x8_t test_vlseg8e16ff_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x8_t test_vlseg8e16ff_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_i16m1x8_tu(vd, rs1, new_vl, vl);
 }
 
-vuint16mf4x8_t test_vlseg8e16ff_v_u16mf4x8_tu(vuint16mf4x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x8_t test_vlseg8e16ff_v_u16mf4x8_tu(vuint16mf4x8_t vd,
+ const uint16_t *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_u16mf4x8_tu(vd, rs1, new_vl, vl);
 }
 
-vuint16mf2x8_t test_vlseg8e16ff_v_u16mf2x8_tu(vuint16mf2x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x8_t test_vlseg8e16ff_v_u16mf2x8_tu(vuint16mf2x8_t vd,
+ const uint16_t *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_u16mf2x8_tu(vd, rs1, new_vl, vl);
 }
 
-vuint16m1x8_t test_vlseg8e16ff_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x8_t test_vlseg8e16ff_v_u16m1x8_tu(vuint16m1x8_t vd,
+ const uint16_t *rs1, size_t *new_vl,
+ size_t vl) {
   return __riscv_vlseg8e16ff_v_u16m1x8_tu(vd, rs1, new_vl, vl);
 }
 
-vfloat16mf4x8_t test_vlseg8e16ff_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x8_t test_vlseg8e16ff_v_f16mf4x8_tum(vbool64_t vm,
+ vfloat16mf4x8_t vd,
+ const _Float16 *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_f16mf4x8_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16mf2x8_t test_vlseg8e16ff_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x8_t test_vlseg8e16ff_v_f16mf2x8_tum(vbool32_t vm,
+ vfloat16mf2x8_t vd,
+ const _Float16 *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_f16mf2x8_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16m1x8_t test_vlseg8e16ff_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x8_t test_vlseg8e16ff_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd,
+ const _Float16 *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_f16m1x8_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint16mf4x8_t test_vlseg8e16ff_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf4x8_t test_vlseg8e16ff_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd,
+ const int16_t *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_i16mf4x8_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint16mf2x8_t test_vlseg8e16ff_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16mf2x8_t test_vlseg8e16ff_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd,
+ const int16_t *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_i16mf2x8_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vint16m1x8_t test_vlseg8e16ff_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) {
+vint16m1x8_t test_vlseg8e16ff_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd,
+ const int16_t *rs1, size_t *new_vl,
+ size_t vl) {
   return __riscv_vlseg8e16ff_v_i16m1x8_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16mf4x8_t test_vlseg8e16ff_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf4x8_t test_vlseg8e16ff_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd,
+ const uint16_t *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_u16mf4x8_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16mf2x8_t test_vlseg8e16ff_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16mf2x8_t test_vlseg8e16ff_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd,
+ const uint16_t *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_u16mf2x8_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vuint16m1x8_t test_vlseg8e16ff_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) {
+vuint16m1x8_t test_vlseg8e16ff_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd,
+ const uint16_t *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_u16m1x8_tum(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16mf4x8_t test_vlseg8e16ff_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf4x8_t test_vlseg8e16ff_v_f16mf4x8_tumu(vbool64_t vm,
+ vfloat16mf4x8_t vd,
+ const _Float16 *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_f16mf4x8_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16mf2x8_t test_vlseg8e16ff_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16mf2x8_t test_vlseg8e16ff_v_f16mf2x8_tumu(vbool32_t vm,
+ vfloat16mf2x8_t vd,
+ const _Float16 *rs1,
+ size_t *new_vl, size_t vl) {
   return __riscv_vlseg8e16ff_v_f16mf2x8_tumu(vm, vd, rs1, new_vl, vl);
 }
 
-vfloat16m1x8_t test_vlseg8e16ff_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) {
+vfloat16m1x8_t test_vlseg8e16ff_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd,
+ const _Float16 *rs1,
+ size_t *new_vl, size_t vl)
{ return __riscv_vlseg8e16ff_v_f16m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf4x8_t test_vlseg8e16ff_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x8_t test_vlseg8e16ff_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_i16mf4x8_tumu(vm, vd, rs1, new_vl, vl); } -vint16mf2x8_t test_vlseg8e16ff_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x8_t test_vlseg8e16ff_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_i16mf2x8_tumu(vm, vd, rs1, new_vl, vl); } -vint16m1x8_t test_vlseg8e16ff_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x8_t test_vlseg8e16ff_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e16ff_v_i16m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf4x8_t test_vlseg8e16ff_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x8_t test_vlseg8e16ff_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_u16mf4x8_tumu(vm, vd, rs1, new_vl, vl); } -vuint16mf2x8_t test_vlseg8e16ff_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x8_t test_vlseg8e16ff_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_u16mf2x8_tumu(vm, vd, rs1, new_vl, vl); } -vuint16m1x8_t test_vlseg8e16ff_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x8_t test_vlseg8e16ff_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_u16m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vfloat16mf4x8_t test_vlseg8e16ff_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf4x8_t test_vlseg8e16ff_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_f16mf4x8_mu(vm, vd, rs1, new_vl, vl); } -vfloat16mf2x8_t test_vlseg8e16ff_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16mf2x8_t test_vlseg8e16ff_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_f16mf2x8_mu(vm, vd, rs1, new_vl, vl); } -vfloat16m1x8_t test_vlseg8e16ff_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, size_t *new_vl, size_t vl) { +vfloat16m1x8_t test_vlseg8e16ff_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_f16m1x8_mu(vm, vd, rs1, new_vl, vl); } -vint16mf4x8_t test_vlseg8e16ff_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf4x8_t test_vlseg8e16ff_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e16ff_v_i16mf4x8_mu(vm, vd, rs1, new_vl, vl); } -vint16mf2x8_t test_vlseg8e16ff_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16mf2x8_t 
test_vlseg8e16ff_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e16ff_v_i16mf2x8_mu(vm, vd, rs1, new_vl, vl); } -vint16m1x8_t test_vlseg8e16ff_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, size_t *new_vl, size_t vl) { +vint16m1x8_t test_vlseg8e16ff_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e16ff_v_i16m1x8_mu(vm, vd, rs1, new_vl, vl); } -vuint16mf4x8_t test_vlseg8e16ff_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf4x8_t test_vlseg8e16ff_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_u16mf4x8_mu(vm, vd, rs1, new_vl, vl); } -vuint16mf2x8_t test_vlseg8e16ff_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16mf2x8_t test_vlseg8e16ff_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_u16mf2x8_mu(vm, vd, rs1, new_vl, vl); } -vuint16m1x8_t test_vlseg8e16ff_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, size_t *new_vl, size_t vl) { +vuint16m1x8_t test_vlseg8e16ff_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e16ff_v_u16m1x8_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32.c index 0752a67e7..5dc0cb18e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32.c @@ -6,98 +6,122 @@ #include -vfloat32mf2x8_t test_vlseg8e32_v_f32mf2x8_tu(vfloat32mf2x8_t vd, const float *rs1, size_t vl) { +vfloat32mf2x8_t test_vlseg8e32_v_f32mf2x8_tu(vfloat32mf2x8_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg8e32_v_f32mf2x8_tu(vd, rs1, vl); } -vfloat32m1x8_t test_vlseg8e32_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, size_t vl) { +vfloat32m1x8_t test_vlseg8e32_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, + size_t vl) { return __riscv_vlseg8e32_v_f32m1x8_tu(vd, rs1, vl); } -vint32mf2x8_t test_vlseg8e32_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x8_t test_vlseg8e32_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, + size_t vl) { return __riscv_vlseg8e32_v_i32mf2x8_tu(vd, rs1, vl); } -vint32m1x8_t test_vlseg8e32_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, size_t vl) { +vint32m1x8_t test_vlseg8e32_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, + size_t vl) { return __riscv_vlseg8e32_v_i32m1x8_tu(vd, rs1, vl); } -vuint32mf2x8_t test_vlseg8e32_v_u32mf2x8_tu(vuint32mf2x8_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x8_t test_vlseg8e32_v_u32mf2x8_tu(vuint32mf2x8_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_u32mf2x8_tu(vd, rs1, vl); } -vuint32m1x8_t test_vlseg8e32_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x8_t test_vlseg8e32_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, + size_t vl) { return __riscv_vlseg8e32_v_u32m1x8_tu(vd, rs1, vl); } -vfloat32mf2x8_t test_vlseg8e32_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, size_t vl) { +vfloat32mf2x8_t test_vlseg8e32_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, + const float *rs1, size_t vl) { return 
__riscv_vlseg8e32_v_f32mf2x8_tum(vm, vd, rs1, vl); } -vfloat32m1x8_t test_vlseg8e32_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, size_t vl) { +vfloat32m1x8_t test_vlseg8e32_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg8e32_v_f32m1x8_tum(vm, vd, rs1, vl); } -vint32mf2x8_t test_vlseg8e32_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x8_t test_vlseg8e32_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_i32mf2x8_tum(vm, vd, rs1, vl); } -vint32m1x8_t test_vlseg8e32_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, size_t vl) { +vint32m1x8_t test_vlseg8e32_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_i32m1x8_tum(vm, vd, rs1, vl); } -vuint32mf2x8_t test_vlseg8e32_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x8_t test_vlseg8e32_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_u32mf2x8_tum(vm, vd, rs1, vl); } -vuint32m1x8_t test_vlseg8e32_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x8_t test_vlseg8e32_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_u32m1x8_tum(vm, vd, rs1, vl); } -vfloat32mf2x8_t test_vlseg8e32_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, size_t vl) { +vfloat32mf2x8_t test_vlseg8e32_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg8e32_v_f32mf2x8_tumu(vm, vd, rs1, vl); } -vfloat32m1x8_t test_vlseg8e32_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, size_t vl) { +vfloat32m1x8_t test_vlseg8e32_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg8e32_v_f32m1x8_tumu(vm, vd, rs1, vl); } -vint32mf2x8_t test_vlseg8e32_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x8_t test_vlseg8e32_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_i32mf2x8_tumu(vm, vd, rs1, vl); } -vint32m1x8_t test_vlseg8e32_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, size_t vl) { +vint32m1x8_t test_vlseg8e32_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_i32m1x8_tumu(vm, vd, rs1, vl); } -vuint32mf2x8_t test_vlseg8e32_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x8_t test_vlseg8e32_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_u32mf2x8_tumu(vm, vd, rs1, vl); } -vuint32m1x8_t test_vlseg8e32_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x8_t test_vlseg8e32_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_u32m1x8_tumu(vm, vd, rs1, vl); } -vfloat32mf2x8_t test_vlseg8e32_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, size_t vl) { +vfloat32mf2x8_t test_vlseg8e32_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg8e32_v_f32mf2x8_mu(vm, vd, rs1, vl); } -vfloat32m1x8_t test_vlseg8e32_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, size_t 
vl) { +vfloat32m1x8_t test_vlseg8e32_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, size_t vl) { return __riscv_vlseg8e32_v_f32m1x8_mu(vm, vd, rs1, vl); } -vint32mf2x8_t test_vlseg8e32_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, size_t vl) { +vint32mf2x8_t test_vlseg8e32_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_i32mf2x8_mu(vm, vd, rs1, vl); } -vint32m1x8_t test_vlseg8e32_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, size_t vl) { +vint32m1x8_t test_vlseg8e32_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_i32m1x8_mu(vm, vd, rs1, vl); } -vuint32mf2x8_t test_vlseg8e32_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, size_t vl) { +vuint32mf2x8_t test_vlseg8e32_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_u32mf2x8_mu(vm, vd, rs1, vl); } -vuint32m1x8_t test_vlseg8e32_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, size_t vl) { +vuint32m1x8_t test_vlseg8e32_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, size_t vl) { return __riscv_vlseg8e32_v_u32m1x8_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32ff.c index 1a96fb8f5..90329199e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32ff.c @@ -6,98 +6,147 @@ #include -vfloat32mf2x8_t test_vlseg8e32ff_v_f32mf2x8_tu(vfloat32mf2x8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x8_t test_vlseg8e32ff_v_f32mf2x8_tu(vfloat32mf2x8_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_f32mf2x8_tu(vd, rs1, new_vl, vl); } -vfloat32m1x8_t test_vlseg8e32ff_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x8_t test_vlseg8e32ff_v_f32m1x8_tu(vfloat32m1x8_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_f32m1x8_tu(vd, rs1, new_vl, vl); } -vint32mf2x8_t test_vlseg8e32ff_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x8_t test_vlseg8e32ff_v_i32mf2x8_tu(vint32mf2x8_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_i32mf2x8_tu(vd, rs1, new_vl, vl); } -vint32m1x8_t test_vlseg8e32ff_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x8_t test_vlseg8e32ff_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e32ff_v_i32m1x8_tu(vd, rs1, new_vl, vl); } -vuint32mf2x8_t test_vlseg8e32ff_v_u32mf2x8_tu(vuint32mf2x8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x8_t test_vlseg8e32ff_v_u32mf2x8_tu(vuint32mf2x8_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e32ff_v_u32mf2x8_tu(vd, rs1, new_vl, vl); } -vuint32m1x8_t test_vlseg8e32ff_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x8_t test_vlseg8e32ff_v_u32m1x8_tu(vuint32m1x8_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_u32m1x8_tu(vd, rs1, new_vl, vl); } -vfloat32mf2x8_t test_vlseg8e32ff_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x8_t 
test_vlseg8e32ff_v_f32mf2x8_tum(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e32ff_v_f32mf2x8_tum(vm, vd, rs1, new_vl, vl); } -vfloat32m1x8_t test_vlseg8e32ff_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x8_t test_vlseg8e32ff_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_f32m1x8_tum(vm, vd, rs1, new_vl, vl); } -vint32mf2x8_t test_vlseg8e32ff_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x8_t test_vlseg8e32ff_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e32ff_v_i32mf2x8_tum(vm, vd, rs1, new_vl, vl); } -vint32m1x8_t test_vlseg8e32ff_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x8_t test_vlseg8e32ff_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_i32m1x8_tum(vm, vd, rs1, new_vl, vl); } -vuint32mf2x8_t test_vlseg8e32ff_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x8_t test_vlseg8e32ff_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e32ff_v_u32mf2x8_tum(vm, vd, rs1, new_vl, vl); } -vuint32m1x8_t test_vlseg8e32ff_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x8_t test_vlseg8e32ff_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e32ff_v_u32m1x8_tum(vm, vd, rs1, new_vl, vl); } -vfloat32mf2x8_t test_vlseg8e32ff_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x8_t test_vlseg8e32ff_v_f32mf2x8_tumu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e32ff_v_f32mf2x8_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32m1x8_t test_vlseg8e32ff_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x8_t test_vlseg8e32ff_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_f32m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vint32mf2x8_t test_vlseg8e32ff_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x8_t test_vlseg8e32ff_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e32ff_v_i32mf2x8_tumu(vm, vd, rs1, new_vl, vl); } -vint32m1x8_t test_vlseg8e32ff_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x8_t test_vlseg8e32ff_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_i32m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vuint32mf2x8_t test_vlseg8e32ff_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x8_t test_vlseg8e32ff_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e32ff_v_u32mf2x8_tumu(vm, vd, rs1, new_vl, vl); } -vuint32m1x8_t 
test_vlseg8e32ff_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x8_t test_vlseg8e32ff_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e32ff_v_u32m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vfloat32mf2x8_t test_vlseg8e32ff_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32mf2x8_t test_vlseg8e32ff_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_f32mf2x8_mu(vm, vd, rs1, new_vl, vl); } -vfloat32m1x8_t test_vlseg8e32ff_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, size_t *new_vl, size_t vl) { +vfloat32m1x8_t test_vlseg8e32ff_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_f32m1x8_mu(vm, vd, rs1, new_vl, vl); } -vint32mf2x8_t test_vlseg8e32ff_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32mf2x8_t test_vlseg8e32ff_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_i32mf2x8_mu(vm, vd, rs1, new_vl, vl); } -vint32m1x8_t test_vlseg8e32ff_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, size_t *new_vl, size_t vl) { +vint32m1x8_t test_vlseg8e32ff_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_i32m1x8_mu(vm, vd, rs1, new_vl, vl); } -vuint32mf2x8_t test_vlseg8e32ff_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32mf2x8_t test_vlseg8e32ff_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e32ff_v_u32mf2x8_mu(vm, vd, rs1, new_vl, vl); } -vuint32m1x8_t test_vlseg8e32ff_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, size_t *new_vl, size_t vl) { +vuint32m1x8_t test_vlseg8e32ff_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e32ff_v_u32m1x8_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64.c index 80212b2d2..f0d04f2c2 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64.c @@ -6,50 +6,62 @@ #include -vfloat64m1x8_t test_vlseg8e64_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, size_t vl) { +vfloat64m1x8_t test_vlseg8e64_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, + size_t vl) { return __riscv_vlseg8e64_v_f64m1x8_tu(vd, rs1, vl); } -vint64m1x8_t test_vlseg8e64_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, size_t vl) { +vint64m1x8_t test_vlseg8e64_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, + size_t vl) { return __riscv_vlseg8e64_v_i64m1x8_tu(vd, rs1, vl); } -vuint64m1x8_t test_vlseg8e64_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x8_t test_vlseg8e64_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, + size_t vl) { return __riscv_vlseg8e64_v_u64m1x8_tu(vd, rs1, vl); } -vfloat64m1x8_t test_vlseg8e64_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, size_t vl) { +vfloat64m1x8_t test_vlseg8e64_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, size_t vl) { return 
__riscv_vlseg8e64_v_f64m1x8_tum(vm, vd, rs1, vl); } -vint64m1x8_t test_vlseg8e64_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, size_t vl) { +vint64m1x8_t test_vlseg8e64_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg8e64_v_i64m1x8_tum(vm, vd, rs1, vl); } -vuint64m1x8_t test_vlseg8e64_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x8_t test_vlseg8e64_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg8e64_v_u64m1x8_tum(vm, vd, rs1, vl); } -vfloat64m1x8_t test_vlseg8e64_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, size_t vl) { +vfloat64m1x8_t test_vlseg8e64_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg8e64_v_f64m1x8_tumu(vm, vd, rs1, vl); } -vint64m1x8_t test_vlseg8e64_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, size_t vl) { +vint64m1x8_t test_vlseg8e64_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg8e64_v_i64m1x8_tumu(vm, vd, rs1, vl); } -vuint64m1x8_t test_vlseg8e64_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x8_t test_vlseg8e64_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg8e64_v_u64m1x8_tumu(vm, vd, rs1, vl); } -vfloat64m1x8_t test_vlseg8e64_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, size_t vl) { +vfloat64m1x8_t test_vlseg8e64_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, size_t vl) { return __riscv_vlseg8e64_v_f64m1x8_mu(vm, vd, rs1, vl); } -vint64m1x8_t test_vlseg8e64_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, size_t vl) { +vint64m1x8_t test_vlseg8e64_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, size_t vl) { return __riscv_vlseg8e64_v_i64m1x8_mu(vm, vd, rs1, vl); } -vuint64m1x8_t test_vlseg8e64_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, size_t vl) { +vuint64m1x8_t test_vlseg8e64_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, size_t vl) { return __riscv_vlseg8e64_v_u64m1x8_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64ff.c index 1b8da9adf..113a017a4 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64ff.c @@ -6,50 +6,73 @@ #include -vfloat64m1x8_t test_vlseg8e64ff_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x8_t test_vlseg8e64ff_v_f64m1x8_tu(vfloat64m1x8_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e64ff_v_f64m1x8_tu(vd, rs1, new_vl, vl); } -vint64m1x8_t test_vlseg8e64ff_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x8_t test_vlseg8e64ff_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e64ff_v_i64m1x8_tu(vd, rs1, new_vl, vl); } -vuint64m1x8_t test_vlseg8e64ff_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x8_t test_vlseg8e64ff_v_u64m1x8_tu(vuint64m1x8_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e64ff_v_u64m1x8_tu(vd, rs1, new_vl, vl); } -vfloat64m1x8_t test_vlseg8e64ff_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, 
const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x8_t test_vlseg8e64ff_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e64ff_v_f64m1x8_tum(vm, vd, rs1, new_vl, vl); } -vint64m1x8_t test_vlseg8e64ff_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x8_t test_vlseg8e64ff_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e64ff_v_i64m1x8_tum(vm, vd, rs1, new_vl, vl); } -vuint64m1x8_t test_vlseg8e64ff_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x8_t test_vlseg8e64ff_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e64ff_v_u64m1x8_tum(vm, vd, rs1, new_vl, vl); } -vfloat64m1x8_t test_vlseg8e64ff_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x8_t test_vlseg8e64ff_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e64ff_v_f64m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vint64m1x8_t test_vlseg8e64ff_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x8_t test_vlseg8e64ff_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e64ff_v_i64m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vuint64m1x8_t test_vlseg8e64ff_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x8_t test_vlseg8e64ff_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e64ff_v_u64m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vfloat64m1x8_t test_vlseg8e64ff_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, size_t *new_vl, size_t vl) { +vfloat64m1x8_t test_vlseg8e64ff_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e64ff_v_f64m1x8_mu(vm, vd, rs1, new_vl, vl); } -vint64m1x8_t test_vlseg8e64ff_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, size_t *new_vl, size_t vl) { +vint64m1x8_t test_vlseg8e64ff_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e64ff_v_i64m1x8_mu(vm, vd, rs1, new_vl, vl); } -vuint64m1x8_t test_vlseg8e64ff_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, size_t *new_vl, size_t vl) { +vuint64m1x8_t test_vlseg8e64ff_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e64ff_v_u64m1x8_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8.c index feb8eb9c3..2d83bb4df 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8.c @@ -5,130 +5,162 @@ #include -vint8mf8x8_t test_vlseg8e8_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x8_t test_vlseg8e8_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vlseg8e8_v_i8mf8x8_tu(vd, rs1, vl); } -vint8mf4x8_t test_vlseg8e8_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x8_t test_vlseg8e8_v_i8mf4x8_tu(vint8mf4x8_t vd, 
const int8_t *rs1, + size_t vl) { return __riscv_vlseg8e8_v_i8mf4x8_tu(vd, rs1, vl); } -vint8mf2x8_t test_vlseg8e8_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x8_t test_vlseg8e8_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vlseg8e8_v_i8mf2x8_tu(vd, rs1, vl); } -vint8m1x8_t test_vlseg8e8_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, size_t vl) { +vint8m1x8_t test_vlseg8e8_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, + size_t vl) { return __riscv_vlseg8e8_v_i8m1x8_tu(vd, rs1, vl); } -vuint8mf8x8_t test_vlseg8e8_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x8_t test_vlseg8e8_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg8e8_v_u8mf8x8_tu(vd, rs1, vl); } -vuint8mf4x8_t test_vlseg8e8_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x8_t test_vlseg8e8_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg8e8_v_u8mf4x8_tu(vd, rs1, vl); } -vuint8mf2x8_t test_vlseg8e8_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x8_t test_vlseg8e8_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg8e8_v_u8mf2x8_tu(vd, rs1, vl); } -vuint8m1x8_t test_vlseg8e8_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x8_t test_vlseg8e8_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, + size_t vl) { return __riscv_vlseg8e8_v_u8m1x8_tu(vd, rs1, vl); } -vint8mf8x8_t test_vlseg8e8_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x8_t test_vlseg8e8_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8mf8x8_tum(vm, vd, rs1, vl); } -vint8mf4x8_t test_vlseg8e8_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x8_t test_vlseg8e8_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8mf4x8_tum(vm, vd, rs1, vl); } -vint8mf2x8_t test_vlseg8e8_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x8_t test_vlseg8e8_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8mf2x8_tum(vm, vd, rs1, vl); } -vint8m1x8_t test_vlseg8e8_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, size_t vl) { +vint8m1x8_t test_vlseg8e8_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8m1x8_tum(vm, vd, rs1, vl); } -vuint8mf8x8_t test_vlseg8e8_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x8_t test_vlseg8e8_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8mf8x8_tum(vm, vd, rs1, vl); } -vuint8mf4x8_t test_vlseg8e8_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x8_t test_vlseg8e8_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8mf4x8_tum(vm, vd, rs1, vl); } -vuint8mf2x8_t test_vlseg8e8_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x8_t test_vlseg8e8_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8mf2x8_tum(vm, vd, rs1, vl); } -vuint8m1x8_t test_vlseg8e8_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x8_t 
test_vlseg8e8_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8m1x8_tum(vm, vd, rs1, vl); } -vint8mf8x8_t test_vlseg8e8_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x8_t test_vlseg8e8_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8mf8x8_tumu(vm, vd, rs1, vl); } -vint8mf4x8_t test_vlseg8e8_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x8_t test_vlseg8e8_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8mf4x8_tumu(vm, vd, rs1, vl); } -vint8mf2x8_t test_vlseg8e8_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x8_t test_vlseg8e8_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8mf2x8_tumu(vm, vd, rs1, vl); } -vint8m1x8_t test_vlseg8e8_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, size_t vl) { +vint8m1x8_t test_vlseg8e8_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8m1x8_tumu(vm, vd, rs1, vl); } -vuint8mf8x8_t test_vlseg8e8_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x8_t test_vlseg8e8_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8mf8x8_tumu(vm, vd, rs1, vl); } -vuint8mf4x8_t test_vlseg8e8_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x8_t test_vlseg8e8_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8mf4x8_tumu(vm, vd, rs1, vl); } -vuint8mf2x8_t test_vlseg8e8_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x8_t test_vlseg8e8_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8mf2x8_tumu(vm, vd, rs1, vl); } -vuint8m1x8_t test_vlseg8e8_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x8_t test_vlseg8e8_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8m1x8_tumu(vm, vd, rs1, vl); } -vint8mf8x8_t test_vlseg8e8_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf8x8_t test_vlseg8e8_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8mf8x8_mu(vm, vd, rs1, vl); } -vint8mf4x8_t test_vlseg8e8_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf4x8_t test_vlseg8e8_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8mf4x8_mu(vm, vd, rs1, vl); } -vint8mf2x8_t test_vlseg8e8_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, size_t vl) { +vint8mf2x8_t test_vlseg8e8_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8mf2x8_mu(vm, vd, rs1, vl); } -vint8m1x8_t test_vlseg8e8_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, size_t vl) { +vint8m1x8_t test_vlseg8e8_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_i8m1x8_mu(vm, vd, rs1, vl); } -vuint8mf8x8_t test_vlseg8e8_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf8x8_t 
test_vlseg8e8_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8mf8x8_mu(vm, vd, rs1, vl); } -vuint8mf4x8_t test_vlseg8e8_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf4x8_t test_vlseg8e8_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8mf4x8_mu(vm, vd, rs1, vl); } -vuint8mf2x8_t test_vlseg8e8_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8mf2x8_t test_vlseg8e8_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8mf2x8_mu(vm, vd, rs1, vl); } -vuint8m1x8_t test_vlseg8e8_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, size_t vl) { +vuint8m1x8_t test_vlseg8e8_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, size_t vl) { return __riscv_vlseg8e8_v_u8m1x8_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8ff.c index 9beaac596..8adcc574f 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8ff.c @@ -6,130 +6,186 @@ #include -vint8mf8x8_t test_vlseg8e8ff_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x8_t test_vlseg8e8ff_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e8ff_v_i8mf8x8_tu(vd, rs1, new_vl, vl); } -vint8mf4x8_t test_vlseg8e8ff_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x8_t test_vlseg8e8ff_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e8ff_v_i8mf4x8_tu(vd, rs1, new_vl, vl); } -vint8mf2x8_t test_vlseg8e8ff_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x8_t test_vlseg8e8ff_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e8ff_v_i8mf2x8_tu(vd, rs1, new_vl, vl); } -vint8m1x8_t test_vlseg8e8ff_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x8_t test_vlseg8e8ff_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e8ff_v_i8m1x8_tu(vd, rs1, new_vl, vl); } -vuint8mf8x8_t test_vlseg8e8ff_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x8_t test_vlseg8e8ff_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e8ff_v_u8mf8x8_tu(vd, rs1, new_vl, vl); } -vuint8mf4x8_t test_vlseg8e8ff_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x8_t test_vlseg8e8ff_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e8ff_v_u8mf4x8_tu(vd, rs1, new_vl, vl); } -vuint8mf2x8_t test_vlseg8e8ff_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2x8_t test_vlseg8e8ff_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e8ff_v_u8mf2x8_tu(vd, rs1, new_vl, vl); } -vuint8m1x8_t test_vlseg8e8ff_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x8_t test_vlseg8e8ff_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e8ff_v_u8m1x8_tu(vd, rs1, new_vl, vl); } -vint8mf8x8_t 
test_vlseg8e8ff_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x8_t test_vlseg8e8ff_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8mf8x8_tum(vm, vd, rs1, new_vl, vl); } -vint8mf4x8_t test_vlseg8e8ff_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x8_t test_vlseg8e8ff_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8mf4x8_tum(vm, vd, rs1, new_vl, vl); } -vint8mf2x8_t test_vlseg8e8ff_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x8_t test_vlseg8e8ff_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8mf2x8_tum(vm, vd, rs1, new_vl, vl); } -vint8m1x8_t test_vlseg8e8ff_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x8_t test_vlseg8e8ff_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8m1x8_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf8x8_t test_vlseg8e8ff_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x8_t test_vlseg8e8ff_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8mf8x8_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf4x8_t test_vlseg8e8ff_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x8_t test_vlseg8e8ff_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8mf4x8_tum(vm, vd, rs1, new_vl, vl); } -vuint8mf2x8_t test_vlseg8e8ff_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2x8_t test_vlseg8e8ff_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8mf2x8_tum(vm, vd, rs1, new_vl, vl); } -vuint8m1x8_t test_vlseg8e8ff_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x8_t test_vlseg8e8ff_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8m1x8_tum(vm, vd, rs1, new_vl, vl); } -vint8mf8x8_t test_vlseg8e8ff_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x8_t test_vlseg8e8ff_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8mf8x8_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf4x8_t test_vlseg8e8ff_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x8_t test_vlseg8e8ff_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8mf4x8_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf2x8_t test_vlseg8e8ff_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x8_t test_vlseg8e8ff_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8mf2x8_tumu(vm, vd, rs1, new_vl, vl); } -vint8m1x8_t test_vlseg8e8ff_v_i8m1x8_tumu(vbool8_t vm, 
vint8m1x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x8_t test_vlseg8e8ff_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf8x8_t test_vlseg8e8ff_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x8_t test_vlseg8e8ff_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8mf8x8_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf4x8_t test_vlseg8e8ff_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x8_t test_vlseg8e8ff_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8mf4x8_tumu(vm, vd, rs1, new_vl, vl); } -vuint8mf2x8_t test_vlseg8e8ff_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf2x8_t test_vlseg8e8ff_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8mf2x8_tumu(vm, vd, rs1, new_vl, vl); } -vuint8m1x8_t test_vlseg8e8ff_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x8_t test_vlseg8e8ff_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vint8mf8x8_t test_vlseg8e8ff_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf8x8_t test_vlseg8e8ff_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8mf8x8_mu(vm, vd, rs1, new_vl, vl); } -vint8mf4x8_t test_vlseg8e8ff_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf4x8_t test_vlseg8e8ff_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8mf4x8_mu(vm, vd, rs1, new_vl, vl); } -vint8mf2x8_t test_vlseg8e8ff_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8mf2x8_t test_vlseg8e8ff_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8mf2x8_mu(vm, vd, rs1, new_vl, vl); } -vint8m1x8_t test_vlseg8e8ff_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, size_t *new_vl, size_t vl) { +vint8m1x8_t test_vlseg8e8ff_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_i8m1x8_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf8x8_t test_vlseg8e8ff_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf8x8_t test_vlseg8e8ff_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8mf8x8_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf4x8_t test_vlseg8e8ff_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8mf4x8_t test_vlseg8e8ff_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8mf4x8_mu(vm, vd, rs1, new_vl, vl); } -vuint8mf2x8_t test_vlseg8e8ff_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, size_t 
*new_vl, size_t vl) { +vuint8mf2x8_t test_vlseg8e8ff_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8mf2x8_mu(vm, vd, rs1, new_vl, vl); } -vuint8m1x8_t test_vlseg8e8ff_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, size_t *new_vl, size_t vl) { +vuint8m1x8_t test_vlseg8e8ff_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, size_t *new_vl, + size_t vl) { return __riscv_vlseg8e8ff_v_u8m1x8_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e16.c index a4ee9618a..de50cd9fb 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e16.c @@ -6,242 +6,361 @@ #include -vfloat16mf4x2_t test_vlsseg2e16_v_f16mf4x2_tu(vfloat16mf4x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x2_t test_vlsseg2e16_v_f16mf4x2_tu(vfloat16mf4x2_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_f16mf4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vlsseg2e16_v_f16mf2x2_tu(vfloat16mf2x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x2_t test_vlsseg2e16_v_f16mf2x2_tu(vfloat16mf2x2_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_f16mf2x2_tu(vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vlsseg2e16_v_f16m1x2_tu(vfloat16m1x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x2_t test_vlsseg2e16_v_f16m1x2_tu(vfloat16m1x2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_f16m1x2_tu(vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vlsseg2e16_v_f16m2x2_tu(vfloat16m2x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x2_t test_vlsseg2e16_v_f16m2x2_tu(vfloat16m2x2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_f16m2x2_tu(vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vlsseg2e16_v_f16m4x2_tu(vfloat16m4x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m4x2_t test_vlsseg2e16_v_f16m4x2_tu(vfloat16m4x2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_f16m4x2_tu(vd, rs1, rs2, vl); } -vint16mf4x2_t test_vlsseg2e16_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x2_t test_vlsseg2e16_v_i16mf4x2_tu(vint16mf4x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16mf4x2_tu(vd, rs1, rs2, vl); } -vint16mf2x2_t test_vlsseg2e16_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x2_t test_vlsseg2e16_v_i16mf2x2_tu(vint16mf2x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16mf2x2_tu(vd, rs1, rs2, vl); } -vint16m1x2_t test_vlsseg2e16_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x2_t test_vlsseg2e16_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_i16m1x2_tu(vd, rs1, rs2, vl); } -vint16m2x2_t test_vlsseg2e16_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x2_t test_vlsseg2e16_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_i16m2x2_tu(vd, rs1, rs2, vl); } -vint16m4x2_t test_vlsseg2e16_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { 
+vint16m4x2_t test_vlsseg2e16_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_i16m4x2_tu(vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vlsseg2e16_v_u16mf4x2_tu(vuint16mf4x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x2_t test_vlsseg2e16_v_u16mf4x2_tu(vuint16mf4x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16mf4x2_tu(vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vlsseg2e16_v_u16mf2x2_tu(vuint16mf2x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x2_t test_vlsseg2e16_v_u16mf2x2_tu(vuint16mf2x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16mf2x2_tu(vd, rs1, rs2, vl); } -vuint16m1x2_t test_vlsseg2e16_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x2_t test_vlsseg2e16_v_u16m1x2_tu(vuint16m1x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16m1x2_tu(vd, rs1, rs2, vl); } -vuint16m2x2_t test_vlsseg2e16_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x2_t test_vlsseg2e16_v_u16m2x2_tu(vuint16m2x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16m2x2_tu(vd, rs1, rs2, vl); } -vuint16m4x2_t test_vlsseg2e16_v_u16m4x2_tu(vuint16m4x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m4x2_t test_vlsseg2e16_v_u16m4x2_tu(vuint16m4x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16m4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vlsseg2e16_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x2_t test_vlsseg2e16_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_f16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vlsseg2e16_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x2_t test_vlsseg2e16_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_f16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vlsseg2e16_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x2_t test_vlsseg2e16_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_f16m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vlsseg2e16_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x2_t test_vlsseg2e16_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_f16m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vlsseg2e16_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m4x2_t test_vlsseg2e16_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_f16m4x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vlsseg2e16_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x2_t test_vlsseg2e16_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16mf4x2_tum(vm, vd, 
rs1, rs2, vl); } -vint16mf2x2_t test_vlsseg2e16_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x2_t test_vlsseg2e16_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vlsseg2e16_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x2_t test_vlsseg2e16_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16m1x2_tum(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vlsseg2e16_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x2_t test_vlsseg2e16_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16m2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vlsseg2e16_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m4x2_t test_vlsseg2e16_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vlsseg2e16_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x2_t test_vlsseg2e16_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_u16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vlsseg2e16_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x2_t test_vlsseg2e16_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_u16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vlsseg2e16_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x2_t test_vlsseg2e16_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vlsseg2e16_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x2_t test_vlsseg2e16_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vlsseg2e16_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m4x2_t test_vlsseg2e16_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vlsseg2e16_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x2_t test_vlsseg2e16_v_f16mf4x2_tumu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_f16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vlsseg2e16_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x2_t test_vlsseg2e16_v_f16mf2x2_tumu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_f16mf2x2_tumu(vm, vd, 
rs1, rs2, vl); } -vfloat16m1x2_t test_vlsseg2e16_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x2_t test_vlsseg2e16_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_f16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vlsseg2e16_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x2_t test_vlsseg2e16_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_f16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vlsseg2e16_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m4x2_t test_vlsseg2e16_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_f16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vlsseg2e16_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x2_t test_vlsseg2e16_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vlsseg2e16_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x2_t test_vlsseg2e16_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vlsseg2e16_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x2_t test_vlsseg2e16_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vlsseg2e16_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x2_t test_vlsseg2e16_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vlsseg2e16_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m4x2_t test_vlsseg2e16_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vlsseg2e16_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x2_t test_vlsseg2e16_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_u16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vlsseg2e16_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x2_t test_vlsseg2e16_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_u16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vlsseg2e16_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x2_t test_vlsseg2e16_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg2e16_v_u16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vlsseg2e16_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x2_t test_vlsseg2e16_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vlsseg2e16_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m4x2_t test_vlsseg2e16_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vlsseg2e16_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x2_t test_vlsseg2e16_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_f16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vlsseg2e16_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x2_t test_vlsseg2e16_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_f16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vlsseg2e16_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x2_t test_vlsseg2e16_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_f16m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vlsseg2e16_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x2_t test_vlsseg2e16_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_f16m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vlsseg2e16_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m4x2_t test_vlsseg2e16_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_f16m4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vlsseg2e16_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x2_t test_vlsseg2e16_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vlsseg2e16_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x2_t test_vlsseg2e16_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vlsseg2e16_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x2_t test_vlsseg2e16_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16m1x2_mu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vlsseg2e16_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x2_t test_vlsseg2e16_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg2e16_v_i16m2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vlsseg2e16_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m4x2_t test_vlsseg2e16_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_i16m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vlsseg2e16_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x2_t test_vlsseg2e16_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vlsseg2e16_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x2_t test_vlsseg2e16_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vlsseg2e16_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x2_t test_vlsseg2e16_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vlsseg2e16_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x2_t test_vlsseg2e16_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vlsseg2e16_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m4x2_t test_vlsseg2e16_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_u16m4x2_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e32.c index 28b15ebfb..c77dd94ab 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e32.c @@ -6,194 +6,285 @@ #include -vfloat32mf2x2_t test_vlsseg2e32_v_f32mf2x2_tu(vfloat32mf2x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x2_t test_vlsseg2e32_v_f32mf2x2_tu(vfloat32mf2x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32mf2x2_tu(vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vlsseg2e32_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x2_t test_vlsseg2e32_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e32_v_f32m1x2_tu(vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vlsseg2e32_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x2_t test_vlsseg2e32_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e32_v_f32m2x2_tu(vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vlsseg2e32_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m4x2_t test_vlsseg2e32_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e32_v_f32m4x2_tu(vd, rs1, rs2, vl); } -vint32mf2x2_t test_vlsseg2e32_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x2_t 
test_vlsseg2e32_v_i32mf2x2_tu(vint32mf2x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32mf2x2_tu(vd, rs1, rs2, vl); } -vint32m1x2_t test_vlsseg2e32_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x2_t test_vlsseg2e32_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e32_v_i32m1x2_tu(vd, rs1, rs2, vl); } -vint32m2x2_t test_vlsseg2e32_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x2_t test_vlsseg2e32_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e32_v_i32m2x2_tu(vd, rs1, rs2, vl); } -vint32m4x2_t test_vlsseg2e32_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m4x2_t test_vlsseg2e32_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e32_v_i32m4x2_tu(vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vlsseg2e32_v_u32mf2x2_tu(vuint32mf2x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x2_t test_vlsseg2e32_v_u32mf2x2_tu(vuint32mf2x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32mf2x2_tu(vd, rs1, rs2, vl); } -vuint32m1x2_t test_vlsseg2e32_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x2_t test_vlsseg2e32_v_u32m1x2_tu(vuint32m1x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m1x2_tu(vd, rs1, rs2, vl); } -vuint32m2x2_t test_vlsseg2e32_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x2_t test_vlsseg2e32_v_u32m2x2_tu(vuint32m2x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m2x2_tu(vd, rs1, rs2, vl); } -vuint32m4x2_t test_vlsseg2e32_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m4x2_t test_vlsseg2e32_v_u32m4x2_tu(vuint32m4x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m4x2_tu(vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vlsseg2e32_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x2_t test_vlsseg2e32_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vlsseg2e32_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x2_t test_vlsseg2e32_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vlsseg2e32_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x2_t test_vlsseg2e32_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vlsseg2e32_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m4x2_t test_vlsseg2e32_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32m4x2_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vlsseg2e32_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { 
+vint32mf2x2_t test_vlsseg2e32_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vlsseg2e32_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x2_t test_vlsseg2e32_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32m1x2_tum(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vlsseg2e32_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x2_t test_vlsseg2e32_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32m2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vlsseg2e32_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m4x2_t test_vlsseg2e32_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vlsseg2e32_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x2_t test_vlsseg2e32_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e32_v_u32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vlsseg2e32_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x2_t test_vlsseg2e32_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vlsseg2e32_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x2_t test_vlsseg2e32_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vlsseg2e32_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m4x2_t test_vlsseg2e32_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vlsseg2e32_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x2_t test_vlsseg2e32_v_f32mf2x2_tumu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vlsseg2e32_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x2_t test_vlsseg2e32_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vlsseg2e32_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x2_t test_vlsseg2e32_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vlsseg2e32_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m4x2_t 
test_vlsseg2e32_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vlsseg2e32_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x2_t test_vlsseg2e32_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vlsseg2e32_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x2_t test_vlsseg2e32_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vlsseg2e32_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x2_t test_vlsseg2e32_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vlsseg2e32_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m4x2_t test_vlsseg2e32_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vlsseg2e32_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x2_t test_vlsseg2e32_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e32_v_u32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vlsseg2e32_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x2_t test_vlsseg2e32_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vlsseg2e32_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x2_t test_vlsseg2e32_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vlsseg2e32_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m4x2_t test_vlsseg2e32_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vlsseg2e32_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x2_t test_vlsseg2e32_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vlsseg2e32_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x2_t test_vlsseg2e32_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vlsseg2e32_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x2_t 
test_vlsseg2e32_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vlsseg2e32_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m4x2_t test_vlsseg2e32_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_f32m4x2_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vlsseg2e32_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x2_t test_vlsseg2e32_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vlsseg2e32_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x2_t test_vlsseg2e32_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32m1x2_mu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vlsseg2e32_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x2_t test_vlsseg2e32_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32m2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vlsseg2e32_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m4x2_t test_vlsseg2e32_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_i32m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vlsseg2e32_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x2_t test_vlsseg2e32_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vlsseg2e32_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x2_t test_vlsseg2e32_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vlsseg2e32_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x2_t test_vlsseg2e32_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vlsseg2e32_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m4x2_t test_vlsseg2e32_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e32_v_u32m4x2_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e64.c index f79ae8169..2910058f7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e64.c @@ -6,146 +6,215 @@ #include -vfloat64m1x2_t test_vlsseg2e64_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x2_t test_vlsseg2e64_v_f64m1x2_tu(vfloat64m1x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t 
vl) { return __riscv_vlsseg2e64_v_f64m1x2_tu(vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vlsseg2e64_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x2_t test_vlsseg2e64_v_f64m2x2_tu(vfloat64m2x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_f64m2x2_tu(vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vlsseg2e64_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m4x2_t test_vlsseg2e64_v_f64m4x2_tu(vfloat64m4x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_f64m4x2_tu(vd, rs1, rs2, vl); } -vint64m1x2_t test_vlsseg2e64_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x2_t test_vlsseg2e64_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e64_v_i64m1x2_tu(vd, rs1, rs2, vl); } -vint64m2x2_t test_vlsseg2e64_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x2_t test_vlsseg2e64_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e64_v_i64m2x2_tu(vd, rs1, rs2, vl); } -vint64m4x2_t test_vlsseg2e64_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m4x2_t test_vlsseg2e64_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e64_v_i64m4x2_tu(vd, rs1, rs2, vl); } -vuint64m1x2_t test_vlsseg2e64_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x2_t test_vlsseg2e64_v_u64m1x2_tu(vuint64m1x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m1x2_tu(vd, rs1, rs2, vl); } -vuint64m2x2_t test_vlsseg2e64_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x2_t test_vlsseg2e64_v_u64m2x2_tu(vuint64m2x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m2x2_tu(vd, rs1, rs2, vl); } -vuint64m4x2_t test_vlsseg2e64_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m4x2_t test_vlsseg2e64_v_u64m4x2_tu(vuint64m4x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m4x2_tu(vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vlsseg2e64_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x2_t test_vlsseg2e64_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_f64m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vlsseg2e64_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x2_t test_vlsseg2e64_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_f64m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vlsseg2e64_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m4x2_t test_vlsseg2e64_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_f64m4x2_tum(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vlsseg2e64_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x2_t test_vlsseg2e64_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg2e64_v_i64m1x2_tum(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vlsseg2e64_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x2_t test_vlsseg2e64_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_i64m2x2_tum(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vlsseg2e64_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m4x2_t test_vlsseg2e64_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_i64m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vlsseg2e64_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x2_t test_vlsseg2e64_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vlsseg2e64_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x2_t test_vlsseg2e64_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vlsseg2e64_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m4x2_t test_vlsseg2e64_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vlsseg2e64_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x2_t test_vlsseg2e64_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_f64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vlsseg2e64_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x2_t test_vlsseg2e64_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_f64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vlsseg2e64_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m4x2_t test_vlsseg2e64_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_f64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vlsseg2e64_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x2_t test_vlsseg2e64_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_i64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vlsseg2e64_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x2_t test_vlsseg2e64_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_i64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vlsseg2e64_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m4x2_t test_vlsseg2e64_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg2e64_v_i64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vlsseg2e64_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x2_t test_vlsseg2e64_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vlsseg2e64_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x2_t test_vlsseg2e64_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vlsseg2e64_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m4x2_t test_vlsseg2e64_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vlsseg2e64_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x2_t test_vlsseg2e64_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_f64m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vlsseg2e64_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x2_t test_vlsseg2e64_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_f64m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vlsseg2e64_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m4x2_t test_vlsseg2e64_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_f64m4x2_mu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vlsseg2e64_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x2_t test_vlsseg2e64_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_i64m1x2_mu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vlsseg2e64_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x2_t test_vlsseg2e64_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_i64m2x2_mu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vlsseg2e64_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m4x2_t test_vlsseg2e64_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_i64m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vlsseg2e64_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x2_t test_vlsseg2e64_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vlsseg2e64_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x2_t test_vlsseg2e64_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m2x2_mu(vm, vd, rs1, rs2, vl); 
} -vuint64m4x2_t test_vlsseg2e64_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m4x2_t test_vlsseg2e64_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e64_v_u64m4x2_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e8.c index fd727d94b..d9a0b00d7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e8.c @@ -5,194 +5,278 @@ #include -vint8mf8x2_t test_vlsseg2e8_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x2_t test_vlsseg2e8_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e8_v_i8mf8x2_tu(vd, rs1, rs2, vl); } -vint8mf4x2_t test_vlsseg2e8_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x2_t test_vlsseg2e8_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e8_v_i8mf4x2_tu(vd, rs1, rs2, vl); } -vint8mf2x2_t test_vlsseg2e8_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x2_t test_vlsseg2e8_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e8_v_i8mf2x2_tu(vd, rs1, rs2, vl); } -vint8m1x2_t test_vlsseg2e8_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x2_t test_vlsseg2e8_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e8_v_i8m1x2_tu(vd, rs1, rs2, vl); } -vint8m2x2_t test_vlsseg2e8_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2x2_t test_vlsseg2e8_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e8_v_i8m2x2_tu(vd, rs1, rs2, vl); } -vint8m4x2_t test_vlsseg2e8_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m4x2_t test_vlsseg2e8_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e8_v_i8m4x2_tu(vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vlsseg2e8_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x2_t test_vlsseg2e8_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e8_v_u8mf8x2_tu(vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vlsseg2e8_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x2_t test_vlsseg2e8_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e8_v_u8mf4x2_tu(vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vlsseg2e8_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x2_t test_vlsseg2e8_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e8_v_u8mf2x2_tu(vd, rs1, rs2, vl); } -vuint8m1x2_t test_vlsseg2e8_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x2_t test_vlsseg2e8_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e8_v_u8m1x2_tu(vd, rs1, rs2, vl); } -vuint8m2x2_t test_vlsseg2e8_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2x2_t test_vlsseg2e8_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, + ptrdiff_t 
rs2, size_t vl) { return __riscv_vlsseg2e8_v_u8m2x2_tu(vd, rs1, rs2, vl); } -vuint8m4x2_t test_vlsseg2e8_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m4x2_t test_vlsseg2e8_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e8_v_u8m4x2_tu(vd, rs1, rs2, vl); } -vint8mf8x2_t test_vlsseg2e8_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x2_t test_vlsseg2e8_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vlsseg2e8_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x2_t test_vlsseg2e8_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vlsseg2e8_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x2_t test_vlsseg2e8_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vlsseg2e8_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x2_t test_vlsseg2e8_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8m1x2_tum(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vlsseg2e8_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2x2_t test_vlsseg2e8_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8m2x2_tum(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vlsseg2e8_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m4x2_t test_vlsseg2e8_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vlsseg2e8_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x2_t test_vlsseg2e8_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vlsseg2e8_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x2_t test_vlsseg2e8_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vlsseg2e8_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x2_t test_vlsseg2e8_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vlsseg2e8_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x2_t test_vlsseg2e8_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vlsseg2e8_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) 
{ +vuint8m2x2_t test_vlsseg2e8_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vlsseg2e8_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m4x2_t test_vlsseg2e8_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8m4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vlsseg2e8_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x2_t test_vlsseg2e8_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vlsseg2e8_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x2_t test_vlsseg2e8_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vlsseg2e8_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x2_t test_vlsseg2e8_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vlsseg2e8_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x2_t test_vlsseg2e8_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vlsseg2e8_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2x2_t test_vlsseg2e8_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vlsseg2e8_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m4x2_t test_vlsseg2e8_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vlsseg2e8_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x2_t test_vlsseg2e8_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vlsseg2e8_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x2_t test_vlsseg2e8_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vlsseg2e8_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x2_t test_vlsseg2e8_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vlsseg2e8_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x2_t test_vlsseg2e8_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { 
return __riscv_vlsseg2e8_v_u8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vlsseg2e8_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2x2_t test_vlsseg2e8_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vlsseg2e8_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m4x2_t test_vlsseg2e8_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vlsseg2e8_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x2_t test_vlsseg2e8_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vlsseg2e8_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x2_t test_vlsseg2e8_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vlsseg2e8_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x2_t test_vlsseg2e8_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vlsseg2e8_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x2_t test_vlsseg2e8_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8m1x2_mu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vlsseg2e8_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2x2_t test_vlsseg2e8_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8m2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vlsseg2e8_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m4x2_t test_vlsseg2e8_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_i8m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vlsseg2e8_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x2_t test_vlsseg2e8_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vlsseg2e8_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x2_t test_vlsseg2e8_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vlsseg2e8_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x2_t test_vlsseg2e8_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vlsseg2e8_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) 
{ +vuint8m1x2_t test_vlsseg2e8_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vlsseg2e8_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2x2_t test_vlsseg2e8_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vlsseg2e8_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m4x2_t test_vlsseg2e8_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e8_v_u8m4x2_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e16.c index cc8a0ac9d..730a9b93e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e16.c @@ -6,194 +6,290 @@ #include -vfloat16mf4x3_t test_vlsseg3e16_v_f16mf4x3_tu(vfloat16mf4x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x3_t test_vlsseg3e16_v_f16mf4x3_tu(vfloat16mf4x3_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_f16mf4x3_tu(vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vlsseg3e16_v_f16mf2x3_tu(vfloat16mf2x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x3_t test_vlsseg3e16_v_f16mf2x3_tu(vfloat16mf2x3_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_f16mf2x3_tu(vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vlsseg3e16_v_f16m1x3_tu(vfloat16m1x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x3_t test_vlsseg3e16_v_f16m1x3_tu(vfloat16m1x3_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_f16m1x3_tu(vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vlsseg3e16_v_f16m2x3_tu(vfloat16m2x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x3_t test_vlsseg3e16_v_f16m2x3_tu(vfloat16m2x3_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_f16m2x3_tu(vd, rs1, rs2, vl); } -vint16mf4x3_t test_vlsseg3e16_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x3_t test_vlsseg3e16_v_i16mf4x3_tu(vint16mf4x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16mf4x3_tu(vd, rs1, rs2, vl); } -vint16mf2x3_t test_vlsseg3e16_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x3_t test_vlsseg3e16_v_i16mf2x3_tu(vint16mf2x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16mf2x3_tu(vd, rs1, rs2, vl); } -vint16m1x3_t test_vlsseg3e16_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x3_t test_vlsseg3e16_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_i16m1x3_tu(vd, rs1, rs2, vl); } -vint16m2x3_t test_vlsseg3e16_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x3_t test_vlsseg3e16_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_i16m2x3_tu(vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vlsseg3e16_v_u16mf4x3_tu(vuint16mf4x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x3_t 
test_vlsseg3e16_v_u16mf4x3_tu(vuint16mf4x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16mf4x3_tu(vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vlsseg3e16_v_u16mf2x3_tu(vuint16mf2x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x3_t test_vlsseg3e16_v_u16mf2x3_tu(vuint16mf2x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16mf2x3_tu(vd, rs1, rs2, vl); } -vuint16m1x3_t test_vlsseg3e16_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x3_t test_vlsseg3e16_v_u16m1x3_tu(vuint16m1x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16m1x3_tu(vd, rs1, rs2, vl); } -vuint16m2x3_t test_vlsseg3e16_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x3_t test_vlsseg3e16_v_u16m2x3_tu(vuint16m2x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16m2x3_tu(vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vlsseg3e16_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x3_t test_vlsseg3e16_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_f16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vlsseg3e16_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x3_t test_vlsseg3e16_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_f16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vlsseg3e16_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x3_t test_vlsseg3e16_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_f16m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vlsseg3e16_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x3_t test_vlsseg3e16_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_f16m2x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vlsseg3e16_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x3_t test_vlsseg3e16_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vlsseg3e16_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x3_t test_vlsseg3e16_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vlsseg3e16_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x3_t test_vlsseg3e16_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16m1x3_tum(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vlsseg3e16_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x3_t test_vlsseg3e16_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg3e16_v_i16m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vlsseg3e16_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x3_t test_vlsseg3e16_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_u16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vlsseg3e16_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x3_t test_vlsseg3e16_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_u16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vlsseg3e16_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x3_t test_vlsseg3e16_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vlsseg3e16_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x3_t test_vlsseg3e16_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vlsseg3e16_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x3_t test_vlsseg3e16_v_f16mf4x3_tumu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_f16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vlsseg3e16_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x3_t test_vlsseg3e16_v_f16mf2x3_tumu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_f16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vlsseg3e16_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x3_t test_vlsseg3e16_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_f16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vlsseg3e16_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x3_t test_vlsseg3e16_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_f16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vlsseg3e16_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x3_t test_vlsseg3e16_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vlsseg3e16_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x3_t test_vlsseg3e16_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vlsseg3e16_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x3_t test_vlsseg3e16_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, 
ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vlsseg3e16_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x3_t test_vlsseg3e16_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vlsseg3e16_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x3_t test_vlsseg3e16_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_u16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vlsseg3e16_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x3_t test_vlsseg3e16_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_u16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vlsseg3e16_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x3_t test_vlsseg3e16_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vlsseg3e16_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x3_t test_vlsseg3e16_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vlsseg3e16_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x3_t test_vlsseg3e16_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_f16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vlsseg3e16_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x3_t test_vlsseg3e16_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_f16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vlsseg3e16_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x3_t test_vlsseg3e16_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_f16m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vlsseg3e16_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x3_t test_vlsseg3e16_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_f16m2x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vlsseg3e16_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x3_t test_vlsseg3e16_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vlsseg3e16_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x3_t test_vlsseg3e16_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, + const 
int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vlsseg3e16_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x3_t test_vlsseg3e16_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16m1x3_mu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vlsseg3e16_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x3_t test_vlsseg3e16_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_i16m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vlsseg3e16_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x3_t test_vlsseg3e16_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vlsseg3e16_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x3_t test_vlsseg3e16_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vlsseg3e16_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x3_t test_vlsseg3e16_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vlsseg3e16_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x3_t test_vlsseg3e16_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_u16m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e32.c index 89e170b8e..2a2e627d2 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e32.c @@ -6,146 +6,215 @@ #include -vfloat32mf2x3_t test_vlsseg3e32_v_f32mf2x3_tu(vfloat32mf2x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x3_t test_vlsseg3e32_v_f32mf2x3_tu(vfloat32mf2x3_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_f32mf2x3_tu(vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vlsseg3e32_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x3_t test_vlsseg3e32_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e32_v_f32m1x3_tu(vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vlsseg3e32_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x3_t test_vlsseg3e32_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e32_v_f32m2x3_tu(vd, rs1, rs2, vl); } -vint32mf2x3_t test_vlsseg3e32_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x3_t test_vlsseg3e32_v_i32mf2x3_tu(vint32mf2x3_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_i32mf2x3_tu(vd, rs1, rs2, vl); } -vint32m1x3_t test_vlsseg3e32_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, 
ptrdiff_t rs2, size_t vl) { +vint32m1x3_t test_vlsseg3e32_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e32_v_i32m1x3_tu(vd, rs1, rs2, vl); } -vint32m2x3_t test_vlsseg3e32_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x3_t test_vlsseg3e32_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e32_v_i32m2x3_tu(vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vlsseg3e32_v_u32mf2x3_tu(vuint32mf2x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x3_t test_vlsseg3e32_v_u32mf2x3_tu(vuint32mf2x3_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_u32mf2x3_tu(vd, rs1, rs2, vl); } -vuint32m1x3_t test_vlsseg3e32_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x3_t test_vlsseg3e32_v_u32m1x3_tu(vuint32m1x3_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_u32m1x3_tu(vd, rs1, rs2, vl); } -vuint32m2x3_t test_vlsseg3e32_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x3_t test_vlsseg3e32_v_u32m2x3_tu(vuint32m2x3_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_u32m2x3_tu(vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vlsseg3e32_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x3_t test_vlsseg3e32_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_f32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vlsseg3e32_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x3_t test_vlsseg3e32_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_f32m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vlsseg3e32_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x3_t test_vlsseg3e32_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_f32m2x3_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vlsseg3e32_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x3_t test_vlsseg3e32_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_i32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vlsseg3e32_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x3_t test_vlsseg3e32_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_i32m1x3_tum(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vlsseg3e32_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x3_t test_vlsseg3e32_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_i32m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vlsseg3e32_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x3_t test_vlsseg3e32_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return 
__riscv_vlsseg3e32_v_u32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vlsseg3e32_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x3_t test_vlsseg3e32_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_u32m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vlsseg3e32_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x3_t test_vlsseg3e32_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_u32m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vlsseg3e32_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x3_t test_vlsseg3e32_v_f32mf2x3_tumu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_f32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vlsseg3e32_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x3_t test_vlsseg3e32_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_f32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vlsseg3e32_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x3_t test_vlsseg3e32_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_f32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vlsseg3e32_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x3_t test_vlsseg3e32_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_i32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vlsseg3e32_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x3_t test_vlsseg3e32_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_i32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vlsseg3e32_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x3_t test_vlsseg3e32_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_i32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vlsseg3e32_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x3_t test_vlsseg3e32_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e32_v_u32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vlsseg3e32_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x3_t test_vlsseg3e32_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_u32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vlsseg3e32_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x3_t test_vlsseg3e32_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { 
return __riscv_vlsseg3e32_v_u32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vlsseg3e32_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x3_t test_vlsseg3e32_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_f32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vlsseg3e32_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x3_t test_vlsseg3e32_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_f32m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vlsseg3e32_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x3_t test_vlsseg3e32_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_f32m2x3_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vlsseg3e32_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x3_t test_vlsseg3e32_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_i32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vlsseg3e32_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x3_t test_vlsseg3e32_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_i32m1x3_mu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vlsseg3e32_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x3_t test_vlsseg3e32_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_i32m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vlsseg3e32_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x3_t test_vlsseg3e32_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_u32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vlsseg3e32_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x3_t test_vlsseg3e32_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_u32m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vlsseg3e32_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x3_t test_vlsseg3e32_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e32_v_u32m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e64.c index c79dd026f..678239a3b 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e64.c @@ -6,98 +6,144 @@ #include -vfloat64m1x3_t test_vlsseg3e64_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x3_t test_vlsseg3e64_v_f64m1x3_tu(vfloat64m1x3_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_f64m1x3_tu(vd, rs1, rs2, vl); } -vfloat64m2x3_t 
test_vlsseg3e64_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x3_t test_vlsseg3e64_v_f64m2x3_tu(vfloat64m2x3_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_f64m2x3_tu(vd, rs1, rs2, vl); } -vint64m1x3_t test_vlsseg3e64_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x3_t test_vlsseg3e64_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e64_v_i64m1x3_tu(vd, rs1, rs2, vl); } -vint64m2x3_t test_vlsseg3e64_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x3_t test_vlsseg3e64_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e64_v_i64m2x3_tu(vd, rs1, rs2, vl); } -vuint64m1x3_t test_vlsseg3e64_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x3_t test_vlsseg3e64_v_u64m1x3_tu(vuint64m1x3_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_u64m1x3_tu(vd, rs1, rs2, vl); } -vuint64m2x3_t test_vlsseg3e64_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x3_t test_vlsseg3e64_v_u64m2x3_tu(vuint64m2x3_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_u64m2x3_tu(vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vlsseg3e64_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x3_t test_vlsseg3e64_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_f64m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vlsseg3e64_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x3_t test_vlsseg3e64_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_f64m2x3_tum(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vlsseg3e64_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x3_t test_vlsseg3e64_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_i64m1x3_tum(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vlsseg3e64_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x3_t test_vlsseg3e64_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_i64m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vlsseg3e64_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x3_t test_vlsseg3e64_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_u64m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vlsseg3e64_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x3_t test_vlsseg3e64_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_u64m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vlsseg3e64_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x3_t test_vlsseg3e64_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, ptrdiff_t rs2, 
+ size_t vl) { return __riscv_vlsseg3e64_v_f64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vlsseg3e64_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x3_t test_vlsseg3e64_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_f64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vlsseg3e64_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x3_t test_vlsseg3e64_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_i64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vlsseg3e64_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x3_t test_vlsseg3e64_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_i64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vlsseg3e64_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x3_t test_vlsseg3e64_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_u64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vlsseg3e64_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x3_t test_vlsseg3e64_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_u64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vlsseg3e64_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x3_t test_vlsseg3e64_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_f64m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vlsseg3e64_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x3_t test_vlsseg3e64_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_f64m2x3_mu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vlsseg3e64_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x3_t test_vlsseg3e64_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_i64m1x3_mu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vlsseg3e64_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x3_t test_vlsseg3e64_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_i64m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vlsseg3e64_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x3_t test_vlsseg3e64_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e64_v_u64m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vlsseg3e64_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x3_t test_vlsseg3e64_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg3e64_v_u64m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e8.c index 21ce67d95..f3e67151a 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e8.c @@ -5,162 +5,232 @@ #include -vint8mf8x3_t test_vlsseg3e8_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x3_t test_vlsseg3e8_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e8_v_i8mf8x3_tu(vd, rs1, rs2, vl); } -vint8mf4x3_t test_vlsseg3e8_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x3_t test_vlsseg3e8_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e8_v_i8mf4x3_tu(vd, rs1, rs2, vl); } -vint8mf2x3_t test_vlsseg3e8_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x3_t test_vlsseg3e8_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e8_v_i8mf2x3_tu(vd, rs1, rs2, vl); } -vint8m1x3_t test_vlsseg3e8_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x3_t test_vlsseg3e8_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e8_v_i8m1x3_tu(vd, rs1, rs2, vl); } -vint8m2x3_t test_vlsseg3e8_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2x3_t test_vlsseg3e8_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e8_v_i8m2x3_tu(vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vlsseg3e8_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x3_t test_vlsseg3e8_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e8_v_u8mf8x3_tu(vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vlsseg3e8_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x3_t test_vlsseg3e8_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e8_v_u8mf4x3_tu(vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vlsseg3e8_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x3_t test_vlsseg3e8_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e8_v_u8mf2x3_tu(vd, rs1, rs2, vl); } -vuint8m1x3_t test_vlsseg3e8_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x3_t test_vlsseg3e8_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e8_v_u8m1x3_tu(vd, rs1, rs2, vl); } -vuint8m2x3_t test_vlsseg3e8_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2x3_t test_vlsseg3e8_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e8_v_u8m2x3_tu(vd, rs1, rs2, vl); } -vint8mf8x3_t test_vlsseg3e8_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x3_t test_vlsseg3e8_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vlsseg3e8_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x3_t 
test_vlsseg3e8_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vlsseg3e8_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x3_t test_vlsseg3e8_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vlsseg3e8_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x3_t test_vlsseg3e8_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8m1x3_tum(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vlsseg3e8_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2x3_t test_vlsseg3e8_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vlsseg3e8_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x3_t test_vlsseg3e8_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vlsseg3e8_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x3_t test_vlsseg3e8_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vlsseg3e8_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x3_t test_vlsseg3e8_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vlsseg3e8_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x3_t test_vlsseg3e8_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vlsseg3e8_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2x3_t test_vlsseg3e8_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8m2x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vlsseg3e8_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x3_t test_vlsseg3e8_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vlsseg3e8_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x3_t test_vlsseg3e8_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vlsseg3e8_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x3_t test_vlsseg3e8_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg3e8_v_i8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vlsseg3e8_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x3_t test_vlsseg3e8_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vlsseg3e8_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2x3_t test_vlsseg3e8_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vlsseg3e8_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x3_t test_vlsseg3e8_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vlsseg3e8_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x3_t test_vlsseg3e8_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vlsseg3e8_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x3_t test_vlsseg3e8_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vlsseg3e8_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x3_t test_vlsseg3e8_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vlsseg3e8_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2x3_t test_vlsseg3e8_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vlsseg3e8_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x3_t test_vlsseg3e8_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vlsseg3e8_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x3_t test_vlsseg3e8_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vlsseg3e8_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x3_t test_vlsseg3e8_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vlsseg3e8_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x3_t test_vlsseg3e8_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8m1x3_mu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vlsseg3e8_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, 
ptrdiff_t rs2, size_t vl) { +vint8m2x3_t test_vlsseg3e8_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_i8m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vlsseg3e8_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x3_t test_vlsseg3e8_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vlsseg3e8_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x3_t test_vlsseg3e8_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vlsseg3e8_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x3_t test_vlsseg3e8_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vlsseg3e8_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x3_t test_vlsseg3e8_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vlsseg3e8_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2x3_t test_vlsseg3e8_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e8_v_u8m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e16.c index 25e195117..277c2312a 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e16.c @@ -6,194 +6,290 @@ #include -vfloat16mf4x4_t test_vlsseg4e16_v_f16mf4x4_tu(vfloat16mf4x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x4_t test_vlsseg4e16_v_f16mf4x4_tu(vfloat16mf4x4_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_f16mf4x4_tu(vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vlsseg4e16_v_f16mf2x4_tu(vfloat16mf2x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x4_t test_vlsseg4e16_v_f16mf2x4_tu(vfloat16mf2x4_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_f16mf2x4_tu(vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vlsseg4e16_v_f16m1x4_tu(vfloat16m1x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x4_t test_vlsseg4e16_v_f16m1x4_tu(vfloat16m1x4_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_f16m1x4_tu(vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vlsseg4e16_v_f16m2x4_tu(vfloat16m2x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x4_t test_vlsseg4e16_v_f16m2x4_tu(vfloat16m2x4_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_f16m2x4_tu(vd, rs1, rs2, vl); } -vint16mf4x4_t test_vlsseg4e16_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x4_t test_vlsseg4e16_v_i16mf4x4_tu(vint16mf4x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16mf4x4_tu(vd, rs1, rs2, vl); } -vint16mf2x4_t 
test_vlsseg4e16_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x4_t test_vlsseg4e16_v_i16mf2x4_tu(vint16mf2x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16mf2x4_tu(vd, rs1, rs2, vl); } -vint16m1x4_t test_vlsseg4e16_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x4_t test_vlsseg4e16_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_i16m1x4_tu(vd, rs1, rs2, vl); } -vint16m2x4_t test_vlsseg4e16_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x4_t test_vlsseg4e16_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_i16m2x4_tu(vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vlsseg4e16_v_u16mf4x4_tu(vuint16mf4x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x4_t test_vlsseg4e16_v_u16mf4x4_tu(vuint16mf4x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16mf4x4_tu(vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vlsseg4e16_v_u16mf2x4_tu(vuint16mf2x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x4_t test_vlsseg4e16_v_u16mf2x4_tu(vuint16mf2x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16mf2x4_tu(vd, rs1, rs2, vl); } -vuint16m1x4_t test_vlsseg4e16_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x4_t test_vlsseg4e16_v_u16m1x4_tu(vuint16m1x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16m1x4_tu(vd, rs1, rs2, vl); } -vuint16m2x4_t test_vlsseg4e16_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x4_t test_vlsseg4e16_v_u16m2x4_tu(vuint16m2x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16m2x4_tu(vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vlsseg4e16_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x4_t test_vlsseg4e16_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_f16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vlsseg4e16_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x4_t test_vlsseg4e16_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_f16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vlsseg4e16_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x4_t test_vlsseg4e16_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_f16m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vlsseg4e16_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x4_t test_vlsseg4e16_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_f16m2x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vlsseg4e16_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x4_t test_vlsseg4e16_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { 
return __riscv_vlsseg4e16_v_i16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vlsseg4e16_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x4_t test_vlsseg4e16_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vlsseg4e16_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x4_t test_vlsseg4e16_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16m1x4_tum(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vlsseg4e16_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x4_t test_vlsseg4e16_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vlsseg4e16_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x4_t test_vlsseg4e16_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_u16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vlsseg4e16_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x4_t test_vlsseg4e16_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_u16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vlsseg4e16_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x4_t test_vlsseg4e16_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vlsseg4e16_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x4_t test_vlsseg4e16_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vlsseg4e16_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x4_t test_vlsseg4e16_v_f16mf4x4_tumu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_f16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vlsseg4e16_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x4_t test_vlsseg4e16_v_f16mf2x4_tumu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_f16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vlsseg4e16_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x4_t test_vlsseg4e16_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_f16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vlsseg4e16_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x4_t test_vlsseg4e16_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + 
ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_f16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vlsseg4e16_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x4_t test_vlsseg4e16_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vlsseg4e16_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x4_t test_vlsseg4e16_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vlsseg4e16_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x4_t test_vlsseg4e16_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vlsseg4e16_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x4_t test_vlsseg4e16_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vlsseg4e16_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x4_t test_vlsseg4e16_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_u16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vlsseg4e16_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x4_t test_vlsseg4e16_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_u16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vlsseg4e16_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x4_t test_vlsseg4e16_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vlsseg4e16_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x4_t test_vlsseg4e16_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vlsseg4e16_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x4_t test_vlsseg4e16_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_f16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vlsseg4e16_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x4_t test_vlsseg4e16_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_f16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vlsseg4e16_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x4_t test_vlsseg4e16_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, + 
const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_f16m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vlsseg4e16_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m2x4_t test_vlsseg4e16_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_f16m2x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vlsseg4e16_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x4_t test_vlsseg4e16_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vlsseg4e16_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x4_t test_vlsseg4e16_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vlsseg4e16_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x4_t test_vlsseg4e16_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16m1x4_mu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vlsseg4e16_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m2x4_t test_vlsseg4e16_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_i16m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vlsseg4e16_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x4_t test_vlsseg4e16_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vlsseg4e16_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x4_t test_vlsseg4e16_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vlsseg4e16_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x4_t test_vlsseg4e16_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vlsseg4e16_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m2x4_t test_vlsseg4e16_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_u16m2x4_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e32.c index f2fdfcd1e..0c2d5ef3e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e32.c @@ -6,146 +6,215 @@ #include -vfloat32mf2x4_t test_vlsseg4e32_v_f32mf2x4_tu(vfloat32mf2x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x4_t test_vlsseg4e32_v_f32mf2x4_tu(vfloat32mf2x4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg4e32_v_f32mf2x4_tu(vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vlsseg4e32_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x4_t test_vlsseg4e32_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e32_v_f32m1x4_tu(vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vlsseg4e32_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x4_t test_vlsseg4e32_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e32_v_f32m2x4_tu(vd, rs1, rs2, vl); } -vint32mf2x4_t test_vlsseg4e32_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x4_t test_vlsseg4e32_v_i32mf2x4_tu(vint32mf2x4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_i32mf2x4_tu(vd, rs1, rs2, vl); } -vint32m1x4_t test_vlsseg4e32_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x4_t test_vlsseg4e32_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e32_v_i32m1x4_tu(vd, rs1, rs2, vl); } -vint32m2x4_t test_vlsseg4e32_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x4_t test_vlsseg4e32_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e32_v_i32m2x4_tu(vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vlsseg4e32_v_u32mf2x4_tu(vuint32mf2x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x4_t test_vlsseg4e32_v_u32mf2x4_tu(vuint32mf2x4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_u32mf2x4_tu(vd, rs1, rs2, vl); } -vuint32m1x4_t test_vlsseg4e32_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x4_t test_vlsseg4e32_v_u32m1x4_tu(vuint32m1x4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_u32m1x4_tu(vd, rs1, rs2, vl); } -vuint32m2x4_t test_vlsseg4e32_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x4_t test_vlsseg4e32_v_u32m2x4_tu(vuint32m2x4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_u32m2x4_tu(vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vlsseg4e32_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x4_t test_vlsseg4e32_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_f32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vlsseg4e32_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x4_t test_vlsseg4e32_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_f32m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vlsseg4e32_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x4_t test_vlsseg4e32_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_f32m2x4_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vlsseg4e32_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x4_t test_vlsseg4e32_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg4e32_v_i32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vlsseg4e32_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x4_t test_vlsseg4e32_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_i32m1x4_tum(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vlsseg4e32_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x4_t test_vlsseg4e32_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_i32m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vlsseg4e32_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x4_t test_vlsseg4e32_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e32_v_u32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vlsseg4e32_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x4_t test_vlsseg4e32_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_u32m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vlsseg4e32_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x4_t test_vlsseg4e32_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_u32m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vlsseg4e32_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x4_t test_vlsseg4e32_v_f32mf2x4_tumu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_f32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vlsseg4e32_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x4_t test_vlsseg4e32_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_f32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vlsseg4e32_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x4_t test_vlsseg4e32_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_f32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vlsseg4e32_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x4_t test_vlsseg4e32_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_i32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vlsseg4e32_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x4_t test_vlsseg4e32_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_i32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vlsseg4e32_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x4_t test_vlsseg4e32_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg4e32_v_i32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vlsseg4e32_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x4_t test_vlsseg4e32_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e32_v_u32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vlsseg4e32_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x4_t test_vlsseg4e32_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_u32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vlsseg4e32_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x4_t test_vlsseg4e32_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_u32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vlsseg4e32_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x4_t test_vlsseg4e32_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_f32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vlsseg4e32_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x4_t test_vlsseg4e32_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_f32m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vlsseg4e32_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m2x4_t test_vlsseg4e32_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_f32m2x4_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vlsseg4e32_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x4_t test_vlsseg4e32_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_i32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vlsseg4e32_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x4_t test_vlsseg4e32_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_i32m1x4_mu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vlsseg4e32_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m2x4_t test_vlsseg4e32_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_i32m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vlsseg4e32_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x4_t test_vlsseg4e32_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_u32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vlsseg4e32_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x4_t test_vlsseg4e32_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg4e32_v_u32m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vlsseg4e32_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m2x4_t test_vlsseg4e32_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e32_v_u32m2x4_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e64.c index 2b846b6ce..56a04c418 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e64.c @@ -6,98 +6,144 @@ #include -vfloat64m1x4_t test_vlsseg4e64_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x4_t test_vlsseg4e64_v_f64m1x4_tu(vfloat64m1x4_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_f64m1x4_tu(vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vlsseg4e64_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x4_t test_vlsseg4e64_v_f64m2x4_tu(vfloat64m2x4_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_f64m2x4_tu(vd, rs1, rs2, vl); } -vint64m1x4_t test_vlsseg4e64_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x4_t test_vlsseg4e64_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e64_v_i64m1x4_tu(vd, rs1, rs2, vl); } -vint64m2x4_t test_vlsseg4e64_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x4_t test_vlsseg4e64_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e64_v_i64m2x4_tu(vd, rs1, rs2, vl); } -vuint64m1x4_t test_vlsseg4e64_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x4_t test_vlsseg4e64_v_u64m1x4_tu(vuint64m1x4_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_u64m1x4_tu(vd, rs1, rs2, vl); } -vuint64m2x4_t test_vlsseg4e64_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x4_t test_vlsseg4e64_v_u64m2x4_tu(vuint64m2x4_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_u64m2x4_tu(vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vlsseg4e64_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x4_t test_vlsseg4e64_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_f64m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vlsseg4e64_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x4_t test_vlsseg4e64_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_f64m2x4_tum(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vlsseg4e64_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x4_t test_vlsseg4e64_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_i64m1x4_tum(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vlsseg4e64_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x4_t test_vlsseg4e64_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, + const 
int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_i64m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vlsseg4e64_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x4_t test_vlsseg4e64_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_u64m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vlsseg4e64_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x4_t test_vlsseg4e64_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_u64m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vlsseg4e64_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x4_t test_vlsseg4e64_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_f64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vlsseg4e64_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x4_t test_vlsseg4e64_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_f64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vlsseg4e64_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x4_t test_vlsseg4e64_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_i64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vlsseg4e64_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x4_t test_vlsseg4e64_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_i64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vlsseg4e64_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x4_t test_vlsseg4e64_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_u64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vlsseg4e64_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x4_t test_vlsseg4e64_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_u64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vlsseg4e64_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x4_t test_vlsseg4e64_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_f64m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vlsseg4e64_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m2x4_t test_vlsseg4e64_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_f64m2x4_mu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vlsseg4e64_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x4_t test_vlsseg4e64_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, ptrdiff_t rs2, + 
size_t vl) { return __riscv_vlsseg4e64_v_i64m1x4_mu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vlsseg4e64_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m2x4_t test_vlsseg4e64_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_i64m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vlsseg4e64_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x4_t test_vlsseg4e64_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_u64m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vlsseg4e64_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m2x4_t test_vlsseg4e64_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e64_v_u64m2x4_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e8.c index 13bc0efe7..a450d8181 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e8.c @@ -5,162 +5,232 @@ #include -vint8mf8x4_t test_vlsseg4e8_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x4_t test_vlsseg4e8_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e8_v_i8mf8x4_tu(vd, rs1, rs2, vl); } -vint8mf4x4_t test_vlsseg4e8_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x4_t test_vlsseg4e8_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e8_v_i8mf4x4_tu(vd, rs1, rs2, vl); } -vint8mf2x4_t test_vlsseg4e8_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x4_t test_vlsseg4e8_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e8_v_i8mf2x4_tu(vd, rs1, rs2, vl); } -vint8m1x4_t test_vlsseg4e8_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x4_t test_vlsseg4e8_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e8_v_i8m1x4_tu(vd, rs1, rs2, vl); } -vint8m2x4_t test_vlsseg4e8_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2x4_t test_vlsseg4e8_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e8_v_i8m2x4_tu(vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vlsseg4e8_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x4_t test_vlsseg4e8_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e8_v_u8mf8x4_tu(vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vlsseg4e8_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x4_t test_vlsseg4e8_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e8_v_u8mf4x4_tu(vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vlsseg4e8_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x4_t test_vlsseg4e8_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e8_v_u8mf2x4_tu(vd, rs1, rs2, vl); } -vuint8m1x4_t 
test_vlsseg4e8_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x4_t test_vlsseg4e8_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e8_v_u8m1x4_tu(vd, rs1, rs2, vl); } -vuint8m2x4_t test_vlsseg4e8_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2x4_t test_vlsseg4e8_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e8_v_u8m2x4_tu(vd, rs1, rs2, vl); } -vint8mf8x4_t test_vlsseg4e8_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x4_t test_vlsseg4e8_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vlsseg4e8_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x4_t test_vlsseg4e8_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vlsseg4e8_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x4_t test_vlsseg4e8_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vlsseg4e8_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x4_t test_vlsseg4e8_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8m1x4_tum(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vlsseg4e8_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2x4_t test_vlsseg4e8_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vlsseg4e8_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x4_t test_vlsseg4e8_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vlsseg4e8_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x4_t test_vlsseg4e8_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vlsseg4e8_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x4_t test_vlsseg4e8_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vlsseg4e8_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x4_t test_vlsseg4e8_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vlsseg4e8_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2x4_t test_vlsseg4e8_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t 
vl) { return __riscv_vlsseg4e8_v_u8m2x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vlsseg4e8_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x4_t test_vlsseg4e8_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vlsseg4e8_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x4_t test_vlsseg4e8_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vlsseg4e8_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x4_t test_vlsseg4e8_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vlsseg4e8_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x4_t test_vlsseg4e8_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vlsseg4e8_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2x4_t test_vlsseg4e8_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vlsseg4e8_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x4_t test_vlsseg4e8_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vlsseg4e8_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x4_t test_vlsseg4e8_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vlsseg4e8_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x4_t test_vlsseg4e8_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vlsseg4e8_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x4_t test_vlsseg4e8_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vlsseg4e8_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2x4_t test_vlsseg4e8_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vlsseg4e8_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x4_t test_vlsseg4e8_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vlsseg4e8_v_i8mf4x4_mu(vbool32_t 
vm, vint8mf4x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x4_t test_vlsseg4e8_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vlsseg4e8_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x4_t test_vlsseg4e8_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vlsseg4e8_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x4_t test_vlsseg4e8_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8m1x4_mu(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vlsseg4e8_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m2x4_t test_vlsseg4e8_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_i8m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vlsseg4e8_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x4_t test_vlsseg4e8_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vlsseg4e8_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x4_t test_vlsseg4e8_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vlsseg4e8_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x4_t test_vlsseg4e8_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vlsseg4e8_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x4_t test_vlsseg4e8_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vlsseg4e8_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m2x4_t test_vlsseg4e8_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e8_v_u8m2x4_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e16.c index 99bedcb3a..b72f11ea7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e16.c @@ -6,146 +6,219 @@ #include -vfloat16mf4x5_t test_vlsseg5e16_v_f16mf4x5_tu(vfloat16mf4x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x5_t test_vlsseg5e16_v_f16mf4x5_tu(vfloat16mf4x5_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_f16mf4x5_tu(vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vlsseg5e16_v_f16mf2x5_tu(vfloat16mf2x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x5_t test_vlsseg5e16_v_f16mf2x5_tu(vfloat16mf2x5_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { 
return __riscv_vlsseg5e16_v_f16mf2x5_tu(vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vlsseg5e16_v_f16m1x5_tu(vfloat16m1x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x5_t test_vlsseg5e16_v_f16m1x5_tu(vfloat16m1x5_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_f16m1x5_tu(vd, rs1, rs2, vl); } -vint16mf4x5_t test_vlsseg5e16_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x5_t test_vlsseg5e16_v_i16mf4x5_tu(vint16mf4x5_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_i16mf4x5_tu(vd, rs1, rs2, vl); } -vint16mf2x5_t test_vlsseg5e16_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x5_t test_vlsseg5e16_v_i16mf2x5_tu(vint16mf2x5_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_i16mf2x5_tu(vd, rs1, rs2, vl); } -vint16m1x5_t test_vlsseg5e16_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x5_t test_vlsseg5e16_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_i16m1x5_tu(vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vlsseg5e16_v_u16mf4x5_tu(vuint16mf4x5_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x5_t test_vlsseg5e16_v_u16mf4x5_tu(vuint16mf4x5_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_u16mf4x5_tu(vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vlsseg5e16_v_u16mf2x5_tu(vuint16mf2x5_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x5_t test_vlsseg5e16_v_u16mf2x5_tu(vuint16mf2x5_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_u16mf2x5_tu(vd, rs1, rs2, vl); } -vuint16m1x5_t test_vlsseg5e16_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x5_t test_vlsseg5e16_v_u16m1x5_tu(vuint16m1x5_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_u16m1x5_tu(vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vlsseg5e16_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x5_t test_vlsseg5e16_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_f16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vlsseg5e16_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x5_t test_vlsseg5e16_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_f16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vlsseg5e16_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x5_t test_vlsseg5e16_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_f16m1x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vlsseg5e16_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x5_t test_vlsseg5e16_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_i16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vlsseg5e16_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x5_t 
test_vlsseg5e16_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_i16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vlsseg5e16_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x5_t test_vlsseg5e16_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_i16m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vlsseg5e16_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x5_t test_vlsseg5e16_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_u16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vlsseg5e16_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x5_t test_vlsseg5e16_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_u16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vlsseg5e16_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x5_t test_vlsseg5e16_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_u16m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vlsseg5e16_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x5_t test_vlsseg5e16_v_f16mf4x5_tumu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_f16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vlsseg5e16_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x5_t test_vlsseg5e16_v_f16mf2x5_tumu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_f16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vlsseg5e16_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x5_t test_vlsseg5e16_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_f16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vlsseg5e16_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x5_t test_vlsseg5e16_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_i16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vlsseg5e16_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x5_t test_vlsseg5e16_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_i16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vlsseg5e16_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x5_t test_vlsseg5e16_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_i16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vlsseg5e16_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, ptrdiff_t 
rs2, size_t vl) { +vuint16mf4x5_t test_vlsseg5e16_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_u16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vlsseg5e16_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x5_t test_vlsseg5e16_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_u16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vlsseg5e16_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x5_t test_vlsseg5e16_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_u16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vlsseg5e16_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x5_t test_vlsseg5e16_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_f16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vlsseg5e16_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x5_t test_vlsseg5e16_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_f16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vlsseg5e16_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x5_t test_vlsseg5e16_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_f16m1x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vlsseg5e16_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x5_t test_vlsseg5e16_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_i16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vlsseg5e16_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x5_t test_vlsseg5e16_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_i16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vlsseg5e16_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x5_t test_vlsseg5e16_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_i16m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vlsseg5e16_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x5_t test_vlsseg5e16_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_u16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vlsseg5e16_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x5_t test_vlsseg5e16_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_u16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vlsseg5e16_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, 
ptrdiff_t rs2, size_t vl) { +vuint16m1x5_t test_vlsseg5e16_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_u16m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e32.c index 6dabc54e9..862192a5d 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e32.c @@ -6,98 +6,145 @@ #include -vfloat32mf2x5_t test_vlsseg5e32_v_f32mf2x5_tu(vfloat32mf2x5_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x5_t test_vlsseg5e32_v_f32mf2x5_tu(vfloat32mf2x5_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_f32mf2x5_tu(vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vlsseg5e32_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x5_t test_vlsseg5e32_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e32_v_f32m1x5_tu(vd, rs1, rs2, vl); } -vint32mf2x5_t test_vlsseg5e32_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x5_t test_vlsseg5e32_v_i32mf2x5_tu(vint32mf2x5_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_i32mf2x5_tu(vd, rs1, rs2, vl); } -vint32m1x5_t test_vlsseg5e32_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x5_t test_vlsseg5e32_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e32_v_i32m1x5_tu(vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vlsseg5e32_v_u32mf2x5_tu(vuint32mf2x5_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x5_t test_vlsseg5e32_v_u32mf2x5_tu(vuint32mf2x5_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_u32mf2x5_tu(vd, rs1, rs2, vl); } -vuint32m1x5_t test_vlsseg5e32_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x5_t test_vlsseg5e32_v_u32m1x5_tu(vuint32m1x5_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_u32m1x5_tu(vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vlsseg5e32_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x5_t test_vlsseg5e32_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_f32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vlsseg5e32_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x5_t test_vlsseg5e32_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_f32m1x5_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vlsseg5e32_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x5_t test_vlsseg5e32_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_i32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vlsseg5e32_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x5_t test_vlsseg5e32_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_i32m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t 
test_vlsseg5e32_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x5_t test_vlsseg5e32_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e32_v_u32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vlsseg5e32_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x5_t test_vlsseg5e32_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_u32m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vlsseg5e32_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x5_t test_vlsseg5e32_v_f32mf2x5_tumu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_f32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vlsseg5e32_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x5_t test_vlsseg5e32_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_f32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vlsseg5e32_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x5_t test_vlsseg5e32_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_i32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vlsseg5e32_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x5_t test_vlsseg5e32_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_i32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vlsseg5e32_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x5_t test_vlsseg5e32_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e32_v_u32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vlsseg5e32_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x5_t test_vlsseg5e32_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_u32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vlsseg5e32_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x5_t test_vlsseg5e32_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_f32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vlsseg5e32_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x5_t test_vlsseg5e32_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_f32m1x5_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vlsseg5e32_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x5_t test_vlsseg5e32_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_i32mf2x5_mu(vm, vd, rs1, rs2, vl); } 
-vint32m1x5_t test_vlsseg5e32_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x5_t test_vlsseg5e32_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_i32m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vlsseg5e32_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x5_t test_vlsseg5e32_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_u32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vlsseg5e32_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x5_t test_vlsseg5e32_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e32_v_u32m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e64.c index f00726ec6..9193a7861 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e64.c @@ -6,50 +6,73 @@ #include -vfloat64m1x5_t test_vlsseg5e64_v_f64m1x5_tu(vfloat64m1x5_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x5_t test_vlsseg5e64_v_f64m1x5_tu(vfloat64m1x5_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e64_v_f64m1x5_tu(vd, rs1, rs2, vl); } -vint64m1x5_t test_vlsseg5e64_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x5_t test_vlsseg5e64_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e64_v_i64m1x5_tu(vd, rs1, rs2, vl); } -vuint64m1x5_t test_vlsseg5e64_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x5_t test_vlsseg5e64_v_u64m1x5_tu(vuint64m1x5_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e64_v_u64m1x5_tu(vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vlsseg5e64_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x5_t test_vlsseg5e64_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e64_v_f64m1x5_tum(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vlsseg5e64_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x5_t test_vlsseg5e64_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e64_v_i64m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vlsseg5e64_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x5_t test_vlsseg5e64_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e64_v_u64m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vlsseg5e64_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x5_t test_vlsseg5e64_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e64_v_f64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vlsseg5e64_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x5_t 
test_vlsseg5e64_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e64_v_i64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vlsseg5e64_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x5_t test_vlsseg5e64_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e64_v_u64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vlsseg5e64_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x5_t test_vlsseg5e64_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e64_v_f64m1x5_mu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vlsseg5e64_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x5_t test_vlsseg5e64_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e64_v_i64m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vlsseg5e64_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x5_t test_vlsseg5e64_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e64_v_u64m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e8.c index 7f44ffec4..4f457fb24 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e8.c @@ -5,130 +5,186 @@ #include -vint8mf8x5_t test_vlsseg5e8_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x5_t test_vlsseg5e8_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e8_v_i8mf8x5_tu(vd, rs1, rs2, vl); } -vint8mf4x5_t test_vlsseg5e8_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x5_t test_vlsseg5e8_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e8_v_i8mf4x5_tu(vd, rs1, rs2, vl); } -vint8mf2x5_t test_vlsseg5e8_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x5_t test_vlsseg5e8_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e8_v_i8mf2x5_tu(vd, rs1, rs2, vl); } -vint8m1x5_t test_vlsseg5e8_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x5_t test_vlsseg5e8_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e8_v_i8m1x5_tu(vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vlsseg5e8_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x5_t test_vlsseg5e8_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e8_v_u8mf8x5_tu(vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vlsseg5e8_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x5_t test_vlsseg5e8_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e8_v_u8mf4x5_tu(vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vlsseg5e8_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x5_t 
test_vlsseg5e8_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e8_v_u8mf2x5_tu(vd, rs1, rs2, vl); } -vuint8m1x5_t test_vlsseg5e8_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x5_t test_vlsseg5e8_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e8_v_u8m1x5_tu(vd, rs1, rs2, vl); } -vint8mf8x5_t test_vlsseg5e8_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x5_t test_vlsseg5e8_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vlsseg5e8_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x5_t test_vlsseg5e8_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vlsseg5e8_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x5_t test_vlsseg5e8_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vlsseg5e8_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x5_t test_vlsseg5e8_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vlsseg5e8_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x5_t test_vlsseg5e8_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vlsseg5e8_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x5_t test_vlsseg5e8_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vlsseg5e8_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x5_t test_vlsseg5e8_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vlsseg5e8_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x5_t test_vlsseg5e8_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8m1x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vlsseg5e8_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x5_t test_vlsseg5e8_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vlsseg5e8_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x5_t test_vlsseg5e8_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t 
test_vlsseg5e8_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x5_t test_vlsseg5e8_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vlsseg5e8_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x5_t test_vlsseg5e8_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vlsseg5e8_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x5_t test_vlsseg5e8_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vlsseg5e8_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x5_t test_vlsseg5e8_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vlsseg5e8_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x5_t test_vlsseg5e8_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vlsseg5e8_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x5_t test_vlsseg5e8_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vlsseg5e8_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x5_t test_vlsseg5e8_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vlsseg5e8_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x5_t test_vlsseg5e8_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vlsseg5e8_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x5_t test_vlsseg5e8_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vlsseg5e8_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x5_t test_vlsseg5e8_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_i8m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vlsseg5e8_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x5_t test_vlsseg5e8_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vlsseg5e8_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x5_t 
test_vlsseg5e8_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vlsseg5e8_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x5_t test_vlsseg5e8_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vlsseg5e8_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x5_t test_vlsseg5e8_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e8_v_u8m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e16.c index d18a0f3f0..b083be31e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e16.c @@ -6,146 +6,219 @@ #include -vfloat16mf4x6_t test_vlsseg6e16_v_f16mf4x6_tu(vfloat16mf4x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x6_t test_vlsseg6e16_v_f16mf4x6_tu(vfloat16mf4x6_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_f16mf4x6_tu(vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vlsseg6e16_v_f16mf2x6_tu(vfloat16mf2x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x6_t test_vlsseg6e16_v_f16mf2x6_tu(vfloat16mf2x6_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_f16mf2x6_tu(vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vlsseg6e16_v_f16m1x6_tu(vfloat16m1x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x6_t test_vlsseg6e16_v_f16m1x6_tu(vfloat16m1x6_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_f16m1x6_tu(vd, rs1, rs2, vl); } -vint16mf4x6_t test_vlsseg6e16_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x6_t test_vlsseg6e16_v_i16mf4x6_tu(vint16mf4x6_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_i16mf4x6_tu(vd, rs1, rs2, vl); } -vint16mf2x6_t test_vlsseg6e16_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x6_t test_vlsseg6e16_v_i16mf2x6_tu(vint16mf2x6_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_i16mf2x6_tu(vd, rs1, rs2, vl); } -vint16m1x6_t test_vlsseg6e16_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x6_t test_vlsseg6e16_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_i16m1x6_tu(vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vlsseg6e16_v_u16mf4x6_tu(vuint16mf4x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x6_t test_vlsseg6e16_v_u16mf4x6_tu(vuint16mf4x6_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_u16mf4x6_tu(vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vlsseg6e16_v_u16mf2x6_tu(vuint16mf2x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x6_t test_vlsseg6e16_v_u16mf2x6_tu(vuint16mf2x6_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_u16mf2x6_tu(vd, rs1, rs2, vl); } -vuint16m1x6_t test_vlsseg6e16_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x6_t 
test_vlsseg6e16_v_u16m1x6_tu(vuint16m1x6_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_u16m1x6_tu(vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vlsseg6e16_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x6_t test_vlsseg6e16_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_f16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vlsseg6e16_v_f16mf2x6_tum(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x6_t test_vlsseg6e16_v_f16mf2x6_tum(vbool32_t vm, vfloat16mf2x6_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_f16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vlsseg6e16_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x6_t test_vlsseg6e16_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_f16m1x6_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vlsseg6e16_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x6_t test_vlsseg6e16_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_i16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vlsseg6e16_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x6_t test_vlsseg6e16_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_i16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vlsseg6e16_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x6_t test_vlsseg6e16_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_i16m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vlsseg6e16_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x6_t test_vlsseg6e16_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_u16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vlsseg6e16_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x6_t test_vlsseg6e16_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_u16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vlsseg6e16_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x6_t test_vlsseg6e16_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_u16m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vlsseg6e16_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x6_t test_vlsseg6e16_v_f16mf4x6_tumu(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_f16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vlsseg6e16_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { 
+vfloat16mf2x6_t test_vlsseg6e16_v_f16mf2x6_tumu(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_f16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vlsseg6e16_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x6_t test_vlsseg6e16_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_f16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vlsseg6e16_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x6_t test_vlsseg6e16_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_i16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vlsseg6e16_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x6_t test_vlsseg6e16_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_i16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vlsseg6e16_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x6_t test_vlsseg6e16_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_i16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vlsseg6e16_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x6_t test_vlsseg6e16_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_u16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vlsseg6e16_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x6_t test_vlsseg6e16_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_u16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vlsseg6e16_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x6_t test_vlsseg6e16_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_u16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vlsseg6e16_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x6_t test_vlsseg6e16_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_f16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vlsseg6e16_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x6_t test_vlsseg6e16_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_f16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vlsseg6e16_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x6_t test_vlsseg6e16_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_f16m1x6_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vlsseg6e16_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, const 
int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x6_t test_vlsseg6e16_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_i16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vlsseg6e16_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x6_t test_vlsseg6e16_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_i16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vlsseg6e16_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x6_t test_vlsseg6e16_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_i16m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vlsseg6e16_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x6_t test_vlsseg6e16_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_u16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vlsseg6e16_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x6_t test_vlsseg6e16_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_u16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vlsseg6e16_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x6_t test_vlsseg6e16_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_u16m1x6_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e32.c index f5f814c05..80af6a712 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e32.c @@ -6,98 +6,145 @@ #include -vfloat32mf2x6_t test_vlsseg6e32_v_f32mf2x6_tu(vfloat32mf2x6_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x6_t test_vlsseg6e32_v_f32mf2x6_tu(vfloat32mf2x6_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_f32mf2x6_tu(vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vlsseg6e32_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x6_t test_vlsseg6e32_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e32_v_f32m1x6_tu(vd, rs1, rs2, vl); } -vint32mf2x6_t test_vlsseg6e32_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x6_t test_vlsseg6e32_v_i32mf2x6_tu(vint32mf2x6_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_i32mf2x6_tu(vd, rs1, rs2, vl); } -vint32m1x6_t test_vlsseg6e32_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x6_t test_vlsseg6e32_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e32_v_i32m1x6_tu(vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vlsseg6e32_v_u32mf2x6_tu(vuint32mf2x6_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x6_t test_vlsseg6e32_v_u32mf2x6_tu(vuint32mf2x6_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return 
__riscv_vlsseg6e32_v_u32mf2x6_tu(vd, rs1, rs2, vl); } -vuint32m1x6_t test_vlsseg6e32_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x6_t test_vlsseg6e32_v_u32m1x6_tu(vuint32m1x6_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_u32m1x6_tu(vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vlsseg6e32_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x6_t test_vlsseg6e32_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_f32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vlsseg6e32_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x6_t test_vlsseg6e32_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_f32m1x6_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vlsseg6e32_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x6_t test_vlsseg6e32_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_i32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vlsseg6e32_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x6_t test_vlsseg6e32_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_i32m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vlsseg6e32_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x6_t test_vlsseg6e32_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e32_v_u32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vlsseg6e32_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x6_t test_vlsseg6e32_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_u32m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vlsseg6e32_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x6_t test_vlsseg6e32_v_f32mf2x6_tumu(vbool64_t vm, + vfloat32mf2x6_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_f32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vlsseg6e32_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x6_t test_vlsseg6e32_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_f32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vlsseg6e32_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x6_t test_vlsseg6e32_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_i32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vlsseg6e32_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x6_t test_vlsseg6e32_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_i32m1x6_tumu(vm, vd, rs1, 
rs2, vl); } -vuint32mf2x6_t test_vlsseg6e32_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x6_t test_vlsseg6e32_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e32_v_u32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vlsseg6e32_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x6_t test_vlsseg6e32_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_u32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vlsseg6e32_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x6_t test_vlsseg6e32_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_f32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vlsseg6e32_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x6_t test_vlsseg6e32_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_f32m1x6_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vlsseg6e32_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x6_t test_vlsseg6e32_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_i32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vlsseg6e32_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x6_t test_vlsseg6e32_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_i32m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vlsseg6e32_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x6_t test_vlsseg6e32_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_u32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vlsseg6e32_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x6_t test_vlsseg6e32_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e32_v_u32m1x6_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e64.c index 9549430d1..3bda96003 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e64.c @@ -6,50 +6,73 @@ #include -vfloat64m1x6_t test_vlsseg6e64_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x6_t test_vlsseg6e64_v_f64m1x6_tu(vfloat64m1x6_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e64_v_f64m1x6_tu(vd, rs1, rs2, vl); } -vint64m1x6_t test_vlsseg6e64_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x6_t test_vlsseg6e64_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e64_v_i64m1x6_tu(vd, rs1, rs2, vl); } -vuint64m1x6_t test_vlsseg6e64_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1, ptrdiff_t rs2, 
size_t vl) { +vuint64m1x6_t test_vlsseg6e64_v_u64m1x6_tu(vuint64m1x6_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e64_v_u64m1x6_tu(vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vlsseg6e64_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x6_t test_vlsseg6e64_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e64_v_f64m1x6_tum(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vlsseg6e64_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x6_t test_vlsseg6e64_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e64_v_i64m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vlsseg6e64_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x6_t test_vlsseg6e64_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e64_v_u64m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vlsseg6e64_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x6_t test_vlsseg6e64_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e64_v_f64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vlsseg6e64_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x6_t test_vlsseg6e64_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e64_v_i64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vlsseg6e64_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x6_t test_vlsseg6e64_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e64_v_u64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vlsseg6e64_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x6_t test_vlsseg6e64_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e64_v_f64m1x6_mu(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vlsseg6e64_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x6_t test_vlsseg6e64_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e64_v_i64m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vlsseg6e64_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x6_t test_vlsseg6e64_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e64_v_u64m1x6_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e8.c index 9b4a358e4..2ef419479 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e8.c @@ -5,130 +5,186 @@ #include -vint8mf8x6_t test_vlsseg6e8_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x6_t test_vlsseg6e8_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, + 
ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e8_v_i8mf8x6_tu(vd, rs1, rs2, vl); } -vint8mf4x6_t test_vlsseg6e8_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x6_t test_vlsseg6e8_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e8_v_i8mf4x6_tu(vd, rs1, rs2, vl); } -vint8mf2x6_t test_vlsseg6e8_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x6_t test_vlsseg6e8_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e8_v_i8mf2x6_tu(vd, rs1, rs2, vl); } -vint8m1x6_t test_vlsseg6e8_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x6_t test_vlsseg6e8_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e8_v_i8m1x6_tu(vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vlsseg6e8_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x6_t test_vlsseg6e8_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e8_v_u8mf8x6_tu(vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vlsseg6e8_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x6_t test_vlsseg6e8_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e8_v_u8mf4x6_tu(vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vlsseg6e8_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x6_t test_vlsseg6e8_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e8_v_u8mf2x6_tu(vd, rs1, rs2, vl); } -vuint8m1x6_t test_vlsseg6e8_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x6_t test_vlsseg6e8_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e8_v_u8m1x6_tu(vd, rs1, rs2, vl); } -vint8mf8x6_t test_vlsseg6e8_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x6_t test_vlsseg6e8_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8mf8x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vlsseg6e8_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x6_t test_vlsseg6e8_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8mf4x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vlsseg6e8_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x6_t test_vlsseg6e8_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vlsseg6e8_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x6_t test_vlsseg6e8_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vlsseg6e8_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x6_t test_vlsseg6e8_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8mf8x6_tum(vm, vd, rs1, rs2, vl); } 
-vuint8mf4x6_t test_vlsseg6e8_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x6_t test_vlsseg6e8_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8mf4x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vlsseg6e8_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x6_t test_vlsseg6e8_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vlsseg6e8_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x6_t test_vlsseg6e8_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8m1x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vlsseg6e8_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x6_t test_vlsseg6e8_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8mf8x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vlsseg6e8_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x6_t test_vlsseg6e8_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vlsseg6e8_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x6_t test_vlsseg6e8_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vlsseg6e8_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x6_t test_vlsseg6e8_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vlsseg6e8_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x6_t test_vlsseg6e8_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8mf8x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vlsseg6e8_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x6_t test_vlsseg6e8_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vlsseg6e8_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x6_t test_vlsseg6e8_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vlsseg6e8_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x6_t test_vlsseg6e8_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vlsseg6e8_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, ptrdiff_t 
rs2, size_t vl) { +vint8mf8x6_t test_vlsseg6e8_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8mf8x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vlsseg6e8_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x6_t test_vlsseg6e8_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8mf4x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vlsseg6e8_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x6_t test_vlsseg6e8_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vlsseg6e8_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x6_t test_vlsseg6e8_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_i8m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vlsseg6e8_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x6_t test_vlsseg6e8_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8mf8x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vlsseg6e8_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x6_t test_vlsseg6e8_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8mf4x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vlsseg6e8_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x6_t test_vlsseg6e8_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8mf2x6_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vlsseg6e8_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x6_t test_vlsseg6e8_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e8_v_u8m1x6_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e16.c index cf1e9c260..354f87241 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e16.c @@ -6,146 +6,219 @@ #include -vfloat16mf4x7_t test_vlsseg7e16_v_f16mf4x7_tu(vfloat16mf4x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x7_t test_vlsseg7e16_v_f16mf4x7_tu(vfloat16mf4x7_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_f16mf4x7_tu(vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vlsseg7e16_v_f16mf2x7_tu(vfloat16mf2x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x7_t test_vlsseg7e16_v_f16mf2x7_tu(vfloat16mf2x7_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_f16mf2x7_tu(vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vlsseg7e16_v_f16m1x7_tu(vfloat16m1x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x7_t test_vlsseg7e16_v_f16m1x7_tu(vfloat16m1x7_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_f16m1x7_tu(vd, rs1, rs2, 
vl); } -vint16mf4x7_t test_vlsseg7e16_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x7_t test_vlsseg7e16_v_i16mf4x7_tu(vint16mf4x7_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_i16mf4x7_tu(vd, rs1, rs2, vl); } -vint16mf2x7_t test_vlsseg7e16_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x7_t test_vlsseg7e16_v_i16mf2x7_tu(vint16mf2x7_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_i16mf2x7_tu(vd, rs1, rs2, vl); } -vint16m1x7_t test_vlsseg7e16_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x7_t test_vlsseg7e16_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_i16m1x7_tu(vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vlsseg7e16_v_u16mf4x7_tu(vuint16mf4x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x7_t test_vlsseg7e16_v_u16mf4x7_tu(vuint16mf4x7_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_u16mf4x7_tu(vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vlsseg7e16_v_u16mf2x7_tu(vuint16mf2x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x7_t test_vlsseg7e16_v_u16mf2x7_tu(vuint16mf2x7_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_u16mf2x7_tu(vd, rs1, rs2, vl); } -vuint16m1x7_t test_vlsseg7e16_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x7_t test_vlsseg7e16_v_u16m1x7_tu(vuint16m1x7_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_u16m1x7_tu(vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vlsseg7e16_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x7_t test_vlsseg7e16_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_f16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vlsseg7e16_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x7_t test_vlsseg7e16_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_f16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vlsseg7e16_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x7_t test_vlsseg7e16_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_f16m1x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vlsseg7e16_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x7_t test_vlsseg7e16_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_i16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vlsseg7e16_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x7_t test_vlsseg7e16_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_i16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vlsseg7e16_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x7_t test_vlsseg7e16_v_i16m1x7_tum(vbool16_t vm, 
vint16m1x7_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_i16m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vlsseg7e16_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x7_t test_vlsseg7e16_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_u16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vlsseg7e16_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x7_t test_vlsseg7e16_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_u16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vlsseg7e16_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x7_t test_vlsseg7e16_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_u16m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vlsseg7e16_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x7_t test_vlsseg7e16_v_f16mf4x7_tumu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_f16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vlsseg7e16_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x7_t test_vlsseg7e16_v_f16mf2x7_tumu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_f16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vlsseg7e16_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x7_t test_vlsseg7e16_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_f16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vlsseg7e16_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x7_t test_vlsseg7e16_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_i16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vlsseg7e16_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x7_t test_vlsseg7e16_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_i16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vlsseg7e16_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x7_t test_vlsseg7e16_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_i16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vlsseg7e16_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x7_t test_vlsseg7e16_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_u16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vlsseg7e16_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { 
+vuint16mf2x7_t test_vlsseg7e16_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_u16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vlsseg7e16_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x7_t test_vlsseg7e16_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_u16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vlsseg7e16_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x7_t test_vlsseg7e16_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_f16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vlsseg7e16_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x7_t test_vlsseg7e16_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_f16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vlsseg7e16_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x7_t test_vlsseg7e16_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_f16m1x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vlsseg7e16_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x7_t test_vlsseg7e16_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_i16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vlsseg7e16_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x7_t test_vlsseg7e16_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_i16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vlsseg7e16_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x7_t test_vlsseg7e16_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_i16m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vlsseg7e16_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x7_t test_vlsseg7e16_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_u16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vlsseg7e16_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x7_t test_vlsseg7e16_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_u16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vlsseg7e16_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x7_t test_vlsseg7e16_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_u16m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e32.c 
index 74c6b4416..dc78b6aae 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e32.c @@ -6,98 +6,145 @@ #include -vfloat32mf2x7_t test_vlsseg7e32_v_f32mf2x7_tu(vfloat32mf2x7_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x7_t test_vlsseg7e32_v_f32mf2x7_tu(vfloat32mf2x7_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_f32mf2x7_tu(vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vlsseg7e32_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x7_t test_vlsseg7e32_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e32_v_f32m1x7_tu(vd, rs1, rs2, vl); } -vint32mf2x7_t test_vlsseg7e32_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x7_t test_vlsseg7e32_v_i32mf2x7_tu(vint32mf2x7_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_i32mf2x7_tu(vd, rs1, rs2, vl); } -vint32m1x7_t test_vlsseg7e32_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x7_t test_vlsseg7e32_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e32_v_i32m1x7_tu(vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vlsseg7e32_v_u32mf2x7_tu(vuint32mf2x7_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x7_t test_vlsseg7e32_v_u32mf2x7_tu(vuint32mf2x7_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_u32mf2x7_tu(vd, rs1, rs2, vl); } -vuint32m1x7_t test_vlsseg7e32_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x7_t test_vlsseg7e32_v_u32m1x7_tu(vuint32m1x7_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_u32m1x7_tu(vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vlsseg7e32_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x7_t test_vlsseg7e32_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_f32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vlsseg7e32_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x7_t test_vlsseg7e32_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_f32m1x7_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vlsseg7e32_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x7_t test_vlsseg7e32_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_i32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vlsseg7e32_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x7_t test_vlsseg7e32_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_i32m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vlsseg7e32_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x7_t test_vlsseg7e32_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e32_v_u32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t 
test_vlsseg7e32_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x7_t test_vlsseg7e32_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_u32m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vlsseg7e32_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x7_t test_vlsseg7e32_v_f32mf2x7_tumu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_f32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vlsseg7e32_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x7_t test_vlsseg7e32_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_f32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vlsseg7e32_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x7_t test_vlsseg7e32_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_i32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vlsseg7e32_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x7_t test_vlsseg7e32_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_i32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vlsseg7e32_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x7_t test_vlsseg7e32_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e32_v_u32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vlsseg7e32_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x7_t test_vlsseg7e32_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_u32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vlsseg7e32_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x7_t test_vlsseg7e32_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_f32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vlsseg7e32_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x7_t test_vlsseg7e32_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_f32m1x7_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vlsseg7e32_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x7_t test_vlsseg7e32_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_i32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vlsseg7e32_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x7_t test_vlsseg7e32_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_i32m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t 
test_vlsseg7e32_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x7_t test_vlsseg7e32_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_u32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vlsseg7e32_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x7_t test_vlsseg7e32_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e32_v_u32m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e64.c index 913cd8518..c49153e51 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e64.c @@ -6,50 +6,73 @@ #include -vfloat64m1x7_t test_vlsseg7e64_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x7_t test_vlsseg7e64_v_f64m1x7_tu(vfloat64m1x7_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e64_v_f64m1x7_tu(vd, rs1, rs2, vl); } -vint64m1x7_t test_vlsseg7e64_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x7_t test_vlsseg7e64_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e64_v_i64m1x7_tu(vd, rs1, rs2, vl); } -vuint64m1x7_t test_vlsseg7e64_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x7_t test_vlsseg7e64_v_u64m1x7_tu(vuint64m1x7_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e64_v_u64m1x7_tu(vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vlsseg7e64_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x7_t test_vlsseg7e64_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e64_v_f64m1x7_tum(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vlsseg7e64_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x7_t test_vlsseg7e64_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e64_v_i64m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vlsseg7e64_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x7_t test_vlsseg7e64_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e64_v_u64m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vlsseg7e64_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x7_t test_vlsseg7e64_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e64_v_f64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vlsseg7e64_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x7_t test_vlsseg7e64_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e64_v_i64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vlsseg7e64_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x7_t 
test_vlsseg7e64_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e64_v_u64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vlsseg7e64_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x7_t test_vlsseg7e64_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e64_v_f64m1x7_mu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vlsseg7e64_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x7_t test_vlsseg7e64_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e64_v_i64m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vlsseg7e64_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x7_t test_vlsseg7e64_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e64_v_u64m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e8.c index ff635dcdc..c206ae290 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e8.c @@ -5,130 +5,186 @@ #include -vint8mf8x7_t test_vlsseg7e8_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x7_t test_vlsseg7e8_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e8_v_i8mf8x7_tu(vd, rs1, rs2, vl); } -vint8mf4x7_t test_vlsseg7e8_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x7_t test_vlsseg7e8_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e8_v_i8mf4x7_tu(vd, rs1, rs2, vl); } -vint8mf2x7_t test_vlsseg7e8_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x7_t test_vlsseg7e8_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e8_v_i8mf2x7_tu(vd, rs1, rs2, vl); } -vint8m1x7_t test_vlsseg7e8_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x7_t test_vlsseg7e8_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e8_v_i8m1x7_tu(vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vlsseg7e8_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x7_t test_vlsseg7e8_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e8_v_u8mf8x7_tu(vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vlsseg7e8_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x7_t test_vlsseg7e8_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e8_v_u8mf4x7_tu(vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vlsseg7e8_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x7_t test_vlsseg7e8_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e8_v_u8mf2x7_tu(vd, rs1, rs2, vl); } -vuint8m1x7_t test_vlsseg7e8_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x7_t test_vlsseg7e8_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, + 
ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e8_v_u8m1x7_tu(vd, rs1, rs2, vl); } -vint8mf8x7_t test_vlsseg7e8_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x7_t test_vlsseg7e8_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vlsseg7e8_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x7_t test_vlsseg7e8_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vlsseg7e8_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x7_t test_vlsseg7e8_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vlsseg7e8_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x7_t test_vlsseg7e8_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vlsseg7e8_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x7_t test_vlsseg7e8_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vlsseg7e8_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x7_t test_vlsseg7e8_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vlsseg7e8_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x7_t test_vlsseg7e8_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vlsseg7e8_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x7_t test_vlsseg7e8_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8m1x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vlsseg7e8_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x7_t test_vlsseg7e8_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vlsseg7e8_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x7_t test_vlsseg7e8_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vlsseg7e8_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x7_t test_vlsseg7e8_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t 
test_vlsseg7e8_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x7_t test_vlsseg7e8_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vlsseg7e8_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x7_t test_vlsseg7e8_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vlsseg7e8_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x7_t test_vlsseg7e8_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vlsseg7e8_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x7_t test_vlsseg7e8_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vlsseg7e8_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x7_t test_vlsseg7e8_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vlsseg7e8_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x7_t test_vlsseg7e8_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vlsseg7e8_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x7_t test_vlsseg7e8_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vlsseg7e8_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x7_t test_vlsseg7e8_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vlsseg7e8_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x7_t test_vlsseg7e8_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_i8m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vlsseg7e8_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x7_t test_vlsseg7e8_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vlsseg7e8_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x7_t test_vlsseg7e8_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vlsseg7e8_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x7_t 
test_vlsseg7e8_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vlsseg7e8_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x7_t test_vlsseg7e8_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e8_v_u8m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e16.c index 93402af40..4c224afd9 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e16.c @@ -6,146 +6,219 @@ #include -vfloat16mf4x8_t test_vlsseg8e16_v_f16mf4x8_tu(vfloat16mf4x8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x8_t test_vlsseg8e16_v_f16mf4x8_tu(vfloat16mf4x8_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_f16mf4x8_tu(vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vlsseg8e16_v_f16mf2x8_tu(vfloat16mf2x8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x8_t test_vlsseg8e16_v_f16mf2x8_tu(vfloat16mf2x8_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_f16mf2x8_tu(vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vlsseg8e16_v_f16m1x8_tu(vfloat16m1x8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x8_t test_vlsseg8e16_v_f16m1x8_tu(vfloat16m1x8_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_f16m1x8_tu(vd, rs1, rs2, vl); } -vint16mf4x8_t test_vlsseg8e16_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x8_t test_vlsseg8e16_v_i16mf4x8_tu(vint16mf4x8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_i16mf4x8_tu(vd, rs1, rs2, vl); } -vint16mf2x8_t test_vlsseg8e16_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x8_t test_vlsseg8e16_v_i16mf2x8_tu(vint16mf2x8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_i16mf2x8_tu(vd, rs1, rs2, vl); } -vint16m1x8_t test_vlsseg8e16_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x8_t test_vlsseg8e16_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_i16m1x8_tu(vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vlsseg8e16_v_u16mf4x8_tu(vuint16mf4x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x8_t test_vlsseg8e16_v_u16mf4x8_tu(vuint16mf4x8_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_u16mf4x8_tu(vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vlsseg8e16_v_u16mf2x8_tu(vuint16mf2x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x8_t test_vlsseg8e16_v_u16mf2x8_tu(vuint16mf2x8_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_u16mf2x8_tu(vd, rs1, rs2, vl); } -vuint16m1x8_t test_vlsseg8e16_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x8_t test_vlsseg8e16_v_u16m1x8_tu(vuint16m1x8_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_u16m1x8_tu(vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vlsseg8e16_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x8_t 
test_vlsseg8e16_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_f16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vlsseg8e16_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x8_t test_vlsseg8e16_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_f16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vlsseg8e16_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x8_t test_vlsseg8e16_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_f16m1x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vlsseg8e16_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x8_t test_vlsseg8e16_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_i16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vlsseg8e16_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x8_t test_vlsseg8e16_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_i16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vlsseg8e16_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x8_t test_vlsseg8e16_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_i16m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vlsseg8e16_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x8_t test_vlsseg8e16_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_u16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vlsseg8e16_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x8_t test_vlsseg8e16_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_u16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vlsseg8e16_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x8_t test_vlsseg8e16_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_u16m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vlsseg8e16_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x8_t test_vlsseg8e16_v_f16mf4x8_tumu(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_f16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vlsseg8e16_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x8_t test_vlsseg8e16_v_f16mf2x8_tumu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_f16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vlsseg8e16_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, 
ptrdiff_t rs2, size_t vl) { +vfloat16m1x8_t test_vlsseg8e16_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_f16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vlsseg8e16_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x8_t test_vlsseg8e16_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_i16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vlsseg8e16_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x8_t test_vlsseg8e16_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_i16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vlsseg8e16_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x8_t test_vlsseg8e16_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_i16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vlsseg8e16_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x8_t test_vlsseg8e16_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_u16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vlsseg8e16_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x8_t test_vlsseg8e16_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_u16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vlsseg8e16_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x8_t test_vlsseg8e16_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_u16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vlsseg8e16_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf4x8_t test_vlsseg8e16_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_f16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vlsseg8e16_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16mf2x8_t test_vlsseg8e16_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, + const _Float16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_f16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vlsseg8e16_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, ptrdiff_t rs2, size_t vl) { +vfloat16m1x8_t test_vlsseg8e16_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_f16m1x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vlsseg8e16_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf4x8_t test_vlsseg8e16_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_i16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vlsseg8e16_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t 
vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16mf2x8_t test_vlsseg8e16_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_i16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vlsseg8e16_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, ptrdiff_t rs2, size_t vl) { +vint16m1x8_t test_vlsseg8e16_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_i16m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vlsseg8e16_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf4x8_t test_vlsseg8e16_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_u16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vlsseg8e16_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16mf2x8_t test_vlsseg8e16_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_u16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vlsseg8e16_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint16m1x8_t test_vlsseg8e16_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_u16m1x8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e32.c index 869989f1c..26d718bc1 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e32.c @@ -6,98 +6,145 @@ #include -vfloat32mf2x8_t test_vlsseg8e32_v_f32mf2x8_tu(vfloat32mf2x8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x8_t test_vlsseg8e32_v_f32mf2x8_tu(vfloat32mf2x8_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_f32mf2x8_tu(vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vlsseg8e32_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x8_t test_vlsseg8e32_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e32_v_f32m1x8_tu(vd, rs1, rs2, vl); } -vint32mf2x8_t test_vlsseg8e32_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x8_t test_vlsseg8e32_v_i32mf2x8_tu(vint32mf2x8_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_i32mf2x8_tu(vd, rs1, rs2, vl); } -vint32m1x8_t test_vlsseg8e32_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x8_t test_vlsseg8e32_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e32_v_i32m1x8_tu(vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vlsseg8e32_v_u32mf2x8_tu(vuint32mf2x8_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x8_t test_vlsseg8e32_v_u32mf2x8_tu(vuint32mf2x8_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_u32mf2x8_tu(vd, rs1, rs2, vl); } -vuint32m1x8_t test_vlsseg8e32_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x8_t test_vlsseg8e32_v_u32m1x8_tu(vuint32m1x8_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_u32m1x8_tu(vd, rs1, rs2, vl); 
} -vfloat32mf2x8_t test_vlsseg8e32_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x8_t test_vlsseg8e32_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_f32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vlsseg8e32_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x8_t test_vlsseg8e32_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_f32m1x8_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vlsseg8e32_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x8_t test_vlsseg8e32_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_i32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vlsseg8e32_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x8_t test_vlsseg8e32_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_i32m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vlsseg8e32_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x8_t test_vlsseg8e32_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e32_v_u32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vlsseg8e32_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x8_t test_vlsseg8e32_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_u32m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vlsseg8e32_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x8_t test_vlsseg8e32_v_f32mf2x8_tumu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_f32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vlsseg8e32_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x8_t test_vlsseg8e32_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_f32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vlsseg8e32_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x8_t test_vlsseg8e32_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_i32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vlsseg8e32_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x8_t test_vlsseg8e32_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_i32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vlsseg8e32_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x8_t test_vlsseg8e32_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e32_v_u32mf2x8_tumu(vm, vd, rs1, rs2, 
vl); } -vuint32m1x8_t test_vlsseg8e32_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x8_t test_vlsseg8e32_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_u32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vlsseg8e32_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32mf2x8_t test_vlsseg8e32_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_f32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vlsseg8e32_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, ptrdiff_t rs2, size_t vl) { +vfloat32m1x8_t test_vlsseg8e32_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_f32m1x8_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vlsseg8e32_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32mf2x8_t test_vlsseg8e32_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_i32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vlsseg8e32_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, ptrdiff_t rs2, size_t vl) { +vint32m1x8_t test_vlsseg8e32_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_i32m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vlsseg8e32_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32mf2x8_t test_vlsseg8e32_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_u32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vlsseg8e32_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint32m1x8_t test_vlsseg8e32_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e32_v_u32m1x8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e64.c index b56e01673..96162fe24 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e64.c @@ -6,50 +6,73 @@ #include -vfloat64m1x8_t test_vlsseg8e64_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x8_t test_vlsseg8e64_v_f64m1x8_tu(vfloat64m1x8_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e64_v_f64m1x8_tu(vd, rs1, rs2, vl); } -vint64m1x8_t test_vlsseg8e64_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x8_t test_vlsseg8e64_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e64_v_i64m1x8_tu(vd, rs1, rs2, vl); } -vuint64m1x8_t test_vlsseg8e64_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x8_t test_vlsseg8e64_v_u64m1x8_tu(vuint64m1x8_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e64_v_u64m1x8_tu(vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vlsseg8e64_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x8_t 
test_vlsseg8e64_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e64_v_f64m1x8_tum(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vlsseg8e64_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x8_t test_vlsseg8e64_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e64_v_i64m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vlsseg8e64_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x8_t test_vlsseg8e64_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e64_v_u64m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vlsseg8e64_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x8_t test_vlsseg8e64_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e64_v_f64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vlsseg8e64_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x8_t test_vlsseg8e64_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e64_v_i64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vlsseg8e64_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x8_t test_vlsseg8e64_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e64_v_u64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vlsseg8e64_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, ptrdiff_t rs2, size_t vl) { +vfloat64m1x8_t test_vlsseg8e64_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e64_v_f64m1x8_mu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vlsseg8e64_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, ptrdiff_t rs2, size_t vl) { +vint64m1x8_t test_vlsseg8e64_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e64_v_i64m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vlsseg8e64_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint64m1x8_t test_vlsseg8e64_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e64_v_u64m1x8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e8.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e8.c index 6f2e63124..8a833e78d 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e8.c @@ -5,130 +5,186 @@ #include -vint8mf8x8_t test_vlsseg8e8_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x8_t test_vlsseg8e8_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e8_v_i8mf8x8_tu(vd, rs1, rs2, vl); } -vint8mf4x8_t test_vlsseg8e8_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x8_t test_vlsseg8e8_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return 
__riscv_vlsseg8e8_v_i8mf4x8_tu(vd, rs1, rs2, vl); } -vint8mf2x8_t test_vlsseg8e8_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x8_t test_vlsseg8e8_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e8_v_i8mf2x8_tu(vd, rs1, rs2, vl); } -vint8m1x8_t test_vlsseg8e8_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x8_t test_vlsseg8e8_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e8_v_i8m1x8_tu(vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vlsseg8e8_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x8_t test_vlsseg8e8_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e8_v_u8mf8x8_tu(vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vlsseg8e8_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x8_t test_vlsseg8e8_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e8_v_u8mf4x8_tu(vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vlsseg8e8_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x8_t test_vlsseg8e8_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e8_v_u8mf2x8_tu(vd, rs1, rs2, vl); } -vuint8m1x8_t test_vlsseg8e8_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x8_t test_vlsseg8e8_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e8_v_u8m1x8_tu(vd, rs1, rs2, vl); } -vint8mf8x8_t test_vlsseg8e8_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x8_t test_vlsseg8e8_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vlsseg8e8_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x8_t test_vlsseg8e8_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vlsseg8e8_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x8_t test_vlsseg8e8_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vlsseg8e8_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x8_t test_vlsseg8e8_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vlsseg8e8_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x8_t test_vlsseg8e8_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vlsseg8e8_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x8_t test_vlsseg8e8_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8mf4x8_tum(vm, vd, rs1, 
rs2, vl); } -vuint8mf2x8_t test_vlsseg8e8_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x8_t test_vlsseg8e8_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vlsseg8e8_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x8_t test_vlsseg8e8_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8m1x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vlsseg8e8_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x8_t test_vlsseg8e8_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vlsseg8e8_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf4x8_t test_vlsseg8e8_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vlsseg8e8_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x8_t test_vlsseg8e8_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vlsseg8e8_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x8_t test_vlsseg8e8_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vlsseg8e8_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x8_t test_vlsseg8e8_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vlsseg8e8_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x8_t test_vlsseg8e8_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vlsseg8e8_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x8_t test_vlsseg8e8_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vlsseg8e8_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x8_t test_vlsseg8e8_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vlsseg8e8_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf8x8_t test_vlsseg8e8_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vlsseg8e8_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, ptrdiff_t 
rs2, size_t vl) { +vint8mf4x8_t test_vlsseg8e8_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vlsseg8e8_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8mf2x8_t test_vlsseg8e8_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vlsseg8e8_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, ptrdiff_t rs2, size_t vl) { +vint8m1x8_t test_vlsseg8e8_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_i8m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vlsseg8e8_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf8x8_t test_vlsseg8e8_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vlsseg8e8_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf4x8_t test_vlsseg8e8_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vlsseg8e8_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8mf2x8_t test_vlsseg8e8_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vlsseg8e8_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, ptrdiff_t rs2, size_t vl) { +vuint8m1x8_t test_vlsseg8e8_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e8_v_u8m1x8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxei16.c index 5c345894f..5b1122b4e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxei16.c @@ -6,914 +6,1307 @@ #include -vfloat16mf4_t test_vluxei16_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4_t test_vluxei16_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_f16mf4_tu(vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei16_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei16_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_f16mf2_tu(vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei16_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1_t test_vluxei16_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_f16m1_tu(vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei16_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2_t test_vluxei16_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_f16m2_tu(vd, rs1, rs2, vl); } -vfloat16m4_t test_vluxei16_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, vuint16m4_t 
rs2, size_t vl) { +vfloat16m4_t test_vluxei16_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxei16_v_f16m4_tu(vd, rs1, rs2, vl); } -vfloat16m8_t test_vluxei16_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, vuint16m8_t rs2, size_t vl) { +vfloat16m8_t test_vluxei16_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vluxei16_v_f16m8_tu(vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei16_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei16_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_f32mf2_tu(vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei16_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1_t test_vluxei16_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_f32m1_tu(vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei16_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2_t test_vluxei16_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_f32m2_tu(vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei16_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4_t test_vluxei16_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_f32m4_tu(vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei16_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, vuint16m4_t rs2, size_t vl) { +vfloat32m8_t test_vluxei16_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxei16_v_f32m8_tu(vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei16_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1_t test_vluxei16_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_f64m1_tu(vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei16_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2_t test_vluxei16_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_f64m2_tu(vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei16_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4_t test_vluxei16_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_f64m4_tu(vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei16_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, vuint16m2_t rs2, size_t vl) { +vfloat64m8_t test_vluxei16_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_f64m8_tu(vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei16_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8_t test_vluxei16_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_i8mf8_tu(vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei16_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4_t test_vluxei16_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_i8mf4_tu(vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei16_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2_t test_vluxei16_v_i8mf2_tu(vint8mf2_t vd, const 
int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_i8mf2_tu(vd, rs1, rs2, vl); } -vint8m1_t test_vluxei16_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1_t test_vluxei16_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_i8m1_tu(vd, rs1, rs2, vl); } -vint8m2_t test_vluxei16_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2_t test_vluxei16_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxei16_v_i8m2_tu(vd, rs1, rs2, vl); } -vint8m4_t test_vluxei16_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4_t test_vluxei16_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vluxei16_v_i8m4_tu(vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei16_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4_t test_vluxei16_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_i16mf4_tu(vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei16_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2_t test_vluxei16_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_i16mf2_tu(vd, rs1, rs2, vl); } -vint16m1_t test_vluxei16_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1_t test_vluxei16_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_i16m1_tu(vd, rs1, rs2, vl); } -vint16m2_t test_vluxei16_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2_t test_vluxei16_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_i16m2_tu(vd, rs1, rs2, vl); } -vint16m4_t test_vluxei16_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4_t test_vluxei16_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxei16_v_i16m4_tu(vd, rs1, rs2, vl); } -vint16m8_t test_vluxei16_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, vuint16m8_t rs2, size_t vl) { +vint16m8_t test_vluxei16_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vluxei16_v_i16m8_tu(vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei16_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2_t test_vluxei16_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_i32mf2_tu(vd, rs1, rs2, vl); } -vint32m1_t test_vluxei16_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1_t test_vluxei16_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_i32m1_tu(vd, rs1, rs2, vl); } -vint32m2_t test_vluxei16_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2_t test_vluxei16_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_i32m2_tu(vd, rs1, rs2, vl); } -vint32m4_t test_vluxei16_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4_t test_vluxei16_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_i32m4_tu(vd, rs1, rs2, vl); } -vint32m8_t 
test_vluxei16_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, vuint16m4_t rs2, size_t vl) { +vint32m8_t test_vluxei16_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxei16_v_i32m8_tu(vd, rs1, rs2, vl); } -vint64m1_t test_vluxei16_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1_t test_vluxei16_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_i64m1_tu(vd, rs1, rs2, vl); } -vint64m2_t test_vluxei16_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2_t test_vluxei16_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_i64m2_tu(vd, rs1, rs2, vl); } -vint64m4_t test_vluxei16_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4_t test_vluxei16_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_i64m4_tu(vd, rs1, rs2, vl); } -vint64m8_t test_vluxei16_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, vuint16m2_t rs2, size_t vl) { +vint64m8_t test_vluxei16_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_i64m8_tu(vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei16_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8_t test_vluxei16_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_u8mf8_tu(vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei16_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4_t test_vluxei16_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_u8mf4_tu(vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei16_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2_t test_vluxei16_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_u8mf2_tu(vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei16_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1_t test_vluxei16_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_u8m1_tu(vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei16_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2_t test_vluxei16_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxei16_v_u8m2_tu(vd, rs1, rs2, vl); } -vuint8m4_t test_vluxei16_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4_t test_vluxei16_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vluxei16_v_u8m4_tu(vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei16_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4_t test_vluxei16_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_u16mf4_tu(vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei16_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2_t test_vluxei16_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_u16mf2_tu(vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei16_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1_t 
test_vluxei16_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_u16m1_tu(vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei16_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2_t test_vluxei16_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_u16m2_tu(vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei16_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4_t test_vluxei16_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxei16_v_u16m4_tu(vd, rs1, rs2, vl); } -vuint16m8_t test_vluxei16_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint16m8_t test_vluxei16_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vluxei16_v_u16m8_tu(vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei16_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2_t test_vluxei16_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_u32mf2_tu(vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei16_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1_t test_vluxei16_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_u32m1_tu(vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei16_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2_t test_vluxei16_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_u32m2_tu(vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei16_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4_t test_vluxei16_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_u32m4_tu(vd, rs1, rs2, vl); } -vuint32m8_t test_vluxei16_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint32m8_t test_vluxei16_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxei16_v_u32m8_tu(vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei16_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1_t test_vluxei16_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_u64m1_tu(vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei16_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2_t test_vluxei16_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_u64m2_tu(vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei16_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4_t test_vluxei16_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_u64m4_tu(vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei16_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint64m8_t test_vluxei16_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_u64m8_tu(vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei16_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4_t 
test_vluxei16_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16mf4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei16_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei16_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei16_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1_t test_vluxei16_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16m1_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei16_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2_t test_vluxei16_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16m2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vluxei16_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4_t test_vluxei16_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16m4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vluxei16_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint16m8_t rs2, size_t vl) { +vfloat16m8_t test_vluxei16_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16m8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei16_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei16_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei16_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1_t test_vluxei16_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m1_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei16_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2_t test_vluxei16_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei16_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4_t test_vluxei16_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei16_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint16m4_t rs2, size_t vl) { +vfloat32m8_t test_vluxei16_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei16_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1_t test_vluxei16_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m1_tum(vm, vd, rs1, rs2, vl); } 
-vfloat64m2_t test_vluxei16_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2_t test_vluxei16_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei16_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4_t test_vluxei16_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei16_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint16m2_t rs2, size_t vl) { +vfloat64m8_t test_vluxei16_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei16_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8_t test_vluxei16_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8mf8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei16_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4_t test_vluxei16_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8mf4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei16_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2_t test_vluxei16_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8mf2_tum(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei16_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1_t test_vluxei16_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_i8m1_tum(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vluxei16_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2_t test_vluxei16_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxei16_v_i8m2_tum(vm, vd, rs1, rs2, vl); } -vint8m4_t test_vluxei16_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4_t test_vluxei16_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vluxei16_v_i8m4_tum(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei16_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4_t test_vluxei16_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16mf4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei16_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2_t test_vluxei16_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16mf2_tum(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vluxei16_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1_t test_vluxei16_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return 
__riscv_vluxei16_v_i16m1_tum(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei16_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2_t test_vluxei16_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16m2_tum(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vluxei16_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4_t test_vluxei16_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16m4_tum(vm, vd, rs1, rs2, vl); } -vint16m8_t test_vluxei16_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint16m8_t rs2, size_t vl) { +vint16m8_t test_vluxei16_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16m8_tum(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei16_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2_t test_vluxei16_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32mf2_tum(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei16_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1_t test_vluxei16_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m1_tum(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei16_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2_t test_vluxei16_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m2_tum(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei16_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4_t test_vluxei16_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m4_tum(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vluxei16_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint16m4_t rs2, size_t vl) { +vint32m8_t test_vluxei16_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m8_tum(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei16_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1_t test_vluxei16_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m1_tum(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei16_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2_t test_vluxei16_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m2_tum(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei16_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4_t test_vluxei16_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m4_tum(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vluxei16_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint16m2_t rs2, size_t vl) { +vint64m8_t test_vluxei16_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, 
vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei16_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8_t test_vluxei16_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8mf8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei16_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4_t test_vluxei16_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8mf4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei16_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2_t test_vluxei16_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8mf2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei16_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1_t test_vluxei16_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8m1_tum(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei16_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2_t test_vluxei16_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8m2_tum(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vluxei16_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4_t test_vluxei16_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8m4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei16_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4_t test_vluxei16_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16mf4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei16_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2_t test_vluxei16_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16mf2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei16_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1_t test_vluxei16_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m1_tum(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei16_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2_t test_vluxei16_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei16_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4_t test_vluxei16_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m4_tum(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vluxei16_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint16m8_t rs2, size_t vl) { 
+vuint16m8_t test_vluxei16_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m8_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei16_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2_t test_vluxei16_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32mf2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei16_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1_t test_vluxei16_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32m1_tum(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei16_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2_t test_vluxei16_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32m2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei16_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4_t test_vluxei16_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32m4_tum(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vluxei16_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint32m8_t test_vluxei16_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32m8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei16_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1_t test_vluxei16_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u64m1_tum(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei16_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2_t test_vluxei16_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u64m2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei16_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4_t test_vluxei16_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_u64m4_tum(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei16_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint64m8_t test_vluxei16_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u64m8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei16_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4_t test_vluxei16_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16mf4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei16_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei16_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint16mf2_t rs2, + size_t vl) { return 
__riscv_vluxei16_v_f16mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei16_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1_t test_vluxei16_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei16_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2_t test_vluxei16_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vluxei16_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4_t test_vluxei16_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vluxei16_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint16m8_t rs2, size_t vl) { +vfloat16m8_t test_vluxei16_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei16_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei16_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei16_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1_t test_vluxei16_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei16_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2_t test_vluxei16_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei16_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4_t test_vluxei16_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei16_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint16m4_t rs2, size_t vl) { +vfloat32m8_t test_vluxei16_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei16_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1_t test_vluxei16_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei16_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2_t test_vluxei16_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei16_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, 
vuint16m1_t rs2, size_t vl) { +vfloat64m4_t test_vluxei16_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei16_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint16m2_t rs2, size_t vl) { +vfloat64m8_t test_vluxei16_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei16_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8_t test_vluxei16_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8mf8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei16_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4_t test_vluxei16_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8mf4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei16_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2_t test_vluxei16_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8mf2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei16_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1_t test_vluxei16_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8m1_tumu(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vluxei16_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2_t test_vluxei16_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8m2_tumu(vm, vd, rs1, rs2, vl); } -vint8m4_t test_vluxei16_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4_t test_vluxei16_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, + const int8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8m4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei16_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4_t test_vluxei16_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16mf4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei16_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2_t test_vluxei16_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16mf2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vluxei16_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1_t test_vluxei16_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16m1_tumu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei16_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2_t test_vluxei16_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16m2_tumu(vm, vd, rs1, rs2, vl); } -vint16m4_t 
test_vluxei16_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4_t test_vluxei16_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16m4_tumu(vm, vd, rs1, rs2, vl); } -vint16m8_t test_vluxei16_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint16m8_t rs2, size_t vl) { +vint16m8_t test_vluxei16_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16m8_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei16_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2_t test_vluxei16_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32mf2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei16_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1_t test_vluxei16_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m1_tumu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei16_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2_t test_vluxei16_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei16_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4_t test_vluxei16_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m4_tumu(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vluxei16_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint16m4_t rs2, size_t vl) { +vint32m8_t test_vluxei16_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei16_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1_t test_vluxei16_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m1_tumu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei16_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2_t test_vluxei16_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei16_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4_t test_vluxei16_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m4_tumu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vluxei16_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint16m2_t rs2, size_t vl) { +vint64m8_t test_vluxei16_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei16_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8_t test_vluxei16_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint16mf4_t rs2, + size_t 
vl) { return __riscv_vluxei16_v_u8mf8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei16_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4_t test_vluxei16_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei16_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2_t test_vluxei16_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei16_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1_t test_vluxei16_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8m1_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei16_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2_t test_vluxei16_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8m2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vluxei16_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4_t test_vluxei16_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8m4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei16_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4_t test_vluxei16_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei16_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2_t test_vluxei16_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei16_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1_t test_vluxei16_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m1_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei16_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2_t test_vluxei16_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei16_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4_t test_vluxei16_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vluxei16_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint16m8_t test_vluxei16_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m8_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei16_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t 
vl) { +vuint32mf2_t test_vluxei16_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei16_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1_t test_vluxei16_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32m1_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei16_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2_t test_vluxei16_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32m2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei16_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4_t test_vluxei16_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32m4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vluxei16_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint32m8_t test_vluxei16_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32m8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei16_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1_t test_vluxei16_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u64m1_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei16_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2_t test_vluxei16_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u64m2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei16_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4_t test_vluxei16_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_u64m4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei16_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint64m8_t test_vluxei16_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u64m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei16_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4_t test_vluxei16_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16mf4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei16_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei16_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei16_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1_t test_vluxei16_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint16m1_t rs2, + size_t vl) { return 
__riscv_vluxei16_v_f16m1_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei16_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2_t test_vluxei16_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16m2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vluxei16_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4_t test_vluxei16_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16m4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vluxei16_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint16m8_t rs2, size_t vl) { +vfloat16m8_t test_vluxei16_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_f16m8_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei16_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei16_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei16_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1_t test_vluxei16_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m1_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei16_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2_t test_vluxei16_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei16_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4_t test_vluxei16_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei16_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint16m4_t rs2, size_t vl) { +vfloat32m8_t test_vluxei16_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f32m8_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei16_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1_t test_vluxei16_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m1_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei16_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2_t test_vluxei16_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei16_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4_t test_vluxei16_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei16_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint16m2_t rs2, size_t vl) { +vfloat64m8_t test_vluxei16_v_f64m8_mu(vbool8_t vm, 
vfloat64m8_t vd, + const double *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_f64m8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei16_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8_t test_vluxei16_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8mf8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei16_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4_t test_vluxei16_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8mf4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei16_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2_t test_vluxei16_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_i8mf2_mu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei16_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1_t test_vluxei16_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_i8m1_mu(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vluxei16_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2_t test_vluxei16_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxei16_v_i8m2_mu(vm, vd, rs1, rs2, vl); } -vint8m4_t test_vluxei16_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4_t test_vluxei16_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vluxei16_v_i8m4_mu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei16_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4_t test_vluxei16_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16mf4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei16_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2_t test_vluxei16_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16mf2_mu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vluxei16_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1_t test_vluxei16_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16m1_mu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei16_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2_t test_vluxei16_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16m2_mu(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vluxei16_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4_t test_vluxei16_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16m4_mu(vm, vd, rs1, rs2, vl); } -vint16m8_t test_vluxei16_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint16m8_t rs2, size_t vl) { +vint16m8_t test_vluxei16_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, + const int16_t 
*rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_i16m8_mu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei16_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2_t test_vluxei16_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32mf2_mu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei16_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1_t test_vluxei16_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m1_mu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei16_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2_t test_vluxei16_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m2_mu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei16_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4_t test_vluxei16_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m4_mu(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vluxei16_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint16m4_t rs2, size_t vl) { +vint32m8_t test_vluxei16_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i32m8_mu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei16_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1_t test_vluxei16_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m1_mu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei16_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2_t test_vluxei16_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m2_mu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei16_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4_t test_vluxei16_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m4_mu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vluxei16_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint16m2_t rs2, size_t vl) { +vint64m8_t test_vluxei16_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_i64m8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei16_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8_t test_vluxei16_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8mf8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei16_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4_t test_vluxei16_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8mf4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei16_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2_t test_vluxei16_v_u8mf2_mu(vbool16_t vm, 
vuint8mf2_t vd, + const uint8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8mf2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei16_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1_t test_vluxei16_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8m1_mu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei16_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2_t test_vluxei16_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8m2_mu(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vluxei16_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4_t test_vluxei16_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_u8m4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei16_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4_t test_vluxei16_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16mf4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei16_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2_t test_vluxei16_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16mf2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei16_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1_t test_vluxei16_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m1_mu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei16_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2_t test_vluxei16_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei16_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4_t test_vluxei16_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m4_mu(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vluxei16_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint16m8_t test_vluxei16_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_u16m8_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei16_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2_t test_vluxei16_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32mf2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei16_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1_t test_vluxei16_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_u32m1_mu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei16_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t 
vl) {
+vuint32m2_t test_vluxei16_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+                                     const uint32_t *rs1, vuint16m1_t rs2,
+                                     size_t vl) {
   return __riscv_vluxei16_v_u32m2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m4_t test_vluxei16_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint32m4_t test_vluxei16_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+                                     const uint32_t *rs1, vuint16m2_t rs2,
+                                     size_t vl) {
   return __riscv_vluxei16_v_u32m4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m8_t test_vluxei16_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint16m4_t rs2, size_t vl) {
+vuint32m8_t test_vluxei16_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+                                     const uint32_t *rs1, vuint16m4_t rs2,
+                                     size_t vl) {
   return __riscv_vluxei16_v_u32m8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1_t test_vluxei16_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1_t test_vluxei16_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                     const uint64_t *rs1, vuint16mf4_t rs2,
+                                     size_t vl) {
   return __riscv_vluxei16_v_u64m1_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m2_t test_vluxei16_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint64m2_t test_vluxei16_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                     const uint64_t *rs1, vuint16mf2_t rs2,
+                                     size_t vl) {
   return __riscv_vluxei16_v_u64m2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m4_t test_vluxei16_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint64m4_t test_vluxei16_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                     const uint64_t *rs1, vuint16m1_t rs2,
+                                     size_t vl) {
   return __riscv_vluxei16_v_u64m4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m8_t test_vluxei16_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint64m8_t test_vluxei16_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                     const uint64_t *rs1, vuint16m2_t rs2,
+                                     size_t vl) {
   return __riscv_vluxei16_v_u64m8_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxei32.c
index a89e67483..04ce6c287 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxei32.c
@@ -6,834 +6,1194 @@
 #include <riscv_vector.h>
 
-vfloat16mf4_t test_vluxei32_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4_t test_vluxei32_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1,
+                                        vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxei32_v_f16mf4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2_t test_vluxei32_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2_t test_vluxei32_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1,
+                                        vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxei32_v_f16mf2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1_t test_vluxei32_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1_t test_vluxei32_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1,
+                                      vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxei32_v_f16m1_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m2_t test_vluxei32_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat16m2_t test_vluxei32_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1,
+                                      vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxei32_v_f16m2_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m4_t test_vluxei32_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) {
+vfloat16m4_t
test_vluxei32_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxei32_v_f16m4_tu(vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei32_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei32_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxei32_v_f32mf2_tu(vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei32_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1_t test_vluxei32_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxei32_v_f32m1_tu(vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei32_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2_t test_vluxei32_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxei32_v_f32m2_tu(vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei32_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4_t test_vluxei32_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxei32_v_f32m4_tu(vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei32_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, vuint32m8_t rs2, size_t vl) { +vfloat32m8_t test_vluxei32_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxei32_v_f32m8_tu(vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei32_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1_t test_vluxei32_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxei32_v_f64m1_tu(vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei32_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2_t test_vluxei32_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxei32_v_f64m2_tu(vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei32_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4_t test_vluxei32_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxei32_v_f64m4_tu(vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei32_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, vuint32m4_t rs2, size_t vl) { +vfloat64m8_t test_vluxei32_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxei32_v_f64m8_tu(vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei32_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8_t test_vluxei32_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxei32_v_i8mf8_tu(vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei32_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4_t test_vluxei32_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxei32_v_i8mf4_tu(vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei32_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2_t test_vluxei32_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxei32_v_i8mf2_tu(vd, rs1, rs2, vl); } -vint8m1_t test_vluxei32_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1_t test_vluxei32_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return 
__riscv_vluxei32_v_i8m1_tu(vd, rs1, rs2, vl); } -vint8m2_t test_vluxei32_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2_t test_vluxei32_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxei32_v_i8m2_tu(vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei32_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4_t test_vluxei32_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxei32_v_i16mf4_tu(vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei32_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2_t test_vluxei32_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxei32_v_i16mf2_tu(vd, rs1, rs2, vl); } -vint16m1_t test_vluxei32_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1_t test_vluxei32_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxei32_v_i16m1_tu(vd, rs1, rs2, vl); } -vint16m2_t test_vluxei32_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2_t test_vluxei32_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxei32_v_i16m2_tu(vd, rs1, rs2, vl); } -vint16m4_t test_vluxei32_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4_t test_vluxei32_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxei32_v_i16m4_tu(vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei32_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2_t test_vluxei32_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxei32_v_i32mf2_tu(vd, rs1, rs2, vl); } -vint32m1_t test_vluxei32_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1_t test_vluxei32_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxei32_v_i32m1_tu(vd, rs1, rs2, vl); } -vint32m2_t test_vluxei32_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2_t test_vluxei32_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxei32_v_i32m2_tu(vd, rs1, rs2, vl); } -vint32m4_t test_vluxei32_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4_t test_vluxei32_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxei32_v_i32m4_tu(vd, rs1, rs2, vl); } -vint32m8_t test_vluxei32_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, vuint32m8_t rs2, size_t vl) { +vint32m8_t test_vluxei32_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxei32_v_i32m8_tu(vd, rs1, rs2, vl); } -vint64m1_t test_vluxei32_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1_t test_vluxei32_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxei32_v_i64m1_tu(vd, rs1, rs2, vl); } -vint64m2_t test_vluxei32_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2_t test_vluxei32_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxei32_v_i64m2_tu(vd, rs1, rs2, vl); } -vint64m4_t test_vluxei32_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, 
vuint32m2_t rs2, size_t vl) { +vint64m4_t test_vluxei32_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxei32_v_i64m4_tu(vd, rs1, rs2, vl); } -vint64m8_t test_vluxei32_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, vuint32m4_t rs2, size_t vl) { +vint64m8_t test_vluxei32_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxei32_v_i64m8_tu(vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei32_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8_t test_vluxei32_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxei32_v_u8mf8_tu(vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei32_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4_t test_vluxei32_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxei32_v_u8mf4_tu(vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei32_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2_t test_vluxei32_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxei32_v_u8mf2_tu(vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei32_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1_t test_vluxei32_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxei32_v_u8m1_tu(vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei32_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2_t test_vluxei32_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxei32_v_u8m2_tu(vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei32_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4_t test_vluxei32_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxei32_v_u16mf4_tu(vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei32_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2_t test_vluxei32_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxei32_v_u16mf2_tu(vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei32_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1_t test_vluxei32_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxei32_v_u16m1_tu(vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei32_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2_t test_vluxei32_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxei32_v_u16m2_tu(vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei32_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4_t test_vluxei32_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxei32_v_u16m4_tu(vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei32_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2_t test_vluxei32_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxei32_v_u32mf2_tu(vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei32_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1_t 
test_vluxei32_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxei32_v_u32m1_tu(vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei32_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2_t test_vluxei32_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxei32_v_u32m2_tu(vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei32_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4_t test_vluxei32_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxei32_v_u32m4_tu(vd, rs1, rs2, vl); } -vuint32m8_t test_vluxei32_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint32m8_t test_vluxei32_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxei32_v_u32m8_tu(vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei32_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1_t test_vluxei32_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxei32_v_u64m1_tu(vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei32_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2_t test_vluxei32_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxei32_v_u64m2_tu(vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei32_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4_t test_vluxei32_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxei32_v_u64m4_tu(vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei32_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint64m8_t test_vluxei32_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxei32_v_u64m8_tu(vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei32_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4_t test_vluxei32_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16mf4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei32_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei32_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei32_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1_t test_vluxei32_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16m1_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei32_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2_t test_vluxei32_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16m2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vluxei32_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4_t test_vluxei32_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint32m8_t rs2, + size_t vl) { return 
__riscv_vluxei32_v_f16m4_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei32_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei32_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei32_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1_t test_vluxei32_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m1_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei32_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2_t test_vluxei32_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei32_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4_t test_vluxei32_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei32_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint32m8_t rs2, size_t vl) { +vfloat32m8_t test_vluxei32_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei32_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1_t test_vluxei32_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m1_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei32_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2_t test_vluxei32_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei32_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4_t test_vluxei32_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei32_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint32m4_t rs2, size_t vl) { +vfloat64m8_t test_vluxei32_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei32_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8_t test_vluxei32_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i8mf8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei32_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4_t test_vluxei32_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i8mf4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei32_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2_t test_vluxei32_v_i8mf2_tum(vbool16_t 
vm, vint8mf2_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i8mf2_tum(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei32_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1_t test_vluxei32_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxei32_v_i8m1_tum(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vluxei32_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2_t test_vluxei32_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxei32_v_i8m2_tum(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei32_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4_t test_vluxei32_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16mf4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei32_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2_t test_vluxei32_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16mf2_tum(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vluxei32_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1_t test_vluxei32_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16m1_tum(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei32_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2_t test_vluxei32_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16m2_tum(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vluxei32_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4_t test_vluxei32_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16m4_tum(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei32_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2_t test_vluxei32_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32mf2_tum(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei32_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1_t test_vluxei32_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m1_tum(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei32_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2_t test_vluxei32_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m2_tum(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei32_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4_t test_vluxei32_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m4_tum(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vluxei32_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint32m8_t rs2, size_t vl) { +vint32m8_t 
test_vluxei32_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m8_tum(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei32_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1_t test_vluxei32_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i64m1_tum(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei32_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2_t test_vluxei32_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i64m2_tum(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei32_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4_t test_vluxei32_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i64m4_tum(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vluxei32_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint32m4_t rs2, size_t vl) { +vint64m8_t test_vluxei32_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_i64m8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei32_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8_t test_vluxei32_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8mf8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei32_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4_t test_vluxei32_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8mf4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei32_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2_t test_vluxei32_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8mf2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei32_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1_t test_vluxei32_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8m1_tum(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei32_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2_t test_vluxei32_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8m2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei32_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4_t test_vluxei32_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16mf4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei32_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2_t test_vluxei32_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16mf2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei32_v_u16m1_tum(vbool16_t vm, vuint16m1_t 
vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1_t test_vluxei32_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16m1_tum(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei32_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2_t test_vluxei32_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16m2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei32_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4_t test_vluxei32_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16m4_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei32_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2_t test_vluxei32_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32mf2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei32_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1_t test_vluxei32_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m1_tum(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei32_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2_t test_vluxei32_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei32_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4_t test_vluxei32_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m4_tum(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vluxei32_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint32m8_t test_vluxei32_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei32_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1_t test_vluxei32_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u64m1_tum(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei32_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2_t test_vluxei32_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_u64m2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei32_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4_t test_vluxei32_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u64m4_tum(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei32_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint64m8_t test_vluxei32_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint32m4_t rs2, + size_t vl) { return 
__riscv_vluxei32_v_u64m8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei32_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4_t test_vluxei32_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16mf4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei32_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei32_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei32_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1_t test_vluxei32_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei32_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2_t test_vluxei32_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vluxei32_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4_t test_vluxei32_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei32_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei32_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei32_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1_t test_vluxei32_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei32_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2_t test_vluxei32_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei32_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4_t test_vluxei32_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei32_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint32m8_t rs2, size_t vl) { +vfloat32m8_t test_vluxei32_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei32_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1_t test_vluxei32_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei32_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, const 
double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2_t test_vluxei32_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei32_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4_t test_vluxei32_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei32_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint32m4_t rs2, size_t vl) { +vfloat64m8_t test_vluxei32_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei32_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8_t test_vluxei32_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i8mf8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei32_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4_t test_vluxei32_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i8mf4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei32_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2_t test_vluxei32_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i8mf2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei32_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1_t test_vluxei32_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_i8m1_tumu(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vluxei32_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2_t test_vluxei32_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_i8m2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei32_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4_t test_vluxei32_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16mf4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei32_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2_t test_vluxei32_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16mf2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vluxei32_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1_t test_vluxei32_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16m1_tumu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei32_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2_t test_vluxei32_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16m2_tumu(vm, vd, rs1, rs2, vl); } 
-vint16m4_t test_vluxei32_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4_t test_vluxei32_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16m4_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei32_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2_t test_vluxei32_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32mf2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei32_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1_t test_vluxei32_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m1_tumu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei32_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2_t test_vluxei32_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei32_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4_t test_vluxei32_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m4_tumu(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vluxei32_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint32m8_t rs2, size_t vl) { +vint32m8_t test_vluxei32_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei32_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1_t test_vluxei32_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i64m1_tumu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei32_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2_t test_vluxei32_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i64m2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei32_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4_t test_vluxei32_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i64m4_tumu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vluxei32_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint32m4_t rs2, size_t vl) { +vint64m8_t test_vluxei32_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_i64m8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei32_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8_t test_vluxei32_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8mf8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei32_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4_t test_vluxei32_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint32m1_t 
rs2, + size_t vl) { return __riscv_vluxei32_v_u8mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei32_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2_t test_vluxei32_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei32_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1_t test_vluxei32_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8m1_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei32_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2_t test_vluxei32_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8m2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei32_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4_t test_vluxei32_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei32_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2_t test_vluxei32_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei32_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1_t test_vluxei32_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16m1_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei32_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2_t test_vluxei32_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16m2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei32_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4_t test_vluxei32_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16m4_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei32_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2_t test_vluxei32_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei32_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1_t test_vluxei32_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m1_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei32_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2_t test_vluxei32_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei32_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, const uint32_t 
*rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4_t test_vluxei32_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vluxei32_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint32m8_t test_vluxei32_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei32_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1_t test_vluxei32_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u64m1_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei32_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2_t test_vluxei32_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_u64m2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei32_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4_t test_vluxei32_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u64m4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei32_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint64m8_t test_vluxei32_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_u64m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei32_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4_t test_vluxei32_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16mf4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei32_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei32_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei32_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1_t test_vluxei32_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16m1_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei32_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2_t test_vluxei32_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16m2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vluxei32_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4_t test_vluxei32_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_f16m4_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei32_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei32_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint32mf2_t rs2, + size_t vl) { 
return __riscv_vluxei32_v_f32mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei32_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1_t test_vluxei32_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m1_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei32_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2_t test_vluxei32_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei32_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4_t test_vluxei32_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei32_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint32m8_t rs2, size_t vl) { +vfloat32m8_t test_vluxei32_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_f32m8_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei32_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1_t test_vluxei32_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m1_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei32_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2_t test_vluxei32_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei32_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4_t test_vluxei32_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei32_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint32m4_t rs2, size_t vl) { +vfloat64m8_t test_vluxei32_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_f64m8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei32_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8_t test_vluxei32_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i8mf8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei32_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4_t test_vluxei32_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i8mf4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei32_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2_t test_vluxei32_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i8mf2_mu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei32_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1_t test_vluxei32_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint32m4_t 
rs2, size_t vl) { return __riscv_vluxei32_v_i8m1_mu(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vluxei32_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2_t test_vluxei32_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxei32_v_i8m2_mu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei32_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4_t test_vluxei32_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16mf4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei32_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2_t test_vluxei32_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16mf2_mu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vluxei32_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1_t test_vluxei32_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16m1_mu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei32_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2_t test_vluxei32_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16m2_mu(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vluxei32_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4_t test_vluxei32_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_i16m4_mu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei32_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2_t test_vluxei32_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32mf2_mu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei32_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1_t test_vluxei32_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m1_mu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei32_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2_t test_vluxei32_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m2_mu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei32_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4_t test_vluxei32_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m4_mu(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vluxei32_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint32m8_t rs2, size_t vl) { +vint32m8_t test_vluxei32_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_i32m8_mu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei32_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1_t test_vluxei32_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint32mf2_t 
rs2, + size_t vl) { return __riscv_vluxei32_v_i64m1_mu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei32_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2_t test_vluxei32_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_i64m2_mu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei32_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4_t test_vluxei32_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_i64m4_mu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vluxei32_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint32m4_t rs2, size_t vl) { +vint64m8_t test_vluxei32_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_i64m8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei32_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8_t test_vluxei32_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8mf8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei32_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4_t test_vluxei32_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8mf4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei32_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2_t test_vluxei32_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8mf2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei32_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1_t test_vluxei32_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8m1_mu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei32_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2_t test_vluxei32_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_u8m2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei32_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4_t test_vluxei32_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16mf4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei32_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2_t test_vluxei32_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16mf2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei32_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1_t test_vluxei32_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16m1_mu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei32_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2_t test_vluxei32_v_u16m2_mu(vbool8_t vm, vuint16m2_t 
vd, + const uint16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16m2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei32_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4_t test_vluxei32_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_u16m4_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei32_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2_t test_vluxei32_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32mf2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei32_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1_t test_vluxei32_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m1_mu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei32_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2_t test_vluxei32_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m2_mu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei32_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4_t test_vluxei32_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m4_mu(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vluxei32_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint32m8_t test_vluxei32_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxei32_v_u32m8_mu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei32_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1_t test_vluxei32_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u64m1_mu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei32_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2_t test_vluxei32_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxei32_v_u64m2_mu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei32_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4_t test_vluxei32_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxei32_v_u64m4_mu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei32_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint64m8_t test_vluxei32_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxei32_v_u64m8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxei64.c index 612861828..85491fc0e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxei64.c @@ -6,706 +6,1012 @@ #include <riscv_vector.h> -vfloat16mf4_t test_vluxei64_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t
vl) { +vfloat16mf4_t test_vluxei64_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxei64_v_f16mf4_tu(vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei64_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei64_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxei64_v_f16mf2_tu(vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei64_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1_t test_vluxei64_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxei64_v_f16m1_tu(vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei64_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2_t test_vluxei64_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_f16m2_tu(vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei64_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei64_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxei64_v_f32mf2_tu(vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei64_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1_t test_vluxei64_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxei64_v_f32m1_tu(vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei64_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2_t test_vluxei64_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxei64_v_f32m2_tu(vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei64_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4_t test_vluxei64_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_f32m4_tu(vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei64_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1_t test_vluxei64_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxei64_v_f64m1_tu(vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei64_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2_t test_vluxei64_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxei64_v_f64m2_tu(vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei64_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4_t test_vluxei64_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxei64_v_f64m4_tu(vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei64_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, vuint64m8_t rs2, size_t vl) { +vfloat64m8_t test_vluxei64_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_f64m8_tu(vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei64_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8_t test_vluxei64_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxei64_v_i8mf8_tu(vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei64_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4_t test_vluxei64_v_i8mf4_tu(vint8mf4_t vd, 
const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxei64_v_i8mf4_tu(vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei64_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2_t test_vluxei64_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxei64_v_i8mf2_tu(vd, rs1, rs2, vl); } -vint8m1_t test_vluxei64_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1_t test_vluxei64_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_i8m1_tu(vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei64_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4_t test_vluxei64_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxei64_v_i16mf4_tu(vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei64_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2_t test_vluxei64_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxei64_v_i16mf2_tu(vd, rs1, rs2, vl); } -vint16m1_t test_vluxei64_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1_t test_vluxei64_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxei64_v_i16m1_tu(vd, rs1, rs2, vl); } -vint16m2_t test_vluxei64_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2_t test_vluxei64_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_i16m2_tu(vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei64_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2_t test_vluxei64_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxei64_v_i32mf2_tu(vd, rs1, rs2, vl); } -vint32m1_t test_vluxei64_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1_t test_vluxei64_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxei64_v_i32m1_tu(vd, rs1, rs2, vl); } -vint32m2_t test_vluxei64_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2_t test_vluxei64_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxei64_v_i32m2_tu(vd, rs1, rs2, vl); } -vint32m4_t test_vluxei64_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) { +vint32m4_t test_vluxei64_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_i32m4_tu(vd, rs1, rs2, vl); } -vint64m1_t test_vluxei64_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1_t test_vluxei64_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxei64_v_i64m1_tu(vd, rs1, rs2, vl); } -vint64m2_t test_vluxei64_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2_t test_vluxei64_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxei64_v_i64m2_tu(vd, rs1, rs2, vl); } -vint64m4_t test_vluxei64_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) { +vint64m4_t test_vluxei64_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxei64_v_i64m4_tu(vd, rs1, rs2, vl); } -vint64m8_t 
test_vluxei64_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, vuint64m8_t rs2, size_t vl) { +vint64m8_t test_vluxei64_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_i64m8_tu(vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei64_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8_t test_vluxei64_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxei64_v_u8mf8_tu(vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei64_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4_t test_vluxei64_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxei64_v_u8mf4_tu(vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei64_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2_t test_vluxei64_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxei64_v_u8mf2_tu(vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei64_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1_t test_vluxei64_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_u8m1_tu(vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei64_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4_t test_vluxei64_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxei64_v_u16mf4_tu(vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei64_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2_t test_vluxei64_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxei64_v_u16mf2_tu(vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei64_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1_t test_vluxei64_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxei64_v_u16m1_tu(vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei64_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2_t test_vluxei64_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_u16m2_tu(vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei64_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2_t test_vluxei64_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxei64_v_u32mf2_tu(vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei64_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1_t test_vluxei64_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxei64_v_u32m1_tu(vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei64_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2_t test_vluxei64_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxei64_v_u32m2_tu(vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei64_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint32m4_t test_vluxei64_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_u32m4_tu(vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei64_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, 
vuint64m1_t rs2, size_t vl) { +vuint64m1_t test_vluxei64_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxei64_v_u64m1_tu(vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei64_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2_t test_vluxei64_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxei64_v_u64m2_tu(vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei64_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint64m4_t test_vluxei64_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxei64_v_u64m4_tu(vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei64_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint64m8_t test_vluxei64_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_u64m8_tu(vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei64_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4_t test_vluxei64_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16mf4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei64_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei64_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei64_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1_t test_vluxei64_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16m1_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei64_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2_t test_vluxei64_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16m2_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei64_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei64_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei64_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1_t test_vluxei64_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32m1_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei64_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2_t test_vluxei64_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32m2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei64_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4_t test_vluxei64_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32m4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei64_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, const double *rs1, 
vuint64m1_t rs2, size_t vl) { +vfloat64m1_t test_vluxei64_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_f64m1_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei64_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2_t test_vluxei64_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_f64m2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei64_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4_t test_vluxei64_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_f64m4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei64_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint64m8_t rs2, size_t vl) { +vfloat64m8_t test_vluxei64_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_f64m8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei64_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8_t test_vluxei64_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i8mf8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei64_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4_t test_vluxei64_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i8mf4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei64_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2_t test_vluxei64_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i8mf2_tum(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei64_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1_t test_vluxei64_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_i8m1_tum(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei64_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4_t test_vluxei64_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16mf4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei64_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2_t test_vluxei64_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16mf2_tum(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vluxei64_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1_t test_vluxei64_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16m1_tum(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei64_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2_t test_vluxei64_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16m2_tum(vm, vd, rs1, rs2, vl); } -vint32mf2_t 
test_vluxei64_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2_t test_vluxei64_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32mf2_tum(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei64_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1_t test_vluxei64_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32m1_tum(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei64_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2_t test_vluxei64_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32m2_tum(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei64_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) { +vint32m4_t test_vluxei64_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32m4_tum(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei64_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1_t test_vluxei64_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m1_tum(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei64_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2_t test_vluxei64_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m2_tum(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei64_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) { +vint64m4_t test_vluxei64_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m4_tum(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vluxei64_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint64m8_t rs2, size_t vl) { +vint64m8_t test_vluxei64_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei64_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8_t test_vluxei64_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u8mf8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei64_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4_t test_vluxei64_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u8mf4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei64_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2_t test_vluxei64_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u8mf2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei64_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1_t test_vluxei64_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return 
__riscv_vluxei64_v_u8m1_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei64_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4_t test_vluxei64_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16mf4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei64_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2_t test_vluxei64_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16mf2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei64_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1_t test_vluxei64_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16m1_tum(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei64_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2_t test_vluxei64_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16m2_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei64_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2_t test_vluxei64_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32mf2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei64_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1_t test_vluxei64_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32m1_tum(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei64_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2_t test_vluxei64_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32m2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei64_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint32m4_t test_vluxei64_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32m4_tum(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei64_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1_t test_vluxei64_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m1_tum(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei64_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2_t test_vluxei64_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei64_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint64m4_t test_vluxei64_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m4_tum(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei64_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint64m8_t rs2, size_t vl) { 
+vuint64m8_t test_vluxei64_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei64_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4_t test_vluxei64_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16mf4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei64_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei64_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei64_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1_t test_vluxei64_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei64_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2_t test_vluxei64_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei64_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei64_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei64_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1_t test_vluxei64_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei64_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2_t test_vluxei64_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei64_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4_t test_vluxei64_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei64_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1_t test_vluxei64_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_f64m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei64_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2_t test_vluxei64_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_f64m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei64_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4_t test_vluxei64_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint64m4_t rs2, + size_t vl) { return 
__riscv_vluxei64_v_f64m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei64_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint64m8_t rs2, size_t vl) { +vfloat64m8_t test_vluxei64_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_f64m8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei64_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8_t test_vluxei64_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i8mf8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei64_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4_t test_vluxei64_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i8mf4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei64_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2_t test_vluxei64_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i8mf2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei64_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1_t test_vluxei64_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_i8m1_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei64_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4_t test_vluxei64_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16mf4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei64_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2_t test_vluxei64_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16mf2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vluxei64_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1_t test_vluxei64_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16m1_tumu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei64_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2_t test_vluxei64_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16m2_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei64_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2_t test_vluxei64_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32mf2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei64_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1_t test_vluxei64_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32m1_tumu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei64_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2_t test_vluxei64_v_i32m2_tumu(vbool16_t vm, 
vint32m2_t vd, + const int32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32m2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei64_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) { +vint32m4_t test_vluxei64_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32m4_tumu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei64_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1_t test_vluxei64_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m1_tumu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei64_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2_t test_vluxei64_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei64_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) { +vint64m4_t test_vluxei64_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m4_tumu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vluxei64_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint64m8_t rs2, size_t vl) { +vint64m8_t test_vluxei64_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei64_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8_t test_vluxei64_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u8mf8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei64_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4_t test_vluxei64_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u8mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei64_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2_t test_vluxei64_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u8mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei64_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1_t test_vluxei64_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_u8m1_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei64_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4_t test_vluxei64_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei64_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2_t test_vluxei64_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei64_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, 
const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1_t test_vluxei64_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16m1_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei64_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2_t test_vluxei64_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16m2_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei64_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2_t test_vluxei64_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei64_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1_t test_vluxei64_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32m1_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei64_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2_t test_vluxei64_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32m2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei64_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint32m4_t test_vluxei64_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32m4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei64_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1_t test_vluxei64_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m1_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei64_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2_t test_vluxei64_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei64_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint64m4_t test_vluxei64_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei64_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint64m8_t test_vluxei64_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei64_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4_t test_vluxei64_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16mf4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei64_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei64_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, 
vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei64_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1_t test_vluxei64_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16m1_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei64_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2_t test_vluxei64_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_f16m2_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei64_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei64_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei64_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1_t test_vluxei64_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32m1_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei64_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2_t test_vluxei64_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32m2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei64_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4_t test_vluxei64_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_f32m4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei64_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1_t test_vluxei64_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_f64m1_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei64_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2_t test_vluxei64_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_f64m2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei64_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4_t test_vluxei64_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_f64m4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei64_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint64m8_t rs2, size_t vl) { +vfloat64m8_t test_vluxei64_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_f64m8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei64_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8_t test_vluxei64_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i8mf8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei64_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4_t 
test_vluxei64_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i8mf4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei64_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2_t test_vluxei64_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i8mf2_mu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei64_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1_t test_vluxei64_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxei64_v_i8m1_mu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei64_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4_t test_vluxei64_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16mf4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei64_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2_t test_vluxei64_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16mf2_mu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vluxei64_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1_t test_vluxei64_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16m1_mu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei64_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2_t test_vluxei64_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_i16m2_mu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei64_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2_t test_vluxei64_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32mf2_mu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei64_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1_t test_vluxei64_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32m1_mu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei64_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2_t test_vluxei64_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32m2_mu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei64_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) { +vint32m4_t test_vluxei64_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_i32m4_mu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei64_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1_t test_vluxei64_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m1_mu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei64_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2_t 
test_vluxei64_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m2_mu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei64_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) { +vint64m4_t test_vluxei64_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m4_mu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vluxei64_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint64m8_t rs2, size_t vl) { +vint64m8_t test_vluxei64_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_i64m8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei64_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8_t test_vluxei64_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u8mf8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei64_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4_t test_vluxei64_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u8mf4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei64_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2_t test_vluxei64_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u8mf2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei64_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1_t test_vluxei64_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_u8m1_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei64_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4_t test_vluxei64_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16mf4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei64_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2_t test_vluxei64_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16mf2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei64_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1_t test_vluxei64_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16m1_mu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei64_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2_t test_vluxei64_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_u16m2_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei64_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2_t test_vluxei64_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32mf2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei64_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, const 
uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1_t test_vluxei64_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32m1_mu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei64_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2_t test_vluxei64_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32m2_mu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei64_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint32m4_t test_vluxei64_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_u32m4_mu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei64_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1_t test_vluxei64_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m1_mu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei64_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2_t test_vluxei64_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m2_mu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei64_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint64m4_t test_vluxei64_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m4_mu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei64_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint64m8_t test_vluxei64_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxei64_v_u64m8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxei8.c index 9cc4c0958..2ee6f69fa 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxei8.c @@ -6,946 +6,1347 @@ #include -vfloat16mf4_t test_vluxei8_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4_t test_vluxei8_v_f16mf4_tu(vfloat16mf4_t vd, const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxei8_v_f16mf4_tu(vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei8_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei8_v_f16mf2_tu(vfloat16mf2_t vd, const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxei8_v_f16mf2_tu(vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei8_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1_t test_vluxei8_v_f16m1_tu(vfloat16m1_t vd, const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxei8_v_f16m1_tu(vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei8_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2_t test_vluxei8_v_f16m2_tu(vfloat16m2_t vd, const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_f16m2_tu(vd, rs1, rs2, vl); } -vfloat16m4_t test_vluxei8_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) { +vfloat16m4_t 
test_vluxei8_v_f16m4_tu(vfloat16m4_t vd, const _Float16 *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxei8_v_f16m4_tu(vd, rs1, rs2, vl); } -vfloat16m8_t test_vluxei8_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, vuint8m4_t rs2, size_t vl) { +vfloat16m8_t test_vluxei8_v_f16m8_tu(vfloat16m8_t vd, const _Float16 *rs1, + vuint8m4_t rs2, size_t vl) { return __riscv_vluxei8_v_f16m8_tu(vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei8_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei8_v_f32mf2_tu(vfloat32mf2_t vd, const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxei8_v_f32mf2_tu(vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei8_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1_t test_vluxei8_v_f32m1_tu(vfloat32m1_t vd, const float *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxei8_v_f32m1_tu(vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei8_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2_t test_vluxei8_v_f32m2_tu(vfloat32m2_t vd, const float *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxei8_v_f32m2_tu(vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei8_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) { +vfloat32m4_t test_vluxei8_v_f32m4_tu(vfloat32m4_t vd, const float *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_f32m4_tu(vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei8_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, vuint8m2_t rs2, size_t vl) { +vfloat32m8_t test_vluxei8_v_f32m8_tu(vfloat32m8_t vd, const float *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxei8_v_f32m8_tu(vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei8_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1_t test_vluxei8_v_f64m1_tu(vfloat64m1_t vd, const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxei8_v_f64m1_tu(vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei8_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2_t test_vluxei8_v_f64m2_tu(vfloat64m2_t vd, const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxei8_v_f64m2_tu(vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei8_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat64m4_t test_vluxei8_v_f64m4_tu(vfloat64m4_t vd, const double *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxei8_v_f64m4_tu(vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei8_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, vuint8m1_t rs2, size_t vl) { +vfloat64m8_t test_vluxei8_v_f64m8_tu(vfloat64m8_t vd, const double *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_f64m8_tu(vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei8_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8_t test_vluxei8_v_i8mf8_tu(vint8mf8_t vd, const int8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxei8_v_i8mf8_tu(vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei8_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4_t test_vluxei8_v_i8mf4_tu(vint8mf4_t vd, const int8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxei8_v_i8mf4_tu(vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei8_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2_t test_vluxei8_v_i8mf2_tu(vint8mf2_t vd, const int8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxei8_v_i8mf2_tu(vd, rs1, rs2, 
vl); } -vint8m1_t test_vluxei8_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1_t test_vluxei8_v_i8m1_tu(vint8m1_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m1_tu(vd, rs1, rs2, vl); } -vint8m2_t test_vluxei8_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2_t test_vluxei8_v_i8m2_tu(vint8m2_t vd, const int8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m2_tu(vd, rs1, rs2, vl); } -vint8m4_t test_vluxei8_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) { +vint8m4_t test_vluxei8_v_i8m4_tu(vint8m4_t vd, const int8_t *rs1, + vuint8m4_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m4_tu(vd, rs1, rs2, vl); } -vint8m8_t test_vluxei8_v_i8m8_tu(vint8m8_t vd, const int8_t *rs1, vuint8m8_t rs2, size_t vl) { +vint8m8_t test_vluxei8_v_i8m8_tu(vint8m8_t vd, const int8_t *rs1, + vuint8m8_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m8_tu(vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei8_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4_t test_vluxei8_v_i16mf4_tu(vint16mf4_t vd, const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxei8_v_i16mf4_tu(vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei8_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2_t test_vluxei8_v_i16mf2_tu(vint16mf2_t vd, const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxei8_v_i16mf2_tu(vd, rs1, rs2, vl); } -vint16m1_t test_vluxei8_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1_t test_vluxei8_v_i16m1_tu(vint16m1_t vd, const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxei8_v_i16m1_tu(vd, rs1, rs2, vl); } -vint16m2_t test_vluxei8_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2_t test_vluxei8_v_i16m2_tu(vint16m2_t vd, const int16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_i16m2_tu(vd, rs1, rs2, vl); } -vint16m4_t test_vluxei8_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4_t test_vluxei8_v_i16m4_tu(vint16m4_t vd, const int16_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxei8_v_i16m4_tu(vd, rs1, rs2, vl); } -vint16m8_t test_vluxei8_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, vuint8m4_t rs2, size_t vl) { +vint16m8_t test_vluxei8_v_i16m8_tu(vint16m8_t vd, const int16_t *rs1, + vuint8m4_t rs2, size_t vl) { return __riscv_vluxei8_v_i16m8_tu(vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei8_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2_t test_vluxei8_v_i32mf2_tu(vint32mf2_t vd, const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxei8_v_i32mf2_tu(vd, rs1, rs2, vl); } -vint32m1_t test_vluxei8_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1_t test_vluxei8_v_i32m1_tu(vint32m1_t vd, const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxei8_v_i32m1_tu(vd, rs1, rs2, vl); } -vint32m2_t test_vluxei8_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2_t test_vluxei8_v_i32m2_tu(vint32m2_t vd, const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxei8_v_i32m2_tu(vd, rs1, rs2, vl); } -vint32m4_t test_vluxei8_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4_t test_vluxei8_v_i32m4_tu(vint32m4_t vd, const int32_t *rs1, + vuint8m1_t rs2, size_t vl) { return 
__riscv_vluxei8_v_i32m4_tu(vd, rs1, rs2, vl); } -vint32m8_t test_vluxei8_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, vuint8m2_t rs2, size_t vl) { +vint32m8_t test_vluxei8_v_i32m8_tu(vint32m8_t vd, const int32_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxei8_v_i32m8_tu(vd, rs1, rs2, vl); } -vint64m1_t test_vluxei8_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1_t test_vluxei8_v_i64m1_tu(vint64m1_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxei8_v_i64m1_tu(vd, rs1, rs2, vl); } -vint64m2_t test_vluxei8_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2_t test_vluxei8_v_i64m2_tu(vint64m2_t vd, const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxei8_v_i64m2_tu(vd, rs1, rs2, vl); } -vint64m4_t test_vluxei8_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4_t test_vluxei8_v_i64m4_tu(vint64m4_t vd, const int64_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxei8_v_i64m4_tu(vd, rs1, rs2, vl); } -vint64m8_t test_vluxei8_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, vuint8m1_t rs2, size_t vl) { +vint64m8_t test_vluxei8_v_i64m8_tu(vint64m8_t vd, const int64_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_i64m8_tu(vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei8_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8_t test_vluxei8_v_u8mf8_tu(vuint8mf8_t vd, const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxei8_v_u8mf8_tu(vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei8_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4_t test_vluxei8_v_u8mf4_tu(vuint8mf4_t vd, const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxei8_v_u8mf4_tu(vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei8_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2_t test_vluxei8_v_u8mf2_tu(vuint8mf2_t vd, const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxei8_v_u8mf2_tu(vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei8_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1_t test_vluxei8_v_u8m1_tu(vuint8m1_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_u8m1_tu(vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei8_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2_t test_vluxei8_v_u8m2_tu(vuint8m2_t vd, const uint8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxei8_v_u8m2_tu(vd, rs1, rs2, vl); } -vuint8m4_t test_vluxei8_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4_t test_vluxei8_v_u8m4_tu(vuint8m4_t vd, const uint8_t *rs1, + vuint8m4_t rs2, size_t vl) { return __riscv_vluxei8_v_u8m4_tu(vd, rs1, rs2, vl); } -vuint8m8_t test_vluxei8_v_u8m8_tu(vuint8m8_t vd, const uint8_t *rs1, vuint8m8_t rs2, size_t vl) { +vuint8m8_t test_vluxei8_v_u8m8_tu(vuint8m8_t vd, const uint8_t *rs1, + vuint8m8_t rs2, size_t vl) { return __riscv_vluxei8_v_u8m8_tu(vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei8_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4_t test_vluxei8_v_u16mf4_tu(vuint16mf4_t vd, const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxei8_v_u16mf4_tu(vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei8_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2_t 
test_vluxei8_v_u16mf2_tu(vuint16mf2_t vd, const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxei8_v_u16mf2_tu(vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei8_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1_t test_vluxei8_v_u16m1_tu(vuint16m1_t vd, const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxei8_v_u16m1_tu(vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei8_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2_t test_vluxei8_v_u16m2_tu(vuint16m2_t vd, const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_u16m2_tu(vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei8_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4_t test_vluxei8_v_u16m4_tu(vuint16m4_t vd, const uint16_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxei8_v_u16m4_tu(vd, rs1, rs2, vl); } -vuint16m8_t test_vluxei8_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint16m8_t test_vluxei8_v_u16m8_tu(vuint16m8_t vd, const uint16_t *rs1, + vuint8m4_t rs2, size_t vl) { return __riscv_vluxei8_v_u16m8_tu(vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei8_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2_t test_vluxei8_v_u32mf2_tu(vuint32mf2_t vd, const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxei8_v_u32mf2_tu(vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei8_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1_t test_vluxei8_v_u32m1_tu(vuint32m1_t vd, const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxei8_v_u32m1_tu(vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei8_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2_t test_vluxei8_v_u32m2_tu(vuint32m2_t vd, const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxei8_v_u32m2_tu(vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei8_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint32m4_t test_vluxei8_v_u32m4_tu(vuint32m4_t vd, const uint32_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_u32m4_tu(vd, rs1, rs2, vl); } -vuint32m8_t test_vluxei8_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint32m8_t test_vluxei8_v_u32m8_tu(vuint32m8_t vd, const uint32_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxei8_v_u32m8_tu(vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei8_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1_t test_vluxei8_v_u64m1_tu(vuint64m1_t vd, const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxei8_v_u64m1_tu(vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei8_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2_t test_vluxei8_v_u64m2_tu(vuint64m2_t vd, const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxei8_v_u64m2_tu(vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei8_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint64m4_t test_vluxei8_v_u64m4_tu(vuint64m4_t vd, const uint64_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxei8_v_u64m4_tu(vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei8_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint64m8_t test_vluxei8_v_u64m8_tu(vuint64m8_t vd, const uint64_t *rs1, + vuint8m1_t rs2, size_t vl) { return 
__riscv_vluxei8_v_u64m8_tu(vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei8_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4_t test_vluxei8_v_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16mf4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei8_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei8_v_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei8_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1_t test_vluxei8_v_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m1_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei8_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2_t test_vluxei8_v_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vluxei8_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) { +vfloat16m4_t test_vluxei8_v_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vluxei8_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint8m4_t rs2, size_t vl) { +vfloat16m8_t test_vluxei8_v_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei8_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei8_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32mf2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei8_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1_t test_vluxei8_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m1_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei8_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2_t test_vluxei8_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei8_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) { +vfloat32m4_t test_vluxei8_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei8_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint8m2_t rs2, size_t vl) { +vfloat32m8_t test_vluxei8_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei8_v_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1_t test_vluxei8_v_f64m1_tum(vbool64_t 
vm, vfloat64m1_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m1_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei8_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2_t test_vluxei8_v_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei8_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat64m4_t test_vluxei8_v_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei8_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint8m1_t rs2, size_t vl) { +vfloat64m8_t test_vluxei8_v_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei8_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8_t test_vluxei8_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i8mf8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei8_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4_t test_vluxei8_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i8mf4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei8_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2_t test_vluxei8_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i8mf2_tum(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei8_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1_t test_vluxei8_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m1_tum(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vluxei8_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2_t test_vluxei8_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m2_tum(vm, vd, rs1, rs2, vl); } -vint8m4_t test_vluxei8_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) { +vint8m4_t test_vluxei8_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + vuint8m4_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m4_tum(vm, vd, rs1, rs2, vl); } -vint8m8_t test_vluxei8_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, vuint8m8_t rs2, size_t vl) { +vint8m8_t test_vluxei8_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + vuint8m8_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m8_tum(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei8_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4_t test_vluxei8_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16mf4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei8_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2_t test_vluxei8_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, 
vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16mf2_tum(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vluxei8_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1_t test_vluxei8_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m1_tum(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei8_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2_t test_vluxei8_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m2_tum(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vluxei8_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4_t test_vluxei8_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m4_tum(vm, vd, rs1, rs2, vl); } -vint16m8_t test_vluxei8_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint8m4_t rs2, size_t vl) { +vint16m8_t test_vluxei8_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m8_tum(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei8_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2_t test_vluxei8_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32mf2_tum(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei8_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1_t test_vluxei8_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m1_tum(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei8_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2_t test_vluxei8_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m2_tum(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei8_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4_t test_vluxei8_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m4_tum(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vluxei8_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint8m2_t rs2, size_t vl) { +vint32m8_t test_vluxei8_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m8_tum(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei8_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1_t test_vluxei8_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i64m1_tum(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei8_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2_t test_vluxei8_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i64m2_tum(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei8_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4_t test_vluxei8_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint8mf2_t 
rs2, + size_t vl) { return __riscv_vluxei8_v_i64m4_tum(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vluxei8_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint8m1_t rs2, size_t vl) { +vint64m8_t test_vluxei8_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_i64m8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8_t test_vluxei8_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8mf8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4_t test_vluxei8_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8mf4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2_t test_vluxei8_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8mf2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1_t test_vluxei8_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m1_tum(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2_t test_vluxei8_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m2_tum(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vluxei8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4_t test_vluxei8_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m4_tum(vm, vd, rs1, rs2, vl); } -vuint8m8_t test_vluxei8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, vuint8m8_t rs2, size_t vl) { +vuint8m8_t test_vluxei8_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, + const uint8_t *rs1, vuint8m8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4_t test_vluxei8_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16mf4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2_t test_vluxei8_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16mf2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1_t test_vluxei8_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16m1_tum(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2_t test_vluxei8_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, 
vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16m2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4_t test_vluxei8_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16m4_tum(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vluxei8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint16m8_t test_vluxei8_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16m8_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2_t test_vluxei8_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32mf2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1_t test_vluxei8_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m1_tum(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2_t test_vluxei8_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint32m4_t test_vluxei8_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m4_tum(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vluxei8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint32m8_t test_vluxei8_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1_t test_vluxei8_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m1_tum(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2_t test_vluxei8_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint64m4_t test_vluxei8_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m4_tum(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint64m8_t test_vluxei8_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei8_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4_t 
test_vluxei8_v_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16mf4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei8_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei8_v_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei8_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1_t test_vluxei8_v_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei8_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2_t test_vluxei8_v_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vluxei8_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) { +vfloat16m4_t test_vluxei8_v_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vluxei8_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint8m4_t rs2, size_t vl) { +vfloat16m8_t test_vluxei8_v_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei8_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei8_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32mf2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei8_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1_t test_vluxei8_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei8_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2_t test_vluxei8_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei8_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) { +vfloat32m4_t test_vluxei8_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei8_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint8m2_t rs2, size_t vl) { +vfloat32m8_t test_vluxei8_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei8_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1_t test_vluxei8_v_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m1_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t 
test_vluxei8_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2_t test_vluxei8_v_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei8_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat64m4_t test_vluxei8_v_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei8_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint8m1_t rs2, size_t vl) { +vfloat64m8_t test_vluxei8_v_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei8_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8_t test_vluxei8_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i8mf8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei8_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4_t test_vluxei8_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i8mf4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei8_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2_t test_vluxei8_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i8mf2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei8_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1_t test_vluxei8_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m1_tumu(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vluxei8_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2_t test_vluxei8_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m2_tumu(vm, vd, rs1, rs2, vl); } -vint8m4_t test_vluxei8_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) { +vint8m4_t test_vluxei8_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + vuint8m4_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m4_tumu(vm, vd, rs1, rs2, vl); } -vint8m8_t test_vluxei8_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, vuint8m8_t rs2, size_t vl) { +vint8m8_t test_vluxei8_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + vuint8m8_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei8_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4_t test_vluxei8_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16mf4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei8_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2_t test_vluxei8_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16mf2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1_t 
test_vluxei8_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1_t test_vluxei8_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m1_tumu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei8_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2_t test_vluxei8_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m2_tumu(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vluxei8_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4_t test_vluxei8_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m4_tumu(vm, vd, rs1, rs2, vl); } -vint16m8_t test_vluxei8_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint8m4_t rs2, size_t vl) { +vint16m8_t test_vluxei8_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m8_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei8_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2_t test_vluxei8_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32mf2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei8_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1_t test_vluxei8_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m1_tumu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei8_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2_t test_vluxei8_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei8_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4_t test_vluxei8_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m4_tumu(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vluxei8_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint8m2_t rs2, size_t vl) { +vint32m8_t test_vluxei8_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei8_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1_t test_vluxei8_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i64m1_tumu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei8_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2_t test_vluxei8_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i64m2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei8_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4_t test_vluxei8_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i64m4_tumu(vm, vd, rs1, 
rs2, vl); } -vint64m8_t test_vluxei8_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint8m1_t rs2, size_t vl) { +vint64m8_t test_vluxei8_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_i64m8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8_t test_vluxei8_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8mf8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4_t test_vluxei8_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2_t test_vluxei8_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1_t test_vluxei8_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m1_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2_t test_vluxei8_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vluxei8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4_t test_vluxei8_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m8_t test_vluxei8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, vuint8m8_t rs2, size_t vl) { +vuint8m8_t test_vluxei8_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + const uint8_t *rs1, vuint8m8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4_t test_vluxei8_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16mf4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2_t test_vluxei8_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1_t test_vluxei8_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16m1_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2_t test_vluxei8_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { 
return __riscv_vluxei8_v_u16m2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4_t test_vluxei8_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16m4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vluxei8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint16m8_t test_vluxei8_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16m8_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2_t test_vluxei8_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32mf2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1_t test_vluxei8_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m1_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2_t test_vluxei8_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint32m4_t test_vluxei8_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vluxei8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint32m8_t test_vluxei8_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1_t test_vluxei8_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m1_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2_t test_vluxei8_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint64m4_t test_vluxei8_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint64m8_t test_vluxei8_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4_t test_vluxei8_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4_t 
test_vluxei8_v_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + const _Float16 *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16mf4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2_t test_vluxei8_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2_t test_vluxei8_v_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + const _Float16 *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1_t test_vluxei8_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1_t test_vluxei8_v_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + const _Float16 *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m1_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2_t test_vluxei8_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2_t test_vluxei8_v_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + const _Float16 *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4_t test_vluxei8_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) { +vfloat16m4_t test_vluxei8_v_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + const _Float16 *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m8_t test_vluxei8_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, const _Float16 *rs1, vuint8m4_t rs2, size_t vl) { +vfloat16m8_t test_vluxei8_v_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + const _Float16 *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f16m8_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2_t test_vluxei8_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2_t test_vluxei8_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + const float *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32mf2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1_t test_vluxei8_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1_t test_vluxei8_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m1_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2_t test_vluxei8_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2_t test_vluxei8_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4_t test_vluxei8_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) { +vfloat32m4_t test_vluxei8_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + const float *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m8_t test_vluxei8_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, const float *rs1, vuint8m2_t rs2, size_t vl) { +vfloat32m8_t test_vluxei8_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + const float *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f32m8_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1_t test_vluxei8_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1_t test_vluxei8_v_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m1_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2_t test_vluxei8_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, const double *rs1, 
vuint8mf4_t rs2, size_t vl) { +vfloat64m2_t test_vluxei8_v_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4_t test_vluxei8_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat64m4_t test_vluxei8_v_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + const double *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m8_t test_vluxei8_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, const double *rs1, vuint8m1_t rs2, size_t vl) { +vfloat64m8_t test_vluxei8_v_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + const double *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_f64m8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8_t test_vluxei8_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8_t test_vluxei8_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i8mf8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4_t test_vluxei8_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4_t test_vluxei8_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i8mf4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2_t test_vluxei8_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2_t test_vluxei8_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i8mf2_mu(vm, vd, rs1, rs2, vl); } -vint8m1_t test_vluxei8_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1_t test_vluxei8_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m1_mu(vm, vd, rs1, rs2, vl); } -vint8m2_t test_vluxei8_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2_t test_vluxei8_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, const int8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m2_mu(vm, vd, rs1, rs2, vl); } -vint8m4_t test_vluxei8_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) { +vint8m4_t test_vluxei8_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, const int8_t *rs1, + vuint8m4_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m4_mu(vm, vd, rs1, rs2, vl); } -vint8m8_t test_vluxei8_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, vuint8m8_t rs2, size_t vl) { +vint8m8_t test_vluxei8_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, const int8_t *rs1, + vuint8m8_t rs2, size_t vl) { return __riscv_vluxei8_v_i8m8_mu(vm, vd, rs1, rs2, vl); } -vint16mf4_t test_vluxei8_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4_t test_vluxei8_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + const int16_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16mf4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2_t test_vluxei8_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2_t test_vluxei8_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + const int16_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16mf2_mu(vm, vd, rs1, rs2, vl); } -vint16m1_t test_vluxei8_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1_t test_vluxei8_v_i16m1_mu(vbool16_t vm, 
vint16m1_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m1_mu(vm, vd, rs1, rs2, vl); } -vint16m2_t test_vluxei8_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2_t test_vluxei8_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m2_mu(vm, vd, rs1, rs2, vl); } -vint16m4_t test_vluxei8_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4_t test_vluxei8_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, + const int16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m4_mu(vm, vd, rs1, rs2, vl); } -vint16m8_t test_vluxei8_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, const int16_t *rs1, vuint8m4_t rs2, size_t vl) { +vint16m8_t test_vluxei8_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, + const int16_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i16m8_mu(vm, vd, rs1, rs2, vl); } -vint32mf2_t test_vluxei8_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2_t test_vluxei8_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + const int32_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32mf2_mu(vm, vd, rs1, rs2, vl); } -vint32m1_t test_vluxei8_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1_t test_vluxei8_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m1_mu(vm, vd, rs1, rs2, vl); } -vint32m2_t test_vluxei8_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2_t test_vluxei8_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m2_mu(vm, vd, rs1, rs2, vl); } -vint32m4_t test_vluxei8_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4_t test_vluxei8_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, + const int32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m4_mu(vm, vd, rs1, rs2, vl); } -vint32m8_t test_vluxei8_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, const int32_t *rs1, vuint8m2_t rs2, size_t vl) { +vint32m8_t test_vluxei8_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, + const int32_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i32m8_mu(vm, vd, rs1, rs2, vl); } -vint64m1_t test_vluxei8_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1_t test_vluxei8_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_i64m1_mu(vm, vd, rs1, rs2, vl); } -vint64m2_t test_vluxei8_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2_t test_vluxei8_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_i64m2_mu(vm, vd, rs1, rs2, vl); } -vint64m4_t test_vluxei8_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4_t test_vluxei8_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, + const int64_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_i64m4_mu(vm, vd, rs1, rs2, vl); } -vint64m8_t test_vluxei8_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, const int64_t *rs1, vuint8m1_t rs2, size_t vl) { +vint64m8_t test_vluxei8_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, + const int64_t *rs1, vuint8m1_t rs2, + 
size_t vl) { return __riscv_vluxei8_v_i64m8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8_t test_vluxei8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8_t test_vluxei8_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8mf8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4_t test_vluxei8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4_t test_vluxei8_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8mf4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2_t test_vluxei8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2_t test_vluxei8_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8mf2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1_t test_vluxei8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1_t test_vluxei8_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m1_mu(vm, vd, rs1, rs2, vl); } -vuint8m2_t test_vluxei8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2_t test_vluxei8_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m2_mu(vm, vd, rs1, rs2, vl); } -vuint8m4_t test_vluxei8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4_t test_vluxei8_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, + const uint8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m4_mu(vm, vd, rs1, rs2, vl); } -vuint8m8_t test_vluxei8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, const uint8_t *rs1, vuint8m8_t rs2, size_t vl) { +vuint8m8_t test_vluxei8_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, + const uint8_t *rs1, vuint8m8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u8m8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4_t test_vluxei8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4_t test_vluxei8_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + const uint16_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16mf4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2_t test_vluxei8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2_t test_vluxei8_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + const uint16_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16mf2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1_t test_vluxei8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1_t test_vluxei8_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + const uint16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16m1_mu(vm, vd, rs1, rs2, vl); } -vuint16m2_t test_vluxei8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2_t test_vluxei8_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16m2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4_t test_vluxei8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4_t test_vluxei8_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + const uint16_t *rs1, vuint8m2_t rs2, + size_t vl) { 
return __riscv_vluxei8_v_u16m4_mu(vm, vd, rs1, rs2, vl); } -vuint16m8_t test_vluxei8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, const uint16_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint16m8_t test_vluxei8_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + const uint16_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u16m8_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2_t test_vluxei8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2_t test_vluxei8_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + const uint32_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32mf2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1_t test_vluxei8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1_t test_vluxei8_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + const uint32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m1_mu(vm, vd, rs1, rs2, vl); } -vuint32m2_t test_vluxei8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2_t test_vluxei8_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + const uint32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m2_mu(vm, vd, rs1, rs2, vl); } -vuint32m4_t test_vluxei8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint32m4_t test_vluxei8_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + const uint32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m4_mu(vm, vd, rs1, rs2, vl); } -vuint32m8_t test_vluxei8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, const uint32_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint32m8_t test_vluxei8_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + const uint32_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u32m8_mu(vm, vd, rs1, rs2, vl); } -vuint64m1_t test_vluxei8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1_t test_vluxei8_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + const uint64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m1_mu(vm, vd, rs1, rs2, vl); } -vuint64m2_t test_vluxei8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2_t test_vluxei8_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + const uint64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m2_mu(vm, vd, rs1, rs2, vl); } -vuint64m4_t test_vluxei8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint64m4_t test_vluxei8_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + const uint64_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m4_mu(vm, vd, rs1, rs2, vl); } -vuint64m8_t test_vluxei8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, const uint64_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint64m8_t test_vluxei8_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + const uint64_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxei8_v_u64m8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei16.c index 7dfa549f0..2eadd13d2 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei16.c @@ -6,770 +6,1148 @@ #include -vfloat16mf4x2_t test_vluxseg2ei16_v_f16mf4x2_tu(vfloat16mf4x2_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei16_v_f16mf4x2_tu(vfloat16mf4x2_t vd, + 
const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16mf4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei16_v_f16mf2x2_tu(vfloat16mf2x2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei16_v_f16mf2x2_tu(vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16mf2x2_tu(vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vluxseg2ei16_v_f16m1x2_tu(vfloat16m1x2_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei16_v_f16m1x2_tu(vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m1x2_tu(vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei16_v_f16m2x2_tu(vfloat16m2x2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei16_v_f16m2x2_tu(vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m2x2_tu(vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vluxseg2ei16_v_f16m4x2_tu(vfloat16m4x2_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4x2_t test_vluxseg2ei16_v_f16m4x2_tu(vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m4x2_tu(vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei16_v_f32mf2x2_tu(vfloat32mf2x2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x2_t test_vluxseg2ei16_v_f32mf2x2_tu(vfloat32mf2x2_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32mf2x2_tu(vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vluxseg2ei16_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x2_t test_vluxseg2ei16_v_f32m1x2_tu(vfloat32m1x2_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32m1x2_tu(vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vluxseg2ei16_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x2_t test_vluxseg2ei16_v_f32m2x2_tu(vfloat32m2x2_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_f32m2x2_tu(vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vluxseg2ei16_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4x2_t test_vluxseg2ei16_v_f32m4x2_tu(vfloat32m4x2_t vd, + const float *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_f32m4x2_tu(vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vluxseg2ei16_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x2_t test_vluxseg2ei16_v_f64m1x2_tu(vfloat64m1x2_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m1x2_tu(vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vluxseg2ei16_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x2_t test_vluxseg2ei16_v_f64m2x2_tu(vfloat64m2x2_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m2x2_tu(vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vluxseg2ei16_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4x2_t test_vluxseg2ei16_v_f64m4x2_tu(vfloat64m4x2_t vd, + const double *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m4x2_tu(vd, rs1, rs2, vl); } -vint8mf8x2_t test_vluxseg2ei16_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x2_t 
test_vluxseg2ei16_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i8mf8x2_tu(vd, rs1, rs2, vl); } -vint8mf4x2_t test_vluxseg2ei16_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x2_t test_vluxseg2ei16_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i8mf4x2_tu(vd, rs1, rs2, vl); } -vint8mf2x2_t test_vluxseg2ei16_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x2_t test_vluxseg2ei16_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i8mf2x2_tu(vd, rs1, rs2, vl); } -vint8m1x2_t test_vluxseg2ei16_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x2_t test_vluxseg2ei16_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i8m1x2_tu(vd, rs1, rs2, vl); } -vint8m2x2_t test_vluxseg2ei16_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x2_t test_vluxseg2ei16_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i8m2x2_tu(vd, rs1, rs2, vl); } -vint8m4x2_t test_vluxseg2ei16_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4x2_t test_vluxseg2ei16_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i8m4x2_tu(vd, rs1, rs2, vl); } -vint16mf4x2_t test_vluxseg2ei16_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x2_t test_vluxseg2ei16_v_i16mf4x2_tu(vint16mf4x2_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16mf4x2_tu(vd, rs1, rs2, vl); } -vint16mf2x2_t test_vluxseg2ei16_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x2_t test_vluxseg2ei16_v_i16mf2x2_tu(vint16mf2x2_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16mf2x2_tu(vd, rs1, rs2, vl); } -vint16m1x2_t test_vluxseg2ei16_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x2_t test_vluxseg2ei16_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16m1x2_tu(vd, rs1, rs2, vl); } -vint16m2x2_t test_vluxseg2ei16_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x2_t test_vluxseg2ei16_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16m2x2_tu(vd, rs1, rs2, vl); } -vint16m4x2_t test_vluxseg2ei16_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4x2_t test_vluxseg2ei16_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16m4x2_tu(vd, rs1, rs2, vl); } -vint32mf2x2_t test_vluxseg2ei16_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei16_v_i32mf2x2_tu(vint32mf2x2_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32mf2x2_tu(vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei16_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei16_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, + vuint16mf2_t rs2, 
size_t vl) { return __riscv_vluxseg2ei16_v_i32m1x2_tu(vd, rs1, rs2, vl); } -vint32m2x2_t test_vluxseg2ei16_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei16_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32m2x2_tu(vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei16_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei16_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32m4x2_tu(vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei16_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei16_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i64m1x2_tu(vd, rs1, rs2, vl); } -vint64m2x2_t test_vluxseg2ei16_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei16_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i64m2x2_tu(vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei16_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei16_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i64m4x2_tu(vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei16_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei16_v_u8mf8x2_tu(vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf8x2_tu(vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei16_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei16_v_u8mf4x2_tu(vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf4x2_tu(vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei16_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei16_v_u8mf2x2_tu(vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf2x2_tu(vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei16_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei16_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8m1x2_tu(vd, rs1, rs2, vl); } -vuint8m2x2_t test_vluxseg2ei16_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x2_t test_vluxseg2ei16_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8m2x2_tu(vd, rs1, rs2, vl); } -vuint8m4x2_t test_vluxseg2ei16_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4x2_t test_vluxseg2ei16_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8m4x2_tu(vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei16_v_u16mf4x2_tu(vuint16mf4x2_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei16_v_u16mf4x2_tu(vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16mf4x2_tu(vd, rs1, 
rs2, vl); } -vuint16mf2x2_t test_vluxseg2ei16_v_u16mf2x2_tu(vuint16mf2x2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei16_v_u16mf2x2_tu(vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16mf2x2_tu(vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei16_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei16_v_u16m1x2_tu(vuint16m1x2_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m1x2_tu(vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei16_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x2_t test_vluxseg2ei16_v_u16m2x2_tu(vuint16m2x2_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m2x2_tu(vd, rs1, rs2, vl); } -vuint16m4x2_t test_vluxseg2ei16_v_u16m4x2_tu(vuint16m4x2_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4x2_t test_vluxseg2ei16_v_u16m4x2_tu(vuint16m4x2_t vd, + const uint16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m4x2_tu(vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei16_v_u32mf2x2_tu(vuint32mf2x2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei16_v_u32mf2x2_tu(vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32mf2x2_tu(vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei16_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei16_v_u32m1x2_tu(vuint32m1x2_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32m1x2_tu(vd, rs1, rs2, vl); } -vuint32m2x2_t test_vluxseg2ei16_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei16_v_u32m2x2_tu(vuint32m2x2_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32m2x2_tu(vd, rs1, rs2, vl); } -vuint32m4x2_t test_vluxseg2ei16_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei16_v_u32m4x2_tu(vuint32m4x2_t vd, + const uint32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32m4x2_tu(vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei16_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei16_v_u64m1x2_tu(vuint64m1x2_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m1x2_tu(vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei16_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x2_t test_vluxseg2ei16_v_u64m2x2_tu(vuint64m2x2_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m2x2_tu(vd, rs1, rs2, vl); } -vuint64m4x2_t test_vluxseg2ei16_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4x2_t test_vluxseg2ei16_v_u64m4x2_tu(vuint64m4x2_t vd, + const uint64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vluxseg2ei16_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei16_v_f16mf4x2_tum(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint16mf4_t 
rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei16_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei16_v_f16mf2x2_tum(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vluxseg2ei16_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei16_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei16_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei16_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vluxseg2ei16_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4x2_t test_vluxseg2ei16_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei16_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x2_t test_vluxseg2ei16_v_f32mf2x2_tum(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vluxseg2ei16_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x2_t test_vluxseg2ei16_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vluxseg2ei16_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x2_t test_vluxseg2ei16_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vluxseg2ei16_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4x2_t test_vluxseg2ei16_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vluxseg2ei16_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x2_t test_vluxseg2ei16_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vluxseg2ei16_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x2_t test_vluxseg2ei16_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vluxseg2ei16_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) 
{ +vfloat64m4x2_t test_vluxseg2ei16_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vluxseg2ei16_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x2_t test_vluxseg2ei16_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vluxseg2ei16_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x2_t test_vluxseg2ei16_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vluxseg2ei16_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x2_t test_vluxseg2ei16_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vluxseg2ei16_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x2_t test_vluxseg2ei16_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8m1x2_tum(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vluxseg2ei16_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x2_t test_vluxseg2ei16_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8m2x2_tum(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vluxseg2ei16_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4x2_t test_vluxseg2ei16_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8m4x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vluxseg2ei16_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x2_t test_vluxseg2ei16_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vluxseg2ei16_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x2_t test_vluxseg2ei16_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vluxseg2ei16_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x2_t test_vluxseg2ei16_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16m1x2_tum(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vluxseg2ei16_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x2_t test_vluxseg2ei16_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16m2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vluxseg2ei16_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t 
vl) { +vint16m4x2_t test_vluxseg2ei16_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16m4x2_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vluxseg2ei16_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei16_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei16_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei16_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32m1x2_tum(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vluxseg2ei16_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei16_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32m2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei16_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei16_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32m4x2_tum(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei16_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei16_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i64m1x2_tum(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vluxseg2ei16_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei16_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i64m2x2_tum(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei16_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei16_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i64m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei16_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei16_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei16_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei16_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei16_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei16_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei16_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t 
vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei16_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_u8m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vluxseg2ei16_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x2_t test_vluxseg2ei16_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_u8m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vluxseg2ei16_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4x2_t test_vluxseg2ei16_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_u8m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei16_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei16_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vluxseg2ei16_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei16_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei16_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei16_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei16_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x2_t test_vluxseg2ei16_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vluxseg2ei16_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4x2_t test_vluxseg2ei16_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei16_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei16_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei16_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei16_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vluxseg2ei16_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei16_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32m2x2_tum(vm, vd, rs1, rs2, 
vl); } -vuint32m4x2_t test_vluxseg2ei16_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei16_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei16_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei16_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei16_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x2_t test_vluxseg2ei16_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vluxseg2ei16_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4x2_t test_vluxseg2ei16_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vluxseg2ei16_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei16_v_f16mf4x2_tumu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei16_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei16_v_f16mf2x2_tumu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vluxseg2ei16_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei16_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei16_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei16_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vluxseg2ei16_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4x2_t test_vluxseg2ei16_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei16_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x2_t test_vluxseg2ei16_v_f32mf2x2_tumu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vluxseg2ei16_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x2_t 
test_vluxseg2ei16_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vluxseg2ei16_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x2_t test_vluxseg2ei16_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vluxseg2ei16_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4x2_t test_vluxseg2ei16_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vluxseg2ei16_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x2_t test_vluxseg2ei16_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vluxseg2ei16_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x2_t test_vluxseg2ei16_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vluxseg2ei16_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4x2_t test_vluxseg2ei16_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vluxseg2ei16_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x2_t test_vluxseg2ei16_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vluxseg2ei16_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x2_t test_vluxseg2ei16_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vluxseg2ei16_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x2_t test_vluxseg2ei16_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vluxseg2ei16_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x2_t test_vluxseg2ei16_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vluxseg2ei16_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x2_t test_vluxseg2ei16_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vluxseg2ei16_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, const 
int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4x2_t test_vluxseg2ei16_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vluxseg2ei16_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x2_t test_vluxseg2ei16_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vluxseg2ei16_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x2_t test_vluxseg2ei16_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vluxseg2ei16_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x2_t test_vluxseg2ei16_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vluxseg2ei16_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x2_t test_vluxseg2ei16_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vluxseg2ei16_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4x2_t test_vluxseg2ei16_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vluxseg2ei16_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei16_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei16_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei16_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vluxseg2ei16_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei16_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei16_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei16_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei16_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei16_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i64m1x2_tumu(vm, vd, rs1, rs2, vl); } 
-vint64m2x2_t test_vluxseg2ei16_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei16_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei16_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei16_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei16_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei16_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei16_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei16_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei16_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei16_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei16_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei16_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vluxseg2ei16_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x2_t test_vluxseg2ei16_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vluxseg2ei16_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4x2_t test_vluxseg2ei16_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei16_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei16_v_u16mf4x2_tumu(vbool64_t vm, + vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vluxseg2ei16_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei16_v_u16mf2x2_tumu(vbool32_t vm, + vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei16_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei16_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + 
vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei16_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x2_t test_vluxseg2ei16_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vluxseg2ei16_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4x2_t test_vluxseg2ei16_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei16_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei16_v_u32mf2x2_tumu(vbool64_t vm, + vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei16_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei16_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vluxseg2ei16_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei16_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vluxseg2ei16_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei16_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei16_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei16_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei16_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x2_t test_vluxseg2ei16_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vluxseg2ei16_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4x2_t test_vluxseg2ei16_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vluxseg2ei16_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei16_v_f16mf4x2_mu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei16_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, const 
_Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei16_v_f16mf2x2_mu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vluxseg2ei16_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei16_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei16_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei16_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vluxseg2ei16_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint16m4_t rs2, size_t vl) { +vfloat16m4x2_t test_vluxseg2ei16_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f16m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei16_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x2_t test_vluxseg2ei16_v_f32mf2x2_mu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vluxseg2ei16_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x2_t test_vluxseg2ei16_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f32m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vluxseg2ei16_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x2_t test_vluxseg2ei16_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_f32m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vluxseg2ei16_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint16m2_t rs2, size_t vl) { +vfloat32m4x2_t test_vluxseg2ei16_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_f32m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vluxseg2ei16_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x2_t test_vluxseg2ei16_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vluxseg2ei16_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x2_t test_vluxseg2ei16_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vluxseg2ei16_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint16m1_t rs2, size_t vl) { +vfloat64m4x2_t test_vluxseg2ei16_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_f64m4x2_mu(vm, vd, rs1, rs2, vl); } 
-vint8mf8x2_t test_vluxseg2ei16_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x2_t test_vluxseg2ei16_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vluxseg2ei16_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x2_t test_vluxseg2ei16_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vluxseg2ei16_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x2_t test_vluxseg2ei16_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vluxseg2ei16_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x2_t test_vluxseg2ei16_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8m1x2_mu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vluxseg2ei16_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x2_t test_vluxseg2ei16_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8m2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vluxseg2ei16_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, vuint16m8_t rs2, size_t vl) { +vint8m4x2_t test_vluxseg2ei16_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i8m4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vluxseg2ei16_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x2_t test_vluxseg2ei16_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vluxseg2ei16_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x2_t test_vluxseg2ei16_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vluxseg2ei16_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x2_t test_vluxseg2ei16_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i16m1x2_mu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vluxseg2ei16_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x2_t test_vluxseg2ei16_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i16m2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vluxseg2ei16_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint16m4_t rs2, size_t vl) { +vint16m4x2_t test_vluxseg2ei16_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i16m4x2_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t 
test_vluxseg2ei16_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei16_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei16_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei16_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i32m1x2_mu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vluxseg2ei16_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei16_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i32m2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei16_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint16m2_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei16_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i32m4x2_mu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei16_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei16_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i64m1x2_mu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vluxseg2ei16_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei16_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_i64m2x2_mu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei16_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint16m1_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei16_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_i64m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei16_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei16_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei16_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei16_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei16_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei16_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei16_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei16_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_u8m1x2_mu(vm, vd, rs1, rs2, vl); } 
-vuint8m2x2_t test_vluxseg2ei16_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x2_t test_vluxseg2ei16_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_u8m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vluxseg2ei16_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint16m8_t rs2, size_t vl) { +vuint8m4x2_t test_vluxseg2ei16_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_u8m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei16_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei16_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vluxseg2ei16_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei16_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei16_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei16_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei16_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x2_t test_vluxseg2ei16_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vluxseg2ei16_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint16m4x2_t test_vluxseg2ei16_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u16m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei16_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei16_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei16_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei16_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vluxseg2ei16_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei16_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u32m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vluxseg2ei16_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei16_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint16m2_t rs2, size_t vl) { return 
__riscv_vluxseg2ei16_v_u32m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei16_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei16_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei16_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x2_t test_vluxseg2ei16_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vluxseg2ei16_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint64m4x2_t test_vluxseg2ei16_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_u64m4x2_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei32.c index 0d6f8f6d2..7725f4999 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei32.c @@ -6,738 +6,1102 @@ #include <riscv_vector.h> -vfloat16mf4x2_t test_vluxseg2ei32_v_f16mf4x2_tu(vfloat16mf4x2_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei32_v_f16mf4x2_tu(vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16mf4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei32_v_f16mf2x2_tu(vfloat16mf2x2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei32_v_f16mf2x2_tu(vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16mf2x2_tu(vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vluxseg2ei32_v_f16m1x2_tu(vfloat16m1x2_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei32_v_f16m1x2_tu(vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m1x2_tu(vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei32_v_f16m2x2_tu(vfloat16m2x2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei32_v_f16m2x2_tu(vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m2x2_tu(vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vluxseg2ei32_v_f16m4x2_tu(vfloat16m4x2_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4x2_t test_vluxseg2ei32_v_f16m4x2_tu(vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m4x2_tu(vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei32_v_f32mf2x2_tu(vfloat32mf2x2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x2_t test_vluxseg2ei32_v_f32mf2x2_tu(vfloat32mf2x2_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f32mf2x2_tu(vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vluxseg2ei32_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x2_t test_vluxseg2ei32_v_f32m1x2_tu(vfloat32m1x2_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_f32m1x2_tu(vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vluxseg2ei32_v_f32m2x2_tu(vfloat32m2x2_t vd, const 
float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x2_t test_vluxseg2ei32_v_f32m2x2_tu(vfloat32m2x2_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_f32m2x2_tu(vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vluxseg2ei32_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4x2_t test_vluxseg2ei32_v_f32m4x2_tu(vfloat32m4x2_t vd, + const float *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_f32m4x2_tu(vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vluxseg2ei32_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x2_t test_vluxseg2ei32_v_f64m1x2_tu(vfloat64m1x2_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m1x2_tu(vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vluxseg2ei32_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x2_t test_vluxseg2ei32_v_f64m2x2_tu(vfloat64m2x2_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m2x2_tu(vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vluxseg2ei32_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4x2_t test_vluxseg2ei32_v_f64m4x2_tu(vfloat64m4x2_t vd, + const double *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m4x2_tu(vd, rs1, rs2, vl); } -vint8mf8x2_t test_vluxseg2ei32_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x2_t test_vluxseg2ei32_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i8mf8x2_tu(vd, rs1, rs2, vl); } -vint8mf4x2_t test_vluxseg2ei32_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x2_t test_vluxseg2ei32_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i8mf4x2_tu(vd, rs1, rs2, vl); } -vint8mf2x2_t test_vluxseg2ei32_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x2_t test_vluxseg2ei32_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i8mf2x2_tu(vd, rs1, rs2, vl); } -vint8m1x2_t test_vluxseg2ei32_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x2_t test_vluxseg2ei32_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i8m1x2_tu(vd, rs1, rs2, vl); } -vint8m2x2_t test_vluxseg2ei32_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x2_t test_vluxseg2ei32_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i8m2x2_tu(vd, rs1, rs2, vl); } -vint16mf4x2_t test_vluxseg2ei32_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x2_t test_vluxseg2ei32_v_i16mf4x2_tu(vint16mf4x2_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16mf4x2_tu(vd, rs1, rs2, vl); } -vint16mf2x2_t test_vluxseg2ei32_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x2_t test_vluxseg2ei32_v_i16mf2x2_tu(vint16mf2x2_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16mf2x2_tu(vd, rs1, rs2, vl); } -vint16m1x2_t test_vluxseg2ei32_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x2_t 
test_vluxseg2ei32_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16m1x2_tu(vd, rs1, rs2, vl); } -vint16m2x2_t test_vluxseg2ei32_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x2_t test_vluxseg2ei32_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16m2x2_tu(vd, rs1, rs2, vl); } -vint16m4x2_t test_vluxseg2ei32_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4x2_t test_vluxseg2ei32_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16m4x2_tu(vd, rs1, rs2, vl); } -vint32mf2x2_t test_vluxseg2ei32_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei32_v_i32mf2x2_tu(vint32mf2x2_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32mf2x2_tu(vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei32_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei32_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32m1x2_tu(vd, rs1, rs2, vl); } -vint32m2x2_t test_vluxseg2ei32_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei32_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32m2x2_tu(vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei32_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei32_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32m4x2_tu(vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei32_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei32_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i64m1x2_tu(vd, rs1, rs2, vl); } -vint64m2x2_t test_vluxseg2ei32_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei32_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i64m2x2_tu(vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei32_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei32_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i64m4x2_tu(vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei32_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei32_v_u8mf8x2_tu(vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf8x2_tu(vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei32_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei32_v_u8mf4x2_tu(vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf4x2_tu(vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei32_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei32_v_u8mf2x2_tu(vuint8mf2x2_t vd, + const uint8_t 
*rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf2x2_tu(vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei32_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei32_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8m1x2_tu(vd, rs1, rs2, vl); } -vuint8m2x2_t test_vluxseg2ei32_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x2_t test_vluxseg2ei32_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8m2x2_tu(vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei32_v_u16mf4x2_tu(vuint16mf4x2_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei32_v_u16mf4x2_tu(vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16mf4x2_tu(vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vluxseg2ei32_v_u16mf2x2_tu(vuint16mf2x2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei32_v_u16mf2x2_tu(vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16mf2x2_tu(vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei32_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei32_v_u16m1x2_tu(vuint16m1x2_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m1x2_tu(vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei32_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x2_t test_vluxseg2ei32_v_u16m2x2_tu(vuint16m2x2_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m2x2_tu(vd, rs1, rs2, vl); } -vuint16m4x2_t test_vluxseg2ei32_v_u16m4x2_tu(vuint16m4x2_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4x2_t test_vluxseg2ei32_v_u16m4x2_tu(vuint16m4x2_t vd, + const uint16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m4x2_tu(vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei32_v_u32mf2x2_tu(vuint32mf2x2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei32_v_u32mf2x2_tu(vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32mf2x2_tu(vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei32_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei32_v_u32m1x2_tu(vuint32m1x2_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m1x2_tu(vd, rs1, rs2, vl); } -vuint32m2x2_t test_vluxseg2ei32_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei32_v_u32m2x2_tu(vuint32m2x2_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m2x2_tu(vd, rs1, rs2, vl); } -vuint32m4x2_t test_vluxseg2ei32_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei32_v_u32m4x2_tu(vuint32m4x2_t vd, + const uint32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m4x2_tu(vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei32_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei32_v_u64m1x2_tu(vuint64m1x2_t vd, + const 
uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u64m1x2_tu(vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei32_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x2_t test_vluxseg2ei32_v_u64m2x2_tu(vuint64m2x2_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u64m2x2_tu(vd, rs1, rs2, vl); } -vuint64m4x2_t test_vluxseg2ei32_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4x2_t test_vluxseg2ei32_v_u64m4x2_tu(vuint64m4x2_t vd, + const uint64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u64m4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vluxseg2ei32_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei32_v_f16mf4x2_tum(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei32_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei32_v_f16mf2x2_tum(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vluxseg2ei32_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei32_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei32_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei32_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vluxseg2ei32_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4x2_t test_vluxseg2ei32_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei32_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x2_t test_vluxseg2ei32_v_f32mf2x2_tum(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vluxseg2ei32_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x2_t test_vluxseg2ei32_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f32m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vluxseg2ei32_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x2_t test_vluxseg2ei32_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f32m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vluxseg2ei32_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4x2_t 
test_vluxseg2ei32_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f32m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vluxseg2ei32_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x2_t test_vluxseg2ei32_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vluxseg2ei32_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x2_t test_vluxseg2ei32_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vluxseg2ei32_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4x2_t test_vluxseg2ei32_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vluxseg2ei32_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x2_t test_vluxseg2ei32_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vluxseg2ei32_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x2_t test_vluxseg2ei32_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vluxseg2ei32_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x2_t test_vluxseg2ei32_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vluxseg2ei32_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x2_t test_vluxseg2ei32_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i8m1x2_tum(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vluxseg2ei32_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x2_t test_vluxseg2ei32_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i8m2x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vluxseg2ei32_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x2_t test_vluxseg2ei32_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vluxseg2ei32_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x2_t test_vluxseg2ei32_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vluxseg2ei32_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint32m2_t 
rs2, size_t vl) { +vint16m1x2_t test_vluxseg2ei32_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16m1x2_tum(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vluxseg2ei32_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x2_t test_vluxseg2ei32_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16m2x2_tum(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vluxseg2ei32_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4x2_t test_vluxseg2ei32_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16m4x2_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vluxseg2ei32_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei32_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei32_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei32_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32m1x2_tum(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vluxseg2ei32_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei32_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32m2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei32_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei32_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32m4x2_tum(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei32_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei32_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i64m1x2_tum(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vluxseg2ei32_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei32_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i64m2x2_tum(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei32_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei32_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i64m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei32_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei32_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei32_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t 
vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei32_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei32_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei32_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei32_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei32_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_u8m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vluxseg2ei32_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x2_t test_vluxseg2ei32_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_u8m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei32_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei32_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vluxseg2ei32_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei32_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei32_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei32_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei32_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x2_t test_vluxseg2ei32_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vluxseg2ei32_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4x2_t test_vluxseg2ei32_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei32_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei32_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei32_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei32_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m1x2_tum(vm, vd, rs1, rs2, 
vl); } -vuint32m2x2_t test_vluxseg2ei32_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei32_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vluxseg2ei32_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei32_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei32_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei32_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u64m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei32_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x2_t test_vluxseg2ei32_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u64m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vluxseg2ei32_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4x2_t test_vluxseg2ei32_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u64m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vluxseg2ei32_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei32_v_f16mf4x2_tumu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei32_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei32_v_f16mf2x2_tumu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vluxseg2ei32_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei32_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei32_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei32_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vluxseg2ei32_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4x2_t test_vluxseg2ei32_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei32_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x2_t 
test_vluxseg2ei32_v_f32mf2x2_tumu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vluxseg2ei32_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x2_t test_vluxseg2ei32_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vluxseg2ei32_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x2_t test_vluxseg2ei32_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vluxseg2ei32_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4x2_t test_vluxseg2ei32_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vluxseg2ei32_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x2_t test_vluxseg2ei32_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vluxseg2ei32_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x2_t test_vluxseg2ei32_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vluxseg2ei32_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4x2_t test_vluxseg2ei32_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vluxseg2ei32_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x2_t test_vluxseg2ei32_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vluxseg2ei32_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x2_t test_vluxseg2ei32_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vluxseg2ei32_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x2_t test_vluxseg2ei32_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vluxseg2ei32_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x2_t test_vluxseg2ei32_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vluxseg2ei32_v_i8m2x2_tumu(vbool4_t vm, 
vint8m2x2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x2_t test_vluxseg2ei32_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vluxseg2ei32_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x2_t test_vluxseg2ei32_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vluxseg2ei32_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x2_t test_vluxseg2ei32_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vluxseg2ei32_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x2_t test_vluxseg2ei32_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vluxseg2ei32_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x2_t test_vluxseg2ei32_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vluxseg2ei32_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4x2_t test_vluxseg2ei32_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vluxseg2ei32_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei32_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei32_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei32_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vluxseg2ei32_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei32_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei32_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei32_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei32_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei32_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i64m1x2_tumu(vm, vd, rs1, rs2, 
vl); } -vint64m2x2_t test_vluxseg2ei32_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei32_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei32_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei32_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei32_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei32_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei32_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei32_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei32_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei32_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei32_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei32_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vluxseg2ei32_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x2_t test_vluxseg2ei32_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei32_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei32_v_u16mf4x2_tumu(vbool64_t vm, + vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vluxseg2ei32_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei32_v_u16mf2x2_tumu(vbool32_t vm, + vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei32_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei32_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei32_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x2_t test_vluxseg2ei32_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t 
*rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vluxseg2ei32_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4x2_t test_vluxseg2ei32_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei32_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei32_v_u32mf2x2_tumu(vbool64_t vm, + vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei32_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei32_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vluxseg2ei32_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei32_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vluxseg2ei32_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei32_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei32_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei32_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei32_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x2_t test_vluxseg2ei32_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vluxseg2ei32_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4x2_t test_vluxseg2ei32_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vluxseg2ei32_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei32_v_f16mf4x2_mu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei32_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei32_v_f16mf2x2_mu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vluxseg2ei32_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, 
const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei32_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei32_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei32_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vluxseg2ei32_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint32m8_t rs2, size_t vl) { +vfloat16m4x2_t test_vluxseg2ei32_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f16m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei32_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x2_t test_vluxseg2ei32_v_f32mf2x2_mu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vluxseg2ei32_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x2_t test_vluxseg2ei32_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_f32m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vluxseg2ei32_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x2_t test_vluxseg2ei32_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_f32m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vluxseg2ei32_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint32m4_t rs2, size_t vl) { +vfloat32m4x2_t test_vluxseg2ei32_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_f32m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vluxseg2ei32_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x2_t test_vluxseg2ei32_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vluxseg2ei32_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x2_t test_vluxseg2ei32_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vluxseg2ei32_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint32m2_t rs2, size_t vl) { +vfloat64m4x2_t test_vluxseg2ei32_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_f64m4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vluxseg2ei32_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x2_t test_vluxseg2ei32_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t 
test_vluxseg2ei32_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x2_t test_vluxseg2ei32_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vluxseg2ei32_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x2_t test_vluxseg2ei32_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vluxseg2ei32_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x2_t test_vluxseg2ei32_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i8m1x2_mu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vluxseg2ei32_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x2_t test_vluxseg2ei32_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i8m2x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vluxseg2ei32_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x2_t test_vluxseg2ei32_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vluxseg2ei32_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x2_t test_vluxseg2ei32_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vluxseg2ei32_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x2_t test_vluxseg2ei32_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i16m1x2_mu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vluxseg2ei32_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x2_t test_vluxseg2ei32_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i16m2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vluxseg2ei32_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint32m8_t rs2, size_t vl) { +vint16m4x2_t test_vluxseg2ei32_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i16m4x2_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vluxseg2ei32_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei32_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei32_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei32_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i32m1x2_mu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t 
test_vluxseg2ei32_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei32_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i32m2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei32_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint32m4_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei32_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i32m4x2_mu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei32_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei32_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_i64m1x2_mu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vluxseg2ei32_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei32_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i64m2x2_mu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei32_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint32m2_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei32_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_i64m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei32_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei32_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei32_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei32_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei32_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei32_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei32_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei32_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_u8m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vluxseg2ei32_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x2_t test_vluxseg2ei32_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei32_v_u8m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei32_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei32_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16mf4x2_mu(vm, vd, rs1, rs2, vl); } 
-vuint16mf2x2_t test_vluxseg2ei32_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei32_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei32_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei32_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei32_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x2_t test_vluxseg2ei32_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vluxseg2ei32_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint16m4x2_t test_vluxseg2ei32_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u16m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei32_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei32_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei32_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei32_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vluxseg2ei32_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei32_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vluxseg2ei32_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei32_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u32m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei32_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei32_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u64m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei32_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x2_t test_vluxseg2ei32_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg2ei32_v_u64m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vluxseg2ei32_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint64m4x2_t test_vluxseg2ei32_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint32m2_t rs2, size_t vl) { 
return __riscv_vluxseg2ei32_v_u64m4x2_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei64.c index f2f86775c..5090820ae 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei64.c @@ -6,658 +6,985 @@ #include <riscv_vector.h> -vfloat16mf4x2_t test_vluxseg2ei64_v_f16mf4x2_tu(vfloat16mf4x2_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei64_v_f16mf4x2_tu(vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f16mf4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei64_v_f16mf2x2_tu(vfloat16mf2x2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei64_v_f16mf2x2_tu(vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f16mf2x2_tu(vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vluxseg2ei64_v_f16m1x2_tu(vfloat16m1x2_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei64_v_f16m1x2_tu(vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f16m1x2_tu(vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei64_v_f16m2x2_tu(vfloat16m2x2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei64_v_f16m2x2_tu(vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f16m2x2_tu(vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei64_v_f32mf2x2_tu(vfloat32mf2x2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x2_t test_vluxseg2ei64_v_f32mf2x2_tu(vfloat32mf2x2_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f32mf2x2_tu(vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vluxseg2ei64_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x2_t test_vluxseg2ei64_v_f32m1x2_tu(vfloat32m1x2_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei64_v_f32m1x2_tu(vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vluxseg2ei64_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x2_t test_vluxseg2ei64_v_f32m2x2_tu(vfloat32m2x2_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei64_v_f32m2x2_tu(vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vluxseg2ei64_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) { +vfloat32m4x2_t test_vluxseg2ei64_v_f32m4x2_tu(vfloat32m4x2_t vd, + const float *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg2ei64_v_f32m4x2_tu(vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vluxseg2ei64_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x2_t test_vluxseg2ei64_v_f64m1x2_tu(vfloat64m1x2_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f64m1x2_tu(vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vluxseg2ei64_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x2_t test_vluxseg2ei64_v_f64m2x2_tu(vfloat64m2x2_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f64m2x2_tu(vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vluxseg2ei64_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) { +vfloat64m4x2_t test_vluxseg2ei64_v_f64m4x2_tu(vfloat64m4x2_t
vd, + const double *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f64m4x2_tu(vd, rs1, rs2, vl); } -vint8mf8x2_t test_vluxseg2ei64_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x2_t test_vluxseg2ei64_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i8mf8x2_tu(vd, rs1, rs2, vl); } -vint8mf4x2_t test_vluxseg2ei64_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x2_t test_vluxseg2ei64_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i8mf4x2_tu(vd, rs1, rs2, vl); } -vint8mf2x2_t test_vluxseg2ei64_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x2_t test_vluxseg2ei64_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i8mf2x2_tu(vd, rs1, rs2, vl); } -vint8m1x2_t test_vluxseg2ei64_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x2_t test_vluxseg2ei64_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i8m1x2_tu(vd, rs1, rs2, vl); } -vint16mf4x2_t test_vluxseg2ei64_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x2_t test_vluxseg2ei64_v_i16mf4x2_tu(vint16mf4x2_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i16mf4x2_tu(vd, rs1, rs2, vl); } -vint16mf2x2_t test_vluxseg2ei64_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x2_t test_vluxseg2ei64_v_i16mf2x2_tu(vint16mf2x2_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i16mf2x2_tu(vd, rs1, rs2, vl); } -vint16m1x2_t test_vluxseg2ei64_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x2_t test_vluxseg2ei64_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i16m1x2_tu(vd, rs1, rs2, vl); } -vint16m2x2_t test_vluxseg2ei64_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x2_t test_vluxseg2ei64_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i16m2x2_tu(vd, rs1, rs2, vl); } -vint32mf2x2_t test_vluxseg2ei64_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei64_v_i32mf2x2_tu(vint32mf2x2_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i32mf2x2_tu(vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei64_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei64_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i32m1x2_tu(vd, rs1, rs2, vl); } -vint32m2x2_t test_vluxseg2ei64_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei64_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i32m2x2_tu(vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei64_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei64_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, + vuint64m8_t rs2, size_t vl) { return 
__riscv_vluxseg2ei64_v_i32m4x2_tu(vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei64_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei64_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i64m1x2_tu(vd, rs1, rs2, vl); } -vint64m2x2_t test_vluxseg2ei64_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei64_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i64m2x2_tu(vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei64_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei64_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_i64m4x2_tu(vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei64_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei64_v_u8mf8x2_tu(vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u8mf8x2_tu(vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei64_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei64_v_u8mf4x2_tu(vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u8mf4x2_tu(vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei64_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei64_v_u8mf2x2_tu(vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u8mf2x2_tu(vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei64_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei64_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u8m1x2_tu(vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei64_v_u16mf4x2_tu(vuint16mf4x2_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei64_v_u16mf4x2_tu(vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u16mf4x2_tu(vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vluxseg2ei64_v_u16mf2x2_tu(vuint16mf2x2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei64_v_u16mf2x2_tu(vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u16mf2x2_tu(vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei64_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei64_v_u16m1x2_tu(vuint16m1x2_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u16m1x2_tu(vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei64_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x2_t test_vluxseg2ei64_v_u16m2x2_tu(vuint16m2x2_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u16m2x2_tu(vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei64_v_u32mf2x2_tu(vuint32mf2x2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei64_v_u32mf2x2_tu(vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return 
__riscv_vluxseg2ei64_v_u32mf2x2_tu(vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei64_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei64_v_u32m1x2_tu(vuint32m1x2_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u32m1x2_tu(vd, rs1, rs2, vl); } -vuint32m2x2_t test_vluxseg2ei64_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei64_v_u32m2x2_tu(vuint32m2x2_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u32m2x2_tu(vd, rs1, rs2, vl); } -vuint32m4x2_t test_vluxseg2ei64_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei64_v_u32m4x2_tu(vuint32m4x2_t vd, + const uint32_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u32m4x2_tu(vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei64_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei64_v_u64m1x2_tu(vuint64m1x2_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u64m1x2_tu(vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei64_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x2_t test_vluxseg2ei64_v_u64m2x2_tu(vuint64m2x2_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u64m2x2_tu(vd, rs1, rs2, vl); } -vuint64m4x2_t test_vluxseg2ei64_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint64m4x2_t test_vluxseg2ei64_v_u64m4x2_tu(vuint64m4x2_t vd, + const uint64_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_u64m4x2_tu(vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vluxseg2ei64_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei64_v_f16mf4x2_tum(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei64_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei64_v_f16mf2x2_tum(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vluxseg2ei64_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei64_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f16m1x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei64_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei64_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f16m2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei64_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x2_t test_vluxseg2ei64_v_f32mf2x2_tum(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg2ei64_v_f32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t 
+vfloat32m1x2_t test_vluxseg2ei64_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd,
+                                               const float *rs1,
+                                               vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f32m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32m2x2_t test_vluxseg2ei64_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat32m2x2_t test_vluxseg2ei64_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd,
+                                               const float *rs1,
+                                               vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f32m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32m4x2_t test_vluxseg2ei64_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) {
+vfloat32m4x2_t test_vluxseg2ei64_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd,
+                                               const float *rs1,
+                                               vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f32m4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x2_t test_vluxseg2ei64_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1x2_t test_vluxseg2ei64_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd,
+                                               const double *rs1,
+                                               vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f64m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat64m2x2_t test_vluxseg2ei64_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat64m2x2_t test_vluxseg2ei64_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd,
+                                               const double *rs1,
+                                               vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f64m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat64m4x2_t test_vluxseg2ei64_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat64m4x2_t test_vluxseg2ei64_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd,
+                                               const double *rs1,
+                                               vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f64m4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x2_t test_vluxseg2ei64_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint8mf8x2_t test_vluxseg2ei64_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd,
+                                             const int8_t *rs1, vuint64m1_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg2ei64_v_i8mf8x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x2_t test_vluxseg2ei64_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint8mf4x2_t test_vluxseg2ei64_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd,
+                                             const int8_t *rs1, vuint64m2_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg2ei64_v_i8mf4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x2_t test_vluxseg2ei64_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint8mf2x2_t test_vluxseg2ei64_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd,
+                                             const int8_t *rs1, vuint64m4_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg2ei64_v_i8mf2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint8m1x2_t test_vluxseg2ei64_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint8m1x2_t test_vluxseg2ei64_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd,
+                                           const int8_t *rs1, vuint64m8_t rs2,
+                                           size_t vl) {
   return __riscv_vluxseg2ei64_v_i8m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x2_t test_vluxseg2ei64_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint16mf4x2_t test_vluxseg2ei64_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd,
+                                               const int16_t *rs1,
+                                               vuint64m1_t rs2, size_t vl) {
  return __riscv_vluxseg2ei64_v_i16mf4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x2_t test_vluxseg2ei64_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint16mf2x2_t test_vluxseg2ei64_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd,
+                                               const int16_t *rs1,
+                                               vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i16mf2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint16m1x2_t test_vluxseg2ei64_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint16m1x2_t test_vluxseg2ei64_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd,
+                                             const int16_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i16m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint16m2x2_t test_vluxseg2ei64_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint16m2x2_t test_vluxseg2ei64_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd,
+                                             const int16_t *rs1,
+                                             vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i16m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x2_t test_vluxseg2ei64_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint32mf2x2_t test_vluxseg2ei64_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd,
+                                               const int32_t *rs1,
+                                               vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i32mf2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint32m1x2_t test_vluxseg2ei64_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint32m1x2_t test_vluxseg2ei64_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd,
+                                             const int32_t *rs1,
+                                             vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i32m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint32m2x2_t test_vluxseg2ei64_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint32m2x2_t test_vluxseg2ei64_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd,
+                                             const int32_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i32m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint32m4x2_t test_vluxseg2ei64_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint32m4x2_t test_vluxseg2ei64_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd,
+                                             const int32_t *rs1,
+                                             vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i32m4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint64m1x2_t test_vluxseg2ei64_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint64m1x2_t test_vluxseg2ei64_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd,
+                                             const int64_t *rs1,
+                                             vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i64m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint64m2x2_t test_vluxseg2ei64_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint64m2x2_t test_vluxseg2ei64_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd,
+                                             const int64_t *rs1,
+                                             vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i64m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint64m4x2_t test_vluxseg2ei64_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint64m4x2_t test_vluxseg2ei64_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd,
+                                             const int64_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i64m4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x2_t test_vluxseg2ei64_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint8mf8x2_t test_vluxseg2ei64_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd,
+                                              const uint8_t *rs1,
+                                              vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u8mf8x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x2_t test_vluxseg2ei64_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint8mf4x2_t test_vluxseg2ei64_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd,
+                                              const uint8_t *rs1,
+                                              vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u8mf4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x2_t test_vluxseg2ei64_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint8mf2x2_t test_vluxseg2ei64_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd,
+                                              const uint8_t *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u8mf2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x2_t test_vluxseg2ei64_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1x2_t test_vluxseg2ei64_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd,
+                                            const uint8_t *rs1, vuint64m8_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_u8m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x2_t test_vluxseg2ei64_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4x2_t test_vluxseg2ei64_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd,
+                                                const uint16_t *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u16mf4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x2_t test_vluxseg2ei64_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2x2_t test_vluxseg2ei64_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd,
+                                                const uint16_t *rs1,
+                                                vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u16mf2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x2_t test_vluxseg2ei64_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1x2_t test_vluxseg2ei64_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd,
+                                              const uint16_t *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u16m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16m2x2_t test_vluxseg2ei64_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint16m2x2_t test_vluxseg2ei64_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd,
+                                              const uint16_t *rs1,
+                                              vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u16m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x2_t test_vluxseg2ei64_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x2_t test_vluxseg2ei64_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd,
+                                                const uint32_t *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32mf2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x2_t test_vluxseg2ei64_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x2_t test_vluxseg2ei64_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd,
+                                              const uint32_t *rs1,
+                                              vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32m2x2_t test_vluxseg2ei64_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint32m2x2_t test_vluxseg2ei64_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd,
+                                              const uint32_t *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32m4x2_t test_vluxseg2ei64_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint32m4x2_t test_vluxseg2ei64_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd,
+                                              const uint32_t *rs1,
+                                              vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32m4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x2_t test_vluxseg2ei64_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x2_t test_vluxseg2ei64_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd,
+                                              const uint64_t *rs1,
+                                              vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u64m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint64m2x2_t test_vluxseg2ei64_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint64m2x2_t test_vluxseg2ei64_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd,
+                                              const uint64_t *rs1,
+                                              vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u64m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vuint64m4x2_t test_vluxseg2ei64_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint64m4x2_t test_vluxseg2ei64_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd,
+                                              const uint64_t *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u64m4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf4x2_t test_vluxseg2ei64_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4x2_t test_vluxseg2ei64_v_f16mf4x2_tumu(vbool64_t vm,
+                                                  vfloat16mf4x2_t vd,
+                                                  const _Float16 *rs1,
+                                                  vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f16mf4x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x2_t test_vluxseg2ei64_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2x2_t test_vluxseg2ei64_v_f16mf2x2_tumu(vbool32_t vm,
+                                                  vfloat16mf2x2_t vd,
+                                                  const _Float16 *rs1,
+                                                  vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f16mf2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x2_t test_vluxseg2ei64_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1x2_t test_vluxseg2ei64_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd,
+                                                const _Float16 *rs1,
+                                                vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f16m1x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m2x2_t test_vluxseg2ei64_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) {
+vfloat16m2x2_t test_vluxseg2ei64_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd,
+                                                const _Float16 *rs1,
+                                                vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f16m2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x2_t test_vluxseg2ei64_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2x2_t test_vluxseg2ei64_v_f32mf2x2_tumu(vbool64_t vm,
+                                                  vfloat32mf2x2_t vd,
+                                                  const float *rs1,
+                                                  vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f32mf2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x2_t test_vluxseg2ei64_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1x2_t test_vluxseg2ei64_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd,
+                                                const float *rs1,
+                                                vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f32m1x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m2x2_t test_vluxseg2ei64_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat32m2x2_t test_vluxseg2ei64_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd,
+                                                const float *rs1,
+                                                vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f32m2x2_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m4x2_t test_vluxseg2ei64_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) {
+vfloat32m4x2_t test_vluxseg2ei64_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd,
+                                                const float *rs1,
+                                                vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f32m4x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x2_t test_vluxseg2ei64_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1x2_t test_vluxseg2ei64_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd,
+                                                const double *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f64m1x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m2x2_t test_vluxseg2ei64_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat64m2x2_t test_vluxseg2ei64_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd,
+                                                const double *rs1,
+                                                vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f64m2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m4x2_t test_vluxseg2ei64_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat64m4x2_t test_vluxseg2ei64_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd,
+                                                const double *rs1,
+                                                vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f64m4x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x2_t test_vluxseg2ei64_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint8mf8x2_t test_vluxseg2ei64_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd,
+                                              const int8_t *rs1,
+                                              vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i8mf8x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x2_t test_vluxseg2ei64_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint8mf4x2_t test_vluxseg2ei64_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd,
+                                              const int8_t *rs1,
+                                              vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i8mf4x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x2_t test_vluxseg2ei64_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint8mf2x2_t test_vluxseg2ei64_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd,
+                                              const int8_t *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i8mf2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x2_t test_vluxseg2ei64_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint8m1x2_t test_vluxseg2ei64_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd,
+                                            const int8_t *rs1, vuint64m8_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i8m1x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x2_t test_vluxseg2ei64_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint16mf4x2_t test_vluxseg2ei64_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd,
+                                                const int16_t *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i16mf4x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x2_t test_vluxseg2ei64_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint16mf2x2_t test_vluxseg2ei64_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd,
+                                                const int16_t *rs1,
+                                                vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i16mf2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x2_t test_vluxseg2ei64_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint16m1x2_t test_vluxseg2ei64_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd,
+                                              const int16_t *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
  return __riscv_vluxseg2ei64_v_i16m1x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16m2x2_t test_vluxseg2ei64_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint16m2x2_t test_vluxseg2ei64_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd,
+                                              const int16_t *rs1,
+                                              vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i16m2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x2_t test_vluxseg2ei64_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint32mf2x2_t test_vluxseg2ei64_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd,
+                                                const int32_t *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i32mf2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x2_t test_vluxseg2ei64_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint32m1x2_t test_vluxseg2ei64_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd,
+                                              const int32_t *rs1,
+                                              vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i32m1x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32m2x2_t test_vluxseg2ei64_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint32m2x2_t test_vluxseg2ei64_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd,
+                                              const int32_t *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i32m2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32m4x2_t test_vluxseg2ei64_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint32m4x2_t test_vluxseg2ei64_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd,
+                                              const int32_t *rs1,
+                                              vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i32m4x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x2_t test_vluxseg2ei64_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint64m1x2_t test_vluxseg2ei64_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd,
+                                              const int64_t *rs1,
+                                              vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i64m1x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint64m2x2_t test_vluxseg2ei64_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint64m2x2_t test_vluxseg2ei64_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd,
+                                              const int64_t *rs1,
+                                              vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i64m2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vint64m4x2_t test_vluxseg2ei64_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint64m4x2_t test_vluxseg2ei64_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd,
+                                              const int64_t *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i64m4x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x2_t test_vluxseg2ei64_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint8mf8x2_t test_vluxseg2ei64_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd,
+                                               const uint8_t *rs1,
+                                               vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u8mf8x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x2_t test_vluxseg2ei64_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint8mf4x2_t test_vluxseg2ei64_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd,
+                                               const uint8_t *rs1,
+                                               vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u8mf4x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x2_t test_vluxseg2ei64_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint8mf2x2_t test_vluxseg2ei64_v_u8mf2x2_tumu(vbool16_t vm,
+                                               vuint8mf2x2_t vd,
+                                               const uint8_t *rs1,
+                                               vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u8mf2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x2_t test_vluxseg2ei64_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1x2_t test_vluxseg2ei64_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd,
+                                             const uint8_t *rs1,
+                                             vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u8m1x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x2_t test_vluxseg2ei64_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4x2_t test_vluxseg2ei64_v_u16mf4x2_tumu(vbool64_t vm,
+                                                 vuint16mf4x2_t vd,
+                                                 const uint16_t *rs1,
+                                                 vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u16mf4x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x2_t test_vluxseg2ei64_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2x2_t test_vluxseg2ei64_v_u16mf2x2_tumu(vbool32_t vm,
+                                                 vuint16mf2x2_t vd,
+                                                 const uint16_t *rs1,
+                                                 vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u16mf2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x2_t test_vluxseg2ei64_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1x2_t test_vluxseg2ei64_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd,
+                                               const uint16_t *rs1,
+                                               vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u16m1x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16m2x2_t test_vluxseg2ei64_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint16m2x2_t test_vluxseg2ei64_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd,
+                                               const uint16_t *rs1,
+                                               vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u16m2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x2_t test_vluxseg2ei64_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x2_t test_vluxseg2ei64_v_u32mf2x2_tumu(vbool64_t vm,
+                                                 vuint32mf2x2_t vd,
+                                                 const uint32_t *rs1,
+                                                 vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32mf2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x2_t test_vluxseg2ei64_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x2_t test_vluxseg2ei64_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd,
+                                               const uint32_t *rs1,
+                                               vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32m1x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32m2x2_t test_vluxseg2ei64_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint32m2x2_t test_vluxseg2ei64_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd,
+                                               const uint32_t *rs1,
+                                               vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32m2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32m4x2_t test_vluxseg2ei64_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint32m4x2_t test_vluxseg2ei64_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd,
+                                               const uint32_t *rs1,
+                                               vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32m4x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x2_t test_vluxseg2ei64_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x2_t test_vluxseg2ei64_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd,
+                                               const uint64_t *rs1,
+                                               vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u64m1x2_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint64m2x2_t test_vluxseg2ei64_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint64m2x2_t test_vluxseg2ei64_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd,
+                                               const uint64_t *rs1,
+                                               vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u64m2x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint64m4x2_t test_vluxseg2ei64_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint64m4x2_t test_vluxseg2ei64_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd,
+                                               const uint64_t *rs1,
+                                               vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u64m4x2_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf4x2_t test_vluxseg2ei64_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4x2_t test_vluxseg2ei64_v_f16mf4x2_mu(vbool64_t vm,
+                                                vfloat16mf4x2_t vd,
+                                                const _Float16 *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f16mf4x2_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x2_t test_vluxseg2ei64_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2x2_t test_vluxseg2ei64_v_f16mf2x2_mu(vbool32_t vm,
+                                                vfloat16mf2x2_t vd,
+                                                const _Float16 *rs1,
+                                                vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f16mf2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x2_t test_vluxseg2ei64_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1x2_t test_vluxseg2ei64_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd,
+                                              const _Float16 *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f16m1x2_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m2x2_t test_vluxseg2ei64_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) {
+vfloat16m2x2_t test_vluxseg2ei64_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd,
+                                              const _Float16 *rs1,
+                                              vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f16m2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x2_t test_vluxseg2ei64_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2x2_t test_vluxseg2ei64_v_f32mf2x2_mu(vbool64_t vm,
+                                                vfloat32mf2x2_t vd,
+                                                const float *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f32mf2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x2_t test_vluxseg2ei64_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1x2_t test_vluxseg2ei64_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd,
+                                              const float *rs1, vuint64m2_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg2ei64_v_f32m1x2_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m2x2_t test_vluxseg2ei64_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat32m2x2_t test_vluxseg2ei64_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd,
+                                              const float *rs1, vuint64m4_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg2ei64_v_f32m2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m4x2_t test_vluxseg2ei64_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint64m8_t rs2, size_t vl) {
+vfloat32m4x2_t test_vluxseg2ei64_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd,
+                                              const float *rs1, vuint64m8_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg2ei64_v_f32m4x2_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x2_t test_vluxseg2ei64_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1x2_t test_vluxseg2ei64_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd,
+                                              const double *rs1,
+                                              vuint64m1_t rs2, size_t vl) {
  return __riscv_vluxseg2ei64_v_f64m1x2_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m2x2_t test_vluxseg2ei64_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat64m2x2_t test_vluxseg2ei64_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd,
+                                              const double *rs1,
+                                              vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f64m2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m4x2_t test_vluxseg2ei64_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat64m4x2_t test_vluxseg2ei64_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd,
+                                              const double *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_f64m4x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x2_t test_vluxseg2ei64_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint8mf8x2_t test_vluxseg2ei64_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd,
+                                            const int8_t *rs1, vuint64m1_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i8mf8x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x2_t test_vluxseg2ei64_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint8mf4x2_t test_vluxseg2ei64_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd,
+                                            const int8_t *rs1, vuint64m2_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i8mf4x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x2_t test_vluxseg2ei64_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint8mf2x2_t test_vluxseg2ei64_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd,
+                                            const int8_t *rs1, vuint64m4_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i8mf2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x2_t test_vluxseg2ei64_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint8m1x2_t test_vluxseg2ei64_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd,
+                                          const int8_t *rs1, vuint64m8_t rs2,
+                                          size_t vl) {
   return __riscv_vluxseg2ei64_v_i8m1x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x2_t test_vluxseg2ei64_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint16mf4x2_t test_vluxseg2ei64_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd,
+                                              const int16_t *rs1,
+                                              vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i16mf4x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x2_t test_vluxseg2ei64_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint16mf2x2_t test_vluxseg2ei64_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd,
+                                              const int16_t *rs1,
+                                              vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_i16mf2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x2_t test_vluxseg2ei64_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint16m1x2_t test_vluxseg2ei64_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd,
+                                            const int16_t *rs1, vuint64m4_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i16m1x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint16m2x2_t test_vluxseg2ei64_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint16m2x2_t test_vluxseg2ei64_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd,
+                                            const int16_t *rs1, vuint64m8_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i16m2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x2_t test_vluxseg2ei64_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint32mf2x2_t test_vluxseg2ei64_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd,
+                                              const int32_t *rs1,
+                                              vuint64m1_t rs2, size_t vl) {
  return __riscv_vluxseg2ei64_v_i32mf2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x2_t test_vluxseg2ei64_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint32m1x2_t test_vluxseg2ei64_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd,
+                                            const int32_t *rs1, vuint64m2_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i32m1x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint32m2x2_t test_vluxseg2ei64_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint32m2x2_t test_vluxseg2ei64_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd,
+                                            const int32_t *rs1, vuint64m4_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i32m2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint32m4x2_t test_vluxseg2ei64_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint32m4x2_t test_vluxseg2ei64_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd,
+                                            const int32_t *rs1, vuint64m8_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i32m4x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x2_t test_vluxseg2ei64_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint64m1x2_t test_vluxseg2ei64_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd,
+                                            const int64_t *rs1, vuint64m1_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i64m1x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint64m2x2_t test_vluxseg2ei64_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint64m2x2_t test_vluxseg2ei64_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd,
+                                            const int64_t *rs1, vuint64m2_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i64m2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vint64m4x2_t test_vluxseg2ei64_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint64m4x2_t test_vluxseg2ei64_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd,
+                                            const int64_t *rs1, vuint64m4_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei64_v_i64m4x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x2_t test_vluxseg2ei64_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint8mf8x2_t test_vluxseg2ei64_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd,
+                                             const uint8_t *rs1,
+                                             vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u8mf8x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x2_t test_vluxseg2ei64_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint8mf4x2_t test_vluxseg2ei64_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd,
+                                             const uint8_t *rs1,
+                                             vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u8mf4x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x2_t test_vluxseg2ei64_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint8mf2x2_t test_vluxseg2ei64_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd,
+                                             const uint8_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u8mf2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x2_t test_vluxseg2ei64_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1x2_t test_vluxseg2ei64_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd,
+                                           const uint8_t *rs1, vuint64m8_t rs2,
+                                           size_t vl) {
   return __riscv_vluxseg2ei64_v_u8m1x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x2_t test_vluxseg2ei64_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4x2_t test_vluxseg2ei64_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd,
+                                               const uint16_t *rs1,
+                                               vuint64m1_t rs2, size_t vl) {
  return __riscv_vluxseg2ei64_v_u16mf4x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x2_t test_vluxseg2ei64_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2x2_t test_vluxseg2ei64_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd,
+                                               const uint16_t *rs1,
+                                               vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u16mf2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x2_t test_vluxseg2ei64_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1x2_t test_vluxseg2ei64_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd,
+                                             const uint16_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u16m1x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16m2x2_t test_vluxseg2ei64_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint16m2x2_t test_vluxseg2ei64_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd,
+                                             const uint16_t *rs1,
+                                             vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u16m2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x2_t test_vluxseg2ei64_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x2_t test_vluxseg2ei64_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd,
+                                               const uint32_t *rs1,
+                                               vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32mf2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x2_t test_vluxseg2ei64_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x2_t test_vluxseg2ei64_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd,
+                                             const uint32_t *rs1,
+                                             vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32m1x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32m2x2_t test_vluxseg2ei64_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint32m2x2_t test_vluxseg2ei64_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd,
+                                             const uint32_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32m2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32m4x2_t test_vluxseg2ei64_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint32m4x2_t test_vluxseg2ei64_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd,
+                                             const uint32_t *rs1,
+                                             vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u32m4x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x2_t test_vluxseg2ei64_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x2_t test_vluxseg2ei64_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd,
+                                             const uint64_t *rs1,
+                                             vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u64m1x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint64m2x2_t test_vluxseg2ei64_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint64m2x2_t test_vluxseg2ei64_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd,
+                                             const uint64_t *rs1,
+                                             vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u64m2x2_mu(vm, vd, rs1, rs2, vl);
 }

-vuint64m4x2_t test_vluxseg2ei64_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint64m4x2_t test_vluxseg2ei64_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd,
+                                             const uint64_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei64_v_u64m4x2_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei8.c
index cfe68148e..fcf0f0b4c 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei8.c
@@ -6,770 +6,1142 @@

 #include <riscv_vector.h>

-vfloat16mf4x2_t test_vluxseg2ei8_v_f16mf4x2_tu(vfloat16mf4x2_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x2_t test_vluxseg2ei8_v_f16mf4x2_tu(vfloat16mf4x2_t vd,
+                                               const _Float16 *rs1,
+                                               vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f16mf4x2_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf2x2_t test_vluxseg2ei8_v_f16mf2x2_tu(vfloat16mf2x2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x2_t test_vluxseg2ei8_v_f16mf2x2_tu(vfloat16mf2x2_t vd,
+                                               const _Float16 *rs1,
+                                               vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f16mf2x2_tu(vd, rs1, rs2, vl);
 }

-vfloat16m1x2_t test_vluxseg2ei8_v_f16m1x2_tu(vfloat16m1x2_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x2_t test_vluxseg2ei8_v_f16m1x2_tu(vfloat16m1x2_t vd,
+                                             const _Float16 *rs1,
+                                             vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f16m1x2_tu(vd, rs1, rs2, vl);
 }

-vfloat16m2x2_t test_vluxseg2ei8_v_f16m2x2_tu(vfloat16m2x2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat16m2x2_t test_vluxseg2ei8_v_f16m2x2_tu(vfloat16m2x2_t vd,
+                                             const _Float16 *rs1,
+                                             vuint8m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f16m2x2_tu(vd, rs1, rs2, vl);
 }

-vfloat16m4x2_t test_vluxseg2ei8_v_f16m4x2_tu(vfloat16m4x2_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) {
+vfloat16m4x2_t test_vluxseg2ei8_v_f16m4x2_tu(vfloat16m4x2_t vd,
+                                             const _Float16 *rs1,
+                                             vuint8m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f16m4x2_tu(vd, rs1, rs2, vl);
 }

-vfloat32mf2x2_t test_vluxseg2ei8_v_f32mf2x2_tu(vfloat32mf2x2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x2_t test_vluxseg2ei8_v_f32mf2x2_tu(vfloat32mf2x2_t vd,
+                                               const float *rs1,
+                                               vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f32mf2x2_tu(vd, rs1, rs2, vl);
 }

-vfloat32m1x2_t test_vluxseg2ei8_v_f32m1x2_tu(vfloat32m1x2_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x2_t test_vluxseg2ei8_v_f32m1x2_tu(vfloat32m1x2_t vd,
+                                             const float *rs1, vuint8mf4_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg2ei8_v_f32m1x2_tu(vd, rs1, rs2, vl);
 }

-vfloat32m2x2_t test_vluxseg2ei8_v_f32m2x2_tu(vfloat32m2x2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat32m2x2_t test_vluxseg2ei8_v_f32m2x2_tu(vfloat32m2x2_t vd,
+                                             const float *rs1, vuint8mf2_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg2ei8_v_f32m2x2_tu(vd, rs1, rs2, vl);
 }

-vfloat32m4x2_t test_vluxseg2ei8_v_f32m4x2_tu(vfloat32m4x2_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat32m4x2_t test_vluxseg2ei8_v_f32m4x2_tu(vfloat32m4x2_t vd,
+                                             const float *rs1, vuint8m1_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg2ei8_v_f32m4x2_tu(vd, rs1, rs2, vl);
 }

-vfloat64m1x2_t test_vluxseg2ei8_v_f64m1x2_tu(vfloat64m1x2_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x2_t test_vluxseg2ei8_v_f64m1x2_tu(vfloat64m1x2_t vd,
+                                             const double *rs1, vuint8mf8_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg2ei8_v_f64m1x2_tu(vd, rs1, rs2, vl);
 }

-vfloat64m2x2_t test_vluxseg2ei8_v_f64m2x2_tu(vfloat64m2x2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat64m2x2_t test_vluxseg2ei8_v_f64m2x2_tu(vfloat64m2x2_t vd,
+                                             const double *rs1, vuint8mf4_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg2ei8_v_f64m2x2_tu(vd, rs1, rs2, vl);
 }

-vfloat64m4x2_t test_vluxseg2ei8_v_f64m4x2_tu(vfloat64m4x2_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat64m4x2_t test_vluxseg2ei8_v_f64m4x2_tu(vfloat64m4x2_t vd,
+                                             const double *rs1, vuint8mf2_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg2ei8_v_f64m4x2_tu(vd, rs1, rs2, vl);
 }

-vint8mf8x2_t test_vluxseg2ei8_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x2_t test_vluxseg2ei8_v_i8mf8x2_tu(vint8mf8x2_t vd, const int8_t *rs1,
+                                           vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i8mf8x2_tu(vd, rs1, rs2, vl);
 }

-vint8mf4x2_t test_vluxseg2ei8_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x2_t test_vluxseg2ei8_v_i8mf4x2_tu(vint8mf4x2_t vd, const int8_t *rs1,
+                                           vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i8mf4x2_tu(vd, rs1, rs2, vl);
 }

-vint8mf2x2_t test_vluxseg2ei8_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x2_t test_vluxseg2ei8_v_i8mf2x2_tu(vint8mf2x2_t vd, const int8_t *rs1,
+                                           vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i8mf2x2_tu(vd, rs1, rs2, vl);
 }

-vint8m1x2_t test_vluxseg2ei8_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x2_t test_vluxseg2ei8_v_i8m1x2_tu(vint8m1x2_t vd, const int8_t *rs1,
+                                         vuint8m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i8m1x2_tu(vd, rs1, rs2, vl);
 }

-vint8m2x2_t test_vluxseg2ei8_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint8m2x2_t test_vluxseg2ei8_v_i8m2x2_tu(vint8m2x2_t vd, const int8_t *rs1,
+                                         vuint8m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i8m2x2_tu(vd, rs1, rs2, vl);
 }

-vint8m4x2_t test_vluxseg2ei8_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) {
+vint8m4x2_t test_vluxseg2ei8_v_i8m4x2_tu(vint8m4x2_t vd, const int8_t *rs1,
+                                         vuint8m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i8m4x2_tu(vd, rs1, rs2, vl);
 }

-vint16mf4x2_t test_vluxseg2ei8_v_i16mf4x2_tu(vint16mf4x2_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x2_t test_vluxseg2ei8_v_i16mf4x2_tu(vint16mf4x2_t vd,
+                                             const int16_t *rs1,
+                                             vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i16mf4x2_tu(vd, rs1, rs2, vl);
 }

-vint16mf2x2_t test_vluxseg2ei8_v_i16mf2x2_tu(vint16mf2x2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x2_t test_vluxseg2ei8_v_i16mf2x2_tu(vint16mf2x2_t vd,
+                                             const int16_t *rs1,
+                                             vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i16mf2x2_tu(vd, rs1, rs2, vl);
 }

-vint16m1x2_t test_vluxseg2ei8_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x2_t test_vluxseg2ei8_v_i16m1x2_tu(vint16m1x2_t vd, const int16_t *rs1,
+                                           vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i16m1x2_tu(vd, rs1, rs2, vl);
 }

-vint16m2x2_t test_vluxseg2ei8_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint16m2x2_t test_vluxseg2ei8_v_i16m2x2_tu(vint16m2x2_t vd, const int16_t *rs1,
+                                           vuint8m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i16m2x2_tu(vd, rs1, rs2, vl);
 }

-vint16m4x2_t test_vluxseg2ei8_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint16m4x2_t test_vluxseg2ei8_v_i16m4x2_tu(vint16m4x2_t vd, const int16_t *rs1,
+                                           vuint8m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i16m4x2_tu(vd, rs1, rs2, vl);
 }

-vint32mf2x2_t test_vluxseg2ei8_v_i32mf2x2_tu(vint32mf2x2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x2_t test_vluxseg2ei8_v_i32mf2x2_tu(vint32mf2x2_t vd,
+                                             const int32_t *rs1,
+                                             vuint8mf8_t rs2, size_t vl) {
  return __riscv_vluxseg2ei8_v_i32mf2x2_tu(vd, rs1, rs2, vl);
 }

-vint32m1x2_t test_vluxseg2ei8_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x2_t test_vluxseg2ei8_v_i32m1x2_tu(vint32m1x2_t vd, const int32_t *rs1,
+                                           vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i32m1x2_tu(vd, rs1, rs2, vl);
 }

-vint32m2x2_t test_vluxseg2ei8_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint32m2x2_t test_vluxseg2ei8_v_i32m2x2_tu(vint32m2x2_t vd, const int32_t *rs1,
+                                           vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i32m2x2_tu(vd, rs1, rs2, vl);
 }

-vint32m4x2_t test_vluxseg2ei8_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint32m4x2_t test_vluxseg2ei8_v_i32m4x2_tu(vint32m4x2_t vd, const int32_t *rs1,
+                                           vuint8m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i32m4x2_tu(vd, rs1, rs2, vl);
 }

-vint64m1x2_t test_vluxseg2ei8_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x2_t test_vluxseg2ei8_v_i64m1x2_tu(vint64m1x2_t vd, const int64_t *rs1,
+                                           vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i64m1x2_tu(vd, rs1, rs2, vl);
 }

-vint64m2x2_t test_vluxseg2ei8_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint64m2x2_t test_vluxseg2ei8_v_i64m2x2_tu(vint64m2x2_t vd, const int64_t *rs1,
+                                           vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i64m2x2_tu(vd, rs1, rs2, vl);
 }

-vint64m4x2_t test_vluxseg2ei8_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint64m4x2_t test_vluxseg2ei8_v_i64m4x2_tu(vint64m4x2_t vd, const int64_t *rs1,
+                                           vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i64m4x2_tu(vd, rs1, rs2, vl);
 }

-vuint8mf8x2_t test_vluxseg2ei8_v_u8mf8x2_tu(vuint8mf8x2_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x2_t test_vluxseg2ei8_v_u8mf8x2_tu(vuint8mf8x2_t vd,
+                                            const uint8_t *rs1, vuint8mf8_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei8_v_u8mf8x2_tu(vd, rs1, rs2, vl);
 }

-vuint8mf4x2_t test_vluxseg2ei8_v_u8mf4x2_tu(vuint8mf4x2_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x2_t test_vluxseg2ei8_v_u8mf4x2_tu(vuint8mf4x2_t vd,
+                                            const uint8_t *rs1, vuint8mf4_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei8_v_u8mf4x2_tu(vd, rs1, rs2, vl);
 }

-vuint8mf2x2_t test_vluxseg2ei8_v_u8mf2x2_tu(vuint8mf2x2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x2_t test_vluxseg2ei8_v_u8mf2x2_tu(vuint8mf2x2_t vd,
+                                            const uint8_t *rs1, vuint8mf2_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei8_v_u8mf2x2_tu(vd, rs1, rs2, vl);
 }

-vuint8m1x2_t test_vluxseg2ei8_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x2_t test_vluxseg2ei8_v_u8m1x2_tu(vuint8m1x2_t vd, const uint8_t *rs1,
+                                          vuint8m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u8m1x2_tu(vd, rs1, rs2, vl);
 }

-vuint8m2x2_t test_vluxseg2ei8_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) {
+vuint8m2x2_t test_vluxseg2ei8_v_u8m2x2_tu(vuint8m2x2_t vd, const uint8_t *rs1,
+                                          vuint8m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u8m2x2_tu(vd, rs1, rs2, vl);
 }

-vuint8m4x2_t test_vluxseg2ei8_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) {
+vuint8m4x2_t test_vluxseg2ei8_v_u8m4x2_tu(vuint8m4x2_t vd, const uint8_t *rs1,
+                                          vuint8m4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u8m4x2_tu(vd, rs1, rs2, vl);
 }

-vuint16mf4x2_t test_vluxseg2ei8_v_u16mf4x2_tu(vuint16mf4x2_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x2_t test_vluxseg2ei8_v_u16mf4x2_tu(vuint16mf4x2_t vd,
+                                              const uint16_t *rs1,
+                                              vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u16mf4x2_tu(vd, rs1, rs2, vl);
 }

-vuint16mf2x2_t test_vluxseg2ei8_v_u16mf2x2_tu(vuint16mf2x2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x2_t test_vluxseg2ei8_v_u16mf2x2_tu(vuint16mf2x2_t vd,
+                                              const uint16_t *rs1,
+                                              vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u16mf2x2_tu(vd, rs1, rs2, vl);
 }

-vuint16m1x2_t test_vluxseg2ei8_v_u16m1x2_tu(vuint16m1x2_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x2_t test_vluxseg2ei8_v_u16m1x2_tu(vuint16m1x2_t vd,
+                                            const uint16_t *rs1,
+                                            vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u16m1x2_tu(vd, rs1, rs2, vl);
 }

-vuint16m2x2_t test_vluxseg2ei8_v_u16m2x2_tu(vuint16m2x2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint16m2x2_t test_vluxseg2ei8_v_u16m2x2_tu(vuint16m2x2_t vd,
+                                            const uint16_t *rs1, vuint8m1_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei8_v_u16m2x2_tu(vd, rs1, rs2, vl);
 }

-vuint16m4x2_t test_vluxseg2ei8_v_u16m4x2_tu(vuint16m4x2_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) {
+vuint16m4x2_t test_vluxseg2ei8_v_u16m4x2_tu(vuint16m4x2_t vd,
+                                            const uint16_t *rs1, vuint8m2_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei8_v_u16m4x2_tu(vd, rs1, rs2, vl);
 }

-vuint32mf2x2_t test_vluxseg2ei8_v_u32mf2x2_tu(vuint32mf2x2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x2_t test_vluxseg2ei8_v_u32mf2x2_tu(vuint32mf2x2_t vd,
+                                              const uint32_t *rs1,
+                                              vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u32mf2x2_tu(vd, rs1, rs2, vl);
 }

-vuint32m1x2_t test_vluxseg2ei8_v_u32m1x2_tu(vuint32m1x2_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x2_t test_vluxseg2ei8_v_u32m1x2_tu(vuint32m1x2_t vd,
+                                            const uint32_t *rs1,
+                                            vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u32m1x2_tu(vd, rs1, rs2, vl);
 }

-vuint32m2x2_t test_vluxseg2ei8_v_u32m2x2_tu(vuint32m2x2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint32m2x2_t test_vluxseg2ei8_v_u32m2x2_tu(vuint32m2x2_t vd,
+                                            const uint32_t *rs1,
+                                            vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u32m2x2_tu(vd, rs1, rs2, vl);
 }

-vuint32m4x2_t test_vluxseg2ei8_v_u32m4x2_tu(vuint32m4x2_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint32m4x2_t test_vluxseg2ei8_v_u32m4x2_tu(vuint32m4x2_t vd,
+                                            const uint32_t *rs1, vuint8m1_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei8_v_u32m4x2_tu(vd, rs1, rs2, vl);
 }

-vuint64m1x2_t test_vluxseg2ei8_v_u64m1x2_tu(vuint64m1x2_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x2_t test_vluxseg2ei8_v_u64m1x2_tu(vuint64m1x2_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u64m1x2_tu(vd, rs1, rs2, vl);
 }

-vuint64m2x2_t test_vluxseg2ei8_v_u64m2x2_tu(vuint64m2x2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint64m2x2_t test_vluxseg2ei8_v_u64m2x2_tu(vuint64m2x2_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u64m2x2_tu(vd, rs1, rs2, vl);
 }

-vuint64m4x2_t test_vluxseg2ei8_v_u64m4x2_tu(vuint64m4x2_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint64m4x2_t test_vluxseg2ei8_v_u64m4x2_tu(vuint64m4x2_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u64m4x2_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf4x2_t test_vluxseg2ei8_v_f16mf4x2_tum(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x2_t test_vluxseg2ei8_v_f16mf4x2_tum(vbool64_t vm,
+                                                vfloat16mf4x2_t vd,
+                                                const _Float16 *rs1,
+                                                vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f16mf4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x2_t test_vluxseg2ei8_v_f16mf2x2_tum(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x2_t test_vluxseg2ei8_v_f16mf2x2_tum(vbool32_t vm,
+                                                vfloat16mf2x2_t vd,
+                                                const _Float16 *rs1,
+                                                vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f16mf2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x2_t test_vluxseg2ei8_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x2_t test_vluxseg2ei8_v_f16m1x2_tum(vbool16_t vm, vfloat16m1x2_t vd,
+                                              const _Float16 *rs1,
+                                              vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f16m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16m2x2_t test_vluxseg2ei8_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat16m2x2_t test_vluxseg2ei8_v_f16m2x2_tum(vbool8_t vm, vfloat16m2x2_t vd,
+                                              const _Float16 *rs1,
+                                              vuint8m1_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f16m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16m4x2_t test_vluxseg2ei8_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) {
+vfloat16m4x2_t test_vluxseg2ei8_v_f16m4x2_tum(vbool4_t vm, vfloat16m4x2_t vd,
+                                              const _Float16 *rs1,
+                                              vuint8m2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f16m4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x2_t test_vluxseg2ei8_v_f32mf2x2_tum(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x2_t test_vluxseg2ei8_v_f32mf2x2_tum(vbool64_t vm,
+                                                vfloat32mf2x2_t vd,
+                                                const float *rs1,
+                                                vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f32mf2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x2_t test_vluxseg2ei8_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x2_t test_vluxseg2ei8_v_f32m1x2_tum(vbool32_t vm, vfloat32m1x2_t vd,
+                                              const float *rs1, vuint8mf4_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg2ei8_v_f32m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32m2x2_t test_vluxseg2ei8_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat32m2x2_t test_vluxseg2ei8_v_f32m2x2_tum(vbool16_t vm, vfloat32m2x2_t vd,
+                                              const float *rs1, vuint8mf2_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg2ei8_v_f32m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32m4x2_t test_vluxseg2ei8_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat32m4x2_t test_vluxseg2ei8_v_f32m4x2_tum(vbool8_t vm, vfloat32m4x2_t vd,
+                                              const float *rs1, vuint8m1_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg2ei8_v_f32m4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x2_t test_vluxseg2ei8_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x2_t test_vluxseg2ei8_v_f64m1x2_tum(vbool64_t vm, vfloat64m1x2_t vd,
+                                              const double *rs1,
+                                              vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f64m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat64m2x2_t test_vluxseg2ei8_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat64m2x2_t test_vluxseg2ei8_v_f64m2x2_tum(vbool32_t vm, vfloat64m2x2_t vd,
+                                              const double *rs1,
+                                              vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f64m2x2_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat64m4x2_t test_vluxseg2ei8_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat64m4x2_t test_vluxseg2ei8_v_f64m4x2_tum(vbool16_t vm, vfloat64m4x2_t vd,
+                                              const double *rs1,
+                                              vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_f64m4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x2_t test_vluxseg2ei8_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x2_t test_vluxseg2ei8_v_i8mf8x2_tum(vbool64_t vm, vint8mf8x2_t vd,
+                                            const int8_t *rs1, vuint8mf8_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei8_v_i8mf8x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x2_t test_vluxseg2ei8_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x2_t test_vluxseg2ei8_v_i8mf4x2_tum(vbool32_t vm, vint8mf4x2_t vd,
+                                            const int8_t *rs1, vuint8mf4_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei8_v_i8mf4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x2_t test_vluxseg2ei8_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x2_t test_vluxseg2ei8_v_i8mf2x2_tum(vbool16_t vm, vint8mf2x2_t vd,
+                                            const int8_t *rs1, vuint8mf2_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei8_v_i8mf2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint8m1x2_t test_vluxseg2ei8_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x2_t test_vluxseg2ei8_v_i8m1x2_tum(vbool8_t vm, vint8m1x2_t vd,
+                                          const int8_t *rs1, vuint8m1_t rs2,
+                                          size_t vl) {
   return __riscv_vluxseg2ei8_v_i8m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint8m2x2_t test_vluxseg2ei8_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) {
+vint8m2x2_t test_vluxseg2ei8_v_i8m2x2_tum(vbool4_t vm, vint8m2x2_t vd,
+                                          const int8_t *rs1, vuint8m2_t rs2,
+                                          size_t vl) {
   return __riscv_vluxseg2ei8_v_i8m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint8m4x2_t test_vluxseg2ei8_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) {
+vint8m4x2_t test_vluxseg2ei8_v_i8m4x2_tum(vbool2_t vm, vint8m4x2_t vd,
+                                          const int8_t *rs1, vuint8m4_t rs2,
+                                          size_t vl) {
   return __riscv_vluxseg2ei8_v_i8m4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x2_t test_vluxseg2ei8_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x2_t test_vluxseg2ei8_v_i16mf4x2_tum(vbool64_t vm, vint16mf4x2_t vd,
+                                              const int16_t *rs1,
+                                              vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i16mf4x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x2_t test_vluxseg2ei8_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x2_t test_vluxseg2ei8_v_i16mf2x2_tum(vbool32_t vm, vint16mf2x2_t vd,
+                                              const int16_t *rs1,
+                                              vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_i16mf2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint16m1x2_t test_vluxseg2ei8_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x2_t test_vluxseg2ei8_v_i16m1x2_tum(vbool16_t vm, vint16m1x2_t vd,
+                                            const int16_t *rs1, vuint8mf2_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei8_v_i16m1x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint16m2x2_t test_vluxseg2ei8_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint16m2x2_t test_vluxseg2ei8_v_i16m2x2_tum(vbool8_t vm, vint16m2x2_t vd,
+                                            const int16_t *rs1, vuint8m1_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg2ei8_v_i16m2x2_tum(vm, vd, rs1, rs2, vl);
 }

-vint16m4x2_t test_vluxseg2ei8_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) {
vint16m4x2_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4x2_t test_vluxseg2ei8_v_i16m4x2_tum(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i16m4x2_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vluxseg2ei8_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei8_v_i32mf2x2_tum(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei8_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei8_v_i32m1x2_tum(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i32m1x2_tum(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vluxseg2ei8_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei8_v_i32m2x2_tum(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i32m2x2_tum(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei8_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei8_v_i32m4x2_tum(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i32m4x2_tum(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei8_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei8_v_i64m1x2_tum(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i64m1x2_tum(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vluxseg2ei8_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei8_v_i64m2x2_tum(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i64m2x2_tum(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei8_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei8_v_i64m4x2_tum(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i64m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei8_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei8_v_u8mf8x2_tum(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u8mf8x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei8_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei8_v_u8mf4x2_tum(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u8mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei8_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei8_v_u8mf2x2_tum(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u8mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei8_v_u8m1x2_tum(vbool8_t vm, 
vuint8m1x2_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei8_v_u8m1x2_tum(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vluxseg2ei8_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x2_t test_vluxseg2ei8_v_u8m2x2_tum(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vluxseg2ei8_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4x2_t test_vluxseg2ei8_v_u8m4x2_tum(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei8_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei8_v_u16mf4x2_tum(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vluxseg2ei8_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei8_v_u16mf2x2_tum(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei8_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei8_v_u16m1x2_tum(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei8_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x2_t test_vluxseg2ei8_v_u16m2x2_tum(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vluxseg2ei8_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4x2_t test_vluxseg2ei8_v_u16m4x2_tum(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei8_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei8_v_u32mf2x2_tum(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u32mf2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei8_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei8_v_u32m1x2_tum(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u32m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vluxseg2ei8_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei8_v_u32m2x2_tum(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u32m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t 
test_vluxseg2ei8_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei8_v_u32m4x2_tum(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u32m4x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei8_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei8_v_u64m1x2_tum(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u64m1x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei8_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x2_t test_vluxseg2ei8_v_u64m2x2_tum(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u64m2x2_tum(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vluxseg2ei8_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint64m4x2_t test_vluxseg2ei8_v_u64m4x2_tum(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u64m4x2_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vluxseg2ei8_v_f16mf4x2_tumu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei8_v_f16mf4x2_tumu(vbool64_t vm, + vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei8_v_f16mf2x2_tumu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei8_v_f16mf2x2_tumu(vbool32_t vm, + vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t test_vluxseg2ei8_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei8_v_f16m1x2_tumu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei8_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei8_v_f16m2x2_tumu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vluxseg2ei8_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) { +vfloat16m4x2_t test_vluxseg2ei8_v_f16m4x2_tumu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei8_v_f32mf2x2_tumu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x2_t test_vluxseg2ei8_v_f32mf2x2_tumu(vbool64_t vm, + vfloat32mf2x2_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vluxseg2ei8_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x2_t test_vluxseg2ei8_v_f32m1x2_tumu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, + 
vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vluxseg2ei8_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x2_t test_vluxseg2ei8_v_f32m2x2_tumu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vluxseg2ei8_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) { +vfloat32m4x2_t test_vluxseg2ei8_v_f32m4x2_tumu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_f32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vluxseg2ei8_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x2_t test_vluxseg2ei8_v_f64m1x2_tumu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vluxseg2ei8_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x2_t test_vluxseg2ei8_v_f64m2x2_tumu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vluxseg2ei8_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat64m4x2_t test_vluxseg2ei8_v_f64m4x2_tumu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vluxseg2ei8_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x2_t test_vluxseg2ei8_v_i8mf8x2_tumu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x2_t test_vluxseg2ei8_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x2_t test_vluxseg2ei8_v_i8mf4x2_tumu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vluxseg2ei8_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x2_t test_vluxseg2ei8_v_i8mf2x2_tumu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vluxseg2ei8_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x2_t test_vluxseg2ei8_v_i8m1x2_tumu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vluxseg2ei8_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x2_t test_vluxseg2ei8_v_i8m2x2_tumu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vluxseg2ei8_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) { +vint8m4x2_t test_vluxseg2ei8_v_i8m4x2_tumu(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, 
vuint8m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vluxseg2ei8_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x2_t test_vluxseg2ei8_v_i16mf4x2_tumu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vluxseg2ei8_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x2_t test_vluxseg2ei8_v_i16mf2x2_tumu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vluxseg2ei8_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x2_t test_vluxseg2ei8_v_i16m1x2_tumu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vluxseg2ei8_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x2_t test_vluxseg2ei8_v_i16m2x2_tumu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vluxseg2ei8_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4x2_t test_vluxseg2ei8_v_i16m4x2_tumu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vluxseg2ei8_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei8_v_i32mf2x2_tumu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei8_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei8_v_i32m1x2_tumu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vluxseg2ei8_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei8_v_i32m2x2_tumu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei8_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei8_v_i32m4x2_tumu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei8_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei8_v_i64m1x2_tumu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vluxseg2ei8_v_i64m2x2_tumu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei8_v_i64m2x2_tumu(vbool32_t vm, 
vint64m2x2_t vd, + const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei8_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei8_v_i64m4x2_tumu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei8_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei8_v_u8mf8x2_tumu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u8mf8x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei8_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei8_v_u8mf4x2_tumu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u8mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei8_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei8_v_u8mf2x2_tumu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u8mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei8_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei8_v_u8m1x2_tumu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vluxseg2ei8_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x2_t test_vluxseg2ei8_v_u8m2x2_tumu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vluxseg2ei8_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { +vuint8m4x2_t test_vluxseg2ei8_v_u8m4x2_tumu(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei8_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei8_v_u16mf4x2_tumu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vluxseg2ei8_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei8_v_u16mf2x2_tumu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei8_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei8_v_u16m1x2_tumu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei8_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x2_t 
test_vluxseg2ei8_v_u16m2x2_tumu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vluxseg2ei8_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4x2_t test_vluxseg2ei8_v_u16m4x2_tumu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei8_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei8_v_u32mf2x2_tumu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u32mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei8_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei8_v_u32m1x2_tumu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u32m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vluxseg2ei8_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei8_v_u32m2x2_tumu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u32m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vluxseg2ei8_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei8_v_u32m4x2_tumu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u32m4x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei8_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei8_v_u64m1x2_tumu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u64m1x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei8_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x2_t test_vluxseg2ei8_v_u64m2x2_tumu(vbool32_t vm, vuint64m2x2_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u64m2x2_tumu(vm, vd, rs1, rs2, vl); } -vuint64m4x2_t test_vluxseg2ei8_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint64m4x2_t test_vluxseg2ei8_v_u64m4x2_tumu(vbool16_t vm, vuint64m4x2_t vd, + const uint64_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u64m4x2_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x2_t test_vluxseg2ei8_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x2_t test_vluxseg2ei8_v_f16mf4x2_mu(vbool64_t vm, vfloat16mf4x2_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x2_t test_vluxseg2ei8_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x2_t test_vluxseg2ei8_v_f16mf2x2_mu(vbool32_t vm, vfloat16mf2x2_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x2_t 
test_vluxseg2ei8_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x2_t test_vluxseg2ei8_v_f16m1x2_mu(vbool16_t vm, vfloat16m1x2_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f16m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x2_t test_vluxseg2ei8_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x2_t test_vluxseg2ei8_v_f16m2x2_mu(vbool8_t vm, vfloat16m2x2_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f16m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat16m4x2_t test_vluxseg2ei8_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, const _Float16 *rs1, vuint8m2_t rs2, size_t vl) { +vfloat16m4x2_t test_vluxseg2ei8_v_f16m4x2_mu(vbool4_t vm, vfloat16m4x2_t vd, + const _Float16 *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f16m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x2_t test_vluxseg2ei8_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x2_t test_vluxseg2ei8_v_f32mf2x2_mu(vbool64_t vm, vfloat32mf2x2_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_f32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x2_t test_vluxseg2ei8_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x2_t test_vluxseg2ei8_v_f32m1x2_mu(vbool32_t vm, vfloat32m1x2_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_f32m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x2_t test_vluxseg2ei8_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x2_t test_vluxseg2ei8_v_f32m2x2_mu(vbool16_t vm, vfloat32m2x2_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_f32m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat32m4x2_t test_vluxseg2ei8_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, const float *rs1, vuint8m1_t rs2, size_t vl) { +vfloat32m4x2_t test_vluxseg2ei8_v_f32m4x2_mu(vbool8_t vm, vfloat32m4x2_t vd, + const float *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_f32m4x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x2_t test_vluxseg2ei8_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x2_t test_vluxseg2ei8_v_f64m1x2_mu(vbool64_t vm, vfloat64m1x2_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_f64m1x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x2_t test_vluxseg2ei8_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x2_t test_vluxseg2ei8_v_f64m2x2_mu(vbool32_t vm, vfloat64m2x2_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_f64m2x2_mu(vm, vd, rs1, rs2, vl); } -vfloat64m4x2_t test_vluxseg2ei8_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, const double *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat64m4x2_t test_vluxseg2ei8_v_f64m4x2_mu(vbool16_t vm, vfloat64m4x2_t vd, + const double *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_f64m4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x2_t test_vluxseg2ei8_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x2_t test_vluxseg2ei8_v_i8mf8x2_mu(vbool64_t vm, vint8mf8x2_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8mf8x2_mu(vm, vd, rs1, rs2, vl); } 
-vint8mf4x2_t test_vluxseg2ei8_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x2_t test_vluxseg2ei8_v_i8mf4x2_mu(vbool32_t vm, vint8mf4x2_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x2_t test_vluxseg2ei8_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x2_t test_vluxseg2ei8_v_i8mf2x2_mu(vbool16_t vm, vint8mf2x2_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m1x2_t test_vluxseg2ei8_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x2_t test_vluxseg2ei8_v_i8m1x2_mu(vbool8_t vm, vint8m1x2_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8m1x2_mu(vm, vd, rs1, rs2, vl); } -vint8m2x2_t test_vluxseg2ei8_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x2_t test_vluxseg2ei8_v_i8m2x2_mu(vbool4_t vm, vint8m2x2_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8m2x2_mu(vm, vd, rs1, rs2, vl); } -vint8m4x2_t test_vluxseg2ei8_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, const int8_t *rs1, vuint8m4_t rs2, size_t vl) { +vint8m4x2_t test_vluxseg2ei8_v_i8m4x2_mu(vbool2_t vm, vint8m4x2_t vd, + const int8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i8m4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x2_t test_vluxseg2ei8_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x2_t test_vluxseg2ei8_v_i16mf4x2_mu(vbool64_t vm, vint16mf4x2_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x2_t test_vluxseg2ei8_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x2_t test_vluxseg2ei8_v_i16mf2x2_mu(vbool32_t vm, vint16mf2x2_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m1x2_t test_vluxseg2ei8_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x2_t test_vluxseg2ei8_v_i16m1x2_mu(vbool16_t vm, vint16m1x2_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i16m1x2_mu(vm, vd, rs1, rs2, vl); } -vint16m2x2_t test_vluxseg2ei8_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x2_t test_vluxseg2ei8_v_i16m2x2_mu(vbool8_t vm, vint16m2x2_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i16m2x2_mu(vm, vd, rs1, rs2, vl); } -vint16m4x2_t test_vluxseg2ei8_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, const int16_t *rs1, vuint8m2_t rs2, size_t vl) { +vint16m4x2_t test_vluxseg2ei8_v_i16m4x2_mu(vbool4_t vm, vint16m4x2_t vd, + const int16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i16m4x2_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x2_t test_vluxseg2ei8_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x2_t test_vluxseg2ei8_v_i32mf2x2_mu(vbool64_t vm, vint32mf2x2_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_i32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m1x2_t test_vluxseg2ei8_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, 
const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x2_t test_vluxseg2ei8_v_i32m1x2_mu(vbool32_t vm, vint32m1x2_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i32m1x2_mu(vm, vd, rs1, rs2, vl); } -vint32m2x2_t test_vluxseg2ei8_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x2_t test_vluxseg2ei8_v_i32m2x2_mu(vbool16_t vm, vint32m2x2_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i32m2x2_mu(vm, vd, rs1, rs2, vl); } -vint32m4x2_t test_vluxseg2ei8_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, const int32_t *rs1, vuint8m1_t rs2, size_t vl) { +vint32m4x2_t test_vluxseg2ei8_v_i32m4x2_mu(vbool8_t vm, vint32m4x2_t vd, + const int32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i32m4x2_mu(vm, vd, rs1, rs2, vl); } -vint64m1x2_t test_vluxseg2ei8_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x2_t test_vluxseg2ei8_v_i64m1x2_mu(vbool64_t vm, vint64m1x2_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i64m1x2_mu(vm, vd, rs1, rs2, vl); } -vint64m2x2_t test_vluxseg2ei8_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x2_t test_vluxseg2ei8_v_i64m2x2_mu(vbool32_t vm, vint64m2x2_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i64m2x2_mu(vm, vd, rs1, rs2, vl); } -vint64m4x2_t test_vluxseg2ei8_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, const int64_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint64m4x2_t test_vluxseg2ei8_v_i64m4x2_mu(vbool16_t vm, vint64m4x2_t vd, + const int64_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_i64m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x2_t test_vluxseg2ei8_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x2_t test_vluxseg2ei8_v_u8mf8x2_mu(vbool64_t vm, vuint8mf8x2_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8mf8x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x2_t test_vluxseg2ei8_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x2_t test_vluxseg2ei8_v_u8mf4x2_mu(vbool32_t vm, vuint8mf4x2_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x2_t test_vluxseg2ei8_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x2_t test_vluxseg2ei8_v_u8mf2x2_mu(vbool16_t vm, vuint8mf2x2_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x2_t test_vluxseg2ei8_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x2_t test_vluxseg2ei8_v_u8m1x2_mu(vbool8_t vm, vuint8m1x2_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x2_t test_vluxseg2ei8_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x2_t test_vluxseg2ei8_v_u8m2x2_mu(vbool4_t vm, vuint8m2x2_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint8m4x2_t test_vluxseg2ei8_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, const uint8_t *rs1, vuint8m4_t rs2, size_t vl) { 
+vuint8m4x2_t test_vluxseg2ei8_v_u8m4x2_mu(vbool2_t vm, vuint8m4x2_t vd, + const uint8_t *rs1, vuint8m4_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u8m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x2_t test_vluxseg2ei8_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x2_t test_vluxseg2ei8_v_u16mf4x2_mu(vbool64_t vm, vuint16mf4x2_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x2_t test_vluxseg2ei8_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x2_t test_vluxseg2ei8_v_u16mf2x2_mu(vbool32_t vm, vuint16mf2x2_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x2_t test_vluxseg2ei8_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x2_t test_vluxseg2ei8_v_u16m1x2_mu(vbool16_t vm, vuint16m1x2_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u16m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x2_t test_vluxseg2ei8_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x2_t test_vluxseg2ei8_v_u16m2x2_mu(vbool8_t vm, vuint16m2x2_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u16m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint16m4x2_t test_vluxseg2ei8_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, const uint16_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint16m4x2_t test_vluxseg2ei8_v_u16m4x2_mu(vbool4_t vm, vuint16m4x2_t vd, + const uint16_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u16m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x2_t test_vluxseg2ei8_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x2_t test_vluxseg2ei8_v_u32mf2x2_mu(vbool64_t vm, vuint32mf2x2_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u32mf2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x2_t test_vluxseg2ei8_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x2_t test_vluxseg2ei8_v_u32m1x2_mu(vbool32_t vm, vuint32m1x2_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u32m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x2_t test_vluxseg2ei8_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x2_t test_vluxseg2ei8_v_u32m2x2_mu(vbool16_t vm, vuint32m2x2_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u32m2x2_mu(vm, vd, rs1, rs2, vl); } -vuint32m4x2_t test_vluxseg2ei8_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, const uint32_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint32m4x2_t test_vluxseg2ei8_v_u32m4x2_mu(vbool8_t vm, vuint32m4x2_t vd, + const uint32_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg2ei8_v_u32m4x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x2_t test_vluxseg2ei8_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x2_t test_vluxseg2ei8_v_u64m1x2_mu(vbool64_t vm, vuint64m1x2_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg2ei8_v_u64m1x2_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x2_t test_vluxseg2ei8_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd, const uint64_t *rs1, 
vuint8mf4_t rs2, size_t vl) {
+vuint64m2x2_t test_vluxseg2ei8_v_u64m2x2_mu(vbool32_t vm, vuint64m2x2_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u64m2x2_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m4x2_t test_vluxseg2ei8_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd, const uint64_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint64m4x2_t test_vluxseg2ei8_v_u64m4x2_mu(vbool16_t vm, vuint64m4x2_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg2ei8_v_u64m4x2_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei16.c
index b263fa23c..b8fa21287 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei16.c
@@ -6,594 +6,889 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4x3_t test_vluxseg3ei16_v_f16mf4x3_tu(vfloat16mf4x3_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x3_t test_vluxseg3ei16_v_f16mf4x3_tu(vfloat16mf4x3_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg3ei16_v_f16mf4x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x3_t test_vluxseg3ei16_v_f16mf2x3_tu(vfloat16mf2x3_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x3_t test_vluxseg3ei16_v_f16mf2x3_tu(vfloat16mf2x3_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg3ei16_v_f16mf2x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1x3_t test_vluxseg3ei16_v_f16m1x3_tu(vfloat16m1x3_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x3_t test_vluxseg3ei16_v_f16m1x3_tu(vfloat16m1x3_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg3ei16_v_f16m1x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m2x3_t test_vluxseg3ei16_v_f16m2x3_tu(vfloat16m2x3_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) {
+vfloat16m2x3_t test_vluxseg3ei16_v_f16m2x3_tu(vfloat16m2x3_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg3ei16_v_f16m2x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x3_t test_vluxseg3ei16_v_f32mf2x3_tu(vfloat32mf2x3_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x3_t test_vluxseg3ei16_v_f32mf2x3_tu(vfloat32mf2x3_t vd,
+                                                const float *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg3ei16_v_f32mf2x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m1x3_t test_vluxseg3ei16_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x3_t test_vluxseg3ei16_v_f32m1x3_tu(vfloat32m1x3_t vd,
+                                              const float *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg3ei16_v_f32m1x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m2x3_t test_vluxseg3ei16_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat32m2x3_t test_vluxseg3ei16_v_f32m2x3_tu(vfloat32m2x3_t vd,
+                                              const float *rs1, vuint16m1_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg3ei16_v_f32m2x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m1x3_t test_vluxseg3ei16_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x3_t test_vluxseg3ei16_v_f64m1x3_tu(vfloat64m1x3_t vd,
+                                              const double *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg3ei16_v_f64m1x3_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m2x3_t test_vluxseg3ei16_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat64m2x3_t test_vluxseg3ei16_v_f64m2x3_tu(vfloat64m2x3_t vd,
+                                              const double *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return
__riscv_vluxseg3ei16_v_f64m2x3_tu(vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei16_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei16_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i8mf8x3_tu(vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei16_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei16_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i8mf4x3_tu(vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei16_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei16_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i8mf2x3_tu(vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei16_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei16_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i8m1x3_tu(vd, rs1, rs2, vl); } -vint8m2x3_t test_vluxseg3ei16_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei16_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i8m2x3_tu(vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei16_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei16_v_i16mf4x3_tu(vint16mf4x3_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16mf4x3_tu(vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei16_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei16_v_i16mf2x3_tu(vint16mf2x3_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16mf2x3_tu(vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei16_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei16_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16m1x3_tu(vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei16_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei16_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16m2x3_tu(vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei16_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei16_v_i32mf2x3_tu(vint32mf2x3_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i32mf2x3_tu(vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei16_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei16_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i32m1x3_tu(vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei16_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei16_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i32m2x3_tu(vd, rs1, rs2, vl); } -vint64m1x3_t 
test_vluxseg3ei16_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei16_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i64m1x3_tu(vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei16_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei16_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i64m2x3_tu(vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei16_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei16_v_u8mf8x3_tu(vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8mf8x3_tu(vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei16_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei16_v_u8mf4x3_tu(vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8mf4x3_tu(vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei16_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei16_v_u8mf2x3_tu(vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8mf2x3_tu(vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei16_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei16_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8m1x3_tu(vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei16_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei16_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8m2x3_tu(vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei16_v_u16mf4x3_tu(vuint16mf4x3_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei16_v_u16mf4x3_tu(vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16mf4x3_tu(vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei16_v_u16mf2x3_tu(vuint16mf2x3_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei16_v_u16mf2x3_tu(vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16mf2x3_tu(vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei16_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei16_v_u16m1x3_tu(vuint16m1x3_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16m1x3_tu(vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei16_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei16_v_u16m2x3_tu(vuint16m2x3_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16m2x3_tu(vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei16_v_u32mf2x3_tu(vuint32mf2x3_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei16_v_u32mf2x3_tu(vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32mf2x3_tu(vd, rs1, rs2, vl); } -vuint32m1x3_t 
test_vluxseg3ei16_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei16_v_u32m1x3_tu(vuint32m1x3_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32m1x3_tu(vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei16_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei16_v_u32m2x3_tu(vuint32m2x3_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32m2x3_tu(vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei16_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei16_v_u64m1x3_tu(vuint64m1x3_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u64m1x3_tu(vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei16_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei16_v_u64m2x3_tu(vuint64m2x3_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u64m2x3_tu(vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei16_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei16_v_f16mf4x3_tum(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei16_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei16_v_f16mf2x3_tum(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei16_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei16_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei16_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei16_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei16_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei16_v_f32mf2x3_tum(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei16_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei16_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f32m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei16_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei16_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f32m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t 
test_vluxseg3ei16_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei16_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f64m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei16_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei16_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f64m2x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei16_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei16_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei16_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei16_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei16_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei16_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei16_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei16_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i8m1x3_tum(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vluxseg3ei16_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei16_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i8m2x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei16_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei16_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei16_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei16_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei16_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei16_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16m1x3_tum(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei16_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei16_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16m2x3_tum(vm, vd, rs1, 
rs2, vl); } -vint32mf2x3_t test_vluxseg3ei16_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei16_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei16_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei16_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i32m1x3_tum(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei16_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei16_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i32m2x3_tum(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei16_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei16_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i64m1x3_tum(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei16_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei16_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i64m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei16_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei16_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei16_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei16_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei16_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei16_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei16_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei16_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_u8m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei16_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei16_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_u8m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei16_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei16_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { 
return __riscv_vluxseg3ei16_v_u16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei16_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei16_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei16_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei16_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei16_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei16_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei16_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei16_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei16_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei16_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei16_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei16_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei16_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei16_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u64m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei16_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei16_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u64m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei16_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei16_v_f16mf4x3_tumu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei16_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei16_v_f16mf2x3_tumu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei16_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint16m1_t 
rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei16_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei16_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei16_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei16_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei16_v_f32mf2x3_tumu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei16_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei16_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei16_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei16_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei16_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei16_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei16_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei16_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei16_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei16_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei16_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei16_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei16_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei16_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei16_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei16_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t 
test_vluxseg3ei16_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei16_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei16_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei16_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei16_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei16_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei16_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei16_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei16_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei16_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei16_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei16_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei16_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei16_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei16_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei16_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei16_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei16_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei16_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei16_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei16_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei16_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { 
return __riscv_vluxseg3ei16_v_u8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei16_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei16_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei16_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei16_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei16_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei16_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei16_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei16_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei16_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei16_v_u16mf4x3_tumu(vbool64_t vm, + vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei16_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei16_v_u16mf2x3_tumu(vbool32_t vm, + vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei16_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei16_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei16_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei16_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei16_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei16_v_u32mf2x3_tumu(vbool64_t vm, + vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei16_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei16_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei16_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { 
+vuint32m2x3_t test_vluxseg3ei16_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei16_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei16_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei16_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei16_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei16_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei16_v_f16mf4x3_mu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei16_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei16_v_f16mf2x3_mu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei16_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei16_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei16_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei16_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f16m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei16_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei16_v_f32mf2x3_mu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei16_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei16_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f32m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei16_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei16_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_f32m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei16_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei16_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f64m1x3_mu(vm, vd, rs1, rs2, vl); } 
-vfloat64m2x3_t test_vluxseg3ei16_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei16_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_f64m2x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei16_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei16_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei16_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei16_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei16_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei16_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei16_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei16_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i8m1x3_mu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vluxseg3ei16_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei16_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i8m2x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei16_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei16_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei16_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei16_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei16_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei16_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i16m1x3_mu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei16_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei16_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i16m2x3_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei16_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei16_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i32mf2x3_mu(vm, vd, rs1, rs2, vl); } 
-vint32m1x3_t test_vluxseg3ei16_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei16_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i32m1x3_mu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei16_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei16_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_i32m2x3_mu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei16_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei16_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i64m1x3_mu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei16_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei16_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_i64m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei16_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei16_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei16_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei16_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei16_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei16_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei16_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei16_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_u8m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei16_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei16_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_u8m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei16_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei16_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei16_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei16_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return 
__riscv_vluxseg3ei16_v_u16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei16_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei16_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei16_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei16_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u16m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei16_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei16_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei16_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei16_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei16_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei16_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u32m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei16_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei16_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u64m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei16_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei16_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_u64m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei32.c index 6a661a4fc..e4df9e5af 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei32.c @@ -6,594 +6,889 @@ #include <riscv_vector.h> -vfloat16mf4x3_t test_vluxseg3ei32_v_f16mf4x3_tu(vfloat16mf4x3_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei32_v_f16mf4x3_tu(vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16mf4x3_tu(vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei32_v_f16mf2x3_tu(vfloat16mf2x3_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei32_v_f16mf2x3_tu(vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16mf2x3_tu(vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei32_v_f16m1x3_tu(vfloat16m1x3_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei32_v_f16m1x3_tu(vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return
__riscv_vluxseg3ei32_v_f16m1x3_tu(vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei32_v_f16m2x3_tu(vfloat16m2x3_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei32_v_f16m2x3_tu(vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16m2x3_tu(vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei32_v_f32mf2x3_tu(vfloat32mf2x3_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei32_v_f32mf2x3_tu(vfloat32mf2x3_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f32mf2x3_tu(vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei32_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei32_v_f32m1x3_tu(vfloat32m1x3_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_f32m1x3_tu(vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei32_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei32_v_f32m2x3_tu(vfloat32m2x3_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_f32m2x3_tu(vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei32_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei32_v_f64m1x3_tu(vfloat64m1x3_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f64m1x3_tu(vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei32_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei32_v_f64m2x3_tu(vfloat64m2x3_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f64m2x3_tu(vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei32_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei32_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i8mf8x3_tu(vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei32_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei32_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i8mf4x3_tu(vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei32_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei32_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i8mf2x3_tu(vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei32_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei32_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i8m1x3_tu(vd, rs1, rs2, vl); } -vint8m2x3_t test_vluxseg3ei32_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei32_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i8m2x3_tu(vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei32_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei32_v_i16mf4x3_tu(vint16mf4x3_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16mf4x3_tu(vd, rs1, rs2, vl); 
} -vint16mf2x3_t test_vluxseg3ei32_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei32_v_i16mf2x3_tu(vint16mf2x3_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16mf2x3_tu(vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei32_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei32_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16m1x3_tu(vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei32_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei32_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16m2x3_tu(vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei32_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei32_v_i32mf2x3_tu(vint32mf2x3_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i32mf2x3_tu(vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei32_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei32_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i32m1x3_tu(vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei32_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei32_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i32m2x3_tu(vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei32_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei32_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i64m1x3_tu(vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei32_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei32_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i64m2x3_tu(vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei32_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei32_v_u8mf8x3_tu(vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf8x3_tu(vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei32_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei32_v_u8mf4x3_tu(vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf4x3_tu(vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei32_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei32_v_u8mf2x3_tu(vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf2x3_tu(vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei32_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei32_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8m1x3_tu(vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei32_v_u8m2x3_tu(vuint8m2x3_t 
vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei32_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8m2x3_tu(vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei32_v_u16mf4x3_tu(vuint16mf4x3_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei32_v_u16mf4x3_tu(vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16mf4x3_tu(vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei32_v_u16mf2x3_tu(vuint16mf2x3_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei32_v_u16mf2x3_tu(vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16mf2x3_tu(vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei32_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei32_v_u16m1x3_tu(vuint16m1x3_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16m1x3_tu(vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei32_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei32_v_u16m2x3_tu(vuint16m2x3_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16m2x3_tu(vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei32_v_u32mf2x3_tu(vuint32mf2x3_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei32_v_u32mf2x3_tu(vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32mf2x3_tu(vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei32_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei32_v_u32m1x3_tu(vuint32m1x3_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32m1x3_tu(vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei32_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei32_v_u32m2x3_tu(vuint32m2x3_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32m2x3_tu(vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei32_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei32_v_u64m1x3_tu(vuint64m1x3_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u64m1x3_tu(vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei32_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei32_v_u64m2x3_tu(vuint64m2x3_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u64m2x3_tu(vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei32_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei32_v_f16mf4x3_tum(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei32_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei32_v_f16mf2x3_tum(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return 
__riscv_vluxseg3ei32_v_f16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei32_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei32_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei32_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei32_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei32_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei32_v_f32mf2x3_tum(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei32_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei32_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f32m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei32_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei32_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f32m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei32_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei32_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f64m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei32_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei32_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f64m2x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei32_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei32_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei32_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei32_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei32_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei32_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei32_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei32_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t 
vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i8m1x3_tum(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vluxseg3ei32_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei32_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i8m2x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei32_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei32_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei32_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei32_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei32_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei32_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16m1x3_tum(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei32_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei32_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16m2x3_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei32_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei32_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei32_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei32_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i32m1x3_tum(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei32_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei32_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i32m2x3_tum(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei32_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei32_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i64m1x3_tum(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei32_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei32_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i64m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei32_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x3_t 
test_vluxseg3ei32_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei32_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei32_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei32_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei32_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei32_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei32_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_u8m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei32_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei32_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_u8m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei32_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei32_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei32_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei32_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei32_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei32_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei32_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei32_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei32_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei32_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei32_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei32_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei32_v_u32m2x3_tum(vbool16_t vm, 
vuint32m2x3_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei32_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei32_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei32_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u64m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei32_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei32_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u64m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei32_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei32_v_f16mf4x3_tumu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei32_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei32_v_f16mf2x3_tumu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei32_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei32_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei32_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei32_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei32_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei32_v_f32mf2x3_tumu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei32_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei32_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei32_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei32_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei32_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei32_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + 
vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei32_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei32_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei32_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei32_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei32_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei32_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei32_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei32_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei32_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei32_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vluxseg3ei32_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei32_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei32_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei32_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei32_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei32_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei32_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei32_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei32_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei32_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei32_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x3_t 
test_vluxseg3ei32_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei32_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei32_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei32_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei32_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei32_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei32_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei32_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei32_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei32_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei32_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei32_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei32_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei32_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei32_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei32_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei32_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei32_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei32_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei32_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei32_v_u16mf4x3_tumu(vbool64_t vm, + vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei32_v_u16mf2x3_tumu(vbool32_t 
vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei32_v_u16mf2x3_tumu(vbool32_t vm, + vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei32_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei32_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei32_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei32_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei32_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei32_v_u32mf2x3_tumu(vbool64_t vm, + vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei32_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei32_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei32_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei32_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei32_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei32_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei32_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei32_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei32_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei32_v_f16mf4x3_mu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei32_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei32_v_f16mf2x3_mu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei32_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei32_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + 
vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei32_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei32_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f16m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei32_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei32_v_f32mf2x3_mu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei32_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei32_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_f32m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei32_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei32_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_f32m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei32_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei32_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f64m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei32_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei32_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_f64m2x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei32_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei32_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei32_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei32_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei32_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei32_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei32_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei32_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i8m1x3_mu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vluxseg3ei32_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei32_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, + const int8_t 
*rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i8m2x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei32_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei32_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei32_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei32_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei32_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei32_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i16m1x3_mu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei32_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei32_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i16m2x3_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei32_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei32_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei32_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei32_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i32m1x3_mu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei32_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei32_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i32m2x3_mu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei32_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei32_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_i64m1x3_mu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei32_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei32_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_i64m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei32_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei32_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei32_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei32_v_u8mf4x3_mu(vbool32_t vm, 
vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei32_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei32_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei32_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei32_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_u8m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei32_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei32_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg3ei32_v_u8m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei32_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei32_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei32_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei32_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei32_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei32_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei32_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei32_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u16m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei32_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei32_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei32_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei32_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei32_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei32_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u32m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei32_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { 
+vuint64m1x3_t test_vluxseg3ei32_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u64m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei32_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei32_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg3ei32_v_u64m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei64.c index e54d64b64..df1e65412 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei64.c @@ -6,562 +6,843 @@ #include -vfloat16mf4x3_t test_vluxseg3ei64_v_f16mf4x3_tu(vfloat16mf4x3_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei64_v_f16mf4x3_tu(vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16mf4x3_tu(vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei64_v_f16mf2x3_tu(vfloat16mf2x3_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei64_v_f16mf2x3_tu(vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16mf2x3_tu(vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei64_v_f16m1x3_tu(vfloat16m1x3_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei64_v_f16m1x3_tu(vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16m1x3_tu(vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei64_v_f16m2x3_tu(vfloat16m2x3_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei64_v_f16m2x3_tu(vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16m2x3_tu(vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei64_v_f32mf2x3_tu(vfloat32mf2x3_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei64_v_f32mf2x3_tu(vfloat32mf2x3_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f32mf2x3_tu(vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei64_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei64_v_f32m1x3_tu(vfloat32m1x3_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_f32m1x3_tu(vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei64_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei64_v_f32m2x3_tu(vfloat32m2x3_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_f32m2x3_tu(vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei64_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei64_v_f64m1x3_tu(vfloat64m1x3_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f64m1x3_tu(vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei64_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei64_v_f64m2x3_tu(vfloat64m2x3_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f64m2x3_tu(vd, rs1, rs2, vl); } 
-vint8mf8x3_t test_vluxseg3ei64_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei64_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i8mf8x3_tu(vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei64_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei64_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i8mf4x3_tu(vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei64_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei64_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i8mf2x3_tu(vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei64_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei64_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i8m1x3_tu(vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei64_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei64_v_i16mf4x3_tu(vint16mf4x3_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16mf4x3_tu(vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei64_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei64_v_i16mf2x3_tu(vint16mf2x3_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16mf2x3_tu(vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei64_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei64_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16m1x3_tu(vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei64_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei64_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16m2x3_tu(vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei64_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei64_v_i32mf2x3_tu(vint32mf2x3_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i32mf2x3_tu(vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei64_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei64_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i32m1x3_tu(vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei64_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei64_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i32m2x3_tu(vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei64_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei64_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i64m1x3_tu(vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei64_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, 
vuint64m2_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei64_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i64m2x3_tu(vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei64_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei64_v_u8mf8x3_tu(vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf8x3_tu(vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei64_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei64_v_u8mf4x3_tu(vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf4x3_tu(vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei64_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei64_v_u8mf2x3_tu(vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf2x3_tu(vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei64_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei64_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8m1x3_tu(vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei64_v_u16mf4x3_tu(vuint16mf4x3_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei64_v_u16mf4x3_tu(vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16mf4x3_tu(vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei64_v_u16mf2x3_tu(vuint16mf2x3_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei64_v_u16mf2x3_tu(vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16mf2x3_tu(vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei64_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei64_v_u16m1x3_tu(vuint16m1x3_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16m1x3_tu(vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei64_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei64_v_u16m2x3_tu(vuint16m2x3_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16m2x3_tu(vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei64_v_u32mf2x3_tu(vuint32mf2x3_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei64_v_u32mf2x3_tu(vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32mf2x3_tu(vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei64_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei64_v_u32m1x3_tu(vuint32m1x3_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32m1x3_tu(vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei64_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei64_v_u32m2x3_tu(vuint32m2x3_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32m2x3_tu(vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei64_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1, 
vuint64m1_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei64_v_u64m1x3_tu(vuint64m1x3_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u64m1x3_tu(vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei64_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei64_v_u64m2x3_tu(vuint64m2x3_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u64m2x3_tu(vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei64_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei64_v_f16mf4x3_tum(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei64_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei64_v_f16mf2x3_tum(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei64_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei64_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei64_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei64_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei64_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei64_v_f32mf2x3_tum(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei64_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei64_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f32m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei64_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei64_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f32m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei64_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei64_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f64m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei64_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei64_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f64m2x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t 
test_vluxseg3ei64_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei64_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei64_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei64_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei64_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei64_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei64_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei64_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i8m1x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei64_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei64_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei64_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei64_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei64_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei64_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16m1x3_tum(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei64_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei64_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16m2x3_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei64_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei64_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei64_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei64_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i32m1x3_tum(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei64_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei64_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i32m2x3_tum(vm, vd, rs1, 
rs2, vl); } -vint64m1x3_t test_vluxseg3ei64_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei64_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i64m1x3_tum(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei64_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei64_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i64m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei64_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei64_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei64_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei64_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei64_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei64_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei64_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei64_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_u8m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei64_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei64_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei64_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei64_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei64_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei64_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei64_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei64_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei64_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei64_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + 
vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei64_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei64_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei64_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei64_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei64_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei64_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u64m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei64_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei64_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u64m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei64_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei64_v_f16mf4x3_tumu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei64_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei64_v_f16mf2x3_tumu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei64_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei64_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei64_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei64_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei64_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei64_v_f32mf2x3_tumu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei64_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei64_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei64_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, const 
float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei64_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei64_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei64_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei64_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei64_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei64_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei64_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei64_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei64_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei64_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei64_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei64_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei64_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei64_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei64_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei64_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei64_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei64_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei64_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei64_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei64_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t 
test_vluxseg3ei64_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei64_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei64_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei64_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei64_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei64_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei64_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei64_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei64_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei64_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei64_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei64_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei64_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei64_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei64_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei64_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei64_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei64_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei64_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei64_v_u16mf4x3_tumu(vbool64_t vm, + vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei64_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei64_v_u16mf2x3_tumu(vbool32_t vm, + vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, 
size_t vl) { return __riscv_vluxseg3ei64_v_u16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei64_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei64_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei64_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei64_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei64_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei64_v_u32mf2x3_tumu(vbool64_t vm, + vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei64_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei64_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei64_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei64_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei64_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei64_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei64_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei64_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei64_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei64_v_f16mf4x3_mu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei64_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei64_v_f16mf2x3_mu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei64_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei64_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei64_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint64m8_t 
rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei64_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f16m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei64_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei64_v_f32mf2x3_mu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei64_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei64_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_f32m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei64_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei64_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_f32m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei64_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei64_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f64m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei64_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei64_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_f64m2x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei64_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei64_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei64_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei64_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei64_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei64_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei64_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei64_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i8m1x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei64_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei64_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei64_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, const int16_t 
*rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei64_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei64_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei64_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i16m1x3_mu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei64_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei64_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i16m2x3_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei64_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei64_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_i32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei64_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei64_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i32m1x3_mu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei64_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei64_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i32m2x3_mu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei64_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei64_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i64m1x3_mu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei64_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei64_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_i64m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei64_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei64_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei64_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei64_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei64_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei64_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei64_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, const 
uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei64_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg3ei64_v_u8m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei64_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei64_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei64_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei64_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei64_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei64_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei64_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei64_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u16m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei64_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei64_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei64_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei64_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei64_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei64_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u32m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei64_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei64_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u64m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei64_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei64_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg3ei64_v_u64m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei8.c index b0ba569bb..2ac3792b8 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei8.c @@ -6,594 +6,883 @@ #include -vfloat16mf4x3_t 
test_vluxseg3ei8_v_f16mf4x3_tu(vfloat16mf4x3_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei8_v_f16mf4x3_tu(vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16mf4x3_tu(vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei8_v_f16mf2x3_tu(vfloat16mf2x3_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei8_v_f16mf2x3_tu(vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16mf2x3_tu(vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei8_v_f16m1x3_tu(vfloat16m1x3_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei8_v_f16m1x3_tu(vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16m1x3_tu(vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei8_v_f16m2x3_tu(vfloat16m2x3_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei8_v_f16m2x3_tu(vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16m2x3_tu(vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei8_v_f32mf2x3_tu(vfloat32mf2x3_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei8_v_f32mf2x3_tu(vfloat32mf2x3_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f32mf2x3_tu(vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei8_v_f32m1x3_tu(vfloat32m1x3_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei8_v_f32m1x3_tu(vfloat32m1x3_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_f32m1x3_tu(vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei8_v_f32m2x3_tu(vfloat32m2x3_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei8_v_f32m2x3_tu(vfloat32m2x3_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_f32m2x3_tu(vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei8_v_f64m1x3_tu(vfloat64m1x3_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei8_v_f64m1x3_tu(vfloat64m1x3_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_f64m1x3_tu(vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei8_v_f64m2x3_tu(vfloat64m2x3_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei8_v_f64m2x3_tu(vfloat64m2x3_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_f64m2x3_tu(vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei8_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei8_v_i8mf8x3_tu(vint8mf8x3_t vd, const int8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i8mf8x3_tu(vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei8_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei8_v_i8mf4x3_tu(vint8mf4x3_t vd, const int8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i8mf4x3_tu(vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei8_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei8_v_i8mf2x3_tu(vint8mf2x3_t vd, const int8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i8mf2x3_tu(vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei8_v_i8m1x3_tu(vint8m1x3_t 
vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei8_v_i8m1x3_tu(vint8m1x3_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i8m1x3_tu(vd, rs1, rs2, vl); } -vint8m2x3_t test_vluxseg3ei8_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei8_v_i8m2x3_tu(vint8m2x3_t vd, const int8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i8m2x3_tu(vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei8_v_i16mf4x3_tu(vint16mf4x3_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei8_v_i16mf4x3_tu(vint16mf4x3_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i16mf4x3_tu(vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei8_v_i16mf2x3_tu(vint16mf2x3_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei8_v_i16mf2x3_tu(vint16mf2x3_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i16mf2x3_tu(vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei8_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei8_v_i16m1x3_tu(vint16m1x3_t vd, const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i16m1x3_tu(vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei8_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei8_v_i16m2x3_tu(vint16m2x3_t vd, const int16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i16m2x3_tu(vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei8_v_i32mf2x3_tu(vint32mf2x3_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei8_v_i32mf2x3_tu(vint32mf2x3_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i32mf2x3_tu(vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei8_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei8_v_i32m1x3_tu(vint32m1x3_t vd, const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i32m1x3_tu(vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei8_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei8_v_i32m2x3_tu(vint32m2x3_t vd, const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i32m2x3_tu(vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei8_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei8_v_i64m1x3_tu(vint64m1x3_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i64m1x3_tu(vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei8_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei8_v_i64m2x3_tu(vint64m2x3_t vd, const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i64m2x3_tu(vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei8_v_u8mf8x3_tu(vuint8mf8x3_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei8_v_u8mf8x3_tu(vuint8mf8x3_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u8mf8x3_tu(vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei8_v_u8mf4x3_tu(vuint8mf4x3_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei8_v_u8mf4x3_tu(vuint8mf4x3_t vd, + 
const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u8mf4x3_tu(vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei8_v_u8mf2x3_tu(vuint8mf2x3_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei8_v_u8mf2x3_tu(vuint8mf2x3_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u8mf2x3_tu(vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei8_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei8_v_u8m1x3_tu(vuint8m1x3_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u8m1x3_tu(vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei8_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei8_v_u8m2x3_tu(vuint8m2x3_t vd, const uint8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u8m2x3_tu(vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei8_v_u16mf4x3_tu(vuint16mf4x3_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei8_v_u16mf4x3_tu(vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16mf4x3_tu(vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei8_v_u16mf2x3_tu(vuint16mf2x3_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei8_v_u16mf2x3_tu(vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16mf2x3_tu(vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei8_v_u16m1x3_tu(vuint16m1x3_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei8_v_u16m1x3_tu(vuint16m1x3_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16m1x3_tu(vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei8_v_u16m2x3_tu(vuint16m2x3_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei8_v_u16m2x3_tu(vuint16m2x3_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u16m2x3_tu(vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei8_v_u32mf2x3_tu(vuint32mf2x3_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei8_v_u32mf2x3_tu(vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32mf2x3_tu(vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei8_v_u32m1x3_tu(vuint32m1x3_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei8_v_u32m1x3_tu(vuint32m1x3_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32m1x3_tu(vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei8_v_u32m2x3_tu(vuint32m2x3_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei8_v_u32m2x3_tu(vuint32m2x3_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32m2x3_tu(vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei8_v_u64m1x3_tu(vuint64m1x3_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei8_v_u64m1x3_tu(vuint64m1x3_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u64m1x3_tu(vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei8_v_u64m2x3_tu(vuint64m2x3_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei8_v_u64m2x3_tu(vuint64m2x3_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t 
vl) { return __riscv_vluxseg3ei8_v_u64m2x3_tu(vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei8_v_f16mf4x3_tum(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei8_v_f16mf4x3_tum(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei8_v_f16mf2x3_tum(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei8_v_f16mf2x3_tum(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei8_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei8_v_f16m1x3_tum(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei8_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei8_v_f16m2x3_tum(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei8_v_f32mf2x3_tum(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei8_v_f32mf2x3_tum(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei8_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei8_v_f32m1x3_tum(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_f32m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei8_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei8_v_f32m2x3_tum(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_f32m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei8_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei8_v_f64m1x3_tum(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f64m1x3_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei8_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei8_v_f64m2x3_tum(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f64m2x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei8_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei8_v_i8mf8x3_tum(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei8_v_i8mf4x3_tum(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei8_v_i8mf4x3_tum(vbool32_t 
vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei8_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei8_v_i8mf2x3_tum(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei8_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei8_v_i8m1x3_tum(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8m1x3_tum(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vluxseg3ei8_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei8_v_i8m2x3_tum(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8m2x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei8_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei8_v_i16mf4x3_tum(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei8_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei8_v_i16mf2x3_tum(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei8_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei8_v_i16m1x3_tum(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i16m1x3_tum(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei8_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei8_v_i16m2x3_tum(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i16m2x3_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei8_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei8_v_i32mf2x3_tum(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei8_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei8_v_i32m1x3_tum(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i32m1x3_tum(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei8_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei8_v_i32m2x3_tum(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i32m2x3_tum(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei8_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei8_v_i64m1x3_tum(vbool64_t vm, vint64m1x3_t vd, + 
const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i64m1x3_tum(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei8_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei8_v_i64m2x3_tum(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i64m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei8_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei8_v_u8mf8x3_tum(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u8mf8x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei8_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei8_v_u8mf4x3_tum(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u8mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei8_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei8_v_u8mf2x3_tum(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u8mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei8_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei8_v_u8m1x3_tum(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u8m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei8_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei8_v_u8m2x3_tum(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u8m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei8_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei8_v_u16mf4x3_tum(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei8_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei8_v_u16mf2x3_tum(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei8_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei8_v_u16m1x3_tum(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei8_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei8_v_u16m2x3_tum(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei8_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x3_t 
test_vluxseg3ei8_v_u32mf2x3_tum(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32mf2x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei8_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei8_v_u32m1x3_tum(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei8_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei8_v_u32m2x3_tum(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32m2x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei8_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei8_v_u64m1x3_tum(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u64m1x3_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei8_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei8_v_u64m2x3_tum(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u64m2x3_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei8_v_f16mf4x3_tumu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei8_v_f16mf4x3_tumu(vbool64_t vm, + vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei8_v_f16mf2x3_tumu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x3_t test_vluxseg3ei8_v_f16mf2x3_tumu(vbool32_t vm, + vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei8_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei8_v_f16m1x3_tumu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei8_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei8_v_f16m2x3_tumu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei8_v_f32mf2x3_tumu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei8_v_f32mf2x3_tumu(vbool64_t vm, + vfloat32mf2x3_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei8_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei8_v_f32m1x3_tumu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t 
test_vluxseg3ei8_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei8_v_f32m2x3_tumu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei8_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei8_v_f64m1x3_tumu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei8_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei8_v_f64m2x3_tumu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei8_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei8_v_i8mf8x3_tumu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei8_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei8_v_i8mf4x3_tumu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei8_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei8_v_i8mf2x3_tumu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei8_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x3_t test_vluxseg3ei8_v_i8m1x3_tumu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vluxseg3ei8_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei8_v_i8m2x3_tumu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei8_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei8_v_i16mf4x3_tumu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei8_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei8_v_i16mf2x3_tumu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei8_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei8_v_i16m1x3_tumu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i16m1x3_tumu(vm, vd, rs1, rs2, vl); 
} -vint16m2x3_t test_vluxseg3ei8_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei8_v_i16m2x3_tumu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei8_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei8_v_i32mf2x3_tumu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei8_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei8_v_i32m1x3_tumu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei8_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei8_v_i32m2x3_tumu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei8_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei8_v_i64m1x3_tumu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei8_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei8_v_i64m2x3_tumu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei8_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei8_v_u8mf8x3_tumu(vbool64_t vm, vuint8mf8x3_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u8mf8x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei8_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei8_v_u8mf4x3_tumu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u8mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei8_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei8_v_u8mf2x3_tumu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u8mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei8_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei8_v_u8m1x3_tumu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u8m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei8_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei8_v_u8m2x3_tumu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return 
__riscv_vluxseg3ei8_v_u8m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei8_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei8_v_u16mf4x3_tumu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei8_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei8_v_u16mf2x3_tumu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei8_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei8_v_u16m1x3_tumu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei8_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei8_v_u16m2x3_tumu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei8_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei8_v_u32mf2x3_tumu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei8_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei8_v_u32m1x3_tumu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei8_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x3_t test_vluxseg3ei8_v_u32m2x3_tumu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32m2x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei8_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei8_v_u64m1x3_tumu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u64m1x3_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei8_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei8_v_u64m2x3_tumu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u64m2x3_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x3_t test_vluxseg3ei8_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x3_t test_vluxseg3ei8_v_f16mf4x3_mu(vbool64_t vm, vfloat16mf4x3_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x3_t test_vluxseg3ei8_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x3_t 
test_vluxseg3ei8_v_f16mf2x3_mu(vbool32_t vm, vfloat16mf2x3_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x3_t test_vluxseg3ei8_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x3_t test_vluxseg3ei8_v_f16m1x3_mu(vbool16_t vm, vfloat16m1x3_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x3_t test_vluxseg3ei8_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x3_t test_vluxseg3ei8_v_f16m2x3_mu(vbool8_t vm, vfloat16m2x3_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f16m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x3_t test_vluxseg3ei8_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x3_t test_vluxseg3ei8_v_f32mf2x3_mu(vbool64_t vm, vfloat32mf2x3_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_f32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x3_t test_vluxseg3ei8_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x3_t test_vluxseg3ei8_v_f32m1x3_mu(vbool32_t vm, vfloat32m1x3_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_f32m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x3_t test_vluxseg3ei8_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x3_t test_vluxseg3ei8_v_f32m2x3_mu(vbool16_t vm, vfloat32m2x3_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_f32m2x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x3_t test_vluxseg3ei8_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x3_t test_vluxseg3ei8_v_f64m1x3_mu(vbool64_t vm, vfloat64m1x3_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_f64m1x3_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x3_t test_vluxseg3ei8_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x3_t test_vluxseg3ei8_v_f64m2x3_mu(vbool32_t vm, vfloat64m2x3_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_f64m2x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x3_t test_vluxseg3ei8_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x3_t test_vluxseg3ei8_v_i8mf8x3_mu(vbool64_t vm, vint8mf8x3_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x3_t test_vluxseg3ei8_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x3_t test_vluxseg3ei8_v_i8mf4x3_mu(vbool32_t vm, vint8mf4x3_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x3_t test_vluxseg3ei8_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x3_t test_vluxseg3ei8_v_i8mf2x3_mu(vbool16_t vm, vint8mf2x3_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint8m1x3_t test_vluxseg3ei8_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x3_t 
test_vluxseg3ei8_v_i8m1x3_mu(vbool8_t vm, vint8m1x3_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8m1x3_mu(vm, vd, rs1, rs2, vl); } -vint8m2x3_t test_vluxseg3ei8_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x3_t test_vluxseg3ei8_v_i8m2x3_mu(vbool4_t vm, vint8m2x3_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i8m2x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x3_t test_vluxseg3ei8_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x3_t test_vluxseg3ei8_v_i16mf4x3_mu(vbool64_t vm, vint16mf4x3_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x3_t test_vluxseg3ei8_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x3_t test_vluxseg3ei8_v_i16mf2x3_mu(vbool32_t vm, vint16mf2x3_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint16m1x3_t test_vluxseg3ei8_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x3_t test_vluxseg3ei8_v_i16m1x3_mu(vbool16_t vm, vint16m1x3_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i16m1x3_mu(vm, vd, rs1, rs2, vl); } -vint16m2x3_t test_vluxseg3ei8_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x3_t test_vluxseg3ei8_v_i16m2x3_mu(vbool8_t vm, vint16m2x3_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i16m2x3_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x3_t test_vluxseg3ei8_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x3_t test_vluxseg3ei8_v_i32mf2x3_mu(vbool64_t vm, vint32mf2x3_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_i32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vint32m1x3_t test_vluxseg3ei8_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x3_t test_vluxseg3ei8_v_i32m1x3_mu(vbool32_t vm, vint32m1x3_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i32m1x3_mu(vm, vd, rs1, rs2, vl); } -vint32m2x3_t test_vluxseg3ei8_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x3_t test_vluxseg3ei8_v_i32m2x3_mu(vbool16_t vm, vint32m2x3_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i32m2x3_mu(vm, vd, rs1, rs2, vl); } -vint64m1x3_t test_vluxseg3ei8_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x3_t test_vluxseg3ei8_v_i64m1x3_mu(vbool64_t vm, vint64m1x3_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i64m1x3_mu(vm, vd, rs1, rs2, vl); } -vint64m2x3_t test_vluxseg3ei8_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x3_t test_vluxseg3ei8_v_i64m2x3_mu(vbool32_t vm, vint64m2x3_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_i64m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x3_t test_vluxseg3ei8_v_u8mf8x3_mu(vbool64_t vm, vuint8mf8x3_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x3_t test_vluxseg3ei8_v_u8mf8x3_mu(vbool64_t vm, 
vuint8mf8x3_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u8mf8x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x3_t test_vluxseg3ei8_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x3_t test_vluxseg3ei8_v_u8mf4x3_mu(vbool32_t vm, vuint8mf4x3_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u8mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x3_t test_vluxseg3ei8_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x3_t test_vluxseg3ei8_v_u8mf2x3_mu(vbool16_t vm, vuint8mf2x3_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u8mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x3_t test_vluxseg3ei8_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x3_t test_vluxseg3ei8_v_u8m1x3_mu(vbool8_t vm, vuint8m1x3_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u8m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x3_t test_vluxseg3ei8_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x3_t test_vluxseg3ei8_v_u8m2x3_mu(vbool4_t vm, vuint8m2x3_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u8m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x3_t test_vluxseg3ei8_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x3_t test_vluxseg3ei8_v_u16mf4x3_mu(vbool64_t vm, vuint16mf4x3_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x3_t test_vluxseg3ei8_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x3_t test_vluxseg3ei8_v_u16mf2x3_mu(vbool32_t vm, vuint16mf2x3_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x3_t test_vluxseg3ei8_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x3_t test_vluxseg3ei8_v_u16m1x3_mu(vbool16_t vm, vuint16m1x3_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u16m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x3_t test_vluxseg3ei8_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x3_t test_vluxseg3ei8_v_u16m2x3_mu(vbool8_t vm, vuint16m2x3_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg3ei8_v_u16m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x3_t test_vluxseg3ei8_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x3_t test_vluxseg3ei8_v_u32mf2x3_mu(vbool64_t vm, vuint32mf2x3_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32mf2x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x3_t test_vluxseg3ei8_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x3_t test_vluxseg3ei8_v_u32m1x3_mu(vbool32_t vm, vuint32m1x3_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x3_t test_vluxseg3ei8_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x3_t 
test_vluxseg3ei8_v_u32m2x3_mu(vbool16_t vm, vuint32m2x3_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u32m2x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x3_t test_vluxseg3ei8_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x3_t test_vluxseg3ei8_v_u64m1x3_mu(vbool64_t vm, vuint64m1x3_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u64m1x3_mu(vm, vd, rs1, rs2, vl); } -vuint64m2x3_t test_vluxseg3ei8_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x3_t test_vluxseg3ei8_v_u64m2x3_mu(vbool32_t vm, vuint64m2x3_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei8_v_u64m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei16.c index e4ae644df..d3cb2a848 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei16.c @@ -6,594 +6,889 @@ #include <riscv_vector.h> -vfloat16mf4x4_t test_vluxseg4ei16_v_f16mf4x4_tu(vfloat16mf4x4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x4_t test_vluxseg4ei16_v_f16mf4x4_tu(vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_f16mf4x4_tu(vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vluxseg4ei16_v_f16mf2x4_tu(vfloat16mf2x4_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x4_t test_vluxseg4ei16_v_f16mf2x4_tu(vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_f16mf2x4_tu(vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vluxseg4ei16_v_f16m1x4_tu(vfloat16m1x4_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x4_t test_vluxseg4ei16_v_f16m1x4_tu(vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_f16m1x4_tu(vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vluxseg4ei16_v_f16m2x4_tu(vfloat16m2x4_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) { +vfloat16m2x4_t test_vluxseg4ei16_v_f16m2x4_tu(vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_f16m2x4_tu(vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vluxseg4ei16_v_f32mf2x4_tu(vfloat32mf2x4_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x4_t test_vluxseg4ei16_v_f32mf2x4_tu(vfloat32mf2x4_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_f32mf2x4_tu(vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vluxseg4ei16_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x4_t test_vluxseg4ei16_v_f32m1x4_tu(vfloat32m1x4_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_f32m1x4_tu(vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vluxseg4ei16_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) { +vfloat32m2x4_t test_vluxseg4ei16_v_f32m2x4_tu(vfloat32m2x4_t vd, + const float *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei16_v_f32m2x4_tu(vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vluxseg4ei16_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x4_t test_vluxseg4ei16_v_f64m1x4_tu(vfloat64m1x4_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_f64m1x4_tu(vd, rs1,
rs2, vl); } -vfloat64m2x4_t test_vluxseg4ei16_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat64m2x4_t test_vluxseg4ei16_v_f64m2x4_tu(vfloat64m2x4_t vd, + const double *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_f64m2x4_tu(vd, rs1, rs2, vl); } -vint8mf8x4_t test_vluxseg4ei16_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x4_t test_vluxseg4ei16_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i8mf8x4_tu(vd, rs1, rs2, vl); } -vint8mf4x4_t test_vluxseg4ei16_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x4_t test_vluxseg4ei16_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i8mf4x4_tu(vd, rs1, rs2, vl); } -vint8mf2x4_t test_vluxseg4ei16_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x4_t test_vluxseg4ei16_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i8mf2x4_tu(vd, rs1, rs2, vl); } -vint8m1x4_t test_vluxseg4ei16_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x4_t test_vluxseg4ei16_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i8m1x4_tu(vd, rs1, rs2, vl); } -vint8m2x4_t test_vluxseg4ei16_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) { +vint8m2x4_t test_vluxseg4ei16_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i8m2x4_tu(vd, rs1, rs2, vl); } -vint16mf4x4_t test_vluxseg4ei16_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x4_t test_vluxseg4ei16_v_i16mf4x4_tu(vint16mf4x4_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i16mf4x4_tu(vd, rs1, rs2, vl); } -vint16mf2x4_t test_vluxseg4ei16_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x4_t test_vluxseg4ei16_v_i16mf2x4_tu(vint16mf2x4_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i16mf2x4_tu(vd, rs1, rs2, vl); } -vint16m1x4_t test_vluxseg4ei16_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x4_t test_vluxseg4ei16_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i16m1x4_tu(vd, rs1, rs2, vl); } -vint16m2x4_t test_vluxseg4ei16_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) { +vint16m2x4_t test_vluxseg4ei16_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i16m2x4_tu(vd, rs1, rs2, vl); } -vint32mf2x4_t test_vluxseg4ei16_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x4_t test_vluxseg4ei16_v_i32mf2x4_tu(vint32mf2x4_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i32mf2x4_tu(vd, rs1, rs2, vl); } -vint32m1x4_t test_vluxseg4ei16_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x4_t test_vluxseg4ei16_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i32m1x4_tu(vd, rs1, rs2, vl); } -vint32m2x4_t test_vluxseg4ei16_v_i32m2x4_tu(vint32m2x4_t vd, 
const int32_t *rs1, vuint16m1_t rs2, size_t vl) { +vint32m2x4_t test_vluxseg4ei16_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i32m2x4_tu(vd, rs1, rs2, vl); } -vint64m1x4_t test_vluxseg4ei16_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x4_t test_vluxseg4ei16_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i64m1x4_tu(vd, rs1, rs2, vl); } -vint64m2x4_t test_vluxseg4ei16_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint64m2x4_t test_vluxseg4ei16_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_i64m2x4_tu(vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vluxseg4ei16_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x4_t test_vluxseg4ei16_v_u8mf8x4_tu(vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_u8mf8x4_tu(vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vluxseg4ei16_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x4_t test_vluxseg4ei16_v_u8mf4x4_tu(vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_u8mf4x4_tu(vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vluxseg4ei16_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x4_t test_vluxseg4ei16_v_u8mf2x4_tu(vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_u8mf2x4_tu(vd, rs1, rs2, vl); } -vuint8m1x4_t test_vluxseg4ei16_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x4_t test_vluxseg4ei16_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_u8m1x4_tu(vd, rs1, rs2, vl); } -vuint8m2x4_t test_vluxseg4ei16_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) { +vuint8m2x4_t test_vluxseg4ei16_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_u8m2x4_tu(vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vluxseg4ei16_v_u16mf4x4_tu(vuint16mf4x4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x4_t test_vluxseg4ei16_v_u16mf4x4_tu(vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_u16mf4x4_tu(vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vluxseg4ei16_v_u16mf2x4_tu(vuint16mf2x4_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x4_t test_vluxseg4ei16_v_u16mf2x4_tu(vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_u16mf2x4_tu(vd, rs1, rs2, vl); } -vuint16m1x4_t test_vluxseg4ei16_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x4_t test_vluxseg4ei16_v_u16m1x4_tu(vuint16m1x4_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_u16m1x4_tu(vd, rs1, rs2, vl); } -vuint16m2x4_t test_vluxseg4ei16_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint16m2x4_t test_vluxseg4ei16_v_u16m2x4_tu(vuint16m2x4_t vd, + const uint16_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_u16m2x4_tu(vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vluxseg4ei16_v_u32mf2x4_tu(vuint32mf2x4_t vd, const uint32_t *rs1, 
vuint16mf4_t rs2, size_t vl) {
+vuint32mf2x4_t test_vluxseg4ei16_v_u32mf2x4_tu(vuint32mf2x4_t vd,
+                                               const uint32_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint32m1x4_t test_vluxseg4ei16_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1x4_t test_vluxseg4ei16_v_u32m1x4_tu(vuint32m1x4_t vd,
+                                             const uint32_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint32m2x4_t test_vluxseg4ei16_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint32m2x4_t test_vluxseg4ei16_v_u32m2x4_tu(vuint32m2x4_t vd,
+                                             const uint32_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint64m1x4_t test_vluxseg4ei16_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1x4_t test_vluxseg4ei16_v_u64m1x4_tu(vuint64m1x4_t vd,
+                                             const uint64_t *rs1,
+                                             vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u64m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint64m2x4_t test_vluxseg4ei16_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint64m2x4_t test_vluxseg4ei16_v_u64m2x4_tu(vuint64m2x4_t vd,
+                                             const uint64_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u64m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf4x4_t test_vluxseg4ei16_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x4_t test_vluxseg4ei16_v_f16mf4x4_tum(vbool64_t vm,
+                                                 vfloat16mf4x4_t vd,
+                                                 const _Float16 *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16mf4x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x4_t test_vluxseg4ei16_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x4_t test_vluxseg4ei16_v_f16mf2x4_tum(vbool32_t vm,
+                                                 vfloat16mf2x4_t vd,
+                                                 const _Float16 *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16m1x4_t test_vluxseg4ei16_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x4_t test_vluxseg4ei16_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd,
+                                               const _Float16 *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16m2x4_t test_vluxseg4ei16_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) {
+vfloat16m2x4_t test_vluxseg4ei16_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd,
+                                               const _Float16 *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x4_t test_vluxseg4ei16_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x4_t test_vluxseg4ei16_v_f32mf2x4_tum(vbool64_t vm,
+                                                 vfloat32mf2x4_t vd,
+                                                 const float *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f32mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32m1x4_t test_vluxseg4ei16_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x4_t test_vluxseg4ei16_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd,
+                                               const float *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f32m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32m2x4_t test_vluxseg4ei16_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat32m2x4_t test_vluxseg4ei16_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd,
+                                               const float *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f32m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat64m1x4_t test_vluxseg4ei16_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x4_t test_vluxseg4ei16_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd,
+                                               const double *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f64m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat64m2x4_t test_vluxseg4ei16_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat64m2x4_t test_vluxseg4ei16_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd,
+                                               const double *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f64m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf8x4_t test_vluxseg4ei16_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x4_t test_vluxseg4ei16_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd,
+                                             const int8_t *rs1,
+                                             vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i8mf8x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf4x4_t test_vluxseg4ei16_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x4_t test_vluxseg4ei16_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd,
+                                             const int8_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i8mf4x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf2x4_t test_vluxseg4ei16_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2x4_t test_vluxseg4ei16_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd,
+                                             const int8_t *rs1, vuint16m1_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg4ei16_v_i8mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint8m1x4_t test_vluxseg4ei16_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1x4_t test_vluxseg4ei16_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd,
+                                           const int8_t *rs1, vuint16m2_t rs2,
+                                           size_t vl) {
   return __riscv_vluxseg4ei16_v_i8m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint8m2x4_t test_vluxseg4ei16_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vint8m2x4_t test_vluxseg4ei16_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd,
+                                           const int8_t *rs1, vuint16m4_t rs2,
+                                           size_t vl) {
   return __riscv_vluxseg4ei16_v_i8m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint16mf4x4_t test_vluxseg4ei16_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint16mf4x4_t test_vluxseg4ei16_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd,
+                                               const int16_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i16mf4x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint16mf2x4_t test_vluxseg4ei16_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint16mf2x4_t test_vluxseg4ei16_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd,
+                                               const int16_t *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i16mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint16m1x4_t test_vluxseg4ei16_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint16m1x4_t test_vluxseg4ei16_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd,
+                                             const int16_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i16m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint16m2x4_t test_vluxseg4ei16_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint16m2x4_t test_vluxseg4ei16_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd,
+                                             const int16_t *rs1,
+                                             vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i16m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint32mf2x4_t test_vluxseg4ei16_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint32mf2x4_t test_vluxseg4ei16_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd,
+                                               const int32_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i32mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint32m1x4_t test_vluxseg4ei16_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint32m1x4_t test_vluxseg4ei16_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd,
+                                             const int32_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i32m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint32m2x4_t test_vluxseg4ei16_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint32m2x4_t test_vluxseg4ei16_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd,
+                                             const int32_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i32m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint64m1x4_t test_vluxseg4ei16_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint64m1x4_t test_vluxseg4ei16_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd,
+                                             const int64_t *rs1,
+                                             vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i64m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint64m2x4_t test_vluxseg4ei16_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint64m2x4_t test_vluxseg4ei16_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd,
+                                             const int64_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i64m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf8x4_t test_vluxseg4ei16_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint8mf8x4_t test_vluxseg4ei16_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd,
+                                              const uint8_t *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u8mf8x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf4x4_t test_vluxseg4ei16_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint8mf4x4_t test_vluxseg4ei16_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd,
+                                              const uint8_t *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u8mf4x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf2x4_t test_vluxseg4ei16_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2x4_t test_vluxseg4ei16_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd,
+                                              const uint8_t *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u8mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint8m1x4_t test_vluxseg4ei16_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1x4_t test_vluxseg4ei16_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd,
+                                            const uint8_t *rs1, vuint16m2_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei16_v_u8m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint8m2x4_t test_vluxseg4ei16_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vuint8m2x4_t test_vluxseg4ei16_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd,
+                                            const uint8_t *rs1, vuint16m4_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei16_v_u8m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf4x4_t test_vluxseg4ei16_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4x4_t test_vluxseg4ei16_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd,
+                                                const uint16_t *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16mf4x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf2x4_t test_vluxseg4ei16_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2x4_t test_vluxseg4ei16_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd,
+                                                const uint16_t *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m1x4_t test_vluxseg4ei16_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1x4_t test_vluxseg4ei16_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd,
+                                              const uint16_t *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m2x4_t test_vluxseg4ei16_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint16m2x4_t test_vluxseg4ei16_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd,
+                                              const uint16_t *rs1,
+                                              vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint32mf2x4_t test_vluxseg4ei16_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2x4_t test_vluxseg4ei16_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd,
+                                                const uint32_t *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m1x4_t test_vluxseg4ei16_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1x4_t test_vluxseg4ei16_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd,
+                                              const uint32_t *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m2x4_t test_vluxseg4ei16_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint32m2x4_t test_vluxseg4ei16_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd,
+                                              const uint32_t *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x4_t test_vluxseg4ei16_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1x4_t test_vluxseg4ei16_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd,
+                                              const uint64_t *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u64m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m2x4_t test_vluxseg4ei16_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint64m2x4_t test_vluxseg4ei16_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd,
+                                              const uint64_t *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u64m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16mf4x4_t test_vluxseg4ei16_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x4_t test_vluxseg4ei16_v_f16mf4x4_tumu(vbool64_t vm,
+                                                  vfloat16mf4x4_t vd,
+                                                  const _Float16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16mf4x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x4_t test_vluxseg4ei16_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x4_t test_vluxseg4ei16_v_f16mf2x4_tumu(vbool32_t vm,
+                                                  vfloat16mf2x4_t vd,
+                                                  const _Float16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16m1x4_t test_vluxseg4ei16_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x4_t test_vluxseg4ei16_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16m2x4_t test_vluxseg4ei16_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) {
+vfloat16m2x4_t test_vluxseg4ei16_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x4_t test_vluxseg4ei16_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x4_t test_vluxseg4ei16_v_f32mf2x4_tumu(vbool64_t vm,
+                                                  vfloat32mf2x4_t vd,
+                                                  const float *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f32mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32m1x4_t test_vluxseg4ei16_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x4_t test_vluxseg4ei16_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd,
+                                                const float *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f32m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32m2x4_t test_vluxseg4ei16_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat32m2x4_t test_vluxseg4ei16_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd,
+                                                const float *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f32m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat64m1x4_t test_vluxseg4ei16_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x4_t test_vluxseg4ei16_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd,
+                                                const double *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f64m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat64m2x4_t test_vluxseg4ei16_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat64m2x4_t test_vluxseg4ei16_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd,
+                                                const double *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f64m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf8x4_t test_vluxseg4ei16_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x4_t test_vluxseg4ei16_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd,
+                                              const int8_t *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i8mf8x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf4x4_t test_vluxseg4ei16_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x4_t test_vluxseg4ei16_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd,
+                                              const int8_t *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i8mf4x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf2x4_t test_vluxseg4ei16_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2x4_t test_vluxseg4ei16_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd,
+                                              const int8_t *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i8mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint8m1x4_t test_vluxseg4ei16_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1x4_t test_vluxseg4ei16_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd,
+                                            const int8_t *rs1, vuint16m2_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei16_v_i8m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint8m2x4_t test_vluxseg4ei16_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vint8m2x4_t test_vluxseg4ei16_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd,
+                                            const int8_t *rs1, vuint16m4_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei16_v_i8m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint16mf4x4_t test_vluxseg4ei16_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint16mf4x4_t test_vluxseg4ei16_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd,
+                                                const int16_t *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i16mf4x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint16mf2x4_t test_vluxseg4ei16_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint16mf2x4_t test_vluxseg4ei16_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd,
+                                                const int16_t *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i16mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint16m1x4_t test_vluxseg4ei16_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint16m1x4_t test_vluxseg4ei16_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd,
+                                              const int16_t *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i16m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint16m2x4_t test_vluxseg4ei16_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint16m2x4_t test_vluxseg4ei16_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd,
+                                              const int16_t *rs1,
+                                              vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i16m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint32mf2x4_t test_vluxseg4ei16_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint32mf2x4_t test_vluxseg4ei16_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd,
+                                                const int32_t *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i32mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint32m1x4_t test_vluxseg4ei16_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint32m1x4_t test_vluxseg4ei16_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd,
+                                              const int32_t *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i32m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint32m2x4_t test_vluxseg4ei16_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint32m2x4_t test_vluxseg4ei16_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd,
+                                              const int32_t *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i32m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint64m1x4_t test_vluxseg4ei16_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint64m1x4_t test_vluxseg4ei16_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd,
+                                              const int64_t *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
  
 return __riscv_vluxseg4ei16_v_i64m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint64m2x4_t test_vluxseg4ei16_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint64m2x4_t test_vluxseg4ei16_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd,
+                                              const int64_t *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i64m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf8x4_t test_vluxseg4ei16_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint8mf8x4_t test_vluxseg4ei16_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd,
+                                               const uint8_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u8mf8x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf4x4_t test_vluxseg4ei16_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint8mf4x4_t test_vluxseg4ei16_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd,
+                                               const uint8_t *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u8mf4x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf2x4_t test_vluxseg4ei16_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2x4_t test_vluxseg4ei16_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd,
+                                               const uint8_t *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u8mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8m1x4_t test_vluxseg4ei16_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1x4_t test_vluxseg4ei16_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u8m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8m2x4_t test_vluxseg4ei16_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vuint8m2x4_t test_vluxseg4ei16_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint16m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u8m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf4x4_t test_vluxseg4ei16_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4x4_t test_vluxseg4ei16_v_u16mf4x4_tumu(vbool64_t vm,
+                                                 vuint16mf4x4_t vd,
+                                                 const uint16_t *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16mf4x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf2x4_t test_vluxseg4ei16_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2x4_t test_vluxseg4ei16_v_u16mf2x4_tumu(vbool32_t vm,
+                                                 vuint16mf2x4_t vd,
+                                                 const uint16_t *rs1,
+                                                 vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m1x4_t test_vluxseg4ei16_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1x4_t test_vluxseg4ei16_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd,
+                                               const uint16_t *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m2x4_t test_vluxseg4ei16_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint16m2x4_t test_vluxseg4ei16_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd,
+                                               const uint16_t *rs1,
+                                               vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32mf2x4_t test_vluxseg4ei16_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2x4_t test_vluxseg4ei16_v_u32mf2x4_tumu(vbool64_t vm,
+                                                 vuint32mf2x4_t vd,
+                                                 const uint32_t *rs1,
+                                                 vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m1x4_t test_vluxseg4ei16_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1x4_t test_vluxseg4ei16_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd,
+                                               const uint32_t *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m2x4_t test_vluxseg4ei16_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint32m2x4_t test_vluxseg4ei16_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd,
+                                               const uint32_t *rs1,
+                                               vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x4_t test_vluxseg4ei16_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1x4_t test_vluxseg4ei16_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd,
+                                               const uint64_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u64m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m2x4_t test_vluxseg4ei16_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint64m2x4_t test_vluxseg4ei16_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd,
+                                               const uint64_t *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u64m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16mf4x4_t test_vluxseg4ei16_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x4_t test_vluxseg4ei16_v_f16mf4x4_mu(vbool64_t vm,
+                                                vfloat16mf4x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16mf4x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x4_t test_vluxseg4ei16_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x4_t test_vluxseg4ei16_v_f16mf2x4_mu(vbool32_t vm,
+                                                vfloat16mf2x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16mf2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16m1x4_t test_vluxseg4ei16_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x4_t test_vluxseg4ei16_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16m2x4_t test_vluxseg4ei16_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint16m2_t rs2, size_t vl) {
+vfloat16m2x4_t test_vluxseg4ei16_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f16m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x4_t test_vluxseg4ei16_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x4_t test_vluxseg4ei16_v_f32mf2x4_mu(vbool64_t vm,
+                                                vfloat32mf2x4_t vd,
+                                                const float *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f32mf2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32m1x4_t test_vluxseg4ei16_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x4_t test_vluxseg4ei16_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd,
+                                              const float *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f32m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32m2x4_t test_vluxseg4ei16_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat32m2x4_t test_vluxseg4ei16_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd,
+                                              const float *rs1, vuint16m1_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg4ei16_v_f32m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat64m1x4_t test_vluxseg4ei16_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x4_t test_vluxseg4ei16_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd,
+                                              const double *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f64m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat64m2x4_t test_vluxseg4ei16_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat64m2x4_t test_vluxseg4ei16_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd,
+                                              const double *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_f64m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf8x4_t test_vluxseg4ei16_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x4_t test_vluxseg4ei16_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd,
+                                            const int8_t *rs1, vuint16mf4_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei16_v_i8mf8x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf4x4_t test_vluxseg4ei16_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x4_t test_vluxseg4ei16_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd,
+                                            const int8_t *rs1, vuint16mf2_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei16_v_i8mf4x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf2x4_t test_vluxseg4ei16_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2x4_t test_vluxseg4ei16_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd,
+                                            const int8_t *rs1, vuint16m1_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei16_v_i8mf2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint8m1x4_t test_vluxseg4ei16_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1x4_t test_vluxseg4ei16_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd,
+                                          const int8_t *rs1, vuint16m2_t rs2,
+                                          size_t vl) {
   return __riscv_vluxseg4ei16_v_i8m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint8m2x4_t test_vluxseg4ei16_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vint8m2x4_t test_vluxseg4ei16_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd,
+                                          const int8_t *rs1, vuint16m4_t rs2,
+                                          size_t vl) {
   return __riscv_vluxseg4ei16_v_i8m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint16mf4x4_t test_vluxseg4ei16_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint16mf4x4_t test_vluxseg4ei16_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd,
+                                              const int16_t *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i16mf4x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint16mf2x4_t test_vluxseg4ei16_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint16mf2x4_t test_vluxseg4ei16_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd,
+                                              const int16_t *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i16mf2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint16m1x4_t test_vluxseg4ei16_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint16m1x4_t test_vluxseg4ei16_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd,
+                                            const int16_t *rs1, vuint16m1_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei16_v_i16m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint16m2x4_t test_vluxseg4ei16_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint16m2x4_t test_vluxseg4ei16_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd,
+                                            const int16_t *rs1, vuint16m2_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei16_v_i16m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint32mf2x4_t test_vluxseg4ei16_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint32mf2x4_t test_vluxseg4ei16_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd,
+                                              const int32_t *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i32mf2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint32m1x4_t test_vluxseg4ei16_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint32m1x4_t test_vluxseg4ei16_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd,
+                                            const int32_t *rs1,
+                                            vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i32m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint32m2x4_t test_vluxseg4ei16_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint32m2x4_t test_vluxseg4ei16_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd,
+                                            const int32_t *rs1, vuint16m1_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei16_v_i32m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint64m1x4_t test_vluxseg4ei16_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint64m1x4_t test_vluxseg4ei16_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd,
+                                            const int64_t *rs1,
+                                            vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i64m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vint64m2x4_t test_vluxseg4ei16_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint64m2x4_t test_vluxseg4ei16_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd,
+                                            const int64_t *rs1,
+                                            vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_i64m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf8x4_t test_vluxseg4ei16_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint8mf8x4_t test_vluxseg4ei16_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u8mf8x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf4x4_t test_vluxseg4ei16_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint8mf4x4_t test_vluxseg4ei16_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u8mf4x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf2x4_t test_vluxseg4ei16_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2x4_t test_vluxseg4ei16_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u8mf2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8m1x4_t test_vluxseg4ei16_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1x4_t test_vluxseg4ei16_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd,
+                                           const uint8_t *rs1, vuint16m2_t rs2,
+                                           size_t vl) {
   return __riscv_vluxseg4ei16_v_u8m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8m2x4_t test_vluxseg4ei16_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint16m4_t rs2, size_t vl) {
+vuint8m2x4_t test_vluxseg4ei16_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd,
+                                           const uint8_t *rs1, vuint16m4_t rs2,
+                                           size_t vl) {
   return __riscv_vluxseg4ei16_v_u8m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf4x4_t test_vluxseg4ei16_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4x4_t test_vluxseg4ei16_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd,
+                                               const uint16_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16mf4x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf2x4_t test_vluxseg4ei16_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2x4_t test_vluxseg4ei16_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd,
+                                               const uint16_t *rs1,
+                                               vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16mf2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m1x4_t test_vluxseg4ei16_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1x4_t test_vluxseg4ei16_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd,
+                                             const uint16_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m2x4_t test_vluxseg4ei16_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint16m2x4_t test_vluxseg4ei16_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd,
+                                             const uint16_t *rs1,
+                                             vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u16m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32mf2x4_t test_vluxseg4ei16_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2x4_t test_vluxseg4ei16_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd,
+                                               const uint32_t *rs1,
+                                               vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32mf2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m1x4_t test_vluxseg4ei16_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1x4_t test_vluxseg4ei16_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd,
+                                             const uint32_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m2x4_t test_vluxseg4ei16_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint32m2x4_t test_vluxseg4ei16_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd,
+                                             const uint32_t *rs1,
+                                             vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u32m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x4_t test_vluxseg4ei16_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1x4_t test_vluxseg4ei16_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd,
+                                             const uint64_t *rs1,
+                                             vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u64m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m2x4_t test_vluxseg4ei16_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint64m2x4_t test_vluxseg4ei16_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd,
+                                             const uint64_t *rs1,
+                                             vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei16_v_u64m2x4_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei32.c
index 759ea11af..00e75404a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei32.c
@@ -6,594 +6,889 @@
 #include <riscv_vector.h>
 
-vfloat16mf4x4_t test_vluxseg4ei32_v_f16mf4x4_tu(vfloat16mf4x4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4x4_t test_vluxseg4ei32_v_f16mf4x4_tu(vfloat16mf4x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16mf4x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x4_t test_vluxseg4ei32_v_f16mf2x4_tu(vfloat16mf2x4_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2x4_t test_vluxseg4ei32_v_f16mf2x4_tu(vfloat16mf2x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1x4_t test_vluxseg4ei32_v_f16m1x4_tu(vfloat16m1x4_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1x4_t test_vluxseg4ei32_v_f16m1x4_tu(vfloat16m1x4_t vd,
+                                              const _Float16 *rs1,
+                                              vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m2x4_t test_vluxseg4ei32_v_f16m2x4_tu(vfloat16m2x4_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat16m2x4_t test_vluxseg4ei32_v_f16m2x4_tu(vfloat16m2x4_t vd,
+                                              const _Float16 *rs1,
+                                              vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x4_t test_vluxseg4ei32_v_f32mf2x4_tu(vfloat32mf2x4_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat32mf2x4_t test_vluxseg4ei32_v_f32mf2x4_tu(vfloat32mf2x4_t vd,
+                                                const float *rs1,
+                                                vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f32mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m1x4_t test_vluxseg4ei32_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat32m1x4_t test_vluxseg4ei32_v_f32m1x4_tu(vfloat32m1x4_t vd,
+                                              const float *rs1, vuint32m1_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg4ei32_v_f32m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m2x4_t test_vluxseg4ei32_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat32m2x4_t test_vluxseg4ei32_v_f32m2x4_tu(vfloat32m2x4_t vd,
+                                              const float *rs1, vuint32m2_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg4ei32_v_f32m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m1x4_t test_vluxseg4ei32_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat64m1x4_t test_vluxseg4ei32_v_f64m1x4_tu(vfloat64m1x4_t vd,
+                                              const double *rs1,
+                                              vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f64m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m2x4_t test_vluxseg4ei32_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat64m2x4_t test_vluxseg4ei32_v_f64m2x4_tu(vfloat64m2x4_t vd,
+                                              const double *rs1,
+                                              vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f64m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf8x4_t test_vluxseg4ei32_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint8mf8x4_t test_vluxseg4ei32_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1,
+                                            vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i8mf8x4_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf4x4_t test_vluxseg4ei32_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint8mf4x4_t test_vluxseg4ei32_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1,
+                                            vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i8mf4x4_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf2x4_t test_vluxseg4ei32_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint8mf2x4_t test_vluxseg4ei32_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1,
+                                            vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i8mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vint8m1x4_t test_vluxseg4ei32_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint8m1x4_t test_vluxseg4ei32_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1,
+                                          vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i8m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vint8m2x4_t test_vluxseg4ei32_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) {
+vint8m2x4_t test_vluxseg4ei32_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1,
+                                          vuint32m8_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i8m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vint16mf4x4_t test_vluxseg4ei32_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint16mf4x4_t test_vluxseg4ei32_v_i16mf4x4_tu(vint16mf4x4_t vd,
+                                              const int16_t *rs1,
+                                              vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i16mf4x4_tu(vd, rs1, rs2, vl);
 }
 
-vint16mf2x4_t test_vluxseg4ei32_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint16mf2x4_t test_vluxseg4ei32_v_i16mf2x4_tu(vint16mf2x4_t vd,
+                                              const int16_t *rs1,
+                                              vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i16mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vint16m1x4_t test_vluxseg4ei32_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint16m1x4_t test_vluxseg4ei32_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1,
+                                            vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i16m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vint16m2x4_t test_vluxseg4ei32_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint16m2x4_t test_vluxseg4ei32_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1,
+                                            vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i16m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vint32mf2x4_t test_vluxseg4ei32_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint32mf2x4_t test_vluxseg4ei32_v_i32mf2x4_tu(vint32mf2x4_t vd,
+                                              const int32_t *rs1,
+                                              vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i32mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vint32m1x4_t test_vluxseg4ei32_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint32m1x4_t test_vluxseg4ei32_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1,
+                                            vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i32m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vint32m2x4_t test_vluxseg4ei32_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint32m2x4_t test_vluxseg4ei32_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1,
+                                            vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i32m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vint64m1x4_t test_vluxseg4ei32_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint64m1x4_t test_vluxseg4ei32_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1,
+                                            vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i64m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vint64m2x4_t test_vluxseg4ei32_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint64m2x4_t test_vluxseg4ei32_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1,
+                                            vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i64m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint8mf8x4_t test_vluxseg4ei32_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint8mf8x4_t test_vluxseg4ei32_v_u8mf8x4_tu(vuint8mf8x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u8mf8x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint8mf4x4_t test_vluxseg4ei32_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint8mf4x4_t test_vluxseg4ei32_v_u8mf4x4_tu(vuint8mf4x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u8mf4x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint8mf2x4_t test_vluxseg4ei32_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint8mf2x4_t test_vluxseg4ei32_v_u8mf2x4_tu(vuint8mf2x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u8mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint8m1x4_t test_vluxseg4ei32_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint8m1x4_t test_vluxseg4ei32_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1,
+                                           vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u8m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint8m2x4_t test_vluxseg4ei32_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) {
+vuint8m2x4_t test_vluxseg4ei32_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1,
+                                           vuint32m8_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u8m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint16mf4x4_t test_vluxseg4ei32_v_u16mf4x4_tu(vuint16mf4x4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint16mf4x4_t test_vluxseg4ei32_v_u16mf4x4_tu(vuint16mf4x4_t vd,
+                                               const uint16_t *rs1,
+                                               vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u16mf4x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint16mf2x4_t test_vluxseg4ei32_v_u16mf2x4_tu(vuint16mf2x4_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint16mf2x4_t test_vluxseg4ei32_v_u16mf2x4_tu(vuint16mf2x4_t vd,
+                                               const uint16_t *rs1,
+                                               vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u16mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint16m1x4_t test_vluxseg4ei32_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint16m1x4_t test_vluxseg4ei32_v_u16m1x4_tu(vuint16m1x4_t vd,
+                                             const uint16_t *rs1,
+                                             vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u16m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint16m2x4_t test_vluxseg4ei32_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint16m2x4_t test_vluxseg4ei32_v_u16m2x4_tu(vuint16m2x4_t vd,
+                                             const uint16_t *rs1,
+                                             vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u16m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint32mf2x4_t test_vluxseg4ei32_v_u32mf2x4_tu(vuint32mf2x4_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint32mf2x4_t test_vluxseg4ei32_v_u32mf2x4_tu(vuint32mf2x4_t vd,
+                                               const uint32_t *rs1,
+                                               vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u32mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint32m1x4_t test_vluxseg4ei32_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint32m1x4_t test_vluxseg4ei32_v_u32m1x4_tu(vuint32m1x4_t vd,
+                                             const uint32_t *rs1,
+                                             vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u32m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint32m2x4_t test_vluxseg4ei32_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint32m2x4_t test_vluxseg4ei32_v_u32m2x4_tu(vuint32m2x4_t vd,
+                                             const uint32_t *rs1,
+                                             vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u32m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint64m1x4_t test_vluxseg4ei32_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint64m1x4_t test_vluxseg4ei32_v_u64m1x4_tu(vuint64m1x4_t vd,
+                                             const uint64_t *rs1,
+                                             vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u64m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vuint64m2x4_t test_vluxseg4ei32_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint64m2x4_t test_vluxseg4ei32_v_u64m2x4_tu(vuint64m2x4_t vd,
+                                             const uint64_t *rs1,
+                                             vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u64m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf4x4_t test_vluxseg4ei32_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4x4_t test_vluxseg4ei32_v_f16mf4x4_tum(vbool64_t vm,
+                                                 vfloat16mf4x4_t vd,
+                                                 const _Float16 *rs1,
+                                                 vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16mf4x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x4_t test_vluxseg4ei32_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2x4_t test_vluxseg4ei32_v_f16mf2x4_tum(vbool32_t vm,
+                                                 vfloat16mf2x4_t vd,
+                                                 const _Float16 *rs1,
+                                                 vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16m1x4_t test_vluxseg4ei32_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1x4_t test_vluxseg4ei32_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd,
+                                               const _Float16 *rs1,
+                                               vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16m2x4_t test_vluxseg4ei32_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat16m2x4_t test_vluxseg4ei32_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd,
+                                               const _Float16 *rs1,
+                                               vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x4_t test_vluxseg4ei32_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat32mf2x4_t test_vluxseg4ei32_v_f32mf2x4_tum(vbool64_t vm,
+                                                 vfloat32mf2x4_t vd,
+                                                 const float *rs1,
+                                                 vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f32mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32m1x4_t test_vluxseg4ei32_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat32m1x4_t test_vluxseg4ei32_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd,
+                                               const float *rs1,
+                                               vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f32m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32m2x4_t test_vluxseg4ei32_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat32m2x4_t test_vluxseg4ei32_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd,
+                                               const float *rs1,
+                                               vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f32m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat64m1x4_t test_vluxseg4ei32_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat64m1x4_t test_vluxseg4ei32_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd,
+                                               const double *rs1,
+                                               vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f64m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat64m2x4_t test_vluxseg4ei32_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat64m2x4_t test_vluxseg4ei32_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd,
+                                               const double *rs1,
+                                               vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f64m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf8x4_t test_vluxseg4ei32_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint8mf8x4_t test_vluxseg4ei32_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd,
+                                             const int8_t *rs1,
+                                             vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i8mf8x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf4x4_t test_vluxseg4ei32_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint8mf4x4_t test_vluxseg4ei32_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd,
+                                             const int8_t *rs1, vuint32m1_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg4ei32_v_i8mf4x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf2x4_t test_vluxseg4ei32_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint8mf2x4_t test_vluxseg4ei32_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd,
+                                             const int8_t *rs1, vuint32m2_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg4ei32_v_i8mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint8m1x4_t test_vluxseg4ei32_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint8m1x4_t test_vluxseg4ei32_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd,
+                                           const int8_t *rs1, vuint32m4_t rs2,
+                                           size_t vl) {
   return __riscv_vluxseg4ei32_v_i8m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint8m2x4_t test_vluxseg4ei32_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) {
+vint8m2x4_t test_vluxseg4ei32_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd,
+                                           const int8_t *rs1, vuint32m8_t rs2,
+                                           size_t vl) {
   return __riscv_vluxseg4ei32_v_i8m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint16mf4x4_t test_vluxseg4ei32_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint16mf4x4_t test_vluxseg4ei32_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd,
+                                               const int16_t *rs1,
+                                               vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i16mf4x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint16mf2x4_t test_vluxseg4ei32_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint16mf2x4_t test_vluxseg4ei32_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd,
+                                               const int16_t *rs1,
+                                               vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i16mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint16m1x4_t test_vluxseg4ei32_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint16m1x4_t test_vluxseg4ei32_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd,
+                                             const int16_t *rs1,
+                                             vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i16m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint16m2x4_t test_vluxseg4ei32_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint16m2x4_t test_vluxseg4ei32_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd,
+                                             const int16_t *rs1,
+                                             vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i16m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint32mf2x4_t test_vluxseg4ei32_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint32mf2x4_t test_vluxseg4ei32_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd,
+                                               const int32_t *rs1,
+                                               vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i32mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint32m1x4_t test_vluxseg4ei32_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint32m1x4_t test_vluxseg4ei32_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd,
+                                             const int32_t *rs1,
+                                             vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i32m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint32m2x4_t test_vluxseg4ei32_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint32m2x4_t test_vluxseg4ei32_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd,
+                                             const int32_t *rs1,
+                                             vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i32m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint64m1x4_t test_vluxseg4ei32_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint64m1x4_t test_vluxseg4ei32_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd,
+                                             const int64_t *rs1,
+                                             vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i64m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vint64m2x4_t test_vluxseg4ei32_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint64m2x4_t test_vluxseg4ei32_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd,
+                                             const int64_t *rs1,
+                                             vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i64m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf8x4_t test_vluxseg4ei32_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint8mf8x4_t test_vluxseg4ei32_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd,
+                                              const uint8_t *rs1,
+                                              vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u8mf8x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf4x4_t test_vluxseg4ei32_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint8mf4x4_t test_vluxseg4ei32_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd,
+                                              const uint8_t *rs1,
+                                              vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u8mf4x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint8mf2x4_t test_vluxseg4ei32_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint8mf2x4_t test_vluxseg4ei32_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd,
+                                              const uint8_t *rs1,
+                                              vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u8mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint8m1x4_t test_vluxseg4ei32_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint8m1x4_t test_vluxseg4ei32_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd,
+                                            const uint8_t *rs1, vuint32m4_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei32_v_u8m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint8m2x4_t test_vluxseg4ei32_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) {
+vuint8m2x4_t test_vluxseg4ei32_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd,
+                                            const uint8_t *rs1, vuint32m8_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei32_v_u8m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf4x4_t test_vluxseg4ei32_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint16mf4x4_t test_vluxseg4ei32_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd,
+                                                const uint16_t *rs1,
+                                                vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u16mf4x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf2x4_t test_vluxseg4ei32_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint16mf2x4_t test_vluxseg4ei32_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd,
+                                                const uint16_t *rs1,
+                                                vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u16mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m1x4_t test_vluxseg4ei32_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint16m1x4_t test_vluxseg4ei32_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd,
+                                              const uint16_t *rs1,
+                                              vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u16m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m2x4_t test_vluxseg4ei32_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint16m2x4_t test_vluxseg4ei32_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd,
+                                              const uint16_t *rs1,
+                                              vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u16m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint32mf2x4_t test_vluxseg4ei32_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint32mf2x4_t test_vluxseg4ei32_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd,
+                                                const uint32_t *rs1,
+                                                vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u32mf2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m1x4_t test_vluxseg4ei32_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint32m1x4_t test_vluxseg4ei32_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd,
+                                              const uint32_t *rs1,
+                                              vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u32m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m2x4_t test_vluxseg4ei32_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint32m2x4_t test_vluxseg4ei32_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd,
+                                              const uint32_t *rs1,
+                                              vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u32m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x4_t test_vluxseg4ei32_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint64m1x4_t test_vluxseg4ei32_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd,
+                                              const uint64_t *rs1,
+                                              vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u64m1x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m2x4_t test_vluxseg4ei32_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint64m2x4_t test_vluxseg4ei32_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd,
+                                              const uint64_t *rs1,
+                                              vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u64m2x4_tum(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16mf4x4_t test_vluxseg4ei32_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4x4_t test_vluxseg4ei32_v_f16mf4x4_tumu(vbool64_t vm,
+                                                  vfloat16mf4x4_t vd,
+                                                  const _Float16 *rs1,
+                                                  vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16mf4x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x4_t test_vluxseg4ei32_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2x4_t test_vluxseg4ei32_v_f16mf2x4_tumu(vbool32_t vm,
+                                                  vfloat16mf2x4_t vd,
+                                                  const _Float16 *rs1,
+                                                  vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16m1x4_t test_vluxseg4ei32_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1x4_t test_vluxseg4ei32_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat16m2x4_t test_vluxseg4ei32_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) {
+vfloat16m2x4_t test_vluxseg4ei32_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f16m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x4_t test_vluxseg4ei32_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat32mf2x4_t test_vluxseg4ei32_v_f32mf2x4_tumu(vbool64_t vm,
+                                                  vfloat32mf2x4_t vd,
+                                                  const float *rs1,
+                                                  vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f32mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32m1x4_t test_vluxseg4ei32_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat32m1x4_t test_vluxseg4ei32_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd,
+                                                const float *rs1,
+                                                vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f32m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat32m2x4_t test_vluxseg4ei32_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat32m2x4_t test_vluxseg4ei32_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd,
+                                                const float *rs1,
+                                                vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f32m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat64m1x4_t test_vluxseg4ei32_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat64m1x4_t test_vluxseg4ei32_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd,
+                                                const double *rs1,
+                                                vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f64m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vfloat64m2x4_t test_vluxseg4ei32_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat64m2x4_t test_vluxseg4ei32_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd,
+                                                const double *rs1,
+                                                vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_f64m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf8x4_t test_vluxseg4ei32_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint8mf8x4_t test_vluxseg4ei32_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd,
+                                              const int8_t *rs1,
+                                              vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i8mf8x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf4x4_t test_vluxseg4ei32_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint8mf4x4_t test_vluxseg4ei32_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd,
+                                              const int8_t *rs1,
+                                              vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i8mf4x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint8mf2x4_t test_vluxseg4ei32_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint8mf2x4_t test_vluxseg4ei32_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd,
+                                              const int8_t *rs1,
+                                              vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i8mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint8m1x4_t test_vluxseg4ei32_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint8m1x4_t test_vluxseg4ei32_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd,
+                                            const int8_t *rs1, vuint32m4_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei32_v_i8m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint8m2x4_t test_vluxseg4ei32_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) {
+vint8m2x4_t test_vluxseg4ei32_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd,
+                                            const int8_t *rs1, vuint32m8_t rs2,
+                                            size_t vl) {
   return __riscv_vluxseg4ei32_v_i8m2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint16mf4x4_t test_vluxseg4ei32_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint16mf4x4_t test_vluxseg4ei32_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd,
+                                                const int16_t *rs1,
+                                                vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i16mf4x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint16mf2x4_t test_vluxseg4ei32_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint16mf2x4_t test_vluxseg4ei32_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd,
+                                                const int16_t *rs1,
+                                                vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i16mf2x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint16m1x4_t test_vluxseg4ei32_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint16m1x4_t test_vluxseg4ei32_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd,
+                                              const int16_t *rs1,
+                                              vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_i16m1x4_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vint16m2x4_t test_vluxseg4ei32_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint16m2x4_t test_vluxseg4ei32_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd,
+                                              const int16_t *rs1,
+                                              vuint32m4_t rs2, size_t vl) {
return __riscv_vluxseg4ei32_v_i16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vluxseg4ei32_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x4_t test_vluxseg4ei32_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_i32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vluxseg4ei32_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x4_t test_vluxseg4ei32_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_i32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vluxseg4ei32_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x4_t test_vluxseg4ei32_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_i32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vluxseg4ei32_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x4_t test_vluxseg4ei32_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_i64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vluxseg4ei32_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x4_t test_vluxseg4ei32_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_i64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vluxseg4ei32_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x4_t test_vluxseg4ei32_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vluxseg4ei32_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x4_t test_vluxseg4ei32_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vluxseg4ei32_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x4_t test_vluxseg4ei32_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vluxseg4ei32_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x4_t test_vluxseg4ei32_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vluxseg4ei32_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x4_t test_vluxseg4ei32_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, + vuint32m8_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u8m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vluxseg4ei32_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x4_t 
test_vluxseg4ei32_v_u16mf4x4_tumu(vbool64_t vm, + vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vluxseg4ei32_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x4_t test_vluxseg4ei32_v_u16mf2x4_tumu(vbool32_t vm, + vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vluxseg4ei32_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x4_t test_vluxseg4ei32_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vluxseg4ei32_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x4_t test_vluxseg4ei32_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vluxseg4ei32_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x4_t test_vluxseg4ei32_v_u32mf2x4_tumu(vbool64_t vm, + vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vluxseg4ei32_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x4_t test_vluxseg4ei32_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vluxseg4ei32_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x4_t test_vluxseg4ei32_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vluxseg4ei32_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x4_t test_vluxseg4ei32_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vluxseg4ei32_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint64m2x4_t test_vluxseg4ei32_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vluxseg4ei32_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x4_t test_vluxseg4ei32_v_f16mf4x4_mu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_f16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vluxseg4ei32_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x4_t test_vluxseg4ei32_v_f16mf2x4_mu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_f16mf2x4_mu(vm, vd, 
rs1, rs2, vl); } -vfloat16m1x4_t test_vluxseg4ei32_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x4_t test_vluxseg4ei32_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_f16m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vluxseg4ei32_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint32m4_t rs2, size_t vl) { +vfloat16m2x4_t test_vluxseg4ei32_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_f16m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vluxseg4ei32_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x4_t test_vluxseg4ei32_v_f32mf2x4_mu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_f32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vluxseg4ei32_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x4_t test_vluxseg4ei32_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_f32m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vluxseg4ei32_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint32m2_t rs2, size_t vl) { +vfloat32m2x4_t test_vluxseg4ei32_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_f32m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vluxseg4ei32_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x4_t test_vluxseg4ei32_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_f64m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vluxseg4ei32_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint32m1_t rs2, size_t vl) { +vfloat64m2x4_t test_vluxseg4ei32_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_f64m2x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vluxseg4ei32_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x4_t test_vluxseg4ei32_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_i8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vluxseg4ei32_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x4_t test_vluxseg4ei32_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_i8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vluxseg4ei32_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x4_t test_vluxseg4ei32_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_i8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vluxseg4ei32_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x4_t test_vluxseg4ei32_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return 
__riscv_vluxseg4ei32_v_i8m1x4_mu(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vluxseg4ei32_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint32m8_t rs2, size_t vl) { +vint8m2x4_t test_vluxseg4ei32_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_i8m2x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vluxseg4ei32_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x4_t test_vluxseg4ei32_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_i16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vluxseg4ei32_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x4_t test_vluxseg4ei32_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_i16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vluxseg4ei32_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x4_t test_vluxseg4ei32_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_i16m1x4_mu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vluxseg4ei32_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint32m4_t rs2, size_t vl) { +vint16m2x4_t test_vluxseg4ei32_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_i16m2x4_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vluxseg4ei32_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x4_t test_vluxseg4ei32_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_i32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vluxseg4ei32_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x4_t test_vluxseg4ei32_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_i32m1x4_mu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vluxseg4ei32_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint32m2_t rs2, size_t vl) { +vint32m2x4_t test_vluxseg4ei32_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_i32m2x4_mu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vluxseg4ei32_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x4_t test_vluxseg4ei32_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_i64m1x4_mu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vluxseg4ei32_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint32m1_t rs2, size_t vl) { +vint64m2x4_t test_vluxseg4ei32_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_i64m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vluxseg4ei32_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x4_t test_vluxseg4ei32_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { 
return __riscv_vluxseg4ei32_v_u8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vluxseg4ei32_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x4_t test_vluxseg4ei32_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vluxseg4ei32_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x4_t test_vluxseg4ei32_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vluxseg4ei32_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x4_t test_vluxseg4ei32_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_u8m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vluxseg4ei32_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint32m8_t rs2, size_t vl) { +vuint8m2x4_t test_vluxseg4ei32_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, vuint32m8_t rs2, + size_t vl) { return __riscv_vluxseg4ei32_v_u8m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vluxseg4ei32_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x4_t test_vluxseg4ei32_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vluxseg4ei32_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x4_t test_vluxseg4ei32_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vluxseg4ei32_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x4_t test_vluxseg4ei32_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u16m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vluxseg4ei32_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint16m2x4_t test_vluxseg4ei32_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u16m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vluxseg4ei32_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x4_t test_vluxseg4ei32_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vluxseg4ei32_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x4_t test_vluxseg4ei32_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg4ei32_v_u32m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vluxseg4ei32_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint32m2x4_t test_vluxseg4ei32_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, + 
                                             const uint32_t *rs1,
+                                             vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u32m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x4_t test_vluxseg4ei32_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint64m1x4_t test_vluxseg4ei32_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd,
+                                             const uint64_t *rs1,
+                                             vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u64m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m2x4_t test_vluxseg4ei32_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint64m2x4_t test_vluxseg4ei32_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd,
+                                             const uint64_t *rs1,
+                                             vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei32_v_u64m2x4_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei64.c
index 548c26d0f..845186364 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei64.c
@@ -6,562 +6,843 @@
 #include <riscv_vector.h>
 
-vfloat16mf4x4_t test_vluxseg4ei64_v_f16mf4x4_tu(vfloat16mf4x4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4x4_t test_vluxseg4ei64_v_f16mf4x4_tu(vfloat16mf4x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_f16mf4x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x4_t test_vluxseg4ei64_v_f16mf2x4_tu(vfloat16mf2x4_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2x4_t test_vluxseg4ei64_v_f16mf2x4_tu(vfloat16mf2x4_t vd,
+                                                const _Float16 *rs1,
+                                                vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_f16mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1x4_t test_vluxseg4ei64_v_f16m1x4_tu(vfloat16m1x4_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1x4_t test_vluxseg4ei64_v_f16m1x4_tu(vfloat16m1x4_t vd,
+                                              const _Float16 *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_f16m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m2x4_t test_vluxseg4ei64_v_f16m2x4_tu(vfloat16m2x4_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) {
+vfloat16m2x4_t test_vluxseg4ei64_v_f16m2x4_tu(vfloat16m2x4_t vd,
+                                              const _Float16 *rs1,
+                                              vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_f16m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x4_t test_vluxseg4ei64_v_f32mf2x4_tu(vfloat32mf2x4_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2x4_t test_vluxseg4ei64_v_f32mf2x4_tu(vfloat32mf2x4_t vd,
+                                                const float *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_f32mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m1x4_t test_vluxseg4ei64_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1x4_t test_vluxseg4ei64_v_f32m1x4_tu(vfloat32m1x4_t vd,
+                                              const float *rs1, vuint64m2_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg4ei64_v_f32m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m2x4_t test_vluxseg4ei64_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat32m2x4_t test_vluxseg4ei64_v_f32m2x4_tu(vfloat32m2x4_t vd,
+                                              const float *rs1, vuint64m4_t rs2,
+                                              size_t vl) {
   return __riscv_vluxseg4ei64_v_f32m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m1x4_t test_vluxseg4ei64_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1x4_t test_vluxseg4ei64_v_f64m1x4_tu(vfloat64m1x4_t vd,
+                                              const double *rs1,
+                                              vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_f64m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m2x4_t
test_vluxseg4ei64_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x4_t test_vluxseg4ei64_v_f64m2x4_tu(vfloat64m2x4_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f64m2x4_tu(vd, rs1, rs2, vl); } -vint8mf8x4_t test_vluxseg4ei64_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x4_t test_vluxseg4ei64_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i8mf8x4_tu(vd, rs1, rs2, vl); } -vint8mf4x4_t test_vluxseg4ei64_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x4_t test_vluxseg4ei64_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i8mf4x4_tu(vd, rs1, rs2, vl); } -vint8mf2x4_t test_vluxseg4ei64_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x4_t test_vluxseg4ei64_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i8mf2x4_tu(vd, rs1, rs2, vl); } -vint8m1x4_t test_vluxseg4ei64_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x4_t test_vluxseg4ei64_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i8m1x4_tu(vd, rs1, rs2, vl); } -vint16mf4x4_t test_vluxseg4ei64_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x4_t test_vluxseg4ei64_v_i16mf4x4_tu(vint16mf4x4_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16mf4x4_tu(vd, rs1, rs2, vl); } -vint16mf2x4_t test_vluxseg4ei64_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x4_t test_vluxseg4ei64_v_i16mf2x4_tu(vint16mf2x4_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16mf2x4_tu(vd, rs1, rs2, vl); } -vint16m1x4_t test_vluxseg4ei64_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x4_t test_vluxseg4ei64_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16m1x4_tu(vd, rs1, rs2, vl); } -vint16m2x4_t test_vluxseg4ei64_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x4_t test_vluxseg4ei64_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16m2x4_tu(vd, rs1, rs2, vl); } -vint32mf2x4_t test_vluxseg4ei64_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x4_t test_vluxseg4ei64_v_i32mf2x4_tu(vint32mf2x4_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i32mf2x4_tu(vd, rs1, rs2, vl); } -vint32m1x4_t test_vluxseg4ei64_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x4_t test_vluxseg4ei64_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i32m1x4_tu(vd, rs1, rs2, vl); } -vint32m2x4_t test_vluxseg4ei64_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x4_t test_vluxseg4ei64_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i32m2x4_tu(vd, rs1, rs2, vl); } -vint64m1x4_t test_vluxseg4ei64_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, vuint64m1_t 
rs2, size_t vl) { +vint64m1x4_t test_vluxseg4ei64_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i64m1x4_tu(vd, rs1, rs2, vl); } -vint64m2x4_t test_vluxseg4ei64_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x4_t test_vluxseg4ei64_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i64m2x4_tu(vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vluxseg4ei64_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x4_t test_vluxseg4ei64_v_u8mf8x4_tu(vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8mf8x4_tu(vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vluxseg4ei64_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x4_t test_vluxseg4ei64_v_u8mf4x4_tu(vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8mf4x4_tu(vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vluxseg4ei64_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x4_t test_vluxseg4ei64_v_u8mf2x4_tu(vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8mf2x4_tu(vd, rs1, rs2, vl); } -vuint8m1x4_t test_vluxseg4ei64_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x4_t test_vluxseg4ei64_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8m1x4_tu(vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vluxseg4ei64_v_u16mf4x4_tu(vuint16mf4x4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x4_t test_vluxseg4ei64_v_u16mf4x4_tu(vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16mf4x4_tu(vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vluxseg4ei64_v_u16mf2x4_tu(vuint16mf2x4_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x4_t test_vluxseg4ei64_v_u16mf2x4_tu(vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16mf2x4_tu(vd, rs1, rs2, vl); } -vuint16m1x4_t test_vluxseg4ei64_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x4_t test_vluxseg4ei64_v_u16m1x4_tu(vuint16m1x4_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16m1x4_tu(vd, rs1, rs2, vl); } -vuint16m2x4_t test_vluxseg4ei64_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x4_t test_vluxseg4ei64_v_u16m2x4_tu(vuint16m2x4_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16m2x4_tu(vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vluxseg4ei64_v_u32mf2x4_tu(vuint32mf2x4_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x4_t test_vluxseg4ei64_v_u32mf2x4_tu(vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u32mf2x4_tu(vd, rs1, rs2, vl); } -vuint32m1x4_t test_vluxseg4ei64_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x4_t test_vluxseg4ei64_v_u32m1x4_tu(vuint32m1x4_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u32m1x4_tu(vd, rs1, rs2, vl); } -vuint32m2x4_t test_vluxseg4ei64_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t 
vl) { +vuint32m2x4_t test_vluxseg4ei64_v_u32m2x4_tu(vuint32m2x4_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u32m2x4_tu(vd, rs1, rs2, vl); } -vuint64m1x4_t test_vluxseg4ei64_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x4_t test_vluxseg4ei64_v_u64m1x4_tu(vuint64m1x4_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u64m1x4_tu(vd, rs1, rs2, vl); } -vuint64m2x4_t test_vluxseg4ei64_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x4_t test_vluxseg4ei64_v_u64m2x4_tu(vuint64m2x4_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u64m2x4_tu(vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vluxseg4ei64_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x4_t test_vluxseg4ei64_v_f16mf4x4_tum(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vluxseg4ei64_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x4_t test_vluxseg4ei64_v_f16mf2x4_tum(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vluxseg4ei64_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x4_t test_vluxseg4ei64_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vluxseg4ei64_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x4_t test_vluxseg4ei64_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vluxseg4ei64_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x4_t test_vluxseg4ei64_v_f32mf2x4_tum(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vluxseg4ei64_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x4_t test_vluxseg4ei64_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f32m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vluxseg4ei64_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x4_t test_vluxseg4ei64_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f32m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vluxseg4ei64_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x4_t test_vluxseg4ei64_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f64m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vluxseg4ei64_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, const 
double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x4_t test_vluxseg4ei64_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f64m2x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vluxseg4ei64_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x4_t test_vluxseg4ei64_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vluxseg4ei64_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x4_t test_vluxseg4ei64_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vluxseg4ei64_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x4_t test_vluxseg4ei64_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vluxseg4ei64_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x4_t test_vluxseg4ei64_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i8m1x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vluxseg4ei64_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x4_t test_vluxseg4ei64_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vluxseg4ei64_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x4_t test_vluxseg4ei64_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vluxseg4ei64_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x4_t test_vluxseg4ei64_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16m1x4_tum(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vluxseg4ei64_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x4_t test_vluxseg4ei64_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16m2x4_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vluxseg4ei64_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x4_t test_vluxseg4ei64_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vluxseg4ei64_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x4_t test_vluxseg4ei64_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i32m1x4_tum(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vluxseg4ei64_v_i32m2x4_tum(vbool16_t 
vm, vint32m2x4_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x4_t test_vluxseg4ei64_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i32m2x4_tum(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vluxseg4ei64_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x4_t test_vluxseg4ei64_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i64m1x4_tum(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vluxseg4ei64_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x4_t test_vluxseg4ei64_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i64m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vluxseg4ei64_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x4_t test_vluxseg4ei64_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vluxseg4ei64_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x4_t test_vluxseg4ei64_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vluxseg4ei64_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x4_t test_vluxseg4ei64_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vluxseg4ei64_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x4_t test_vluxseg4ei64_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_u8m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vluxseg4ei64_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x4_t test_vluxseg4ei64_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vluxseg4ei64_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x4_t test_vluxseg4ei64_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vluxseg4ei64_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x4_t test_vluxseg4ei64_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vluxseg4ei64_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x4_t test_vluxseg4ei64_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16m2x4_tum(vm, vd, rs1, rs2, 
vl); } -vuint32mf2x4_t test_vluxseg4ei64_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x4_t test_vluxseg4ei64_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vluxseg4ei64_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x4_t test_vluxseg4ei64_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u32m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vluxseg4ei64_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x4_t test_vluxseg4ei64_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u32m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vluxseg4ei64_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x4_t test_vluxseg4ei64_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u64m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vluxseg4ei64_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x4_t test_vluxseg4ei64_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u64m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vluxseg4ei64_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x4_t test_vluxseg4ei64_v_f16mf4x4_tumu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vluxseg4ei64_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x4_t test_vluxseg4ei64_v_f16mf2x4_tumu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vluxseg4ei64_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x4_t test_vluxseg4ei64_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vluxseg4ei64_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x4_t test_vluxseg4ei64_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vluxseg4ei64_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x4_t test_vluxseg4ei64_v_f32mf2x4_tumu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vluxseg4ei64_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x4_t 
test_vluxseg4ei64_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vluxseg4ei64_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x4_t test_vluxseg4ei64_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vluxseg4ei64_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x4_t test_vluxseg4ei64_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vluxseg4ei64_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x4_t test_vluxseg4ei64_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vluxseg4ei64_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x4_t test_vluxseg4ei64_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vluxseg4ei64_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x4_t test_vluxseg4ei64_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vluxseg4ei64_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x4_t test_vluxseg4ei64_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vluxseg4ei64_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x4_t test_vluxseg4ei64_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vluxseg4ei64_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x4_t test_vluxseg4ei64_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vluxseg4ei64_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x4_t test_vluxseg4ei64_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vluxseg4ei64_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x4_t test_vluxseg4ei64_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vluxseg4ei64_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t 
vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x4_t test_vluxseg4ei64_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vluxseg4ei64_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x4_t test_vluxseg4ei64_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vluxseg4ei64_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x4_t test_vluxseg4ei64_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vluxseg4ei64_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x4_t test_vluxseg4ei64_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vluxseg4ei64_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x4_t test_vluxseg4ei64_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vluxseg4ei64_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x4_t test_vluxseg4ei64_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vluxseg4ei64_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x4_t test_vluxseg4ei64_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vluxseg4ei64_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x4_t test_vluxseg4ei64_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vluxseg4ei64_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x4_t test_vluxseg4ei64_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vluxseg4ei64_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x4_t test_vluxseg4ei64_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vluxseg4ei64_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x4_t test_vluxseg4ei64_v_u16mf4x4_tumu(vbool64_t vm, + vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16mf4x4_tumu(vm, vd, rs1, rs2, 
vl); } -vuint16mf2x4_t test_vluxseg4ei64_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x4_t test_vluxseg4ei64_v_u16mf2x4_tumu(vbool32_t vm, + vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vluxseg4ei64_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x4_t test_vluxseg4ei64_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vluxseg4ei64_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint16m2x4_t test_vluxseg4ei64_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vluxseg4ei64_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x4_t test_vluxseg4ei64_v_u32mf2x4_tumu(vbool64_t vm, + vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vluxseg4ei64_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x4_t test_vluxseg4ei64_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vluxseg4ei64_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint32m2x4_t test_vluxseg4ei64_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vluxseg4ei64_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x4_t test_vluxseg4ei64_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vluxseg4ei64_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint64m2x4_t test_vluxseg4ei64_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vluxseg4ei64_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x4_t test_vluxseg4ei64_v_f16mf4x4_mu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vluxseg4ei64_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x4_t test_vluxseg4ei64_v_f16mf2x4_mu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vluxseg4ei64_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x4_t 
test_vluxseg4ei64_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vluxseg4ei64_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint64m8_t rs2, size_t vl) { +vfloat16m2x4_t test_vluxseg4ei64_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f16m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vluxseg4ei64_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x4_t test_vluxseg4ei64_v_f32mf2x4_mu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vluxseg4ei64_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x4_t test_vluxseg4ei64_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_f32m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vluxseg4ei64_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint64m4_t rs2, size_t vl) { +vfloat32m2x4_t test_vluxseg4ei64_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_f32m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vluxseg4ei64_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x4_t test_vluxseg4ei64_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f64m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vluxseg4ei64_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint64m2_t rs2, size_t vl) { +vfloat64m2x4_t test_vluxseg4ei64_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_f64m2x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vluxseg4ei64_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x4_t test_vluxseg4ei64_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vluxseg4ei64_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x4_t test_vluxseg4ei64_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vluxseg4ei64_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x4_t test_vluxseg4ei64_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vluxseg4ei64_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x4_t test_vluxseg4ei64_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i8m1x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vluxseg4ei64_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t 
vl) { +vint16mf4x4_t test_vluxseg4ei64_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vluxseg4ei64_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x4_t test_vluxseg4ei64_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vluxseg4ei64_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x4_t test_vluxseg4ei64_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i16m1x4_mu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vluxseg4ei64_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint64m8_t rs2, size_t vl) { +vint16m2x4_t test_vluxseg4ei64_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i16m2x4_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vluxseg4ei64_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x4_t test_vluxseg4ei64_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_i32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vluxseg4ei64_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x4_t test_vluxseg4ei64_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i32m1x4_mu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vluxseg4ei64_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint64m4_t rs2, size_t vl) { +vint32m2x4_t test_vluxseg4ei64_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i32m2x4_mu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vluxseg4ei64_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x4_t test_vluxseg4ei64_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i64m1x4_mu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vluxseg4ei64_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint64m2_t rs2, size_t vl) { +vint64m2x4_t test_vluxseg4ei64_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei64_v_i64m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vluxseg4ei64_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x4_t test_vluxseg4ei64_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vluxseg4ei64_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x4_t test_vluxseg4ei64_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg4ei64_v_u8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vluxseg4ei64_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, 
vuint64m4_t rs2, size_t vl) {
+vuint8mf2x4_t test_vluxseg4ei64_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd,
+                                             const uint8_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_u8mf2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint8m1x4_t test_vluxseg4ei64_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1x4_t test_vluxseg4ei64_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd,
+                                           const uint8_t *rs1, vuint64m8_t rs2,
+                                           size_t vl) {
   return __riscv_vluxseg4ei64_v_u8m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf4x4_t test_vluxseg4ei64_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4x4_t test_vluxseg4ei64_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd,
+                                               const uint16_t *rs1,
+                                               vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_u16mf4x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf2x4_t test_vluxseg4ei64_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2x4_t test_vluxseg4ei64_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd,
+                                               const uint16_t *rs1,
+                                               vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_u16mf2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m1x4_t test_vluxseg4ei64_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1x4_t test_vluxseg4ei64_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd,
+                                             const uint16_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_u16m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m2x4_t test_vluxseg4ei64_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint16m2x4_t test_vluxseg4ei64_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd,
+                                             const uint16_t *rs1,
+                                             vuint64m8_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_u16m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32mf2x4_t test_vluxseg4ei64_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x4_t test_vluxseg4ei64_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd,
+                                               const uint32_t *rs1,
+                                               vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_u32mf2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m1x4_t test_vluxseg4ei64_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x4_t test_vluxseg4ei64_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd,
+                                             const uint32_t *rs1,
+                                             vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_u32m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m2x4_t test_vluxseg4ei64_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint32m2x4_t test_vluxseg4ei64_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd,
+                                             const uint32_t *rs1,
+                                             vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_u32m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x4_t test_vluxseg4ei64_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x4_t test_vluxseg4ei64_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd,
+                                             const uint64_t *rs1,
+                                             vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_u64m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m2x4_t test_vluxseg4ei64_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint64m2x4_t test_vluxseg4ei64_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd,
+                                             const uint64_t *rs1,
+                                             vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei64_v_u64m2x4_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei8.c
index eccb6de56..d6d67ff8d 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei8.c
@@ -6,594 +6,883 @@
 #include <riscv_vector.h>
 
-vfloat16mf4x4_t test_vluxseg4ei8_v_f16mf4x4_tu(vfloat16mf4x4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x4_t test_vluxseg4ei8_v_f16mf4x4_tu(vfloat16mf4x4_t vd,
+                                               const _Float16 *rs1,
+                                               vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg4ei8_v_f16mf4x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x4_t test_vluxseg4ei8_v_f16mf2x4_tu(vfloat16mf2x4_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x4_t test_vluxseg4ei8_v_f16mf2x4_tu(vfloat16mf2x4_t vd,
+                                               const _Float16 *rs1,
+                                               vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei8_v_f16mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1x4_t test_vluxseg4ei8_v_f16m1x4_tu(vfloat16m1x4_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x4_t test_vluxseg4ei8_v_f16m1x4_tu(vfloat16m1x4_t vd,
+                                             const _Float16 *rs1,
+                                             vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei8_v_f16m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m2x4_t test_vluxseg4ei8_v_f16m2x4_tu(vfloat16m2x4_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) {
+vfloat16m2x4_t test_vluxseg4ei8_v_f16m2x4_tu(vfloat16m2x4_t vd,
+                                             const _Float16 *rs1,
+                                             vuint8m1_t rs2, size_t vl) {
   return __riscv_vluxseg4ei8_v_f16m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x4_t test_vluxseg4ei8_v_f32mf2x4_tu(vfloat32mf2x4_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x4_t test_vluxseg4ei8_v_f32mf2x4_tu(vfloat32mf2x4_t vd,
+                                               const float *rs1,
+                                               vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg4ei8_v_f32mf2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m1x4_t test_vluxseg4ei8_v_f32m1x4_tu(vfloat32m1x4_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x4_t test_vluxseg4ei8_v_f32m1x4_tu(vfloat32m1x4_t vd,
+                                             const float *rs1, vuint8mf4_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg4ei8_v_f32m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m2x4_t test_vluxseg4ei8_v_f32m2x4_tu(vfloat32m2x4_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat32m2x4_t test_vluxseg4ei8_v_f32m2x4_tu(vfloat32m2x4_t vd,
+                                             const float *rs1, vuint8mf2_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg4ei8_v_f32m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m1x4_t test_vluxseg4ei8_v_f64m1x4_tu(vfloat64m1x4_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x4_t test_vluxseg4ei8_v_f64m1x4_tu(vfloat64m1x4_t vd,
+                                             const double *rs1, vuint8mf8_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg4ei8_v_f64m1x4_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m2x4_t test_vluxseg4ei8_v_f64m2x4_tu(vfloat64m2x4_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat64m2x4_t test_vluxseg4ei8_v_f64m2x4_tu(vfloat64m2x4_t vd,
+                                             const double *rs1, vuint8mf4_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg4ei8_v_f64m2x4_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf8x4_t test_vluxseg4ei8_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x4_t test_vluxseg4ei8_v_i8mf8x4_tu(vint8mf8x4_t vd, const int8_t *rs1,
+                                           vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg4ei8_v_i8mf8x4_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf4x4_t test_vluxseg4ei8_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x4_t test_vluxseg4ei8_v_i8mf4x4_tu(vint8mf4x4_t vd, const int8_t *rs1,
+                                           vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei8_v_i8mf4x4_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf2x4_t
test_vluxseg4ei8_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x4_t test_vluxseg4ei8_v_i8mf2x4_tu(vint8mf2x4_t vd, const int8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i8mf2x4_tu(vd, rs1, rs2, vl); } -vint8m1x4_t test_vluxseg4ei8_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x4_t test_vluxseg4ei8_v_i8m1x4_tu(vint8m1x4_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i8m1x4_tu(vd, rs1, rs2, vl); } -vint8m2x4_t test_vluxseg4ei8_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x4_t test_vluxseg4ei8_v_i8m2x4_tu(vint8m2x4_t vd, const int8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i8m2x4_tu(vd, rs1, rs2, vl); } -vint16mf4x4_t test_vluxseg4ei8_v_i16mf4x4_tu(vint16mf4x4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x4_t test_vluxseg4ei8_v_i16mf4x4_tu(vint16mf4x4_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i16mf4x4_tu(vd, rs1, rs2, vl); } -vint16mf2x4_t test_vluxseg4ei8_v_i16mf2x4_tu(vint16mf2x4_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x4_t test_vluxseg4ei8_v_i16mf2x4_tu(vint16mf2x4_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i16mf2x4_tu(vd, rs1, rs2, vl); } -vint16m1x4_t test_vluxseg4ei8_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x4_t test_vluxseg4ei8_v_i16m1x4_tu(vint16m1x4_t vd, const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i16m1x4_tu(vd, rs1, rs2, vl); } -vint16m2x4_t test_vluxseg4ei8_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x4_t test_vluxseg4ei8_v_i16m2x4_tu(vint16m2x4_t vd, const int16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i16m2x4_tu(vd, rs1, rs2, vl); } -vint32mf2x4_t test_vluxseg4ei8_v_i32mf2x4_tu(vint32mf2x4_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x4_t test_vluxseg4ei8_v_i32mf2x4_tu(vint32mf2x4_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i32mf2x4_tu(vd, rs1, rs2, vl); } -vint32m1x4_t test_vluxseg4ei8_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x4_t test_vluxseg4ei8_v_i32m1x4_tu(vint32m1x4_t vd, const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i32m1x4_tu(vd, rs1, rs2, vl); } -vint32m2x4_t test_vluxseg4ei8_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x4_t test_vluxseg4ei8_v_i32m2x4_tu(vint32m2x4_t vd, const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i32m2x4_tu(vd, rs1, rs2, vl); } -vint64m1x4_t test_vluxseg4ei8_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x4_t test_vluxseg4ei8_v_i64m1x4_tu(vint64m1x4_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i64m1x4_tu(vd, rs1, rs2, vl); } -vint64m2x4_t test_vluxseg4ei8_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x4_t test_vluxseg4ei8_v_i64m2x4_tu(vint64m2x4_t vd, const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i64m2x4_tu(vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vluxseg4ei8_v_u8mf8x4_tu(vuint8mf8x4_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x4_t 
test_vluxseg4ei8_v_u8mf8x4_tu(vuint8mf8x4_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8mf8x4_tu(vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vluxseg4ei8_v_u8mf4x4_tu(vuint8mf4x4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x4_t test_vluxseg4ei8_v_u8mf4x4_tu(vuint8mf4x4_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8mf4x4_tu(vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vluxseg4ei8_v_u8mf2x4_tu(vuint8mf2x4_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x4_t test_vluxseg4ei8_v_u8mf2x4_tu(vuint8mf2x4_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8mf2x4_tu(vd, rs1, rs2, vl); } -vuint8m1x4_t test_vluxseg4ei8_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x4_t test_vluxseg4ei8_v_u8m1x4_tu(vuint8m1x4_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u8m1x4_tu(vd, rs1, rs2, vl); } -vuint8m2x4_t test_vluxseg4ei8_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x4_t test_vluxseg4ei8_v_u8m2x4_tu(vuint8m2x4_t vd, const uint8_t *rs1, + vuint8m2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u8m2x4_tu(vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vluxseg4ei8_v_u16mf4x4_tu(vuint16mf4x4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x4_t test_vluxseg4ei8_v_u16mf4x4_tu(vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16mf4x4_tu(vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vluxseg4ei8_v_u16mf2x4_tu(vuint16mf2x4_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x4_t test_vluxseg4ei8_v_u16mf2x4_tu(vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16mf2x4_tu(vd, rs1, rs2, vl); } -vuint16m1x4_t test_vluxseg4ei8_v_u16m1x4_tu(vuint16m1x4_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x4_t test_vluxseg4ei8_v_u16m1x4_tu(vuint16m1x4_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16m1x4_tu(vd, rs1, rs2, vl); } -vuint16m2x4_t test_vluxseg4ei8_v_u16m2x4_tu(vuint16m2x4_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x4_t test_vluxseg4ei8_v_u16m2x4_tu(vuint16m2x4_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u16m2x4_tu(vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vluxseg4ei8_v_u32mf2x4_tu(vuint32mf2x4_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x4_t test_vluxseg4ei8_v_u32mf2x4_tu(vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u32mf2x4_tu(vd, rs1, rs2, vl); } -vuint32m1x4_t test_vluxseg4ei8_v_u32m1x4_tu(vuint32m1x4_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x4_t test_vluxseg4ei8_v_u32m1x4_tu(vuint32m1x4_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u32m1x4_tu(vd, rs1, rs2, vl); } -vuint32m2x4_t test_vluxseg4ei8_v_u32m2x4_tu(vuint32m2x4_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x4_t test_vluxseg4ei8_v_u32m2x4_tu(vuint32m2x4_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u32m2x4_tu(vd, rs1, rs2, vl); } -vuint64m1x4_t test_vluxseg4ei8_v_u64m1x4_tu(vuint64m1x4_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x4_t test_vluxseg4ei8_v_u64m1x4_tu(vuint64m1x4_t vd, + 
const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u64m1x4_tu(vd, rs1, rs2, vl); } -vuint64m2x4_t test_vluxseg4ei8_v_u64m2x4_tu(vuint64m2x4_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x4_t test_vluxseg4ei8_v_u64m2x4_tu(vuint64m2x4_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u64m2x4_tu(vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vluxseg4ei8_v_f16mf4x4_tum(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x4_t test_vluxseg4ei8_v_f16mf4x4_tum(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vluxseg4ei8_v_f16mf2x4_tum(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x4_t test_vluxseg4ei8_v_f16mf2x4_tum(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vluxseg4ei8_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x4_t test_vluxseg4ei8_v_f16m1x4_tum(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vluxseg4ei8_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x4_t test_vluxseg4ei8_v_f16m2x4_tum(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vluxseg4ei8_v_f32mf2x4_tum(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x4_t test_vluxseg4ei8_v_f32mf2x4_tum(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vluxseg4ei8_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x4_t test_vluxseg4ei8_v_f32m1x4_tum(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_f32m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vluxseg4ei8_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x4_t test_vluxseg4ei8_v_f32m2x4_tum(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_f32m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vluxseg4ei8_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x4_t test_vluxseg4ei8_v_f64m1x4_tum(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f64m1x4_tum(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vluxseg4ei8_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x4_t test_vluxseg4ei8_v_f64m2x4_tum(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f64m2x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vluxseg4ei8_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x4_t 
test_vluxseg4ei8_v_i8mf8x4_tum(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vluxseg4ei8_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x4_t test_vluxseg4ei8_v_i8mf4x4_tum(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vluxseg4ei8_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x4_t test_vluxseg4ei8_v_i8mf2x4_tum(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vluxseg4ei8_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x4_t test_vluxseg4ei8_v_i8m1x4_tum(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8m1x4_tum(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vluxseg4ei8_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x4_t test_vluxseg4ei8_v_i8m2x4_tum(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8m2x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vluxseg4ei8_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x4_t test_vluxseg4ei8_v_i16mf4x4_tum(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vluxseg4ei8_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x4_t test_vluxseg4ei8_v_i16mf2x4_tum(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vluxseg4ei8_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x4_t test_vluxseg4ei8_v_i16m1x4_tum(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i16m1x4_tum(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vluxseg4ei8_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x4_t test_vluxseg4ei8_v_i16m2x4_tum(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i16m2x4_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vluxseg4ei8_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x4_t test_vluxseg4ei8_v_i32mf2x4_tum(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vluxseg4ei8_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x4_t test_vluxseg4ei8_v_i32m1x4_tum(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i32m1x4_tum(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vluxseg4ei8_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x4_t 
test_vluxseg4ei8_v_i32m2x4_tum(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i32m2x4_tum(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vluxseg4ei8_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x4_t test_vluxseg4ei8_v_i64m1x4_tum(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i64m1x4_tum(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vluxseg4ei8_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x4_t test_vluxseg4ei8_v_i64m2x4_tum(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i64m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vluxseg4ei8_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x4_t test_vluxseg4ei8_v_u8mf8x4_tum(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u8mf8x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vluxseg4ei8_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x4_t test_vluxseg4ei8_v_u8mf4x4_tum(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u8mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vluxseg4ei8_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x4_t test_vluxseg4ei8_v_u8mf2x4_tum(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u8mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vluxseg4ei8_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x4_t test_vluxseg4ei8_v_u8m1x4_tum(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vluxseg4ei8_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x4_t test_vluxseg4ei8_v_u8m2x4_tum(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vluxseg4ei8_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x4_t test_vluxseg4ei8_v_u16mf4x4_tum(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vluxseg4ei8_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x4_t test_vluxseg4ei8_v_u16mf2x4_tum(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vluxseg4ei8_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x4_t test_vluxseg4ei8_v_u16m1x4_tum(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vluxseg4ei8_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t 
vl) { +vuint16m2x4_t test_vluxseg4ei8_v_u16m2x4_tum(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vluxseg4ei8_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x4_t test_vluxseg4ei8_v_u32mf2x4_tum(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u32mf2x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vluxseg4ei8_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x4_t test_vluxseg4ei8_v_u32m1x4_tum(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u32m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vluxseg4ei8_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x4_t test_vluxseg4ei8_v_u32m2x4_tum(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u32m2x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vluxseg4ei8_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x4_t test_vluxseg4ei8_v_u64m1x4_tum(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u64m1x4_tum(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vluxseg4ei8_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x4_t test_vluxseg4ei8_v_u64m2x4_tum(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u64m2x4_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vluxseg4ei8_v_f16mf4x4_tumu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x4_t test_vluxseg4ei8_v_f16mf4x4_tumu(vbool64_t vm, + vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vluxseg4ei8_v_f16mf2x4_tumu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x4_t test_vluxseg4ei8_v_f16mf2x4_tumu(vbool32_t vm, + vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vluxseg4ei8_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x4_t test_vluxseg4ei8_v_f16m1x4_tumu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vluxseg4ei8_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x4_t test_vluxseg4ei8_v_f16m2x4_tumu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vluxseg4ei8_v_f32mf2x4_tumu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x4_t test_vluxseg4ei8_v_f32mf2x4_tumu(vbool64_t vm, + vfloat32mf2x4_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f32mf2x4_tumu(vm, vd, rs1, rs2, vl); } 
-vfloat32m1x4_t test_vluxseg4ei8_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x4_t test_vluxseg4ei8_v_f32m1x4_tumu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vluxseg4ei8_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x4_t test_vluxseg4ei8_v_f32m2x4_tumu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vluxseg4ei8_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x4_t test_vluxseg4ei8_v_f64m1x4_tumu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vluxseg4ei8_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x4_t test_vluxseg4ei8_v_f64m2x4_tumu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vluxseg4ei8_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x4_t test_vluxseg4ei8_v_i8mf8x4_tumu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vluxseg4ei8_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x4_t test_vluxseg4ei8_v_i8mf4x4_tumu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vluxseg4ei8_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x4_t test_vluxseg4ei8_v_i8mf2x4_tumu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vluxseg4ei8_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x4_t test_vluxseg4ei8_v_i8m1x4_tumu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vluxseg4ei8_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x4_t test_vluxseg4ei8_v_i8m2x4_tumu(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vluxseg4ei8_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x4_t test_vluxseg4ei8_v_i16mf4x4_tumu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vluxseg4ei8_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x4_t test_vluxseg4ei8_v_i16mf2x4_tumu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return 
__riscv_vluxseg4ei8_v_i16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vluxseg4ei8_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x4_t test_vluxseg4ei8_v_i16m1x4_tumu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vluxseg4ei8_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x4_t test_vluxseg4ei8_v_i16m2x4_tumu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vluxseg4ei8_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x4_t test_vluxseg4ei8_v_i32mf2x4_tumu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vluxseg4ei8_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x4_t test_vluxseg4ei8_v_i32m1x4_tumu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vluxseg4ei8_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x4_t test_vluxseg4ei8_v_i32m2x4_tumu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vluxseg4ei8_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x4_t test_vluxseg4ei8_v_i64m1x4_tumu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vluxseg4ei8_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x4_t test_vluxseg4ei8_v_i64m2x4_tumu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vluxseg4ei8_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x4_t test_vluxseg4ei8_v_u8mf8x4_tumu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u8mf8x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vluxseg4ei8_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x4_t test_vluxseg4ei8_v_u8mf4x4_tumu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u8mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vluxseg4ei8_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x4_t test_vluxseg4ei8_v_u8mf2x4_tumu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u8mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vluxseg4ei8_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x4_t test_vluxseg4ei8_v_u8m1x4_tumu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, 
vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vluxseg4ei8_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x4_t test_vluxseg4ei8_v_u8m2x4_tumu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vluxseg4ei8_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x4_t test_vluxseg4ei8_v_u16mf4x4_tumu(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vluxseg4ei8_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x4_t test_vluxseg4ei8_v_u16mf2x4_tumu(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vluxseg4ei8_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x4_t test_vluxseg4ei8_v_u16m1x4_tumu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vluxseg4ei8_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x4_t test_vluxseg4ei8_v_u16m2x4_tumu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vluxseg4ei8_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x4_t test_vluxseg4ei8_v_u32mf2x4_tumu(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u32mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vluxseg4ei8_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x4_t test_vluxseg4ei8_v_u32m1x4_tumu(vbool32_t vm, vuint32m1x4_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u32m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint32m2x4_t test_vluxseg4ei8_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint32m2x4_t test_vluxseg4ei8_v_u32m2x4_tumu(vbool16_t vm, vuint32m2x4_t vd, + const uint32_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u32m2x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x4_t test_vluxseg4ei8_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x4_t test_vluxseg4ei8_v_u64m1x4_tumu(vbool64_t vm, vuint64m1x4_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u64m1x4_tumu(vm, vd, rs1, rs2, vl); } -vuint64m2x4_t test_vluxseg4ei8_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint64m2x4_t test_vluxseg4ei8_v_u64m2x4_tumu(vbool32_t vm, vuint64m2x4_t vd, + const uint64_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u64m2x4_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x4_t test_vluxseg4ei8_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { 
+vfloat16mf4x4_t test_vluxseg4ei8_v_f16mf4x4_mu(vbool64_t vm, vfloat16mf4x4_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x4_t test_vluxseg4ei8_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x4_t test_vluxseg4ei8_v_f16mf2x4_mu(vbool32_t vm, vfloat16mf2x4_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x4_t test_vluxseg4ei8_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x4_t test_vluxseg4ei8_v_f16m1x4_mu(vbool16_t vm, vfloat16m1x4_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat16m2x4_t test_vluxseg4ei8_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, const _Float16 *rs1, vuint8m1_t rs2, size_t vl) { +vfloat16m2x4_t test_vluxseg4ei8_v_f16m2x4_mu(vbool8_t vm, vfloat16m2x4_t vd, + const _Float16 *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f16m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x4_t test_vluxseg4ei8_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x4_t test_vluxseg4ei8_v_f32mf2x4_mu(vbool64_t vm, vfloat32mf2x4_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_f32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x4_t test_vluxseg4ei8_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x4_t test_vluxseg4ei8_v_f32m1x4_mu(vbool32_t vm, vfloat32m1x4_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_f32m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat32m2x4_t test_vluxseg4ei8_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, const float *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat32m2x4_t test_vluxseg4ei8_v_f32m2x4_mu(vbool16_t vm, vfloat32m2x4_t vd, + const float *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_f32m2x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x4_t test_vluxseg4ei8_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x4_t test_vluxseg4ei8_v_f64m1x4_mu(vbool64_t vm, vfloat64m1x4_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_f64m1x4_mu(vm, vd, rs1, rs2, vl); } -vfloat64m2x4_t test_vluxseg4ei8_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, const double *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat64m2x4_t test_vluxseg4ei8_v_f64m2x4_mu(vbool32_t vm, vfloat64m2x4_t vd, + const double *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_f64m2x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x4_t test_vluxseg4ei8_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x4_t test_vluxseg4ei8_v_i8mf8x4_mu(vbool64_t vm, vint8mf8x4_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x4_t test_vluxseg4ei8_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x4_t test_vluxseg4ei8_v_i8mf4x4_mu(vbool32_t vm, vint8mf4x4_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x4_t test_vluxseg4ei8_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, const int8_t *rs1, 
vuint8mf2_t rs2, size_t vl) { +vint8mf2x4_t test_vluxseg4ei8_v_i8mf2x4_mu(vbool16_t vm, vint8mf2x4_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint8m1x4_t test_vluxseg4ei8_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x4_t test_vluxseg4ei8_v_i8m1x4_mu(vbool8_t vm, vint8m1x4_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8m1x4_mu(vm, vd, rs1, rs2, vl); } -vint8m2x4_t test_vluxseg4ei8_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, const int8_t *rs1, vuint8m2_t rs2, size_t vl) { +vint8m2x4_t test_vluxseg4ei8_v_i8m2x4_mu(vbool4_t vm, vint8m2x4_t vd, + const int8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i8m2x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x4_t test_vluxseg4ei8_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x4_t test_vluxseg4ei8_v_i16mf4x4_mu(vbool64_t vm, vint16mf4x4_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x4_t test_vluxseg4ei8_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x4_t test_vluxseg4ei8_v_i16mf2x4_mu(vbool32_t vm, vint16mf2x4_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint16m1x4_t test_vluxseg4ei8_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x4_t test_vluxseg4ei8_v_i16m1x4_mu(vbool16_t vm, vint16m1x4_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i16m1x4_mu(vm, vd, rs1, rs2, vl); } -vint16m2x4_t test_vluxseg4ei8_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, const int16_t *rs1, vuint8m1_t rs2, size_t vl) { +vint16m2x4_t test_vluxseg4ei8_v_i16m2x4_mu(vbool8_t vm, vint16m2x4_t vd, + const int16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i16m2x4_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x4_t test_vluxseg4ei8_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x4_t test_vluxseg4ei8_v_i32mf2x4_mu(vbool64_t vm, vint32mf2x4_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_i32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vint32m1x4_t test_vluxseg4ei8_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x4_t test_vluxseg4ei8_v_i32m1x4_mu(vbool32_t vm, vint32m1x4_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i32m1x4_mu(vm, vd, rs1, rs2, vl); } -vint32m2x4_t test_vluxseg4ei8_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, const int32_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint32m2x4_t test_vluxseg4ei8_v_i32m2x4_mu(vbool16_t vm, vint32m2x4_t vd, + const int32_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i32m2x4_mu(vm, vd, rs1, rs2, vl); } -vint64m1x4_t test_vluxseg4ei8_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x4_t test_vluxseg4ei8_v_i64m1x4_mu(vbool64_t vm, vint64m1x4_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i64m1x4_mu(vm, vd, rs1, rs2, vl); } -vint64m2x4_t test_vluxseg4ei8_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, const int64_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint64m2x4_t 
test_vluxseg4ei8_v_i64m2x4_mu(vbool32_t vm, vint64m2x4_t vd, + const int64_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_i64m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x4_t test_vluxseg4ei8_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x4_t test_vluxseg4ei8_v_u8mf8x4_mu(vbool64_t vm, vuint8mf8x4_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8mf8x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x4_t test_vluxseg4ei8_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x4_t test_vluxseg4ei8_v_u8mf4x4_mu(vbool32_t vm, vuint8mf4x4_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x4_t test_vluxseg4ei8_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x4_t test_vluxseg4ei8_v_u8mf2x4_mu(vbool16_t vm, vuint8mf2x4_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x4_t test_vluxseg4ei8_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x4_t test_vluxseg4ei8_v_u8m1x4_mu(vbool8_t vm, vuint8m1x4_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint8m2x4_t test_vluxseg4ei8_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, const uint8_t *rs1, vuint8m2_t rs2, size_t vl) { +vuint8m2x4_t test_vluxseg4ei8_v_u8m2x4_mu(vbool4_t vm, vuint8m2x4_t vd, + const uint8_t *rs1, vuint8m2_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u8m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x4_t test_vluxseg4ei8_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x4_t test_vluxseg4ei8_v_u16mf4x4_mu(vbool64_t vm, vuint16mf4x4_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x4_t test_vluxseg4ei8_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x4_t test_vluxseg4ei8_v_u16mf2x4_mu(vbool32_t vm, vuint16mf2x4_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x4_t test_vluxseg4ei8_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x4_t test_vluxseg4ei8_v_u16m1x4_mu(vbool16_t vm, vuint16m1x4_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u16m1x4_mu(vm, vd, rs1, rs2, vl); } -vuint16m2x4_t test_vluxseg4ei8_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, const uint16_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint16m2x4_t test_vluxseg4ei8_v_u16m2x4_mu(vbool8_t vm, vuint16m2x4_t vd, + const uint16_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg4ei8_v_u16m2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x4_t test_vluxseg4ei8_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x4_t test_vluxseg4ei8_v_u32mf2x4_mu(vbool64_t vm, vuint32mf2x4_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg4ei8_v_u32mf2x4_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x4_t test_vluxseg4ei8_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { 
+vuint32m1x4_t test_vluxseg4ei8_v_u32m1x4_mu(vbool32_t vm, vuint32m1x4_t vd,
+                                            const uint32_t *rs1,
+                                            vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei8_v_u32m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m2x4_t test_vluxseg4ei8_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd, const uint32_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint32m2x4_t test_vluxseg4ei8_v_u32m2x4_mu(vbool16_t vm, vuint32m2x4_t vd,
+                                            const uint32_t *rs1,
+                                            vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg4ei8_v_u32m2x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x4_t test_vluxseg4ei8_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x4_t test_vluxseg4ei8_v_u64m1x4_mu(vbool64_t vm, vuint64m1x4_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg4ei8_v_u64m1x4_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m2x4_t test_vluxseg4ei8_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd, const uint64_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint64m2x4_t test_vluxseg4ei8_v_u64m2x4_mu(vbool32_t vm, vuint64m2x4_t vd,
+                                            const uint64_t *rs1,
+                                            vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg4ei8_v_u64m2x4_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei16.c
index d87bb6883..79752eda2 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei16.c
@@ -6,418 +6,630 @@
 #include <riscv_vector.h>
 
-vfloat16mf4x5_t test_vluxseg5ei16_v_f16mf4x5_tu(vfloat16mf4x5_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x5_t test_vluxseg5ei16_v_f16mf4x5_tu(vfloat16mf4x5_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei16_v_f16mf4x5_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x5_t test_vluxseg5ei16_v_f16mf2x5_tu(vfloat16mf2x5_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x5_t test_vluxseg5ei16_v_f16mf2x5_tu(vfloat16mf2x5_t vd,
+                                                const _Float16 *rs1,
+                                                vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg5ei16_v_f16mf2x5_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1x5_t test_vluxseg5ei16_v_f16m1x5_tu(vfloat16m1x5_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x5_t test_vluxseg5ei16_v_f16m1x5_tu(vfloat16m1x5_t vd,
+                                              const _Float16 *rs1,
+                                              vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg5ei16_v_f16m1x5_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x5_t test_vluxseg5ei16_v_f32mf2x5_tu(vfloat32mf2x5_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x5_t test_vluxseg5ei16_v_f32mf2x5_tu(vfloat32mf2x5_t vd,
+                                                const float *rs1,
+                                                vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei16_v_f32mf2x5_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m1x5_t test_vluxseg5ei16_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x5_t test_vluxseg5ei16_v_f32m1x5_tu(vfloat32m1x5_t vd,
+                                              const float *rs1,
+                                              vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg5ei16_v_f32m1x5_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m1x5_t test_vluxseg5ei16_v_f64m1x5_tu(vfloat64m1x5_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x5_t test_vluxseg5ei16_v_f64m1x5_tu(vfloat64m1x5_t vd,
+                                              const double *rs1,
+                                              vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei16_v_f64m1x5_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf8x5_t test_vluxseg5ei16_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x5_t test_vluxseg5ei16_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1,
+                                            vuint16mf4_t rs2, size_t vl) {
   return
__riscv_vluxseg5ei16_v_i8mf8x5_tu(vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei16_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei16_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i8mf4x5_tu(vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei16_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei16_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i8mf2x5_tu(vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei16_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei16_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i8m1x5_tu(vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei16_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei16_v_i16mf4x5_tu(vint16mf4x5_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i16mf4x5_tu(vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei16_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei16_v_i16mf2x5_tu(vint16mf2x5_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i16mf2x5_tu(vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei16_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei16_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i16m1x5_tu(vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei16_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei16_v_i32mf2x5_tu(vint32mf2x5_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i32mf2x5_tu(vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei16_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei16_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i32m1x5_tu(vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei16_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei16_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i64m1x5_tu(vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei16_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei16_v_u8mf8x5_tu(vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf8x5_tu(vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei16_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei16_v_u8mf4x5_tu(vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf4x5_tu(vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei16_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei16_v_u8mf2x5_tu(vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf2x5_tu(vd, rs1, rs2, 
vl); } -vuint8m1x5_t test_vluxseg5ei16_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei16_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8m1x5_tu(vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei16_v_u16mf4x5_tu(vuint16mf4x5_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei16_v_u16mf4x5_tu(vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16mf4x5_tu(vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei16_v_u16mf2x5_tu(vuint16mf2x5_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei16_v_u16mf2x5_tu(vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16mf2x5_tu(vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei16_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei16_v_u16m1x5_tu(vuint16m1x5_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16m1x5_tu(vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei16_v_u32mf2x5_tu(vuint32mf2x5_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei16_v_u32mf2x5_tu(vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u32mf2x5_tu(vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei16_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei16_v_u32m1x5_tu(vuint32m1x5_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u32m1x5_tu(vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei16_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei16_v_u64m1x5_tu(vuint64m1x5_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u64m1x5_tu(vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vluxseg5ei16_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei16_v_f16mf4x5_tum(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei16_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei16_v_f16mf2x5_tum(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei16_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei16_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f16m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei16_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei16_v_f32mf2x5_tum(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei16_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, const 
float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei16_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f32m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei16_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei16_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f64m1x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei16_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei16_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei16_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei16_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei16_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei16_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_i8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei16_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei16_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_i8m1x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei16_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei16_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei16_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei16_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei16_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei16_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i16m1x5_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei16_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei16_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei16_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei16_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i32m1x5_tum(vm, vd, rs1, rs2, vl); } -vint64m1x5_t 
test_vluxseg5ei16_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei16_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i64m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei16_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei16_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei16_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei16_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei16_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei16_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei16_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei16_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_u8m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei16_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei16_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei16_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei16_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei16_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei16_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei16_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei16_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei16_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei16_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u32m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei16_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei16_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + 
vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u64m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vluxseg5ei16_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei16_v_f16mf4x5_tumu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei16_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei16_v_f16mf2x5_tumu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei16_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei16_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei16_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei16_v_f32mf2x5_tumu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei16_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei16_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei16_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei16_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei16_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei16_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei16_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei16_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei16_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei16_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei16_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei16_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_i8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei16_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, 
vuint16mf4_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei16_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei16_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei16_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei16_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei16_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei16_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei16_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei16_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei16_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei16_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei16_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei16_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei16_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei16_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei16_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei16_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei16_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei16_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei16_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei16_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei16_v_u16mf4x5_tumu(vbool64_t vm, + vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16mf4x5_tumu(vm, vd, rs1, 
rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei16_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei16_v_u16mf2x5_tumu(vbool32_t vm, + vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei16_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei16_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei16_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei16_v_u32mf2x5_tumu(vbool64_t vm, + vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei16_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei16_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei16_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei16_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vluxseg5ei16_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei16_v_f16mf4x5_mu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei16_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei16_v_f16mf2x5_mu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei16_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei16_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f16m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei16_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei16_v_f32mf2x5_mu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei16_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei16_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f32m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei16_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x5_t 
test_vluxseg5ei16_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_f64m1x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei16_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei16_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_i8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei16_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei16_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_i8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei16_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei16_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_i8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei16_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei16_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_i8m1x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei16_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei16_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei16_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei16_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei16_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei16_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_i16m1x5_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei16_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei16_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei16_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei16_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i32m1x5_mu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei16_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei16_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_i64m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei16_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { 
+vuint8mf8x5_t test_vluxseg5ei16_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei16_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei16_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei16_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei16_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei16_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei16_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_u8m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei16_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei16_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei16_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei16_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei16_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei16_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u16m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei16_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei16_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei16_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei16_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u32m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei16_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei16_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_u64m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei32.c index 9418dd1eb..870f07882 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei32.c @@ -6,418 +6,630 @@ #include -vfloat16mf4x5_t 
test_vluxseg5ei32_v_f16mf4x5_tu(vfloat16mf4x5_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei32_v_f16mf4x5_tu(vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16mf4x5_tu(vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei32_v_f16mf2x5_tu(vfloat16mf2x5_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei32_v_f16mf2x5_tu(vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16mf2x5_tu(vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei32_v_f16m1x5_tu(vfloat16m1x5_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei32_v_f16m1x5_tu(vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16m1x5_tu(vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei32_v_f32mf2x5_tu(vfloat32mf2x5_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei32_v_f32mf2x5_tu(vfloat32mf2x5_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f32mf2x5_tu(vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei32_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei32_v_f32m1x5_tu(vfloat32m1x5_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_f32m1x5_tu(vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei32_v_f64m1x5_tu(vfloat64m1x5_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei32_v_f64m1x5_tu(vfloat64m1x5_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f64m1x5_tu(vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei32_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei32_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i8mf8x5_tu(vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei32_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei32_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i8mf4x5_tu(vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei32_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei32_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i8mf2x5_tu(vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei32_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei32_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i8m1x5_tu(vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei32_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei32_v_i16mf4x5_tu(vint16mf4x5_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i16mf4x5_tu(vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei32_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei32_v_i16mf2x5_tu(vint16mf2x5_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i16mf2x5_tu(vd, rs1, rs2, vl); } -vint16m1x5_t 
test_vluxseg5ei32_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei32_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i16m1x5_tu(vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei32_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei32_v_i32mf2x5_tu(vint32mf2x5_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i32mf2x5_tu(vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei32_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei32_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i32m1x5_tu(vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei32_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei32_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i64m1x5_tu(vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei32_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei32_v_u8mf8x5_tu(vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf8x5_tu(vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei32_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei32_v_u8mf4x5_tu(vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf4x5_tu(vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei32_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei32_v_u8mf2x5_tu(vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf2x5_tu(vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei32_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei32_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8m1x5_tu(vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei32_v_u16mf4x5_tu(vuint16mf4x5_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei32_v_u16mf4x5_tu(vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16mf4x5_tu(vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei32_v_u16mf2x5_tu(vuint16mf2x5_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei32_v_u16mf2x5_tu(vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16mf2x5_tu(vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei32_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei32_v_u16m1x5_tu(vuint16m1x5_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16m1x5_tu(vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei32_v_u32mf2x5_tu(vuint32mf2x5_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei32_v_u32mf2x5_tu(vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u32mf2x5_tu(vd, rs1, rs2, vl); } -vuint32m1x5_t 
test_vluxseg5ei32_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei32_v_u32m1x5_tu(vuint32m1x5_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u32m1x5_tu(vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei32_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei32_v_u64m1x5_tu(vuint64m1x5_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u64m1x5_tu(vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vluxseg5ei32_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei32_v_f16mf4x5_tum(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei32_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei32_v_f16mf2x5_tum(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei32_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei32_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei32_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei32_v_f32mf2x5_tum(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei32_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei32_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f32m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei32_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei32_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f64m1x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei32_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei32_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei32_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei32_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_i8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei32_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei32_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return 
__riscv_vluxseg5ei32_v_i8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei32_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei32_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_i8m1x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei32_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei32_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei32_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei32_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei32_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei32_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i16m1x5_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei32_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei32_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei32_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei32_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i32m1x5_tum(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei32_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei32_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i64m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei32_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei32_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei32_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei32_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei32_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei32_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei32_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei32_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, + const 
uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_u8m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei32_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei32_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei32_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei32_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei32_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei32_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei32_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei32_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei32_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei32_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u32m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei32_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei32_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u64m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vluxseg5ei32_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei32_v_f16mf4x5_tumu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei32_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei32_v_f16mf2x5_tumu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei32_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei32_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei32_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei32_v_f32mf2x5_tumu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t 
test_vluxseg5ei32_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei32_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei32_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei32_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei32_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei32_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei32_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei32_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei32_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei32_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei32_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei32_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_i8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei32_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei32_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei32_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei32_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei32_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei32_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei32_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei32_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei32_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei32_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return 
__riscv_vluxseg5ei32_v_i32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei32_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei32_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei32_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei32_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei32_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei32_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei32_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei32_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei32_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei32_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei32_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei32_v_u16mf4x5_tumu(vbool64_t vm, + vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei32_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei32_v_u16mf2x5_tumu(vbool32_t vm, + vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei32_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei32_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei32_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei32_v_u32mf2x5_tumu(vbool64_t vm, + vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei32_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei32_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei32_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { 
+vuint64m1x5_t test_vluxseg5ei32_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vluxseg5ei32_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei32_v_f16mf4x5_mu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei32_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei32_v_f16mf2x5_mu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei32_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei32_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f16m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei32_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei32_v_f32mf2x5_mu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei32_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei32_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_f32m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei32_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei32_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_f64m1x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei32_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei32_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_i8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei32_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei32_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_i8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei32_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei32_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_i8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei32_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei32_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_i8m1x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei32_v_i16mf4x5_mu(vbool64_t vm, 
vint16mf4x5_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei32_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei32_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei32_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei32_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei32_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_i16m1x5_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei32_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei32_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei32_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei32_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_i32m1x5_mu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei32_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei32_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_i64m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei32_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei32_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei32_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei32_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei32_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei32_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei32_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei32_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg5ei32_v_u8m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei32_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei32_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t 
test_vluxseg5ei32_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei32_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei32_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei32_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u16m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei32_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei32_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei32_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei32_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u32m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei32_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei32_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei32_v_u64m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei64.c index 73f5186d8..5ac7ae1aa 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei64.c @@ -6,418 +6,630 @@ #include -vfloat16mf4x5_t test_vluxseg5ei64_v_f16mf4x5_tu(vfloat16mf4x5_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei64_v_f16mf4x5_tu(vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16mf4x5_tu(vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei64_v_f16mf2x5_tu(vfloat16mf2x5_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei64_v_f16mf2x5_tu(vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16mf2x5_tu(vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei64_v_f16m1x5_tu(vfloat16m1x5_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei64_v_f16m1x5_tu(vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16m1x5_tu(vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei64_v_f32mf2x5_tu(vfloat32mf2x5_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei64_v_f32mf2x5_tu(vfloat32mf2x5_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f32mf2x5_tu(vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei64_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei64_v_f32m1x5_tu(vfloat32m1x5_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_f32m1x5_tu(vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei64_v_f64m1x5_tu(vfloat64m1x5_t vd, const double 
*rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei64_v_f64m1x5_tu(vfloat64m1x5_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f64m1x5_tu(vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei64_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei64_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i8mf8x5_tu(vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei64_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei64_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i8mf4x5_tu(vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei64_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei64_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i8mf2x5_tu(vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei64_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei64_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i8m1x5_tu(vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei64_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei64_v_i16mf4x5_tu(vint16mf4x5_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i16mf4x5_tu(vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei64_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei64_v_i16mf2x5_tu(vint16mf2x5_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i16mf2x5_tu(vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei64_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei64_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i16m1x5_tu(vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei64_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei64_v_i32mf2x5_tu(vint32mf2x5_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i32mf2x5_tu(vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei64_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei64_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i32m1x5_tu(vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei64_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei64_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i64m1x5_tu(vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei64_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei64_v_u8mf8x5_tu(vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf8x5_tu(vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei64_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x5_t 
test_vluxseg5ei64_v_u8mf4x5_tu(vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf4x5_tu(vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei64_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei64_v_u8mf2x5_tu(vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf2x5_tu(vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei64_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei64_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8m1x5_tu(vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei64_v_u16mf4x5_tu(vuint16mf4x5_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei64_v_u16mf4x5_tu(vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16mf4x5_tu(vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei64_v_u16mf2x5_tu(vuint16mf2x5_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei64_v_u16mf2x5_tu(vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16mf2x5_tu(vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei64_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei64_v_u16m1x5_tu(vuint16m1x5_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16m1x5_tu(vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei64_v_u32mf2x5_tu(vuint32mf2x5_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei64_v_u32mf2x5_tu(vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u32mf2x5_tu(vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei64_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei64_v_u32m1x5_tu(vuint32m1x5_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u32m1x5_tu(vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei64_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei64_v_u64m1x5_tu(vuint64m1x5_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u64m1x5_tu(vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vluxseg5ei64_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei64_v_f16mf4x5_tum(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei64_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei64_v_f16mf2x5_tum(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei64_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei64_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16m1x5_tum(vm, vd, rs1, 
rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei64_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei64_v_f32mf2x5_tum(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei64_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei64_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f32m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei64_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei64_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f64m1x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei64_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei64_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei64_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei64_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei64_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei64_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei64_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei64_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i8m1x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei64_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei64_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei64_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei64_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei64_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei64_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i16m1x5_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei64_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei64_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return 
__riscv_vluxseg5ei64_v_i32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei64_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei64_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i32m1x5_tum(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei64_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei64_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i64m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei64_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei64_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei64_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei64_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei64_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei64_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei64_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei64_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_u8m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei64_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei64_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei64_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei64_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei64_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei64_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei64_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei64_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei64_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei64_v_u32m1x5_tum(vbool32_t vm, 
vuint32m1x5_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u32m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei64_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei64_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u64m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vluxseg5ei64_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei64_v_f16mf4x5_tumu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei64_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei64_v_f16mf2x5_tumu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei64_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei64_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei64_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei64_v_f32mf2x5_tumu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei64_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei64_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei64_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei64_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei64_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei64_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei64_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei64_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei64_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei64_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei64_v_i8m1x5_tumu(vbool8_t vm, 
vint8m1x5_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei64_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i8m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei64_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei64_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei64_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei64_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei64_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei64_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei64_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei64_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei64_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei64_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei64_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei64_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei64_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei64_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf8x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei64_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei64_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei64_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei64_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei64_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei64_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8m1x5_tumu(vm, vd, rs1, 
rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei64_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei64_v_u16mf4x5_tumu(vbool64_t vm, + vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei64_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei64_v_u16mf2x5_tumu(vbool32_t vm, + vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei64_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei64_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei64_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei64_v_u32mf2x5_tumu(vbool64_t vm, + vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u32mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei64_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei64_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u32m1x5_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei64_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei64_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u64m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vluxseg5ei64_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei64_v_f16mf4x5_mu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei64_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei64_v_f16mf2x5_mu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei64_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei64_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f16m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei64_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei64_v_f32mf2x5_mu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei64_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x5_t 
test_vluxseg5ei64_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_f32m1x5_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei64_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei64_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_f64m1x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei64_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei64_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei64_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei64_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei64_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei64_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei64_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei64_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i8m1x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei64_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei64_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei64_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei64_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei64_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei64_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i16m1x5_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei64_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei64_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_i32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei64_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei64_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i32m1x5_mu(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei64_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x5_t 
test_vluxseg5ei64_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_i64m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei64_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei64_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf8x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei64_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei64_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei64_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei64_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u8mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei64_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei64_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg5ei64_v_u8m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei64_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei64_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei64_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei64_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei64_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei64_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u16m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei64_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei64_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u32mf2x5_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei64_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei64_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u32m1x5_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei64_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei64_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg5ei64_v_u64m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei8.c 
b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei8.c index 2a8c95a19..a783f6343 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei8.c @@ -6,418 +6,624 @@ #include -vfloat16mf4x5_t test_vluxseg5ei8_v_f16mf4x5_tu(vfloat16mf4x5_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei8_v_f16mf4x5_tu(vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f16mf4x5_tu(vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei8_v_f16mf2x5_tu(vfloat16mf2x5_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei8_v_f16mf2x5_tu(vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f16mf2x5_tu(vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei8_v_f16m1x5_tu(vfloat16m1x5_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei8_v_f16m1x5_tu(vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f16m1x5_tu(vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei8_v_f32mf2x5_tu(vfloat32mf2x5_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei8_v_f32mf2x5_tu(vfloat32mf2x5_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f32mf2x5_tu(vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei8_v_f32m1x5_tu(vfloat32m1x5_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei8_v_f32m1x5_tu(vfloat32m1x5_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_f32m1x5_tu(vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei8_v_f64m1x5_tu(vfloat64m1x5_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei8_v_f64m1x5_tu(vfloat64m1x5_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_f64m1x5_tu(vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei8_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei8_v_i8mf8x5_tu(vint8mf8x5_t vd, const int8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i8mf8x5_tu(vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei8_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei8_v_i8mf4x5_tu(vint8mf4x5_t vd, const int8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i8mf4x5_tu(vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei8_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x5_t test_vluxseg5ei8_v_i8mf2x5_tu(vint8mf2x5_t vd, const int8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i8mf2x5_tu(vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei8_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei8_v_i8m1x5_tu(vint8m1x5_t vd, const int8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i8m1x5_tu(vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei8_v_i16mf4x5_tu(vint16mf4x5_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei8_v_i16mf4x5_tu(vint16mf4x5_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i16mf4x5_tu(vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei8_v_i16mf2x5_tu(vint16mf2x5_t vd, const int16_t *rs1, vuint8mf4_t rs2, 
size_t vl) { +vint16mf2x5_t test_vluxseg5ei8_v_i16mf2x5_tu(vint16mf2x5_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i16mf2x5_tu(vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei8_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei8_v_i16m1x5_tu(vint16m1x5_t vd, const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i16m1x5_tu(vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei8_v_i32mf2x5_tu(vint32mf2x5_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei8_v_i32mf2x5_tu(vint32mf2x5_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i32mf2x5_tu(vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei8_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei8_v_i32m1x5_tu(vint32m1x5_t vd, const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i32m1x5_tu(vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei8_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei8_v_i64m1x5_tu(vint64m1x5_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i64m1x5_tu(vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei8_v_u8mf8x5_tu(vuint8mf8x5_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei8_v_u8mf8x5_tu(vuint8mf8x5_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_u8mf8x5_tu(vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei8_v_u8mf4x5_tu(vuint8mf4x5_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei8_v_u8mf4x5_tu(vuint8mf4x5_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_u8mf4x5_tu(vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei8_v_u8mf2x5_tu(vuint8mf2x5_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei8_v_u8mf2x5_tu(vuint8mf2x5_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_u8mf2x5_tu(vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei8_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x5_t test_vluxseg5ei8_v_u8m1x5_tu(vuint8m1x5_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u8m1x5_tu(vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei8_v_u16mf4x5_tu(vuint16mf4x5_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei8_v_u16mf4x5_tu(vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u16mf4x5_tu(vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei8_v_u16mf2x5_tu(vuint16mf2x5_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei8_v_u16mf2x5_tu(vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u16mf2x5_tu(vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei8_v_u16m1x5_tu(vuint16m1x5_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei8_v_u16m1x5_tu(vuint16m1x5_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u16m1x5_tu(vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei8_v_u32mf2x5_tu(vuint32mf2x5_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x5_t 
test_vluxseg5ei8_v_u32mf2x5_tu(vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u32mf2x5_tu(vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei8_v_u32m1x5_tu(vuint32m1x5_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei8_v_u32m1x5_tu(vuint32m1x5_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u32m1x5_tu(vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei8_v_u64m1x5_tu(vuint64m1x5_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei8_v_u64m1x5_tu(vuint64m1x5_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u64m1x5_tu(vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vluxseg5ei8_v_f16mf4x5_tum(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei8_v_f16mf4x5_tum(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei8_v_f16mf2x5_tum(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei8_v_f16mf2x5_tum(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei8_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei8_v_f16m1x5_tum(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f16m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei8_v_f32mf2x5_tum(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei8_v_f32mf2x5_tum(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x5_t test_vluxseg5ei8_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x5_t test_vluxseg5ei8_v_f32m1x5_tum(vbool32_t vm, vfloat32m1x5_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_f32m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x5_t test_vluxseg5ei8_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x5_t test_vluxseg5ei8_v_f64m1x5_tum(vbool64_t vm, vfloat64m1x5_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f64m1x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x5_t test_vluxseg5ei8_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x5_t test_vluxseg5ei8_v_i8mf8x5_tum(vbool64_t vm, vint8mf8x5_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_i8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x5_t test_vluxseg5ei8_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x5_t test_vluxseg5ei8_v_i8mf4x5_tum(vbool32_t vm, vint8mf4x5_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_i8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x5_t test_vluxseg5ei8_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x5_t 
test_vluxseg5ei8_v_i8mf2x5_tum(vbool16_t vm, vint8mf2x5_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_i8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint8m1x5_t test_vluxseg5ei8_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x5_t test_vluxseg5ei8_v_i8m1x5_tum(vbool8_t vm, vint8m1x5_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_i8m1x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x5_t test_vluxseg5ei8_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x5_t test_vluxseg5ei8_v_i16mf4x5_tum(vbool64_t vm, vint16mf4x5_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x5_t test_vluxseg5ei8_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x5_t test_vluxseg5ei8_v_i16mf2x5_tum(vbool32_t vm, vint16mf2x5_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint16m1x5_t test_vluxseg5ei8_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x5_t test_vluxseg5ei8_v_i16m1x5_tum(vbool16_t vm, vint16m1x5_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_i16m1x5_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x5_t test_vluxseg5ei8_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x5_t test_vluxseg5ei8_v_i32mf2x5_tum(vbool64_t vm, vint32mf2x5_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_i32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vint32m1x5_t test_vluxseg5ei8_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x5_t test_vluxseg5ei8_v_i32m1x5_tum(vbool32_t vm, vint32m1x5_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_i32m1x5_tum(vm, vd, rs1, rs2, vl); } -vint64m1x5_t test_vluxseg5ei8_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x5_t test_vluxseg5ei8_v_i64m1x5_tum(vbool64_t vm, vint64m1x5_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_i64m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x5_t test_vluxseg5ei8_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x5_t test_vluxseg5ei8_v_u8mf8x5_tum(vbool64_t vm, vuint8mf8x5_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u8mf8x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x5_t test_vluxseg5ei8_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x5_t test_vluxseg5ei8_v_u8mf4x5_tum(vbool32_t vm, vuint8mf4x5_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u8mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x5_t test_vluxseg5ei8_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x5_t test_vluxseg5ei8_v_u8mf2x5_tum(vbool16_t vm, vuint8mf2x5_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u8mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x5_t test_vluxseg5ei8_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { 
+vuint8m1x5_t test_vluxseg5ei8_v_u8m1x5_tum(vbool8_t vm, vuint8m1x5_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg5ei8_v_u8m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x5_t test_vluxseg5ei8_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x5_t test_vluxseg5ei8_v_u16mf4x5_tum(vbool64_t vm, vuint16mf4x5_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x5_t test_vluxseg5ei8_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x5_t test_vluxseg5ei8_v_u16mf2x5_tum(vbool32_t vm, vuint16mf2x5_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x5_t test_vluxseg5ei8_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x5_t test_vluxseg5ei8_v_u16m1x5_tum(vbool16_t vm, vuint16m1x5_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u16m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x5_t test_vluxseg5ei8_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x5_t test_vluxseg5ei8_v_u32mf2x5_tum(vbool64_t vm, vuint32mf2x5_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u32mf2x5_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x5_t test_vluxseg5ei8_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x5_t test_vluxseg5ei8_v_u32m1x5_tum(vbool32_t vm, vuint32m1x5_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u32m1x5_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x5_t test_vluxseg5ei8_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x5_t test_vluxseg5ei8_v_u64m1x5_tum(vbool64_t vm, vuint64m1x5_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_u64m1x5_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x5_t test_vluxseg5ei8_v_f16mf4x5_tumu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x5_t test_vluxseg5ei8_v_f16mf4x5_tumu(vbool64_t vm, + vfloat16mf4x5_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x5_t test_vluxseg5ei8_v_f16mf2x5_tumu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x5_t test_vluxseg5ei8_v_f16mf2x5_tumu(vbool32_t vm, + vfloat16mf2x5_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x5_t test_vluxseg5ei8_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x5_t test_vluxseg5ei8_v_f16m1x5_tumu(vbool16_t vm, vfloat16m1x5_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x5_t test_vluxseg5ei8_v_f32mf2x5_tumu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x5_t test_vluxseg5ei8_v_f32mf2x5_tumu(vbool64_t vm, + vfloat32mf2x5_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg5ei8_v_f32mf2x5_tumu(vm, vd, rs1, rs2, vl); } 
-vfloat32m1x5_t test_vluxseg5ei8_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x5_t test_vluxseg5ei8_v_f32m1x5_tumu(vbool32_t vm, vfloat32m1x5_t vd,
+ const float *rs1,
+ vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_f32m1x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x5_t test_vluxseg5ei8_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x5_t test_vluxseg5ei8_v_f64m1x5_tumu(vbool64_t vm, vfloat64m1x5_t vd,
+ const double *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_f64m1x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x5_t test_vluxseg5ei8_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x5_t test_vluxseg5ei8_v_i8mf8x5_tumu(vbool64_t vm, vint8mf8x5_t vd,
+ const int8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_i8mf8x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x5_t test_vluxseg5ei8_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x5_t test_vluxseg5ei8_v_i8mf4x5_tumu(vbool32_t vm, vint8mf4x5_t vd,
+ const int8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_i8mf4x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x5_t test_vluxseg5ei8_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x5_t test_vluxseg5ei8_v_i8mf2x5_tumu(vbool16_t vm, vint8mf2x5_t vd,
+ const int8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_i8mf2x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x5_t test_vluxseg5ei8_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x5_t test_vluxseg5ei8_v_i8m1x5_tumu(vbool8_t vm, vint8m1x5_t vd,
+ const int8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_i8m1x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x5_t test_vluxseg5ei8_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x5_t test_vluxseg5ei8_v_i16mf4x5_tumu(vbool64_t vm, vint16mf4x5_t vd,
+ const int16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_i16mf4x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x5_t test_vluxseg5ei8_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x5_t test_vluxseg5ei8_v_i16mf2x5_tumu(vbool32_t vm, vint16mf2x5_t vd,
+ const int16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_i16mf2x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x5_t test_vluxseg5ei8_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x5_t test_vluxseg5ei8_v_i16m1x5_tumu(vbool16_t vm, vint16m1x5_t vd,
+ const int16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_i16m1x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x5_t test_vluxseg5ei8_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x5_t test_vluxseg5ei8_v_i32mf2x5_tumu(vbool64_t vm, vint32mf2x5_t vd,
+ const int32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_i32mf2x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x5_t test_vluxseg5ei8_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x5_t test_vluxseg5ei8_v_i32m1x5_tumu(vbool32_t vm, vint32m1x5_t vd,
+ const int32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_i32m1x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x5_t test_vluxseg5ei8_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x5_t test_vluxseg5ei8_v_i64m1x5_tumu(vbool64_t vm, vint64m1x5_t vd,
+ const int64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_i64m1x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x5_t test_vluxseg5ei8_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x5_t test_vluxseg5ei8_v_u8mf8x5_tumu(vbool64_t vm, vuint8mf8x5_t vd,
+ const uint8_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u8mf8x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x5_t test_vluxseg5ei8_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x5_t test_vluxseg5ei8_v_u8mf4x5_tumu(vbool32_t vm, vuint8mf4x5_t vd,
+ const uint8_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u8mf4x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x5_t test_vluxseg5ei8_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x5_t test_vluxseg5ei8_v_u8mf2x5_tumu(vbool16_t vm, vuint8mf2x5_t vd,
+ const uint8_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u8mf2x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x5_t test_vluxseg5ei8_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x5_t test_vluxseg5ei8_v_u8m1x5_tumu(vbool8_t vm, vuint8m1x5_t vd,
+ const uint8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_u8m1x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x5_t test_vluxseg5ei8_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x5_t test_vluxseg5ei8_v_u16mf4x5_tumu(vbool64_t vm, vuint16mf4x5_t vd,
+ const uint16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u16mf4x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x5_t test_vluxseg5ei8_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x5_t test_vluxseg5ei8_v_u16mf2x5_tumu(vbool32_t vm, vuint16mf2x5_t vd,
+ const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u16mf2x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x5_t test_vluxseg5ei8_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x5_t test_vluxseg5ei8_v_u16m1x5_tumu(vbool16_t vm, vuint16m1x5_t vd,
+ const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u16m1x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x5_t test_vluxseg5ei8_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x5_t test_vluxseg5ei8_v_u32mf2x5_tumu(vbool64_t vm, vuint32mf2x5_t vd,
+ const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u32mf2x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x5_t test_vluxseg5ei8_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x5_t test_vluxseg5ei8_v_u32m1x5_tumu(vbool32_t vm, vuint32m1x5_t vd,
+ const uint32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u32m1x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x5_t test_vluxseg5ei8_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x5_t test_vluxseg5ei8_v_u64m1x5_tumu(vbool64_t vm, vuint64m1x5_t vd,
+ const uint64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u64m1x5_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf4x5_t test_vluxseg5ei8_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x5_t test_vluxseg5ei8_v_f16mf4x5_mu(vbool64_t vm, vfloat16mf4x5_t vd,
+ const _Float16 *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_f16mf4x5_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x5_t test_vluxseg5ei8_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x5_t test_vluxseg5ei8_v_f16mf2x5_mu(vbool32_t vm, vfloat16mf2x5_t vd,
+ const _Float16 *rs1,
+ vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_f16mf2x5_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x5_t test_vluxseg5ei8_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x5_t test_vluxseg5ei8_v_f16m1x5_mu(vbool16_t vm, vfloat16m1x5_t vd,
+ const _Float16 *rs1,
+ vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_f16m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x5_t test_vluxseg5ei8_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x5_t test_vluxseg5ei8_v_f32mf2x5_mu(vbool64_t vm, vfloat32mf2x5_t vd,
+ const float *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_f32mf2x5_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x5_t test_vluxseg5ei8_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x5_t test_vluxseg5ei8_v_f32m1x5_mu(vbool32_t vm, vfloat32m1x5_t vd,
+ const float *rs1, vuint8mf4_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_f32m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x5_t test_vluxseg5ei8_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x5_t test_vluxseg5ei8_v_f64m1x5_mu(vbool64_t vm, vfloat64m1x5_t vd,
+ const double *rs1, vuint8mf8_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_f64m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x5_t test_vluxseg5ei8_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x5_t test_vluxseg5ei8_v_i8mf8x5_mu(vbool64_t vm, vint8mf8x5_t vd,
+ const int8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_i8mf8x5_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x5_t test_vluxseg5ei8_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x5_t test_vluxseg5ei8_v_i8mf4x5_mu(vbool32_t vm, vint8mf4x5_t vd,
+ const int8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_i8mf4x5_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x5_t test_vluxseg5ei8_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x5_t test_vluxseg5ei8_v_i8mf2x5_mu(vbool16_t vm, vint8mf2x5_t vd,
+ const int8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_i8mf2x5_mu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x5_t test_vluxseg5ei8_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x5_t test_vluxseg5ei8_v_i8m1x5_mu(vbool8_t vm, vint8m1x5_t vd,
+ const int8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_i8m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x5_t test_vluxseg5ei8_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x5_t test_vluxseg5ei8_v_i16mf4x5_mu(vbool64_t vm, vint16mf4x5_t vd,
+ const int16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_i16mf4x5_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x5_t test_vluxseg5ei8_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x5_t test_vluxseg5ei8_v_i16mf2x5_mu(vbool32_t vm, vint16mf2x5_t vd,
+ const int16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_i16mf2x5_mu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x5_t test_vluxseg5ei8_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x5_t test_vluxseg5ei8_v_i16m1x5_mu(vbool16_t vm, vint16m1x5_t vd,
+ const int16_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_i16m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x5_t test_vluxseg5ei8_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x5_t test_vluxseg5ei8_v_i32mf2x5_mu(vbool64_t vm, vint32mf2x5_t vd,
+ const int32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_i32mf2x5_mu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x5_t test_vluxseg5ei8_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x5_t test_vluxseg5ei8_v_i32m1x5_mu(vbool32_t vm, vint32m1x5_t vd,
+ const int32_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_i32m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x5_t test_vluxseg5ei8_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x5_t test_vluxseg5ei8_v_i64m1x5_mu(vbool64_t vm, vint64m1x5_t vd,
+ const int64_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_i64m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x5_t test_vluxseg5ei8_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x5_t test_vluxseg5ei8_v_u8mf8x5_mu(vbool64_t vm, vuint8mf8x5_t vd,
+ const uint8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_u8mf8x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x5_t test_vluxseg5ei8_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x5_t test_vluxseg5ei8_v_u8mf4x5_mu(vbool32_t vm, vuint8mf4x5_t vd,
+ const uint8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_u8mf4x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x5_t test_vluxseg5ei8_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x5_t test_vluxseg5ei8_v_u8mf2x5_mu(vbool16_t vm, vuint8mf2x5_t vd,
+ const uint8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_u8mf2x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x5_t test_vluxseg5ei8_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x5_t test_vluxseg5ei8_v_u8m1x5_mu(vbool8_t vm, vuint8m1x5_t vd,
+ const uint8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
   return __riscv_vluxseg5ei8_v_u8m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x5_t test_vluxseg5ei8_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x5_t test_vluxseg5ei8_v_u16mf4x5_mu(vbool64_t vm, vuint16mf4x5_t vd,
+ const uint16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u16mf4x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x5_t test_vluxseg5ei8_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x5_t test_vluxseg5ei8_v_u16mf2x5_mu(vbool32_t vm, vuint16mf2x5_t vd,
+ const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u16mf2x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x5_t test_vluxseg5ei8_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x5_t test_vluxseg5ei8_v_u16m1x5_mu(vbool16_t vm, vuint16m1x5_t vd,
+ const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u16m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x5_t test_vluxseg5ei8_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x5_t test_vluxseg5ei8_v_u32mf2x5_mu(vbool64_t vm, vuint32mf2x5_t vd,
+ const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u32mf2x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x5_t test_vluxseg5ei8_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x5_t test_vluxseg5ei8_v_u32m1x5_mu(vbool32_t vm, vuint32m1x5_t vd,
+ const uint32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u32m1x5_mu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x5_t test_vluxseg5ei8_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x5_t test_vluxseg5ei8_v_u64m1x5_mu(vbool64_t vm, vuint64m1x5_t vd,
+ const uint64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg5ei8_v_u64m1x5_mu(vm, vd, rs1, rs2, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei16.c
index 5bc090112..05a3ecb7f 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei16.c
@@ -6,418 +6,630 @@
 #include <riscv_vector.h>

-vfloat16mf4x6_t test_vluxseg6ei16_v_f16mf4x6_tu(vfloat16mf4x6_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vluxseg6ei16_v_f16mf4x6_tu(vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vluxseg6ei16_v_f16mf2x6_tu(vfloat16mf2x6_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vluxseg6ei16_v_f16mf2x6_tu(vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vluxseg6ei16_v_f16m1x6_tu(vfloat16m1x6_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x6_t test_vluxseg6ei16_v_f16m1x6_tu(vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vluxseg6ei16_v_f32mf2x6_tu(vfloat32mf2x6_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vluxseg6ei16_v_f32mf2x6_tu(vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vluxseg6ei16_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x6_t test_vluxseg6ei16_v_f32m1x6_tu(vfloat32m1x6_t vd,
+ const float *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f32m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vluxseg6ei16_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x6_t test_vluxseg6ei16_v_f64m1x6_tu(vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f64m1x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vluxseg6ei16_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x6_t test_vluxseg6ei16_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf8x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vluxseg6ei16_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x6_t test_vluxseg6ei16_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf4x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vluxseg6ei16_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2x6_t test_vluxseg6ei16_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vluxseg6ei16_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1x6_t test_vluxseg6ei16_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1,
+ vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i8m1x6_tu(vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vluxseg6ei16_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint16mf4x6_t test_vluxseg6ei16_v_i16mf4x6_tu(vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vluxseg6ei16_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint16mf2x6_t test_vluxseg6ei16_v_i16mf2x6_tu(vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vluxseg6ei16_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint16m1x6_t test_vluxseg6ei16_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i16m1x6_tu(vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vluxseg6ei16_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint32mf2x6_t test_vluxseg6ei16_v_i32mf2x6_tu(vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vluxseg6ei16_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint32m1x6_t test_vluxseg6ei16_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i32m1x6_tu(vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vluxseg6ei16_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint64m1x6_t test_vluxseg6ei16_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i64m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vluxseg6ei16_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint8mf8x6_t test_vluxseg6ei16_v_u8mf8x6_tu(vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf8x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vluxseg6ei16_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint8mf4x6_t test_vluxseg6ei16_v_u8mf4x6_tu(vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf4x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vluxseg6ei16_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2x6_t test_vluxseg6ei16_v_u8mf2x6_tu(vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vluxseg6ei16_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1x6_t test_vluxseg6ei16_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1,
+ vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vluxseg6ei16_v_u16mf4x6_tu(vuint16mf4x6_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4x6_t test_vluxseg6ei16_v_u16mf4x6_tu(vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vluxseg6ei16_v_u16mf2x6_tu(vuint16mf2x6_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2x6_t test_vluxseg6ei16_v_u16mf2x6_tu(vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vluxseg6ei16_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1x6_t test_vluxseg6ei16_v_u16m1x6_tu(vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vluxseg6ei16_v_u32mf2x6_tu(vuint32mf2x6_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2x6_t test_vluxseg6ei16_v_u32mf2x6_tu(vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vluxseg6ei16_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1x6_t test_vluxseg6ei16_v_u32m1x6_tu(vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u32m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vluxseg6ei16_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1x6_t test_vluxseg6ei16_v_u64m1x6_tu(vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u64m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf4x6_t test_vluxseg6ei16_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vluxseg6ei16_v_f16mf4x6_tum(vbool64_t vm,
+ vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vluxseg6ei16_v_f16mf2x6_tum(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vluxseg6ei16_v_f16mf2x6_tum(vbool32_t vm,
+ vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vluxseg6ei16_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x6_t test_vluxseg6ei16_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vluxseg6ei16_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vluxseg6ei16_v_f32mf2x6_tum(vbool64_t vm,
+ vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vluxseg6ei16_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x6_t test_vluxseg6ei16_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vluxseg6ei16_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x6_t test_vluxseg6ei16_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vluxseg6ei16_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x6_t test_vluxseg6ei16_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf8x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vluxseg6ei16_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x6_t test_vluxseg6ei16_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vluxseg6ei16_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2x6_t test_vluxseg6ei16_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vluxseg6ei16_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1x6_t test_vluxseg6ei16_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei16_v_i8m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vluxseg6ei16_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint16mf4x6_t test_vluxseg6ei16_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vluxseg6ei16_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint16mf2x6_t test_vluxseg6ei16_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vluxseg6ei16_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint16m1x6_t test_vluxseg6ei16_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vluxseg6ei16_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint32mf2x6_t test_vluxseg6ei16_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vluxseg6ei16_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint32m1x6_t test_vluxseg6ei16_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vluxseg6ei16_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint64m1x6_t test_vluxseg6ei16_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vluxseg6ei16_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint8mf8x6_t test_vluxseg6ei16_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf8x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vluxseg6ei16_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint8mf4x6_t test_vluxseg6ei16_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vluxseg6ei16_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2x6_t test_vluxseg6ei16_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vluxseg6ei16_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1x6_t test_vluxseg6ei16_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei16_v_u8m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vluxseg6ei16_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4x6_t test_vluxseg6ei16_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vluxseg6ei16_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2x6_t test_vluxseg6ei16_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vluxseg6ei16_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1x6_t test_vluxseg6ei16_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vluxseg6ei16_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2x6_t test_vluxseg6ei16_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vluxseg6ei16_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1x6_t test_vluxseg6ei16_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vluxseg6ei16_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1x6_t test_vluxseg6ei16_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf4x6_t test_vluxseg6ei16_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vluxseg6ei16_v_f16mf4x6_tumu(vbool64_t vm,
+ vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vluxseg6ei16_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vluxseg6ei16_v_f16mf2x6_tumu(vbool32_t vm,
+ vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vluxseg6ei16_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x6_t test_vluxseg6ei16_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vluxseg6ei16_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vluxseg6ei16_v_f32mf2x6_tumu(vbool64_t vm,
+ vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vluxseg6ei16_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x6_t test_vluxseg6ei16_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f32m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vluxseg6ei16_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x6_t test_vluxseg6ei16_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f64m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vluxseg6ei16_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x6_t test_vluxseg6ei16_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf8x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vluxseg6ei16_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x6_t test_vluxseg6ei16_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vluxseg6ei16_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2x6_t test_vluxseg6ei16_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vluxseg6ei16_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1x6_t test_vluxseg6ei16_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei16_v_i8m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vluxseg6ei16_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint16mf4x6_t test_vluxseg6ei16_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vluxseg6ei16_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint16mf2x6_t test_vluxseg6ei16_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vluxseg6ei16_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint16m1x6_t test_vluxseg6ei16_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vluxseg6ei16_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint32mf2x6_t test_vluxseg6ei16_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vluxseg6ei16_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint32m1x6_t test_vluxseg6ei16_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i32m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vluxseg6ei16_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint64m1x6_t test_vluxseg6ei16_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i64m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vluxseg6ei16_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint8mf8x6_t test_vluxseg6ei16_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf8x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vluxseg6ei16_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint8mf4x6_t test_vluxseg6ei16_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vluxseg6ei16_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2x6_t test_vluxseg6ei16_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vluxseg6ei16_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1x6_t test_vluxseg6ei16_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1,
+ vuint16m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vluxseg6ei16_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4x6_t test_vluxseg6ei16_v_u16mf4x6_tumu(vbool64_t vm,
+ vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vluxseg6ei16_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2x6_t test_vluxseg6ei16_v_u16mf2x6_tumu(vbool32_t vm,
+ vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vluxseg6ei16_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1x6_t test_vluxseg6ei16_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vluxseg6ei16_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2x6_t test_vluxseg6ei16_v_u32mf2x6_tumu(vbool64_t vm,
+ vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vluxseg6ei16_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1x6_t test_vluxseg6ei16_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u32m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vluxseg6ei16_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1x6_t test_vluxseg6ei16_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u64m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf4x6_t test_vluxseg6ei16_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vluxseg6ei16_v_f16mf4x6_mu(vbool64_t vm,
+ vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vluxseg6ei16_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vluxseg6ei16_v_f16mf2x6_mu(vbool32_t vm,
+ vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vluxseg6ei16_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x6_t test_vluxseg6ei16_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vluxseg6ei16_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vluxseg6ei16_v_f32mf2x6_mu(vbool64_t vm,
+ vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vluxseg6ei16_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x6_t test_vluxseg6ei16_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vluxseg6ei16_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat64m1x6_t test_vluxseg6ei16_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_f64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vluxseg6ei16_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint8mf8x6_t test_vluxseg6ei16_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1, vuint16mf4_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf8x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vluxseg6ei16_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint8mf4x6_t test_vluxseg6ei16_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1, vuint16mf2_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vluxseg6ei16_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint8mf2x6_t test_vluxseg6ei16_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei16_v_i8mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vluxseg6ei16_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vint8m1x6_t test_vluxseg6ei16_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei16_v_i8m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vluxseg6ei16_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint16mf4x6_t test_vluxseg6ei16_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vluxseg6ei16_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint16mf2x6_t test_vluxseg6ei16_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vluxseg6ei16_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vint16m1x6_t test_vluxseg6ei16_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1, vuint16m1_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei16_v_i16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vluxseg6ei16_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint32mf2x6_t test_vluxseg6ei16_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vluxseg6ei16_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vint32m1x6_t test_vluxseg6ei16_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vluxseg6ei16_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vint64m1x6_t test_vluxseg6ei16_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_i64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vluxseg6ei16_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint8mf8x6_t test_vluxseg6ei16_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf8x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vluxseg6ei16_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint8mf4x6_t test_vluxseg6ei16_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vluxseg6ei16_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint8mf2x6_t test_vluxseg6ei16_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u8mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vluxseg6ei16_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) {
+vuint8m1x6_t test_vluxseg6ei16_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1, vuint16m2_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei16_v_u8m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vluxseg6ei16_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint16mf4x6_t test_vluxseg6ei16_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vluxseg6ei16_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint16mf2x6_t test_vluxseg6ei16_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vluxseg6ei16_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) {
+vuint16m1x6_t test_vluxseg6ei16_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vluxseg6ei16_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint32mf2x6_t test_vluxseg6ei16_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vluxseg6ei16_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) {
+vuint32m1x6_t test_vluxseg6ei16_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vluxseg6ei16_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) {
+vuint64m1x6_t test_vluxseg6ei16_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei16_v_u64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei32.c
index 6ad444b79..8d6006c07 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei32.c
@@ -6,418 +6,630 @@
 #include <riscv_vector.h>

-vfloat16mf4x6_t test_vluxseg6ei32_v_f16mf4x6_tu(vfloat16mf4x6_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vluxseg6ei32_v_f16mf4x6_tu(vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vluxseg6ei32_v_f16mf2x6_tu(vfloat16mf2x6_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vluxseg6ei32_v_f16mf2x6_tu(vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vluxseg6ei32_v_f16m1x6_tu(vfloat16m1x6_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1x6_t test_vluxseg6ei32_v_f16m1x6_tu(vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f16m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vluxseg6ei32_v_f32mf2x6_tu(vfloat32mf2x6_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vluxseg6ei32_v_f32mf2x6_tu(vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vluxseg6ei32_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat32m1x6_t test_vluxseg6ei32_v_f32m1x6_tu(vfloat32m1x6_t vd,
+ const float *rs1, vuint32m1_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei32_v_f32m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vluxseg6ei32_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat64m1x6_t test_vluxseg6ei32_v_f64m1x6_tu(vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f64m1x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vluxseg6ei32_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint8mf8x6_t test_vluxseg6ei32_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i8mf8x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vluxseg6ei32_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint8mf4x6_t test_vluxseg6ei32_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i8mf4x6_tu(vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vluxseg6ei32_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint8mf2x6_t test_vluxseg6ei32_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i8mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vluxseg6ei32_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint8m1x6_t test_vluxseg6ei32_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i8m1x6_tu(vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vluxseg6ei32_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint16mf4x6_t test_vluxseg6ei32_v_i16mf4x6_tu(vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vluxseg6ei32_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint16mf2x6_t test_vluxseg6ei32_v_i16mf2x6_tu(vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vluxseg6ei32_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint16m1x6_t test_vluxseg6ei32_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i16m1x6_tu(vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vluxseg6ei32_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint32mf2x6_t test_vluxseg6ei32_v_i32mf2x6_tu(vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vluxseg6ei32_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint32m1x6_t test_vluxseg6ei32_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i32m1x6_tu(vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vluxseg6ei32_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint64m1x6_t test_vluxseg6ei32_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i64m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vluxseg6ei32_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint8mf8x6_t test_vluxseg6ei32_v_u8mf8x6_tu(vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u8mf8x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vluxseg6ei32_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint8mf4x6_t test_vluxseg6ei32_v_u8mf4x6_tu(vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u8mf4x6_tu(vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vluxseg6ei32_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint8mf2x6_t test_vluxseg6ei32_v_u8mf2x6_tu(vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u8mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vluxseg6ei32_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint8m1x6_t test_vluxseg6ei32_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u8m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vluxseg6ei32_v_u16mf4x6_tu(vuint16mf4x6_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint16mf4x6_t test_vluxseg6ei32_v_u16mf4x6_tu(vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16mf4x6_tu(vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vluxseg6ei32_v_u16mf2x6_tu(vuint16mf2x6_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint16mf2x6_t test_vluxseg6ei32_v_u16mf2x6_tu(vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vluxseg6ei32_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint16m1x6_t test_vluxseg6ei32_v_u16m1x6_tu(vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16m1x6_tu(vd, rs1, rs2, vl);
 }
-vuint32mf2x6_t test_vluxseg6ei32_v_u32mf2x6_tu(vuint32mf2x6_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint32mf2x6_t test_vluxseg6ei32_v_u32mf2x6_tu(vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u32mf2x6_tu(vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vluxseg6ei32_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint32m1x6_t test_vluxseg6ei32_v_u32m1x6_tu(vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u32m1x6_tu(vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vluxseg6ei32_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint64m1x6_t test_vluxseg6ei32_v_u64m1x6_tu(vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u64m1x6_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf4x6_t test_vluxseg6ei32_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vluxseg6ei32_v_f16mf4x6_tum(vbool64_t vm,
+ vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vluxseg6ei32_v_f16mf2x6_tum(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vluxseg6ei32_v_f16mf2x6_tum(vbool32_t vm,
+ vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vluxseg6ei32_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1x6_t test_vluxseg6ei32_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vluxseg6ei32_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vluxseg6ei32_v_f32mf2x6_tum(vbool64_t vm,
+ vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vluxseg6ei32_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat32m1x6_t test_vluxseg6ei32_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vluxseg6ei32_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat64m1x6_t test_vluxseg6ei32_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vluxseg6ei32_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint8mf8x6_t test_vluxseg6ei32_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i8mf8x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vluxseg6ei32_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint8mf4x6_t test_vluxseg6ei32_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1, vuint32m1_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei32_v_i8mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vluxseg6ei32_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint8mf2x6_t test_vluxseg6ei32_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1, vuint32m2_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei32_v_i8mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vluxseg6ei32_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint8m1x6_t test_vluxseg6ei32_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei32_v_i8m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vluxseg6ei32_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint16mf4x6_t test_vluxseg6ei32_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vluxseg6ei32_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint16mf2x6_t test_vluxseg6ei32_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vluxseg6ei32_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint16m1x6_t test_vluxseg6ei32_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vluxseg6ei32_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint32mf2x6_t test_vluxseg6ei32_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vluxseg6ei32_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint32m1x6_t test_vluxseg6ei32_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vluxseg6ei32_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint64m1x6_t test_vluxseg6ei32_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vluxseg6ei32_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint8mf8x6_t test_vluxseg6ei32_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u8mf8x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vluxseg6ei32_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint8mf4x6_t test_vluxseg6ei32_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u8mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vluxseg6ei32_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint8mf2x6_t test_vluxseg6ei32_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u8mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vluxseg6ei32_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint8m1x6_t test_vluxseg6ei32_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei32_v_u8m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vluxseg6ei32_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint16mf4x6_t test_vluxseg6ei32_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16mf4x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vluxseg6ei32_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint16mf2x6_t test_vluxseg6ei32_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vluxseg6ei32_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint16m1x6_t test_vluxseg6ei32_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vluxseg6ei32_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint32mf2x6_t test_vluxseg6ei32_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u32mf2x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vluxseg6ei32_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint32m1x6_t test_vluxseg6ei32_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u32m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vluxseg6ei32_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint64m1x6_t test_vluxseg6ei32_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u64m1x6_tum(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf4x6_t test_vluxseg6ei32_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vluxseg6ei32_v_f16mf4x6_tumu(vbool64_t vm,
+ vfloat16mf4x6_t vd,
+ const _Float16 *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16mf2x6_t test_vluxseg6ei32_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vluxseg6ei32_v_f16mf2x6_tumu(vbool32_t vm,
+ vfloat16mf2x6_t vd,
+ const _Float16 *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat16m1x6_t test_vluxseg6ei32_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1x6_t test_vluxseg6ei32_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd,
+ const _Float16 *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat32mf2x6_t test_vluxseg6ei32_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vluxseg6ei32_v_f32mf2x6_tumu(vbool64_t vm,
+ vfloat32mf2x6_t vd,
+ const float *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat32m1x6_t test_vluxseg6ei32_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat32m1x6_t test_vluxseg6ei32_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd,
+ const float *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f32m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vfloat64m1x6_t test_vluxseg6ei32_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat64m1x6_t test_vluxseg6ei32_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd,
+ const double *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_f64m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf8x6_t test_vluxseg6ei32_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint8mf8x6_t test_vluxseg6ei32_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd,
+ const int8_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i8mf8x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf4x6_t test_vluxseg6ei32_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint8mf4x6_t test_vluxseg6ei32_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd,
+ const int8_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i8mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8mf2x6_t test_vluxseg6ei32_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint8mf2x6_t test_vluxseg6ei32_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd,
+ const int8_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i8mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint8m1x6_t test_vluxseg6ei32_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint8m1x6_t test_vluxseg6ei32_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd,
+ const int8_t *rs1, vuint32m4_t rs2,
+ size_t vl) {
   return __riscv_vluxseg6ei32_v_i8m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf4x6_t test_vluxseg6ei32_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint16mf4x6_t test_vluxseg6ei32_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd,
+ const int16_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16mf2x6_t test_vluxseg6ei32_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint16mf2x6_t test_vluxseg6ei32_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd,
+ const int16_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint16m1x6_t test_vluxseg6ei32_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint16m1x6_t test_vluxseg6ei32_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd,
+ const int16_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32mf2x6_t test_vluxseg6ei32_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint32mf2x6_t test_vluxseg6ei32_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd,
+ const int32_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint32m1x6_t test_vluxseg6ei32_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint32m1x6_t test_vluxseg6ei32_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd,
+ const int32_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i32m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vint64m1x6_t test_vluxseg6ei32_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint64m1x6_t test_vluxseg6ei32_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd,
+ const int64_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_i64m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf8x6_t test_vluxseg6ei32_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint8mf8x6_t test_vluxseg6ei32_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd,
+ const uint8_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u8mf8x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf4x6_t test_vluxseg6ei32_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint8mf4x6_t test_vluxseg6ei32_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd,
+ const uint8_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u8mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8mf2x6_t test_vluxseg6ei32_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint8mf2x6_t test_vluxseg6ei32_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd,
+ const uint8_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u8mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint8m1x6_t test_vluxseg6ei32_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vuint8m1x6_t test_vluxseg6ei32_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd,
+ const uint8_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u8m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf4x6_t test_vluxseg6ei32_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint16mf4x6_t test_vluxseg6ei32_v_u16mf4x6_tumu(vbool64_t vm,
+ vuint16mf4x6_t vd,
+ const uint16_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16mf4x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16mf2x6_t test_vluxseg6ei32_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint16mf2x6_t test_vluxseg6ei32_v_u16mf2x6_tumu(vbool32_t vm,
+ vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vluxseg6ei32_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint16m1x6_t test_vluxseg6ei32_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16m1x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vluxseg6ei32_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint32mf2x6_t test_vluxseg6ei32_v_u32mf2x6_tumu(vbool64_t vm,
+ vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u32mf2x6_tumu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vluxseg6ei32_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint32m1x6_t test_vluxseg6ei32_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd,
+ const
uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_u32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vluxseg6ei32_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x6_t test_vluxseg6ei32_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_u64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vluxseg6ei32_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x6_t test_vluxseg6ei32_v_f16mf4x6_mu(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_f16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vluxseg6ei32_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x6_t test_vluxseg6ei32_v_f16mf2x6_mu(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_f16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vluxseg6ei32_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x6_t test_vluxseg6ei32_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_f16m1x6_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vluxseg6ei32_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x6_t test_vluxseg6ei32_v_f32mf2x6_mu(vbool64_t vm, + vfloat32mf2x6_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_f32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vluxseg6ei32_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x6_t test_vluxseg6ei32_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei32_v_f32m1x6_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vluxseg6ei32_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x6_t test_vluxseg6ei32_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_f64m1x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vluxseg6ei32_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x6_t test_vluxseg6ei32_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxseg6ei32_v_i8mf8x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vluxseg6ei32_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x6_t test_vluxseg6ei32_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei32_v_i8mf4x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vluxseg6ei32_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x6_t test_vluxseg6ei32_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg6ei32_v_i8mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vluxseg6ei32_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { 
+vint8m1x6_t test_vluxseg6ei32_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg6ei32_v_i8m1x6_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vluxseg6ei32_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x6_t test_vluxseg6ei32_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_i16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vluxseg6ei32_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x6_t test_vluxseg6ei32_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_i16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vluxseg6ei32_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x6_t test_vluxseg6ei32_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg6ei32_v_i16m1x6_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vluxseg6ei32_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x6_t test_vluxseg6ei32_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_i32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vluxseg6ei32_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x6_t test_vluxseg6ei32_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei32_v_i32m1x6_mu(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vluxseg6ei32_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x6_t test_vluxseg6ei32_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_i64m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vluxseg6ei32_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x6_t test_vluxseg6ei32_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_u8mf8x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vluxseg6ei32_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x6_t test_vluxseg6ei32_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_u8mf4x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vluxseg6ei32_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x6_t test_vluxseg6ei32_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg6ei32_v_u8mf2x6_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vluxseg6ei32_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x6_t test_vluxseg6ei32_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg6ei32_v_u8m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vluxseg6ei32_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, 
vuint32mf2_t rs2, size_t vl) {
+vuint16mf4x6_t test_vluxseg6ei32_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd,
+                                               const uint16_t *rs1,
+                                               vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16mf4x6_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16mf2x6_t test_vluxseg6ei32_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint16mf2x6_t test_vluxseg6ei32_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd,
+                                               const uint16_t *rs1,
+                                               vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint16m1x6_t test_vluxseg6ei32_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) {
+vuint16m1x6_t test_vluxseg6ei32_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd,
+                                             const uint16_t *rs1,
+                                             vuint32m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u16m1x6_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32mf2x6_t test_vluxseg6ei32_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint32mf2x6_t test_vluxseg6ei32_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd,
+                                               const uint32_t *rs1,
+                                               vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint32m1x6_t test_vluxseg6ei32_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) {
+vuint32m1x6_t test_vluxseg6ei32_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd,
+                                             const uint32_t *rs1,
+                                             vuint32m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u32m1x6_mu(vm, vd, rs1, rs2, vl);
 }
 
-vuint64m1x6_t test_vluxseg6ei32_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vuint64m1x6_t test_vluxseg6ei32_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd,
+                                             const uint64_t *rs1,
+                                             vuint32mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei32_v_u64m1x6_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei64.c
index 47ecf4875..238da64dc 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei64.c
@@ -6,418 +6,630 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4x6_t test_vluxseg6ei64_v_f16mf4x6_tu(vfloat16mf4x6_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vluxseg6ei64_v_f16mf4x6_tu(vfloat16mf4x6_t vd,
+                                                const _Float16 *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei64_v_f16mf4x6_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x6_t test_vluxseg6ei64_v_f16mf2x6_tu(vfloat16mf2x6_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vluxseg6ei64_v_f16mf2x6_tu(vfloat16mf2x6_t vd,
+                                                const _Float16 *rs1,
+                                                vuint64m2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei64_v_f16mf2x6_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1x6_t test_vluxseg6ei64_v_f16m1x6_tu(vfloat16m1x6_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1x6_t test_vluxseg6ei64_v_f16m1x6_tu(vfloat16m1x6_t vd,
+                                              const _Float16 *rs1,
+                                              vuint64m4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei64_v_f16m1x6_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x6_t test_vluxseg6ei64_v_f32mf2x6_tu(vfloat32mf2x6_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vluxseg6ei64_v_f32mf2x6_tu(vfloat32mf2x6_t vd,
+                                                const float *rs1,
+                                                vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei64_v_f32mf2x6_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m1x6_t test_vluxseg6ei64_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1x6_t test_vluxseg6ei64_v_f32m1x6_tu(vfloat32m1x6_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_f32m1x6_tu(vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vluxseg6ei64_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x6_t test_vluxseg6ei64_v_f64m1x6_tu(vfloat64m1x6_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f64m1x6_tu(vd, rs1, rs2, vl); } -vint8mf8x6_t test_vluxseg6ei64_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x6_t test_vluxseg6ei64_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i8mf8x6_tu(vd, rs1, rs2, vl); } -vint8mf4x6_t test_vluxseg6ei64_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x6_t test_vluxseg6ei64_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i8mf4x6_tu(vd, rs1, rs2, vl); } -vint8mf2x6_t test_vluxseg6ei64_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x6_t test_vluxseg6ei64_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i8mf2x6_tu(vd, rs1, rs2, vl); } -vint8m1x6_t test_vluxseg6ei64_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x6_t test_vluxseg6ei64_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i8m1x6_tu(vd, rs1, rs2, vl); } -vint16mf4x6_t test_vluxseg6ei64_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x6_t test_vluxseg6ei64_v_i16mf4x6_tu(vint16mf4x6_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i16mf4x6_tu(vd, rs1, rs2, vl); } -vint16mf2x6_t test_vluxseg6ei64_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x6_t test_vluxseg6ei64_v_i16mf2x6_tu(vint16mf2x6_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i16mf2x6_tu(vd, rs1, rs2, vl); } -vint16m1x6_t test_vluxseg6ei64_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x6_t test_vluxseg6ei64_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i16m1x6_tu(vd, rs1, rs2, vl); } -vint32mf2x6_t test_vluxseg6ei64_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x6_t test_vluxseg6ei64_v_i32mf2x6_tu(vint32mf2x6_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i32mf2x6_tu(vd, rs1, rs2, vl); } -vint32m1x6_t test_vluxseg6ei64_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x6_t test_vluxseg6ei64_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i32m1x6_tu(vd, rs1, rs2, vl); } -vint64m1x6_t test_vluxseg6ei64_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x6_t test_vluxseg6ei64_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i64m1x6_tu(vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vluxseg6ei64_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x6_t test_vluxseg6ei64_v_u8mf8x6_tu(vuint8mf8x6_t vd, + const 
uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf8x6_tu(vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vluxseg6ei64_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x6_t test_vluxseg6ei64_v_u8mf4x6_tu(vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf4x6_tu(vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vluxseg6ei64_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x6_t test_vluxseg6ei64_v_u8mf2x6_tu(vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf2x6_tu(vd, rs1, rs2, vl); } -vuint8m1x6_t test_vluxseg6ei64_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x6_t test_vluxseg6ei64_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8m1x6_tu(vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vluxseg6ei64_v_u16mf4x6_tu(vuint16mf4x6_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x6_t test_vluxseg6ei64_v_u16mf4x6_tu(vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16mf4x6_tu(vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vluxseg6ei64_v_u16mf2x6_tu(vuint16mf2x6_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x6_t test_vluxseg6ei64_v_u16mf2x6_tu(vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16mf2x6_tu(vd, rs1, rs2, vl); } -vuint16m1x6_t test_vluxseg6ei64_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x6_t test_vluxseg6ei64_v_u16m1x6_tu(vuint16m1x6_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16m1x6_tu(vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vluxseg6ei64_v_u32mf2x6_tu(vuint32mf2x6_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x6_t test_vluxseg6ei64_v_u32mf2x6_tu(vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u32mf2x6_tu(vd, rs1, rs2, vl); } -vuint32m1x6_t test_vluxseg6ei64_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x6_t test_vluxseg6ei64_v_u32m1x6_tu(vuint32m1x6_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u32m1x6_tu(vd, rs1, rs2, vl); } -vuint64m1x6_t test_vluxseg6ei64_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x6_t test_vluxseg6ei64_v_u64m1x6_tu(vuint64m1x6_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u64m1x6_tu(vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vluxseg6ei64_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x6_t test_vluxseg6ei64_v_f16mf4x6_tum(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vluxseg6ei64_v_f16mf2x6_tum(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x6_t test_vluxseg6ei64_v_f16mf2x6_tum(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vluxseg6ei64_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, const 
_Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x6_t test_vluxseg6ei64_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f16m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vluxseg6ei64_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x6_t test_vluxseg6ei64_v_f32mf2x6_tum(vbool64_t vm, + vfloat32mf2x6_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vluxseg6ei64_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x6_t test_vluxseg6ei64_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f32m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vluxseg6ei64_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x6_t test_vluxseg6ei64_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f64m1x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vluxseg6ei64_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x6_t test_vluxseg6ei64_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i8mf8x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vluxseg6ei64_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x6_t test_vluxseg6ei64_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i8mf4x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vluxseg6ei64_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x6_t test_vluxseg6ei64_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i8mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vluxseg6ei64_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x6_t test_vluxseg6ei64_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i8m1x6_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vluxseg6ei64_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x6_t test_vluxseg6ei64_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vluxseg6ei64_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x6_t test_vluxseg6ei64_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vluxseg6ei64_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x6_t test_vluxseg6ei64_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i16m1x6_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t 
test_vluxseg6ei64_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x6_t test_vluxseg6ei64_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vluxseg6ei64_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x6_t test_vluxseg6ei64_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i32m1x6_tum(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vluxseg6ei64_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x6_t test_vluxseg6ei64_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i64m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vluxseg6ei64_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x6_t test_vluxseg6ei64_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf8x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vluxseg6ei64_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x6_t test_vluxseg6ei64_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf4x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vluxseg6ei64_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x6_t test_vluxseg6ei64_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vluxseg6ei64_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x6_t test_vluxseg6ei64_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_u8m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vluxseg6ei64_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x6_t test_vluxseg6ei64_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vluxseg6ei64_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x6_t test_vluxseg6ei64_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vluxseg6ei64_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x6_t test_vluxseg6ei64_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vluxseg6ei64_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x6_t test_vluxseg6ei64_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { 
return __riscv_vluxseg6ei64_v_u32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vluxseg6ei64_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x6_t test_vluxseg6ei64_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u32m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vluxseg6ei64_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x6_t test_vluxseg6ei64_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u64m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vluxseg6ei64_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x6_t test_vluxseg6ei64_v_f16mf4x6_tumu(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vluxseg6ei64_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x6_t test_vluxseg6ei64_v_f16mf2x6_tumu(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vluxseg6ei64_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x6_t test_vluxseg6ei64_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vluxseg6ei64_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x6_t test_vluxseg6ei64_v_f32mf2x6_tumu(vbool64_t vm, + vfloat32mf2x6_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vluxseg6ei64_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x6_t test_vluxseg6ei64_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vluxseg6ei64_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x6_t test_vluxseg6ei64_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vluxseg6ei64_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x6_t test_vluxseg6ei64_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i8mf8x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vluxseg6ei64_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x6_t test_vluxseg6ei64_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i8mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vluxseg6ei64_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { 
+vint8mf2x6_t test_vluxseg6ei64_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i8mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vluxseg6ei64_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x6_t test_vluxseg6ei64_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i8m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vluxseg6ei64_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x6_t test_vluxseg6ei64_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vluxseg6ei64_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x6_t test_vluxseg6ei64_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vluxseg6ei64_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x6_t test_vluxseg6ei64_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vluxseg6ei64_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x6_t test_vluxseg6ei64_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vluxseg6ei64_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x6_t test_vluxseg6ei64_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vluxseg6ei64_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x6_t test_vluxseg6ei64_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vluxseg6ei64_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x6_t test_vluxseg6ei64_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf8x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vluxseg6ei64_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x6_t test_vluxseg6ei64_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vluxseg6ei64_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x6_t test_vluxseg6ei64_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t 
test_vluxseg6ei64_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x6_t test_vluxseg6ei64_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vluxseg6ei64_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x6_t test_vluxseg6ei64_v_u16mf4x6_tumu(vbool64_t vm, + vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vluxseg6ei64_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x6_t test_vluxseg6ei64_v_u16mf2x6_tumu(vbool32_t vm, + vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vluxseg6ei64_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x6_t test_vluxseg6ei64_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vluxseg6ei64_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x6_t test_vluxseg6ei64_v_u32mf2x6_tumu(vbool64_t vm, + vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vluxseg6ei64_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x6_t test_vluxseg6ei64_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vluxseg6ei64_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x6_t test_vluxseg6ei64_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vluxseg6ei64_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x6_t test_vluxseg6ei64_v_f16mf4x6_mu(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vluxseg6ei64_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x6_t test_vluxseg6ei64_v_f16mf2x6_mu(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vluxseg6ei64_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x6_t test_vluxseg6ei64_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f16m1x6_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vluxseg6ei64_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x6_t test_vluxseg6ei64_v_f32mf2x6_mu(vbool64_t vm, + 
vfloat32mf2x6_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vluxseg6ei64_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x6_t test_vluxseg6ei64_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_f32m1x6_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vluxseg6ei64_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x6_t test_vluxseg6ei64_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_f64m1x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vluxseg6ei64_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x6_t test_vluxseg6ei64_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i8mf8x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vluxseg6ei64_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x6_t test_vluxseg6ei64_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i8mf4x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vluxseg6ei64_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x6_t test_vluxseg6ei64_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i8mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vluxseg6ei64_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x6_t test_vluxseg6ei64_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i8m1x6_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vluxseg6ei64_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x6_t test_vluxseg6ei64_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vluxseg6ei64_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x6_t test_vluxseg6ei64_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vluxseg6ei64_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x6_t test_vluxseg6ei64_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i16m1x6_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vluxseg6ei64_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x6_t test_vluxseg6ei64_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_i32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vluxseg6ei64_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x6_t test_vluxseg6ei64_v_i32m1x6_mu(vbool32_t 
vm, vint32m1x6_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i32m1x6_mu(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vluxseg6ei64_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x6_t test_vluxseg6ei64_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_i64m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vluxseg6ei64_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x6_t test_vluxseg6ei64_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf8x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vluxseg6ei64_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x6_t test_vluxseg6ei64_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf4x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vluxseg6ei64_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x6_t test_vluxseg6ei64_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u8mf2x6_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vluxseg6ei64_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x6_t test_vluxseg6ei64_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg6ei64_v_u8m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vluxseg6ei64_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x6_t test_vluxseg6ei64_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vluxseg6ei64_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x6_t test_vluxseg6ei64_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vluxseg6ei64_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x6_t test_vluxseg6ei64_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u16m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vluxseg6ei64_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x6_t test_vluxseg6ei64_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vluxseg6ei64_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x6_t test_vluxseg6ei64_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg6ei64_v_u32m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vluxseg6ei64_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { 
+vuint64m1x6_t test_vluxseg6ei64_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd,
+                                             const uint64_t *rs1,
+                                             vuint64m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei64_v_u64m1x6_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei8.c
index 958706261..3e6141bc1 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei8.c
@@ -6,418 +6,624 @@
 
 #include <riscv_vector.h>
 
-vfloat16mf4x6_t test_vluxseg6ei8_v_f16mf4x6_tu(vfloat16mf4x6_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x6_t test_vluxseg6ei8_v_f16mf4x6_tu(vfloat16mf4x6_t vd,
+                                               const _Float16 *rs1,
+                                               vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg6ei8_v_f16mf4x6_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16mf2x6_t test_vluxseg6ei8_v_f16mf2x6_tu(vfloat16mf2x6_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x6_t test_vluxseg6ei8_v_f16mf2x6_tu(vfloat16mf2x6_t vd,
+                                               const _Float16 *rs1,
+                                               vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei8_v_f16mf2x6_tu(vd, rs1, rs2, vl);
 }
 
-vfloat16m1x6_t test_vluxseg6ei8_v_f16m1x6_tu(vfloat16m1x6_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x6_t test_vluxseg6ei8_v_f16m1x6_tu(vfloat16m1x6_t vd,
+                                             const _Float16 *rs1,
+                                             vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei8_v_f16m1x6_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32mf2x6_t test_vluxseg6ei8_v_f32mf2x6_tu(vfloat32mf2x6_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x6_t test_vluxseg6ei8_v_f32mf2x6_tu(vfloat32mf2x6_t vd,
+                                               const float *rs1,
+                                               vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg6ei8_v_f32mf2x6_tu(vd, rs1, rs2, vl);
 }
 
-vfloat32m1x6_t test_vluxseg6ei8_v_f32m1x6_tu(vfloat32m1x6_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x6_t test_vluxseg6ei8_v_f32m1x6_tu(vfloat32m1x6_t vd,
+                                             const float *rs1, vuint8mf4_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg6ei8_v_f32m1x6_tu(vd, rs1, rs2, vl);
 }
 
-vfloat64m1x6_t test_vluxseg6ei8_v_f64m1x6_tu(vfloat64m1x6_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x6_t test_vluxseg6ei8_v_f64m1x6_tu(vfloat64m1x6_t vd,
+                                             const double *rs1, vuint8mf8_t rs2,
+                                             size_t vl) {
   return __riscv_vluxseg6ei8_v_f64m1x6_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf8x6_t test_vluxseg6ei8_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x6_t test_vluxseg6ei8_v_i8mf8x6_tu(vint8mf8x6_t vd, const int8_t *rs1,
+                                           vuint8mf8_t rs2, size_t vl) {
   return __riscv_vluxseg6ei8_v_i8mf8x6_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf4x6_t test_vluxseg6ei8_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x6_t test_vluxseg6ei8_v_i8mf4x6_tu(vint8mf4x6_t vd, const int8_t *rs1,
+                                           vuint8mf4_t rs2, size_t vl) {
   return __riscv_vluxseg6ei8_v_i8mf4x6_tu(vd, rs1, rs2, vl);
 }
 
-vint8mf2x6_t test_vluxseg6ei8_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x6_t test_vluxseg6ei8_v_i8mf2x6_tu(vint8mf2x6_t vd, const int8_t *rs1,
+                                           vuint8mf2_t rs2, size_t vl) {
   return __riscv_vluxseg6ei8_v_i8mf2x6_tu(vd, rs1, rs2, vl);
 }
 
-vint8m1x6_t test_vluxseg6ei8_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x6_t test_vluxseg6ei8_v_i8m1x6_tu(vint8m1x6_t vd, const int8_t *rs1,
+                                         vuint8m1_t rs2, size_t vl) {
   return __riscv_vluxseg6ei8_v_i8m1x6_tu(vd, rs1, rs2, vl);
 }
 
-vint16mf4x6_t test_vluxseg6ei8_v_i16mf4x6_tu(vint16mf4x6_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x6_t test_vluxseg6ei8_v_i16mf4x6_tu(vint16mf4x6_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i16mf4x6_tu(vd, rs1, rs2, vl); } -vint16mf2x6_t test_vluxseg6ei8_v_i16mf2x6_tu(vint16mf2x6_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x6_t test_vluxseg6ei8_v_i16mf2x6_tu(vint16mf2x6_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i16mf2x6_tu(vd, rs1, rs2, vl); } -vint16m1x6_t test_vluxseg6ei8_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x6_t test_vluxseg6ei8_v_i16m1x6_tu(vint16m1x6_t vd, const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i16m1x6_tu(vd, rs1, rs2, vl); } -vint32mf2x6_t test_vluxseg6ei8_v_i32mf2x6_tu(vint32mf2x6_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x6_t test_vluxseg6ei8_v_i32mf2x6_tu(vint32mf2x6_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i32mf2x6_tu(vd, rs1, rs2, vl); } -vint32m1x6_t test_vluxseg6ei8_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x6_t test_vluxseg6ei8_v_i32m1x6_tu(vint32m1x6_t vd, const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i32m1x6_tu(vd, rs1, rs2, vl); } -vint64m1x6_t test_vluxseg6ei8_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x6_t test_vluxseg6ei8_v_i64m1x6_tu(vint64m1x6_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i64m1x6_tu(vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vluxseg6ei8_v_u8mf8x6_tu(vuint8mf8x6_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x6_t test_vluxseg6ei8_v_u8mf8x6_tu(vuint8mf8x6_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_u8mf8x6_tu(vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vluxseg6ei8_v_u8mf4x6_tu(vuint8mf4x6_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x6_t test_vluxseg6ei8_v_u8mf4x6_tu(vuint8mf4x6_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_u8mf4x6_tu(vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vluxseg6ei8_v_u8mf2x6_tu(vuint8mf2x6_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x6_t test_vluxseg6ei8_v_u8mf2x6_tu(vuint8mf2x6_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_u8mf2x6_tu(vd, rs1, rs2, vl); } -vuint8m1x6_t test_vluxseg6ei8_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x6_t test_vluxseg6ei8_v_u8m1x6_tu(vuint8m1x6_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u8m1x6_tu(vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vluxseg6ei8_v_u16mf4x6_tu(vuint16mf4x6_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x6_t test_vluxseg6ei8_v_u16mf4x6_tu(vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u16mf4x6_tu(vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vluxseg6ei8_v_u16mf2x6_tu(vuint16mf2x6_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x6_t test_vluxseg6ei8_v_u16mf2x6_tu(vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u16mf2x6_tu(vd, rs1, rs2, vl); } -vuint16m1x6_t test_vluxseg6ei8_v_u16m1x6_tu(vuint16m1x6_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x6_t test_vluxseg6ei8_v_u16m1x6_tu(vuint16m1x6_t vd, + 
const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u16m1x6_tu(vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vluxseg6ei8_v_u32mf2x6_tu(vuint32mf2x6_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x6_t test_vluxseg6ei8_v_u32mf2x6_tu(vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u32mf2x6_tu(vd, rs1, rs2, vl); } -vuint32m1x6_t test_vluxseg6ei8_v_u32m1x6_tu(vuint32m1x6_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x6_t test_vluxseg6ei8_v_u32m1x6_tu(vuint32m1x6_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u32m1x6_tu(vd, rs1, rs2, vl); } -vuint64m1x6_t test_vluxseg6ei8_v_u64m1x6_tu(vuint64m1x6_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x6_t test_vluxseg6ei8_v_u64m1x6_tu(vuint64m1x6_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u64m1x6_tu(vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vluxseg6ei8_v_f16mf4x6_tum(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x6_t test_vluxseg6ei8_v_f16mf4x6_tum(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vluxseg6ei8_v_f16mf2x6_tum(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x6_t test_vluxseg6ei8_v_f16mf2x6_tum(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vluxseg6ei8_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x6_t test_vluxseg6ei8_v_f16m1x6_tum(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f16m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vluxseg6ei8_v_f32mf2x6_tum(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x6_t test_vluxseg6ei8_v_f32mf2x6_tum(vbool64_t vm, + vfloat32mf2x6_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vluxseg6ei8_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x6_t test_vluxseg6ei8_v_f32m1x6_tum(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_f32m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vluxseg6ei8_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x6_t test_vluxseg6ei8_v_f64m1x6_tum(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f64m1x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vluxseg6ei8_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x6_t test_vluxseg6ei8_v_i8mf8x6_tum(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8mf8x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vluxseg6ei8_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x6_t test_vluxseg6ei8_v_i8mf4x6_tum(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, 
vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8mf4x6_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vluxseg6ei8_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x6_t test_vluxseg6ei8_v_i8mf2x6_tum(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vluxseg6ei8_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x6_t test_vluxseg6ei8_v_i8m1x6_tum(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8m1x6_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vluxseg6ei8_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x6_t test_vluxseg6ei8_v_i16mf4x6_tum(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vluxseg6ei8_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x6_t test_vluxseg6ei8_v_i16mf2x6_tum(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vluxseg6ei8_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x6_t test_vluxseg6ei8_v_i16m1x6_tum(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i16m1x6_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vluxseg6ei8_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x6_t test_vluxseg6ei8_v_i32mf2x6_tum(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vluxseg6ei8_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x6_t test_vluxseg6ei8_v_i32m1x6_tum(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i32m1x6_tum(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vluxseg6ei8_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x6_t test_vluxseg6ei8_v_i64m1x6_tum(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i64m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vluxseg6ei8_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x6_t test_vluxseg6ei8_v_u8mf8x6_tum(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u8mf8x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vluxseg6ei8_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x6_t test_vluxseg6ei8_v_u8mf4x6_tum(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u8mf4x6_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vluxseg6ei8_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x6_t test_vluxseg6ei8_v_u8mf2x6_tum(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t 
*rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u8mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vluxseg6ei8_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x6_t test_vluxseg6ei8_v_u8m1x6_tum(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_u8m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vluxseg6ei8_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x6_t test_vluxseg6ei8_v_u16mf4x6_tum(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vluxseg6ei8_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x6_t test_vluxseg6ei8_v_u16mf2x6_tum(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vluxseg6ei8_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x6_t test_vluxseg6ei8_v_u16m1x6_tum(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u16m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vluxseg6ei8_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x6_t test_vluxseg6ei8_v_u32mf2x6_tum(vbool64_t vm, vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u32mf2x6_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vluxseg6ei8_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x6_t test_vluxseg6ei8_v_u32m1x6_tum(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u32m1x6_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x6_t test_vluxseg6ei8_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x6_t test_vluxseg6ei8_v_u64m1x6_tum(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u64m1x6_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vluxseg6ei8_v_f16mf4x6_tumu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x6_t test_vluxseg6ei8_v_f16mf4x6_tumu(vbool64_t vm, + vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vluxseg6ei8_v_f16mf2x6_tumu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x6_t test_vluxseg6ei8_v_f16mf2x6_tumu(vbool32_t vm, + vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vluxseg6ei8_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x6_t test_vluxseg6ei8_v_f16m1x6_tumu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vluxseg6ei8_v_f32mf2x6_tumu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint8mf8_t rs2, 
size_t vl) { +vfloat32mf2x6_t test_vluxseg6ei8_v_f32mf2x6_tumu(vbool64_t vm, + vfloat32mf2x6_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vluxseg6ei8_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x6_t test_vluxseg6ei8_v_f32m1x6_tumu(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vluxseg6ei8_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x6_t test_vluxseg6ei8_v_f64m1x6_tumu(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vluxseg6ei8_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x6_t test_vluxseg6ei8_v_i8mf8x6_tumu(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8mf8x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vluxseg6ei8_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x6_t test_vluxseg6ei8_v_i8mf4x6_tumu(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vluxseg6ei8_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x6_t test_vluxseg6ei8_v_i8mf2x6_tumu(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vluxseg6ei8_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x6_t test_vluxseg6ei8_v_i8m1x6_tumu(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x6_t test_vluxseg6ei8_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x6_t test_vluxseg6ei8_v_i16mf4x6_tumu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vluxseg6ei8_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x6_t test_vluxseg6ei8_v_i16mf2x6_tumu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vluxseg6ei8_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x6_t test_vluxseg6ei8_v_i16m1x6_tumu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vluxseg6ei8_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x6_t test_vluxseg6ei8_v_i32mf2x6_tumu(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vluxseg6ei8_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t 
vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x6_t test_vluxseg6ei8_v_i32m1x6_tumu(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i32m1x6_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vluxseg6ei8_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x6_t test_vluxseg6ei8_v_i64m1x6_tumu(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vluxseg6ei8_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x6_t test_vluxseg6ei8_v_u8mf8x6_tumu(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u8mf8x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vluxseg6ei8_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x6_t test_vluxseg6ei8_v_u8mf4x6_tumu(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u8mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vluxseg6ei8_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x6_t test_vluxseg6ei8_v_u8mf2x6_tumu(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u8mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vluxseg6ei8_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x6_t test_vluxseg6ei8_v_u8m1x6_tumu(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_u8m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vluxseg6ei8_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x6_t test_vluxseg6ei8_v_u16mf4x6_tumu(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x6_t test_vluxseg6ei8_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x6_t test_vluxseg6ei8_v_u16mf2x6_tumu(vbool32_t vm, vuint16mf2x6_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x6_t test_vluxseg6ei8_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x6_t test_vluxseg6ei8_v_u16m1x6_tumu(vbool16_t vm, vuint16m1x6_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x6_t test_vluxseg6ei8_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x6_t test_vluxseg6ei8_v_u32mf2x6_tumu(vbool64_t vm, vuint32mf2x6_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u32mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x6_t test_vluxseg6ei8_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x6_t test_vluxseg6ei8_v_u32m1x6_tumu(vbool32_t vm, vuint32m1x6_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u32m1x6_tumu(vm, vd, rs1, rs2, 
vl); } -vuint64m1x6_t test_vluxseg6ei8_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x6_t test_vluxseg6ei8_v_u64m1x6_tumu(vbool64_t vm, vuint64m1x6_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u64m1x6_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x6_t test_vluxseg6ei8_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x6_t test_vluxseg6ei8_v_f16mf4x6_mu(vbool64_t vm, vfloat16mf4x6_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x6_t test_vluxseg6ei8_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x6_t test_vluxseg6ei8_v_f16mf2x6_mu(vbool32_t vm, vfloat16mf2x6_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x6_t test_vluxseg6ei8_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x6_t test_vluxseg6ei8_v_f16m1x6_mu(vbool16_t vm, vfloat16m1x6_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f16m1x6_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x6_t test_vluxseg6ei8_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x6_t test_vluxseg6ei8_v_f32mf2x6_mu(vbool64_t vm, vfloat32mf2x6_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_f32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x6_t test_vluxseg6ei8_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x6_t test_vluxseg6ei8_v_f32m1x6_mu(vbool32_t vm, vfloat32m1x6_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_f32m1x6_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x6_t test_vluxseg6ei8_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x6_t test_vluxseg6ei8_v_f64m1x6_mu(vbool64_t vm, vfloat64m1x6_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_f64m1x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x6_t test_vluxseg6ei8_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x6_t test_vluxseg6ei8_v_i8mf8x6_mu(vbool64_t vm, vint8mf8x6_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8mf8x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x6_t test_vluxseg6ei8_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x6_t test_vluxseg6ei8_v_i8mf4x6_mu(vbool32_t vm, vint8mf4x6_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8mf4x6_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x6_t test_vluxseg6ei8_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x6_t test_vluxseg6ei8_v_i8mf2x6_mu(vbool16_t vm, vint8mf2x6_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint8m1x6_t test_vluxseg6ei8_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x6_t test_vluxseg6ei8_v_i8m1x6_mu(vbool8_t vm, vint8m1x6_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i8m1x6_mu(vm, 
vd, rs1, rs2, vl); } -vint16mf4x6_t test_vluxseg6ei8_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x6_t test_vluxseg6ei8_v_i16mf4x6_mu(vbool64_t vm, vint16mf4x6_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x6_t test_vluxseg6ei8_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x6_t test_vluxseg6ei8_v_i16mf2x6_mu(vbool32_t vm, vint16mf2x6_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint16m1x6_t test_vluxseg6ei8_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x6_t test_vluxseg6ei8_v_i16m1x6_mu(vbool16_t vm, vint16m1x6_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i16m1x6_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x6_t test_vluxseg6ei8_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x6_t test_vluxseg6ei8_v_i32mf2x6_mu(vbool64_t vm, vint32mf2x6_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_i32mf2x6_mu(vm, vd, rs1, rs2, vl); } -vint32m1x6_t test_vluxseg6ei8_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x6_t test_vluxseg6ei8_v_i32m1x6_mu(vbool32_t vm, vint32m1x6_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i32m1x6_mu(vm, vd, rs1, rs2, vl); } -vint64m1x6_t test_vluxseg6ei8_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x6_t test_vluxseg6ei8_v_i64m1x6_mu(vbool64_t vm, vint64m1x6_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_i64m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x6_t test_vluxseg6ei8_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x6_t test_vluxseg6ei8_v_u8mf8x6_mu(vbool64_t vm, vuint8mf8x6_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_u8mf8x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x6_t test_vluxseg6ei8_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x6_t test_vluxseg6ei8_v_u8mf4x6_mu(vbool32_t vm, vuint8mf4x6_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_u8mf4x6_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x6_t test_vluxseg6ei8_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x6_t test_vluxseg6ei8_v_u8mf2x6_mu(vbool16_t vm, vuint8mf2x6_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_u8mf2x6_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x6_t test_vluxseg6ei8_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x6_t test_vluxseg6ei8_v_u8m1x6_mu(vbool8_t vm, vuint8m1x6_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg6ei8_v_u8m1x6_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x6_t test_vluxseg6ei8_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x6_t test_vluxseg6ei8_v_u16mf4x6_mu(vbool64_t vm, vuint16mf4x6_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg6ei8_v_u16mf4x6_mu(vm, vd, 
rs1, rs2, vl); }

-vuint16mf2x6_t test_vluxseg6ei8_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x6_t test_vluxseg6ei8_v_u16mf2x6_mu(vbool32_t vm, vuint16mf2x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg6ei8_v_u16mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint16m1x6_t test_vluxseg6ei8_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x6_t test_vluxseg6ei8_v_u16m1x6_mu(vbool16_t vm, vuint16m1x6_t vd,
+ const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg6ei8_v_u16m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32mf2x6_t test_vluxseg6ei8_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x6_t test_vluxseg6ei8_v_u32mf2x6_mu(vbool64_t vm, vuint32mf2x6_t vd,
+ const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg6ei8_v_u32mf2x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint32m1x6_t test_vluxseg6ei8_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x6_t test_vluxseg6ei8_v_u32m1x6_mu(vbool32_t vm, vuint32m1x6_t vd,
+ const uint32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg6ei8_v_u32m1x6_mu(vm, vd, rs1, rs2, vl);
 }

-vuint64m1x6_t test_vluxseg6ei8_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x6_t test_vluxseg6ei8_v_u64m1x6_mu(vbool64_t vm, vuint64m1x6_t vd,
+ const uint64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg6ei8_v_u64m1x6_mu(vm, vd, rs1, rs2, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei16.c
index 013d1222b..a5bd9125b 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei16.c
@@ -6,418 +6,630 @@

 #include <riscv_vector.h>

-vfloat16mf4x7_t test_vluxseg7ei16_v_f16mf4x7_tu(vfloat16mf4x7_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat16mf4x7_t test_vluxseg7ei16_v_f16mf4x7_tu(vfloat16mf4x7_t vd,
+ const _Float16 *rs1,
+ vuint16mf4_t rs2, size_t vl) {
 return __riscv_vluxseg7ei16_v_f16mf4x7_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf2x7_t test_vluxseg7ei16_v_f16mf2x7_tu(vfloat16mf2x7_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat16mf2x7_t test_vluxseg7ei16_v_f16mf2x7_tu(vfloat16mf2x7_t vd,
+ const _Float16 *rs1,
+ vuint16mf2_t rs2, size_t vl) {
 return __riscv_vluxseg7ei16_v_f16mf2x7_tu(vd, rs1, rs2, vl);
 }

-vfloat16m1x7_t test_vluxseg7ei16_v_f16m1x7_tu(vfloat16m1x7_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) {
+vfloat16m1x7_t test_vluxseg7ei16_v_f16m1x7_tu(vfloat16m1x7_t vd,
+ const _Float16 *rs1,
+ vuint16m1_t rs2, size_t vl) {
 return __riscv_vluxseg7ei16_v_f16m1x7_tu(vd, rs1, rs2, vl);
 }

-vfloat32mf2x7_t test_vluxseg7ei16_v_f32mf2x7_tu(vfloat32mf2x7_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) {
+vfloat32mf2x7_t test_vluxseg7ei16_v_f32mf2x7_tu(vfloat32mf2x7_t vd,
+ const float *rs1,
+ vuint16mf4_t rs2, size_t vl) {
 return __riscv_vluxseg7ei16_v_f32mf2x7_tu(vd, rs1, rs2, vl);
 }

-vfloat32m1x7_t test_vluxseg7ei16_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) {
+vfloat32m1x7_t test_vluxseg7ei16_v_f32m1x7_tu(vfloat32m1x7_t vd,
+ const float *rs1,
+ vuint16mf2_t rs2, size_t vl) {
 return __riscv_vluxseg7ei16_v_f32m1x7_tu(vd, rs1, rs2, vl);
 }

-vfloat64m1x7_t
test_vluxseg7ei16_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei16_v_f64m1x7_tu(vfloat64m1x7_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f64m1x7_tu(vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei16_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei16_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i8mf8x7_tu(vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei16_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei16_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i8mf4x7_tu(vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei16_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei16_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i8mf2x7_tu(vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei16_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei16_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i8m1x7_tu(vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei16_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei16_v_i16mf4x7_tu(vint16mf4x7_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i16mf4x7_tu(vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei16_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei16_v_i16mf2x7_tu(vint16mf2x7_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i16mf2x7_tu(vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei16_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei16_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i16m1x7_tu(vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei16_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei16_v_i32mf2x7_tu(vint32mf2x7_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i32mf2x7_tu(vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei16_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei16_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i32m1x7_tu(vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei16_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei16_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i64m1x7_tu(vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei16_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei16_v_u8mf8x7_tu(vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf8x7_tu(vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei16_v_u8mf4x7_tu(vuint8mf4x7_t vd, const 
uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei16_v_u8mf4x7_tu(vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf4x7_tu(vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei16_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei16_v_u8mf2x7_tu(vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf2x7_tu(vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei16_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei16_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8m1x7_tu(vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei16_v_u16mf4x7_tu(vuint16mf4x7_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei16_v_u16mf4x7_tu(vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16mf4x7_tu(vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei16_v_u16mf2x7_tu(vuint16mf2x7_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei16_v_u16mf2x7_tu(vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16mf2x7_tu(vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei16_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei16_v_u16m1x7_tu(vuint16m1x7_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16m1x7_tu(vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei16_v_u32mf2x7_tu(vuint32mf2x7_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei16_v_u32mf2x7_tu(vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u32mf2x7_tu(vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei16_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei16_v_u32m1x7_tu(vuint32m1x7_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u32m1x7_tu(vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei16_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei16_v_u64m1x7_tu(vuint64m1x7_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u64m1x7_tu(vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei16_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei16_v_f16mf4x7_tum(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei16_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei16_v_f16mf2x7_tum(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei16_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei16_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint16m1_t 
rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f16m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei16_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei16_v_f32mf2x7_tum(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei16_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei16_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f32m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei16_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei16_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f64m1x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei16_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei16_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei16_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei16_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei16_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei16_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_i8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei16_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei16_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_i8m1x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei16_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei16_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei16_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei16_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei16_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei16_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i16m1x7_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei16_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x7_t 
test_vluxseg7ei16_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei16_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei16_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i32m1x7_tum(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei16_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei16_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i64m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei16_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei16_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei16_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei16_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei16_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei16_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei16_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei16_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_u8m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei16_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei16_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei16_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei16_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei16_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei16_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei16_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei16_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t 
test_vluxseg7ei16_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei16_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u32m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei16_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei16_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u64m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei16_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei16_v_f16mf4x7_tumu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei16_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei16_v_f16mf2x7_tumu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei16_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei16_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei16_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei16_v_f32mf2x7_tumu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei16_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei16_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei16_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei16_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei16_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei16_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei16_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei16_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei16_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei16_v_i8mf2x7_tumu(vbool16_t vm, 
vint8mf2x7_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei16_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei16_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_i8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei16_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei16_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei16_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei16_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei16_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei16_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei16_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei16_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei16_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei16_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei16_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei16_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei16_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei16_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei16_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei16_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei16_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei16_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei16_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, 
vuint16m2_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei16_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei16_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei16_v_u16mf4x7_tumu(vbool64_t vm, + vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei16_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei16_v_u16mf2x7_tumu(vbool32_t vm, + vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei16_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei16_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei16_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei16_v_u32mf2x7_tumu(vbool64_t vm, + vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei16_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei16_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei16_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei16_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei16_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei16_v_f16mf4x7_mu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei16_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei16_v_f16mf2x7_mu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei16_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei16_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f16m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei16_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei16_v_f32mf2x7_mu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) 
{ return __riscv_vluxseg7ei16_v_f32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei16_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei16_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f32m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei16_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei16_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_f64m1x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei16_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei16_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_i8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei16_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei16_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_i8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei16_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei16_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_i8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei16_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei16_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_i8m1x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei16_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei16_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei16_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei16_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei16_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei16_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_i16m1x7_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei16_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei16_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei16_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei16_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + 
vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i32m1x7_mu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei16_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei16_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_i64m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei16_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei16_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei16_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei16_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei16_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei16_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei16_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei16_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_u8m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei16_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei16_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei16_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei16_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei16_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei16_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u16m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei16_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei16_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei16_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei16_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_u32m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei16_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x7_t 
test_vluxseg7ei16_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd,
+ const uint64_t *rs1,
+ vuint16mf4_t rs2, size_t vl) {
 return __riscv_vluxseg7ei16_v_u64m1x7_mu(vm, vd, rs1, rs2, vl);
 }

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei32.c
index bb2f0b841..7b82e6bda 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei32.c
@@ -6,418 +6,630 @@

 #include <riscv_vector.h>

-vfloat16mf4x7_t test_vluxseg7ei32_v_f16mf4x7_tu(vfloat16mf4x7_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat16mf4x7_t test_vluxseg7ei32_v_f16mf4x7_tu(vfloat16mf4x7_t vd,
+ const _Float16 *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vluxseg7ei32_v_f16mf4x7_tu(vd, rs1, rs2, vl);
 }

-vfloat16mf2x7_t test_vluxseg7ei32_v_f16mf2x7_tu(vfloat16mf2x7_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat16mf2x7_t test_vluxseg7ei32_v_f16mf2x7_tu(vfloat16mf2x7_t vd,
+ const _Float16 *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vluxseg7ei32_v_f16mf2x7_tu(vd, rs1, rs2, vl);
 }

-vfloat16m1x7_t test_vluxseg7ei32_v_f16m1x7_tu(vfloat16m1x7_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) {
+vfloat16m1x7_t test_vluxseg7ei32_v_f16m1x7_tu(vfloat16m1x7_t vd,
+ const _Float16 *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vluxseg7ei32_v_f16m1x7_tu(vd, rs1, rs2, vl);
 }

-vfloat32mf2x7_t test_vluxseg7ei32_v_f32mf2x7_tu(vfloat32mf2x7_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat32mf2x7_t test_vluxseg7ei32_v_f32mf2x7_tu(vfloat32mf2x7_t vd,
+ const float *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vluxseg7ei32_v_f32mf2x7_tu(vd, rs1, rs2, vl);
 }

-vfloat32m1x7_t test_vluxseg7ei32_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) {
+vfloat32m1x7_t test_vluxseg7ei32_v_f32m1x7_tu(vfloat32m1x7_t vd,
+ const float *rs1, vuint32m1_t rs2,
+ size_t vl) {
 return __riscv_vluxseg7ei32_v_f32m1x7_tu(vd, rs1, rs2, vl);
 }

-vfloat64m1x7_t test_vluxseg7ei32_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) {
+vfloat64m1x7_t test_vluxseg7ei32_v_f64m1x7_tu(vfloat64m1x7_t vd,
+ const double *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vluxseg7ei32_v_f64m1x7_tu(vd, rs1, rs2, vl);
 }

-vint8mf8x7_t test_vluxseg7ei32_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) {
+vint8mf8x7_t test_vluxseg7ei32_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1,
+ vuint32mf2_t rs2, size_t vl) {
 return __riscv_vluxseg7ei32_v_i8mf8x7_tu(vd, rs1, rs2, vl);
 }

-vint8mf4x7_t test_vluxseg7ei32_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) {
+vint8mf4x7_t test_vluxseg7ei32_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1,
+ vuint32m1_t rs2, size_t vl) {
 return __riscv_vluxseg7ei32_v_i8mf4x7_tu(vd, rs1, rs2, vl);
 }

-vint8mf2x7_t test_vluxseg7ei32_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) {
+vint8mf2x7_t test_vluxseg7ei32_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1,
+ vuint32m2_t rs2, size_t vl) {
 return __riscv_vluxseg7ei32_v_i8mf2x7_tu(vd, rs1, rs2, vl);
 }

-vint8m1x7_t test_vluxseg7ei32_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) {
+vint8m1x7_t test_vluxseg7ei32_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1,
+ vuint32m4_t rs2, size_t vl) {
 return __riscv_vluxseg7ei32_v_i8m1x7_tu(vd, rs1, rs2, vl);
 }

-vint16mf4x7_t test_vluxseg7ei32_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1, vuint32mf2_t
rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei32_v_i16mf4x7_tu(vint16mf4x7_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i16mf4x7_tu(vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei32_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei32_v_i16mf2x7_tu(vint16mf2x7_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i16mf2x7_tu(vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei32_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei32_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i16m1x7_tu(vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei32_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei32_v_i32mf2x7_tu(vint32mf2x7_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i32mf2x7_tu(vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei32_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei32_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i32m1x7_tu(vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei32_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei32_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i64m1x7_tu(vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei32_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei32_v_u8mf8x7_tu(vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf8x7_tu(vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei32_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei32_v_u8mf4x7_tu(vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf4x7_tu(vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei32_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei32_v_u8mf2x7_tu(vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf2x7_tu(vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei32_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei32_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8m1x7_tu(vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei32_v_u16mf4x7_tu(vuint16mf4x7_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei32_v_u16mf4x7_tu(vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16mf4x7_tu(vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei32_v_u16mf2x7_tu(vuint16mf2x7_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei32_v_u16mf2x7_tu(vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16mf2x7_tu(vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei32_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { 
+vuint16m1x7_t test_vluxseg7ei32_v_u16m1x7_tu(vuint16m1x7_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16m1x7_tu(vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei32_v_u32mf2x7_tu(vuint32mf2x7_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei32_v_u32mf2x7_tu(vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u32mf2x7_tu(vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei32_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei32_v_u32m1x7_tu(vuint32m1x7_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u32m1x7_tu(vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei32_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei32_v_u64m1x7_tu(vuint64m1x7_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u64m1x7_tu(vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei32_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei32_v_f16mf4x7_tum(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei32_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei32_v_f16mf2x7_tum(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei32_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei32_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f16m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei32_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei32_v_f32mf2x7_tum(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei32_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei32_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f32m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei32_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei32_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f64m1x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei32_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei32_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei32_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t 
vl) { +vint8mf4x7_t test_vluxseg7ei32_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_i8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei32_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei32_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_i8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei32_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei32_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_i8m1x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei32_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei32_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei32_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei32_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei32_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei32_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i16m1x7_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei32_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei32_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei32_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei32_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i32m1x7_tum(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei32_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei32_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i64m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei32_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei32_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei32_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei32_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei32_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t 
vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei32_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei32_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei32_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_u8m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei32_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei32_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei32_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei32_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei32_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei32_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei32_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei32_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei32_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei32_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u32m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei32_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei32_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u64m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei32_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei32_v_f16mf4x7_tumu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei32_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei32_v_f16mf2x7_tumu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei32_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei32_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { 
return __riscv_vluxseg7ei32_v_f16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei32_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei32_v_f32mf2x7_tumu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei32_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei32_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei32_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei32_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei32_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei32_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei32_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei32_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei32_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei32_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei32_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei32_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_i8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei32_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei32_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei32_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei32_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei32_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei32_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei32_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x7_t 
test_vluxseg7ei32_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei32_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei32_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei32_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei32_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei32_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei32_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei32_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei32_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei32_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei32_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei32_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei32_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei32_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei32_v_u16mf4x7_tumu(vbool64_t vm, + vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei32_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei32_v_u16mf2x7_tumu(vbool32_t vm, + vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei32_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei32_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei32_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei32_v_u32mf2x7_tumu(vbool64_t vm, + vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t 
test_vluxseg7ei32_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei32_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei32_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei32_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei32_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei32_v_f16mf4x7_mu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei32_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei32_v_f16mf2x7_mu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei32_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei32_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f16m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei32_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei32_v_f32mf2x7_mu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei32_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei32_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_f32m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei32_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei32_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_f64m1x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei32_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei32_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_i8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei32_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei32_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_i8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei32_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei32_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t 
vl) { return __riscv_vluxseg7ei32_v_i8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei32_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei32_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_i8m1x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei32_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei32_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei32_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei32_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei32_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei32_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_i16m1x7_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei32_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei32_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei32_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei32_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_i32m1x7_mu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei32_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei32_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_i64m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei32_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei32_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei32_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei32_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei32_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei32_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei32_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei32_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint32m4_t 
rs2, + size_t vl) { return __riscv_vluxseg7ei32_v_u8m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei32_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei32_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei32_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei32_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei32_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei32_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u16m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei32_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei32_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei32_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei32_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u32m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei32_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei32_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei32_v_u64m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei64.c index 8d13adbe7..8c6449aa3 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei64.c @@ -6,418 +6,630 @@ #include -vfloat16mf4x7_t test_vluxseg7ei64_v_f16mf4x7_tu(vfloat16mf4x7_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei64_v_f16mf4x7_tu(vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16mf4x7_tu(vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei64_v_f16mf2x7_tu(vfloat16mf2x7_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei64_v_f16mf2x7_tu(vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16mf2x7_tu(vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei64_v_f16m1x7_tu(vfloat16m1x7_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei64_v_f16m1x7_tu(vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16m1x7_tu(vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei64_v_f32mf2x7_tu(vfloat32mf2x7_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei64_v_f32mf2x7_tu(vfloat32mf2x7_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { 
return __riscv_vluxseg7ei64_v_f32mf2x7_tu(vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei64_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei64_v_f32m1x7_tu(vfloat32m1x7_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_f32m1x7_tu(vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei64_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei64_v_f64m1x7_tu(vfloat64m1x7_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f64m1x7_tu(vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei64_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei64_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i8mf8x7_tu(vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei64_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei64_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i8mf4x7_tu(vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei64_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei64_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i8mf2x7_tu(vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei64_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei64_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i8m1x7_tu(vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei64_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei64_v_i16mf4x7_tu(vint16mf4x7_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i16mf4x7_tu(vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei64_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei64_v_i16mf2x7_tu(vint16mf2x7_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i16mf2x7_tu(vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei64_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei64_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i16m1x7_tu(vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei64_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei64_v_i32mf2x7_tu(vint32mf2x7_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i32mf2x7_tu(vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei64_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei64_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i32m1x7_tu(vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei64_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei64_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i64m1x7_tu(vd, rs1, rs2, vl); } 
-vuint8mf8x7_t test_vluxseg7ei64_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei64_v_u8mf8x7_tu(vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf8x7_tu(vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei64_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei64_v_u8mf4x7_tu(vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf4x7_tu(vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei64_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei64_v_u8mf2x7_tu(vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf2x7_tu(vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei64_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei64_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8m1x7_tu(vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei64_v_u16mf4x7_tu(vuint16mf4x7_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei64_v_u16mf4x7_tu(vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16mf4x7_tu(vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei64_v_u16mf2x7_tu(vuint16mf2x7_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei64_v_u16mf2x7_tu(vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16mf2x7_tu(vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei64_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei64_v_u16m1x7_tu(vuint16m1x7_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16m1x7_tu(vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei64_v_u32mf2x7_tu(vuint32mf2x7_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei64_v_u32mf2x7_tu(vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u32mf2x7_tu(vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei64_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei64_v_u32m1x7_tu(vuint32m1x7_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u32m1x7_tu(vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei64_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei64_v_u64m1x7_tu(vuint64m1x7_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u64m1x7_tu(vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei64_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei64_v_f16mf4x7_tum(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei64_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei64_v_f16mf2x7_tum(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + 
vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei64_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei64_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei64_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei64_v_f32mf2x7_tum(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei64_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei64_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f32m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei64_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei64_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f64m1x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei64_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei64_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei64_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei64_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei64_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei64_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei64_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei64_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i8m1x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei64_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei64_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei64_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei64_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei64_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x7_t 
test_vluxseg7ei64_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i16m1x7_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei64_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei64_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei64_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei64_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i32m1x7_tum(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei64_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei64_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i64m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei64_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei64_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei64_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei64_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei64_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei64_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei64_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei64_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_u8m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei64_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei64_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei64_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei64_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei64_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei64_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei64_v_u32mf2x7_tum(vbool64_t vm, 
vuint32mf2x7_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei64_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei64_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei64_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u32m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei64_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei64_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u64m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei64_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei64_v_f16mf4x7_tumu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei64_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei64_v_f16mf2x7_tumu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei64_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei64_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei64_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei64_v_f32mf2x7_tumu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei64_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei64_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei64_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei64_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei64_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei64_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei64_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei64_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, + vuint64m2_t rs2, size_t 
vl) { return __riscv_vluxseg7ei64_v_i8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei64_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei64_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei64_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei64_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei64_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei64_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei64_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei64_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei64_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei64_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei64_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei64_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei64_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei64_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei64_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei64_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei64_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei64_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei64_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei64_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei64_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x7_t 
test_vluxseg7ei64_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei64_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei64_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei64_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei64_v_u16mf4x7_tumu(vbool64_t vm, + vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei64_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei64_v_u16mf2x7_tumu(vbool32_t vm, + vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei64_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei64_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei64_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei64_v_u32mf2x7_tumu(vbool64_t vm, + vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei64_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei64_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei64_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei64_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei64_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei64_v_f16mf4x7_mu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei64_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei64_v_f16mf2x7_mu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei64_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei64_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f16m1x7_mu(vm, vd, rs1, rs2, vl); } 
-vfloat32mf2x7_t test_vluxseg7ei64_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei64_v_f32mf2x7_mu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei64_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei64_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_f32m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei64_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei64_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_f64m1x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei64_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei64_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei64_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei64_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei64_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei64_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei64_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei64_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i8m1x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei64_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei64_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei64_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei64_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei64_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei64_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i16m1x7_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei64_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei64_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_i32mf2x7_mu(vm, vd, 
rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei64_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei64_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i32m1x7_mu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei64_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei64_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_i64m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei64_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei64_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei64_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei64_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei64_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei64_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei64_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei64_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg7ei64_v_u8m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei64_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei64_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei64_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei64_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei64_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei64_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u16m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei64_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei64_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei64_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei64_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { 
return __riscv_vluxseg7ei64_v_u32m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei64_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei64_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg7ei64_v_u64m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei8.c index d5c7b0730..bce16c276 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei8.c @@ -6,418 +6,624 @@ #include -vfloat16mf4x7_t test_vluxseg7ei8_v_f16mf4x7_tu(vfloat16mf4x7_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei8_v_f16mf4x7_tu(vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16mf4x7_tu(vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei8_v_f16mf2x7_tu(vfloat16mf2x7_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei8_v_f16mf2x7_tu(vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16mf2x7_tu(vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei8_v_f16m1x7_tu(vfloat16m1x7_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei8_v_f16m1x7_tu(vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16m1x7_tu(vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei8_v_f32mf2x7_tu(vfloat32mf2x7_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei8_v_f32mf2x7_tu(vfloat32mf2x7_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f32mf2x7_tu(vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei8_v_f32m1x7_tu(vfloat32m1x7_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei8_v_f32m1x7_tu(vfloat32m1x7_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_f32m1x7_tu(vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei8_v_f64m1x7_tu(vfloat64m1x7_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei8_v_f64m1x7_tu(vfloat64m1x7_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_f64m1x7_tu(vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei8_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei8_v_i8mf8x7_tu(vint8mf8x7_t vd, const int8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i8mf8x7_tu(vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei8_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei8_v_i8mf4x7_tu(vint8mf4x7_t vd, const int8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i8mf4x7_tu(vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei8_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei8_v_i8mf2x7_tu(vint8mf2x7_t vd, const int8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i8mf2x7_tu(vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei8_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei8_v_i8m1x7_tu(vint8m1x7_t vd, const int8_t *rs1, + vuint8m1_t rs2, 
size_t vl) { return __riscv_vluxseg7ei8_v_i8m1x7_tu(vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei8_v_i16mf4x7_tu(vint16mf4x7_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei8_v_i16mf4x7_tu(vint16mf4x7_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i16mf4x7_tu(vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei8_v_i16mf2x7_tu(vint16mf2x7_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei8_v_i16mf2x7_tu(vint16mf2x7_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i16mf2x7_tu(vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei8_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei8_v_i16m1x7_tu(vint16m1x7_t vd, const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i16m1x7_tu(vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei8_v_i32mf2x7_tu(vint32mf2x7_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei8_v_i32mf2x7_tu(vint32mf2x7_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i32mf2x7_tu(vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei8_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei8_v_i32m1x7_tu(vint32m1x7_t vd, const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i32m1x7_tu(vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei8_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei8_v_i64m1x7_tu(vint64m1x7_t vd, const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i64m1x7_tu(vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei8_v_u8mf8x7_tu(vuint8mf8x7_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei8_v_u8mf8x7_tu(vuint8mf8x7_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_u8mf8x7_tu(vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei8_v_u8mf4x7_tu(vuint8mf4x7_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei8_v_u8mf4x7_tu(vuint8mf4x7_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_u8mf4x7_tu(vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei8_v_u8mf2x7_tu(vuint8mf2x7_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei8_v_u8mf2x7_tu(vuint8mf2x7_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_u8mf2x7_tu(vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei8_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei8_v_u8m1x7_tu(vuint8m1x7_t vd, const uint8_t *rs1, + vuint8m1_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u8m1x7_tu(vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei8_v_u16mf4x7_tu(vuint16mf4x7_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei8_v_u16mf4x7_tu(vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16mf4x7_tu(vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei8_v_u16mf2x7_tu(vuint16mf2x7_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei8_v_u16mf2x7_tu(vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16mf2x7_tu(vd, rs1, 
rs2, vl); } -vuint16m1x7_t test_vluxseg7ei8_v_u16m1x7_tu(vuint16m1x7_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei8_v_u16m1x7_tu(vuint16m1x7_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16m1x7_tu(vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei8_v_u32mf2x7_tu(vuint32mf2x7_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei8_v_u32mf2x7_tu(vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u32mf2x7_tu(vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei8_v_u32m1x7_tu(vuint32m1x7_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei8_v_u32m1x7_tu(vuint32m1x7_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u32m1x7_tu(vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei8_v_u64m1x7_tu(vuint64m1x7_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei8_v_u64m1x7_tu(vuint64m1x7_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u64m1x7_tu(vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei8_v_f16mf4x7_tum(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei8_v_f16mf4x7_tum(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei8_v_f16mf2x7_tum(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei8_v_f16mf2x7_tum(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei8_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei8_v_f16m1x7_tum(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei8_v_f32mf2x7_tum(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei8_v_f32mf2x7_tum(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei8_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei8_v_f32m1x7_tum(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_f32m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei8_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei8_v_f64m1x7_tum(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f64m1x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei8_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei8_v_i8mf8x7_tum(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t 
test_vluxseg7ei8_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei8_v_i8mf4x7_tum(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei8_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei8_v_i8mf2x7_tum(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei8_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei8_v_i8m1x7_tum(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8m1x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei8_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei8_v_i16mf4x7_tum(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei8_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei8_v_i16mf2x7_tum(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei8_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei8_v_i16m1x7_tum(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i16m1x7_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei8_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei8_v_i32mf2x7_tum(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei8_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei8_v_i32m1x7_tum(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i32m1x7_tum(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei8_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei8_v_i64m1x7_tum(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i64m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei8_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei8_v_u8mf8x7_tum(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u8mf8x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei8_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei8_v_u8mf4x7_tum(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u8mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t 
test_vluxseg7ei8_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei8_v_u8mf2x7_tum(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u8mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei8_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei8_v_u8m1x7_tum(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_u8m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei8_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei8_v_u16mf4x7_tum(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei8_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei8_v_u16mf2x7_tum(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei8_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei8_v_u16m1x7_tum(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei8_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei8_v_u32mf2x7_tum(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u32mf2x7_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei8_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei8_v_u32m1x7_tum(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u32m1x7_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei8_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei8_v_u64m1x7_tum(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u64m1x7_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei8_v_f16mf4x7_tumu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei8_v_f16mf4x7_tumu(vbool64_t vm, + vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei8_v_f16mf2x7_tumu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei8_v_f16mf2x7_tumu(vbool32_t vm, + vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei8_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei8_v_f16m1x7_tumu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint8mf2_t 
rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei8_v_f32mf2x7_tumu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei8_v_f32mf2x7_tumu(vbool64_t vm, + vfloat32mf2x7_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei8_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei8_v_f32m1x7_tumu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei8_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei8_v_f64m1x7_tumu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei8_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei8_v_i8mf8x7_tumu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei8_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei8_v_i8mf4x7_tumu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei8_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei8_v_i8mf2x7_tumu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei8_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei8_v_i8m1x7_tumu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei8_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei8_v_i16mf4x7_tumu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei8_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei8_v_i16mf2x7_tumu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei8_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei8_v_i16m1x7_tumu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei8_v_i32mf2x7_tumu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei8_v_i32mf2x7_tumu(vbool64_t vm, 
vint32mf2x7_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei8_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei8_v_i32m1x7_tumu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei8_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei8_v_i64m1x7_tumu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei8_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei8_v_u8mf8x7_tumu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u8mf8x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei8_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei8_v_u8mf4x7_tumu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u8mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei8_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei8_v_u8mf2x7_tumu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u8mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei8_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei8_v_u8m1x7_tumu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_u8m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei8_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei8_v_u16mf4x7_tumu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei8_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei8_v_u16mf2x7_tumu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei8_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei8_v_u16m1x7_tumu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei8_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei8_v_u32mf2x7_tumu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u32mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei8_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint8mf4_t 
rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei8_v_u32m1x7_tumu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u32m1x7_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei8_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei8_v_u64m1x7_tumu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u64m1x7_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x7_t test_vluxseg7ei8_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat16mf4x7_t test_vluxseg7ei8_v_f16mf4x7_mu(vbool64_t vm, vfloat16mf4x7_t vd, + const _Float16 *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x7_t test_vluxseg7ei8_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat16mf2x7_t test_vluxseg7ei8_v_f16mf2x7_mu(vbool32_t vm, vfloat16mf2x7_t vd, + const _Float16 *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x7_t test_vluxseg7ei8_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) { +vfloat16m1x7_t test_vluxseg7ei8_v_f16m1x7_mu(vbool16_t vm, vfloat16m1x7_t vd, + const _Float16 *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f16m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x7_t test_vluxseg7ei8_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat32mf2x7_t test_vluxseg7ei8_v_f32mf2x7_mu(vbool64_t vm, vfloat32mf2x7_t vd, + const float *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_f32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x7_t test_vluxseg7ei8_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) { +vfloat32m1x7_t test_vluxseg7ei8_v_f32m1x7_mu(vbool32_t vm, vfloat32m1x7_t vd, + const float *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_f32m1x7_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x7_t test_vluxseg7ei8_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) { +vfloat64m1x7_t test_vluxseg7ei8_v_f64m1x7_mu(vbool64_t vm, vfloat64m1x7_t vd, + const double *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_f64m1x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x7_t test_vluxseg7ei8_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint8mf8x7_t test_vluxseg7ei8_v_i8mf8x7_mu(vbool64_t vm, vint8mf8x7_t vd, + const int8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x7_t test_vluxseg7ei8_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint8mf4x7_t test_vluxseg7ei8_v_i8mf4x7_mu(vbool32_t vm, vint8mf4x7_t vd, + const int8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x7_t test_vluxseg7ei8_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint8mf2x7_t test_vluxseg7ei8_v_i8mf2x7_mu(vbool16_t vm, vint8mf2x7_t vd, + const int8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint8m1x7_t test_vluxseg7ei8_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, 
const int8_t *rs1, vuint8m1_t rs2, size_t vl) { +vint8m1x7_t test_vluxseg7ei8_v_i8m1x7_mu(vbool8_t vm, vint8m1x7_t vd, + const int8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i8m1x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x7_t test_vluxseg7ei8_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint16mf4x7_t test_vluxseg7ei8_v_i16mf4x7_mu(vbool64_t vm, vint16mf4x7_t vd, + const int16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x7_t test_vluxseg7ei8_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint16mf2x7_t test_vluxseg7ei8_v_i16mf2x7_mu(vbool32_t vm, vint16mf2x7_t vd, + const int16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint16m1x7_t test_vluxseg7ei8_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vint16m1x7_t test_vluxseg7ei8_v_i16m1x7_mu(vbool16_t vm, vint16m1x7_t vd, + const int16_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i16m1x7_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x7_t test_vluxseg7ei8_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint32mf2x7_t test_vluxseg7ei8_v_i32mf2x7_mu(vbool64_t vm, vint32mf2x7_t vd, + const int32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_i32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vint32m1x7_t test_vluxseg7ei8_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vint32m1x7_t test_vluxseg7ei8_v_i32m1x7_mu(vbool32_t vm, vint32m1x7_t vd, + const int32_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i32m1x7_mu(vm, vd, rs1, rs2, vl); } -vint64m1x7_t test_vluxseg7ei8_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vint64m1x7_t test_vluxseg7ei8_v_i64m1x7_mu(vbool64_t vm, vint64m1x7_t vd, + const int64_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_i64m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x7_t test_vluxseg7ei8_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint8mf8x7_t test_vluxseg7ei8_v_u8mf8x7_mu(vbool64_t vm, vuint8mf8x7_t vd, + const uint8_t *rs1, vuint8mf8_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_u8mf8x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x7_t test_vluxseg7ei8_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint8mf4x7_t test_vluxseg7ei8_v_u8mf4x7_mu(vbool32_t vm, vuint8mf4x7_t vd, + const uint8_t *rs1, vuint8mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_u8mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x7_t test_vluxseg7ei8_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint8mf2x7_t test_vluxseg7ei8_v_u8mf2x7_mu(vbool16_t vm, vuint8mf2x7_t vd, + const uint8_t *rs1, vuint8mf2_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_u8mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x7_t test_vluxseg7ei8_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) { +vuint8m1x7_t test_vluxseg7ei8_v_u8m1x7_mu(vbool8_t vm, vuint8m1x7_t vd, + const uint8_t *rs1, vuint8m1_t rs2, + size_t vl) { return __riscv_vluxseg7ei8_v_u8m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x7_t test_vluxseg7ei8_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, const uint16_t *rs1, 
vuint8mf8_t rs2, size_t vl) { +vuint16mf4x7_t test_vluxseg7ei8_v_u16mf4x7_mu(vbool64_t vm, vuint16mf4x7_t vd, + const uint16_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x7_t test_vluxseg7ei8_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint16mf2x7_t test_vluxseg7ei8_v_u16mf2x7_mu(vbool32_t vm, vuint16mf2x7_t vd, + const uint16_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x7_t test_vluxseg7ei8_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) { +vuint16m1x7_t test_vluxseg7ei8_v_u16m1x7_mu(vbool16_t vm, vuint16m1x7_t vd, + const uint16_t *rs1, + vuint8mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u16m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x7_t test_vluxseg7ei8_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint32mf2x7_t test_vluxseg7ei8_v_u32mf2x7_mu(vbool64_t vm, vuint32mf2x7_t vd, + const uint32_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u32mf2x7_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x7_t test_vluxseg7ei8_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) { +vuint32m1x7_t test_vluxseg7ei8_v_u32m1x7_mu(vbool32_t vm, vuint32m1x7_t vd, + const uint32_t *rs1, + vuint8mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u32m1x7_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x7_t test_vluxseg7ei8_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) { +vuint64m1x7_t test_vluxseg7ei8_v_u64m1x7_mu(vbool64_t vm, vuint64m1x7_t vd, + const uint64_t *rs1, + vuint8mf8_t rs2, size_t vl) { return __riscv_vluxseg7ei8_v_u64m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei16.c index 8fd8b0c1c..08f5f6001 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei16.c @@ -6,418 +6,630 @@ #include -vfloat16mf4x8_t test_vluxseg8ei16_v_f16mf4x8_tu(vfloat16mf4x8_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x8_t test_vluxseg8ei16_v_f16mf4x8_tu(vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16mf4x8_tu(vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vluxseg8ei16_v_f16mf2x8_tu(vfloat16mf2x8_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vluxseg8ei16_v_f16mf2x8_tu(vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16mf2x8_tu(vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vluxseg8ei16_v_f16m1x8_tu(vfloat16m1x8_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x8_t test_vluxseg8ei16_v_f16m1x8_tu(vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16m1x8_tu(vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vluxseg8ei16_v_f32mf2x8_tu(vfloat32mf2x8_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x8_t test_vluxseg8ei16_v_f32mf2x8_tu(vfloat32mf2x8_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f32mf2x8_tu(vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vluxseg8ei16_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x8_t 
test_vluxseg8ei16_v_f32m1x8_tu(vfloat32m1x8_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f32m1x8_tu(vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vluxseg8ei16_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x8_t test_vluxseg8ei16_v_f64m1x8_tu(vfloat64m1x8_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f64m1x8_tu(vd, rs1, rs2, vl); } -vint8mf8x8_t test_vluxseg8ei16_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x8_t test_vluxseg8ei16_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i8mf8x8_tu(vd, rs1, rs2, vl); } -vint8mf4x8_t test_vluxseg8ei16_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x8_t test_vluxseg8ei16_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i8mf4x8_tu(vd, rs1, rs2, vl); } -vint8mf2x8_t test_vluxseg8ei16_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x8_t test_vluxseg8ei16_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i8mf2x8_tu(vd, rs1, rs2, vl); } -vint8m1x8_t test_vluxseg8ei16_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x8_t test_vluxseg8ei16_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i8m1x8_tu(vd, rs1, rs2, vl); } -vint16mf4x8_t test_vluxseg8ei16_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x8_t test_vluxseg8ei16_v_i16mf4x8_tu(vint16mf4x8_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i16mf4x8_tu(vd, rs1, rs2, vl); } -vint16mf2x8_t test_vluxseg8ei16_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x8_t test_vluxseg8ei16_v_i16mf2x8_tu(vint16mf2x8_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i16mf2x8_tu(vd, rs1, rs2, vl); } -vint16m1x8_t test_vluxseg8ei16_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x8_t test_vluxseg8ei16_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i16m1x8_tu(vd, rs1, rs2, vl); } -vint32mf2x8_t test_vluxseg8ei16_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x8_t test_vluxseg8ei16_v_i32mf2x8_tu(vint32mf2x8_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i32mf2x8_tu(vd, rs1, rs2, vl); } -vint32m1x8_t test_vluxseg8ei16_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x8_t test_vluxseg8ei16_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i32m1x8_tu(vd, rs1, rs2, vl); } -vint64m1x8_t test_vluxseg8ei16_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x8_t test_vluxseg8ei16_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i64m1x8_tu(vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vluxseg8ei16_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x8_t test_vluxseg8ei16_v_u8mf8x8_tu(vuint8mf8x8_t vd, + 
const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf8x8_tu(vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vluxseg8ei16_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x8_t test_vluxseg8ei16_v_u8mf4x8_tu(vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf4x8_tu(vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vluxseg8ei16_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x8_t test_vluxseg8ei16_v_u8mf2x8_tu(vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf2x8_tu(vd, rs1, rs2, vl); } -vuint8m1x8_t test_vluxseg8ei16_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x8_t test_vluxseg8ei16_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8m1x8_tu(vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vluxseg8ei16_v_u16mf4x8_tu(vuint16mf4x8_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x8_t test_vluxseg8ei16_v_u16mf4x8_tu(vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16mf4x8_tu(vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vluxseg8ei16_v_u16mf2x8_tu(vuint16mf2x8_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x8_t test_vluxseg8ei16_v_u16mf2x8_tu(vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16mf2x8_tu(vd, rs1, rs2, vl); } -vuint16m1x8_t test_vluxseg8ei16_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x8_t test_vluxseg8ei16_v_u16m1x8_tu(vuint16m1x8_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16m1x8_tu(vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vluxseg8ei16_v_u32mf2x8_tu(vuint32mf2x8_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x8_t test_vluxseg8ei16_v_u32mf2x8_tu(vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u32mf2x8_tu(vd, rs1, rs2, vl); } -vuint32m1x8_t test_vluxseg8ei16_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x8_t test_vluxseg8ei16_v_u32m1x8_tu(vuint32m1x8_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u32m1x8_tu(vd, rs1, rs2, vl); } -vuint64m1x8_t test_vluxseg8ei16_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x8_t test_vluxseg8ei16_v_u64m1x8_tu(vuint64m1x8_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u64m1x8_tu(vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vluxseg8ei16_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x8_t test_vluxseg8ei16_v_f16mf4x8_tum(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vluxseg8ei16_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vluxseg8ei16_v_f16mf2x8_tum(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vluxseg8ei16_v_f16m1x8_tum(vbool16_t vm, 
vfloat16m1x8_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x8_t test_vluxseg8ei16_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vluxseg8ei16_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x8_t test_vluxseg8ei16_v_f32mf2x8_tum(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vluxseg8ei16_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x8_t test_vluxseg8ei16_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f32m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vluxseg8ei16_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x8_t test_vluxseg8ei16_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f64m1x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vluxseg8ei16_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x8_t test_vluxseg8ei16_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vluxseg8ei16_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x8_t test_vluxseg8ei16_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vluxseg8ei16_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x8_t test_vluxseg8ei16_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_i8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vluxseg8ei16_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x8_t test_vluxseg8ei16_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_i8m1x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vluxseg8ei16_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x8_t test_vluxseg8ei16_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vluxseg8ei16_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x8_t test_vluxseg8ei16_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vluxseg8ei16_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x8_t test_vluxseg8ei16_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i16m1x8_tum(vm, vd, rs1, rs2, vl); 
} -vint32mf2x8_t test_vluxseg8ei16_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x8_t test_vluxseg8ei16_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vluxseg8ei16_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x8_t test_vluxseg8ei16_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i32m1x8_tum(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vluxseg8ei16_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x8_t test_vluxseg8ei16_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i64m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vluxseg8ei16_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x8_t test_vluxseg8ei16_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vluxseg8ei16_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x8_t test_vluxseg8ei16_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vluxseg8ei16_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x8_t test_vluxseg8ei16_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vluxseg8ei16_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x8_t test_vluxseg8ei16_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_u8m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vluxseg8ei16_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x8_t test_vluxseg8ei16_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vluxseg8ei16_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x8_t test_vluxseg8ei16_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vluxseg8ei16_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x8_t test_vluxseg8ei16_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vluxseg8ei16_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x8_t test_vluxseg8ei16_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + 
vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vluxseg8ei16_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x8_t test_vluxseg8ei16_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u32m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vluxseg8ei16_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x8_t test_vluxseg8ei16_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u64m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vluxseg8ei16_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x8_t test_vluxseg8ei16_v_f16mf4x8_tumu(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vluxseg8ei16_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vluxseg8ei16_v_f16mf2x8_tumu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vluxseg8ei16_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x8_t test_vluxseg8ei16_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vluxseg8ei16_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x8_t test_vluxseg8ei16_v_f32mf2x8_tumu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vluxseg8ei16_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x8_t test_vluxseg8ei16_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vluxseg8ei16_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x8_t test_vluxseg8ei16_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vluxseg8ei16_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x8_t test_vluxseg8ei16_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vluxseg8ei16_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x8_t test_vluxseg8ei16_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vluxseg8ei16_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, const 
int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x8_t test_vluxseg8ei16_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vluxseg8ei16_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x8_t test_vluxseg8ei16_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_i8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vluxseg8ei16_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x8_t test_vluxseg8ei16_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vluxseg8ei16_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x8_t test_vluxseg8ei16_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vluxseg8ei16_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x8_t test_vluxseg8ei16_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vluxseg8ei16_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x8_t test_vluxseg8ei16_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vluxseg8ei16_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint32m1x8_t test_vluxseg8ei16_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vluxseg8ei16_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x8_t test_vluxseg8ei16_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vluxseg8ei16_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x8_t test_vluxseg8ei16_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vluxseg8ei16_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x8_t test_vluxseg8ei16_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vluxseg8ei16_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x8_t test_vluxseg8ei16_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf2x8_tumu(vm, vd, rs1, rs2, 
vl); } -vuint8m1x8_t test_vluxseg8ei16_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x8_t test_vluxseg8ei16_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vluxseg8ei16_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x8_t test_vluxseg8ei16_v_u16mf4x8_tumu(vbool64_t vm, + vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vluxseg8ei16_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x8_t test_vluxseg8ei16_v_u16mf2x8_tumu(vbool32_t vm, + vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vluxseg8ei16_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x8_t test_vluxseg8ei16_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vluxseg8ei16_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x8_t test_vluxseg8ei16_v_u32mf2x8_tumu(vbool64_t vm, + vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vluxseg8ei16_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x8_t test_vluxseg8ei16_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vluxseg8ei16_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x8_t test_vluxseg8ei16_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vluxseg8ei16_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat16mf4x8_t test_vluxseg8ei16_v_f16mf4x8_mu(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vluxseg8ei16_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vluxseg8ei16_v_f16mf2x8_mu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vluxseg8ei16_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint16m1_t rs2, size_t vl) { +vfloat16m1x8_t test_vluxseg8ei16_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f16m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vluxseg8ei16_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat32mf2x8_t 
test_vluxseg8ei16_v_f32mf2x8_mu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vluxseg8ei16_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint16mf2_t rs2, size_t vl) { +vfloat32m1x8_t test_vluxseg8ei16_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f32m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vluxseg8ei16_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint16mf4_t rs2, size_t vl) { +vfloat64m1x8_t test_vluxseg8ei16_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_f64m1x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vluxseg8ei16_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint8mf8x8_t test_vluxseg8ei16_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_i8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vluxseg8ei16_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint8mf4x8_t test_vluxseg8ei16_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_i8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vluxseg8ei16_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint16m1_t rs2, size_t vl) { +vint8mf2x8_t test_vluxseg8ei16_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_i8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vluxseg8ei16_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint16m2_t rs2, size_t vl) { +vint8m1x8_t test_vluxseg8ei16_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_i8m1x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vluxseg8ei16_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint16mf4x8_t test_vluxseg8ei16_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vluxseg8ei16_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vint16mf2x8_t test_vluxseg8ei16_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vluxseg8ei16_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint16m1_t rs2, size_t vl) { +vint16m1x8_t test_vluxseg8ei16_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_i16m1x8_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vluxseg8ei16_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint32mf2x8_t test_vluxseg8ei16_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vluxseg8ei16_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint16mf2_t rs2, 
size_t vl) { +vint32m1x8_t test_vluxseg8ei16_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i32m1x8_mu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vluxseg8ei16_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vint64m1x8_t test_vluxseg8ei16_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_i64m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vluxseg8ei16_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint8mf8x8_t test_vluxseg8ei16_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vluxseg8ei16_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint8mf4x8_t test_vluxseg8ei16_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vluxseg8ei16_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint8mf2x8_t test_vluxseg8ei16_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vluxseg8ei16_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint16m2_t rs2, size_t vl) { +vuint8m1x8_t test_vluxseg8ei16_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_u8m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vluxseg8ei16_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint16mf4x8_t test_vluxseg8ei16_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vluxseg8ei16_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint16mf2x8_t test_vluxseg8ei16_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vluxseg8ei16_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint16m1_t rs2, size_t vl) { +vuint16m1x8_t test_vluxseg8ei16_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u16m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vluxseg8ei16_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint32mf2x8_t test_vluxseg8ei16_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vluxseg8ei16_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint16mf2_t rs2, size_t vl) { +vuint32m1x8_t test_vluxseg8ei16_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u32m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t 
test_vluxseg8ei16_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint16mf4_t rs2, size_t vl) { +vuint64m1x8_t test_vluxseg8ei16_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_u64m1x8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei32.c index fa316b6ed..efe20db35 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei32.c @@ -6,418 +6,630 @@ #include <riscv_vector.h> -vfloat16mf4x8_t test_vluxseg8ei32_v_f16mf4x8_tu(vfloat16mf4x8_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x8_t test_vluxseg8ei32_v_f16mf4x8_tu(vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16mf4x8_tu(vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vluxseg8ei32_v_f16mf2x8_tu(vfloat16mf2x8_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x8_t test_vluxseg8ei32_v_f16mf2x8_tu(vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16mf2x8_tu(vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vluxseg8ei32_v_f16m1x8_tu(vfloat16m1x8_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x8_t test_vluxseg8ei32_v_f16m1x8_tu(vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16m1x8_tu(vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vluxseg8ei32_v_f32mf2x8_tu(vfloat32mf2x8_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x8_t test_vluxseg8ei32_v_f32mf2x8_tu(vfloat32mf2x8_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f32mf2x8_tu(vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vluxseg8ei32_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x8_t test_vluxseg8ei32_v_f32m1x8_tu(vfloat32m1x8_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_f32m1x8_tu(vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vluxseg8ei32_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x8_t test_vluxseg8ei32_v_f64m1x8_tu(vfloat64m1x8_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f64m1x8_tu(vd, rs1, rs2, vl); } -vint8mf8x8_t test_vluxseg8ei32_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x8_t test_vluxseg8ei32_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i8mf8x8_tu(vd, rs1, rs2, vl); } -vint8mf4x8_t test_vluxseg8ei32_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x8_t test_vluxseg8ei32_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i8mf4x8_tu(vd, rs1, rs2, vl); } -vint8mf2x8_t test_vluxseg8ei32_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x8_t test_vluxseg8ei32_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i8mf2x8_tu(vd, rs1, rs2, vl); } -vint8m1x8_t test_vluxseg8ei32_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x8_t test_vluxseg8ei32_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, + vuint32m4_t rs2, size_t vl) { return
__riscv_vluxseg8ei32_v_i8m1x8_tu(vd, rs1, rs2, vl); } -vint16mf4x8_t test_vluxseg8ei32_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x8_t test_vluxseg8ei32_v_i16mf4x8_tu(vint16mf4x8_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i16mf4x8_tu(vd, rs1, rs2, vl); } -vint16mf2x8_t test_vluxseg8ei32_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x8_t test_vluxseg8ei32_v_i16mf2x8_tu(vint16mf2x8_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i16mf2x8_tu(vd, rs1, rs2, vl); } -vint16m1x8_t test_vluxseg8ei32_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x8_t test_vluxseg8ei32_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i16m1x8_tu(vd, rs1, rs2, vl); } -vint32mf2x8_t test_vluxseg8ei32_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x8_t test_vluxseg8ei32_v_i32mf2x8_tu(vint32mf2x8_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i32mf2x8_tu(vd, rs1, rs2, vl); } -vint32m1x8_t test_vluxseg8ei32_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x8_t test_vluxseg8ei32_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i32m1x8_tu(vd, rs1, rs2, vl); } -vint64m1x8_t test_vluxseg8ei32_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x8_t test_vluxseg8ei32_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i64m1x8_tu(vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vluxseg8ei32_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x8_t test_vluxseg8ei32_v_u8mf8x8_tu(vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf8x8_tu(vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vluxseg8ei32_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x8_t test_vluxseg8ei32_v_u8mf4x8_tu(vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf4x8_tu(vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vluxseg8ei32_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x8_t test_vluxseg8ei32_v_u8mf2x8_tu(vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf2x8_tu(vd, rs1, rs2, vl); } -vuint8m1x8_t test_vluxseg8ei32_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x8_t test_vluxseg8ei32_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8m1x8_tu(vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vluxseg8ei32_v_u16mf4x8_tu(vuint16mf4x8_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x8_t test_vluxseg8ei32_v_u16mf4x8_tu(vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u16mf4x8_tu(vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vluxseg8ei32_v_u16mf2x8_tu(vuint16mf2x8_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x8_t test_vluxseg8ei32_v_u16mf2x8_tu(vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return 
__riscv_vluxseg8ei32_v_u16mf2x8_tu(vd, rs1, rs2, vl); } -vuint16m1x8_t test_vluxseg8ei32_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x8_t test_vluxseg8ei32_v_u16m1x8_tu(vuint16m1x8_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u16m1x8_tu(vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vluxseg8ei32_v_u32mf2x8_tu(vuint32mf2x8_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x8_t test_vluxseg8ei32_v_u32mf2x8_tu(vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u32mf2x8_tu(vd, rs1, rs2, vl); } -vuint32m1x8_t test_vluxseg8ei32_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x8_t test_vluxseg8ei32_v_u32m1x8_tu(vuint32m1x8_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u32m1x8_tu(vd, rs1, rs2, vl); } -vuint64m1x8_t test_vluxseg8ei32_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x8_t test_vluxseg8ei32_v_u64m1x8_tu(vuint64m1x8_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u64m1x8_tu(vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vluxseg8ei32_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x8_t test_vluxseg8ei32_v_f16mf4x8_tum(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vluxseg8ei32_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x8_t test_vluxseg8ei32_v_f16mf2x8_tum(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vluxseg8ei32_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x8_t test_vluxseg8ei32_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vluxseg8ei32_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x8_t test_vluxseg8ei32_v_f32mf2x8_tum(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vluxseg8ei32_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x8_t test_vluxseg8ei32_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f32m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vluxseg8ei32_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x8_t test_vluxseg8ei32_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f64m1x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vluxseg8ei32_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x8_t test_vluxseg8ei32_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return 
__riscv_vluxseg8ei32_v_i8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vluxseg8ei32_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x8_t test_vluxseg8ei32_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_i8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vluxseg8ei32_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x8_t test_vluxseg8ei32_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_i8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vluxseg8ei32_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x8_t test_vluxseg8ei32_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_i8m1x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vluxseg8ei32_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x8_t test_vluxseg8ei32_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vluxseg8ei32_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x8_t test_vluxseg8ei32_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vluxseg8ei32_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x8_t test_vluxseg8ei32_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i16m1x8_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vluxseg8ei32_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x8_t test_vluxseg8ei32_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vluxseg8ei32_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x8_t test_vluxseg8ei32_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i32m1x8_tum(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vluxseg8ei32_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x8_t test_vluxseg8ei32_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i64m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vluxseg8ei32_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x8_t test_vluxseg8ei32_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vluxseg8ei32_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x8_t test_vluxseg8ei32_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t 
*rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vluxseg8ei32_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x8_t test_vluxseg8ei32_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vluxseg8ei32_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x8_t test_vluxseg8ei32_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_u8m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vluxseg8ei32_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x8_t test_vluxseg8ei32_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vluxseg8ei32_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x8_t test_vluxseg8ei32_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vluxseg8ei32_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x8_t test_vluxseg8ei32_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u16m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vluxseg8ei32_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x8_t test_vluxseg8ei32_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vluxseg8ei32_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x8_t test_vluxseg8ei32_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u32m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vluxseg8ei32_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x8_t test_vluxseg8ei32_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u64m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vluxseg8ei32_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x8_t test_vluxseg8ei32_v_f16mf4x8_tumu(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vluxseg8ei32_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x8_t test_vluxseg8ei32_v_f16mf2x8_tumu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vluxseg8ei32_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, const 
_Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x8_t test_vluxseg8ei32_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vluxseg8ei32_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x8_t test_vluxseg8ei32_v_f32mf2x8_tumu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vluxseg8ei32_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x8_t test_vluxseg8ei32_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vluxseg8ei32_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x8_t test_vluxseg8ei32_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vluxseg8ei32_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x8_t test_vluxseg8ei32_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vluxseg8ei32_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x8_t test_vluxseg8ei32_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vluxseg8ei32_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x8_t test_vluxseg8ei32_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vluxseg8ei32_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x8_t test_vluxseg8ei32_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_i8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vluxseg8ei32_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x8_t test_vluxseg8ei32_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vluxseg8ei32_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x8_t test_vluxseg8ei32_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vluxseg8ei32_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x8_t test_vluxseg8ei32_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i16m1x8_tumu(vm, vd, rs1, rs2, 
vl); } -vint32mf2x8_t test_vluxseg8ei32_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x8_t test_vluxseg8ei32_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vluxseg8ei32_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x8_t test_vluxseg8ei32_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vluxseg8ei32_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x8_t test_vluxseg8ei32_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vluxseg8ei32_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x8_t test_vluxseg8ei32_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf8x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vluxseg8ei32_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x8_t test_vluxseg8ei32_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vluxseg8ei32_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x8_t test_vluxseg8ei32_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vluxseg8ei32_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x8_t test_vluxseg8ei32_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, + vuint32m4_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vluxseg8ei32_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x8_t test_vluxseg8ei32_v_u16mf4x8_tumu(vbool64_t vm, + vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vluxseg8ei32_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x8_t test_vluxseg8ei32_v_u16mf2x8_tumu(vbool32_t vm, + vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vluxseg8ei32_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x8_t test_vluxseg8ei32_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vluxseg8ei32_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x8_t test_vluxseg8ei32_v_u32mf2x8_tumu(vbool64_t vm, + 
vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u32mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vluxseg8ei32_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x8_t test_vluxseg8ei32_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u32m1x8_tumu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vluxseg8ei32_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x8_t test_vluxseg8ei32_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u64m1x8_tumu(vm, vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vluxseg8ei32_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat16mf4x8_t test_vluxseg8ei32_v_f16mf4x8_mu(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vluxseg8ei32_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint32m1_t rs2, size_t vl) { +vfloat16mf2x8_t test_vluxseg8ei32_v_f16mf2x8_mu(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vluxseg8ei32_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint32m2_t rs2, size_t vl) { +vfloat16m1x8_t test_vluxseg8ei32_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f16m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vluxseg8ei32_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat32mf2x8_t test_vluxseg8ei32_v_f32mf2x8_mu(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vluxseg8ei32_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint32m1_t rs2, size_t vl) { +vfloat32m1x8_t test_vluxseg8ei32_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_f32m1x8_mu(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vluxseg8ei32_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint32mf2_t rs2, size_t vl) { +vfloat64m1x8_t test_vluxseg8ei32_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_f64m1x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vluxseg8ei32_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint8mf8x8_t test_vluxseg8ei32_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, vuint32mf2_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_i8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vluxseg8ei32_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint32m1_t rs2, size_t vl) { +vint8mf4x8_t test_vluxseg8ei32_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_i8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vluxseg8ei32_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, const int8_t 
*rs1, vuint32m2_t rs2, size_t vl) { +vint8mf2x8_t test_vluxseg8ei32_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_i8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vluxseg8ei32_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint32m4_t rs2, size_t vl) { +vint8m1x8_t test_vluxseg8ei32_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_i8m1x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vluxseg8ei32_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint16mf4x8_t test_vluxseg8ei32_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vluxseg8ei32_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint32m1_t rs2, size_t vl) { +vint16mf2x8_t test_vluxseg8ei32_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint16m1x8_t test_vluxseg8ei32_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint32m2_t rs2, size_t vl) { +vint16m1x8_t test_vluxseg8ei32_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, vuint32m2_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_i16m1x8_mu(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vluxseg8ei32_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint32mf2x8_t test_vluxseg8ei32_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vluxseg8ei32_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint32m1_t rs2, size_t vl) { +vint32m1x8_t test_vluxseg8ei32_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, vuint32m1_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_i32m1x8_mu(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vluxseg8ei32_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vint64m1x8_t test_vluxseg8ei32_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_i64m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vluxseg8ei32_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint8mf8x8_t test_vluxseg8ei32_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf8x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vluxseg8ei32_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint8mf4x8_t test_vluxseg8ei32_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vluxseg8ei32_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint8mf2x8_t test_vluxseg8ei32_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u8mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vluxseg8ei32_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, 
const uint8_t *rs1, vuint32m4_t rs2, size_t vl) { +vuint8m1x8_t test_vluxseg8ei32_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint32m4_t rs2, + size_t vl) { return __riscv_vluxseg8ei32_v_u8m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vluxseg8ei32_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint16mf4x8_t test_vluxseg8ei32_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vluxseg8ei32_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint16mf2x8_t test_vluxseg8ei32_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vluxseg8ei32_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint32m2_t rs2, size_t vl) { +vuint16m1x8_t test_vluxseg8ei32_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint32m2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u16m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vluxseg8ei32_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint32mf2x8_t test_vluxseg8ei32_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u32mf2x8_mu(vm, vd, rs1, rs2, vl); } -vuint32m1x8_t test_vluxseg8ei32_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint32m1_t rs2, size_t vl) { +vuint32m1x8_t test_vluxseg8ei32_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, + const uint32_t *rs1, + vuint32m1_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u32m1x8_mu(vm, vd, rs1, rs2, vl); } -vuint64m1x8_t test_vluxseg8ei32_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint32mf2_t rs2, size_t vl) { +vuint64m1x8_t test_vluxseg8ei32_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, + const uint64_t *rs1, + vuint32mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei32_v_u64m1x8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei64.c index 43af6fe46..249ecb665 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei64.c @@ -6,418 +6,630 @@ #include <riscv_vector.h> -vfloat16mf4x8_t test_vluxseg8ei64_v_f16mf4x8_tu(vfloat16mf4x8_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x8_t test_vluxseg8ei64_v_f16mf4x8_tu(vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_f16mf4x8_tu(vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vluxseg8ei64_v_f16mf2x8_tu(vfloat16mf2x8_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vluxseg8ei64_v_f16mf2x8_tu(vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_f16mf2x8_tu(vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vluxseg8ei64_v_f16m1x8_tu(vfloat16m1x8_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x8_t test_vluxseg8ei64_v_f16m1x8_tu(vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_f16m1x8_tu(vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vluxseg8ei64_v_f32mf2x8_tu(vfloat32mf2x8_t vd, const float
*rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x8_t test_vluxseg8ei64_v_f32mf2x8_tu(vfloat32mf2x8_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_f32mf2x8_tu(vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vluxseg8ei64_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x8_t test_vluxseg8ei64_v_f32m1x8_tu(vfloat32m1x8_t vd, + const float *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg8ei64_v_f32m1x8_tu(vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vluxseg8ei64_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x8_t test_vluxseg8ei64_v_f64m1x8_tu(vfloat64m1x8_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_f64m1x8_tu(vd, rs1, rs2, vl); } -vint8mf8x8_t test_vluxseg8ei64_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x8_t test_vluxseg8ei64_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i8mf8x8_tu(vd, rs1, rs2, vl); } -vint8mf4x8_t test_vluxseg8ei64_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x8_t test_vluxseg8ei64_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i8mf4x8_tu(vd, rs1, rs2, vl); } -vint8mf2x8_t test_vluxseg8ei64_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x8_t test_vluxseg8ei64_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i8mf2x8_tu(vd, rs1, rs2, vl); } -vint8m1x8_t test_vluxseg8ei64_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x8_t test_vluxseg8ei64_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i8m1x8_tu(vd, rs1, rs2, vl); } -vint16mf4x8_t test_vluxseg8ei64_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x8_t test_vluxseg8ei64_v_i16mf4x8_tu(vint16mf4x8_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i16mf4x8_tu(vd, rs1, rs2, vl); } -vint16mf2x8_t test_vluxseg8ei64_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x8_t test_vluxseg8ei64_v_i16mf2x8_tu(vint16mf2x8_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i16mf2x8_tu(vd, rs1, rs2, vl); } -vint16m1x8_t test_vluxseg8ei64_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x8_t test_vluxseg8ei64_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i16m1x8_tu(vd, rs1, rs2, vl); } -vint32mf2x8_t test_vluxseg8ei64_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x8_t test_vluxseg8ei64_v_i32mf2x8_tu(vint32mf2x8_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i32mf2x8_tu(vd, rs1, rs2, vl); } -vint32m1x8_t test_vluxseg8ei64_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x8_t test_vluxseg8ei64_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i32m1x8_tu(vd, rs1, rs2, vl); } -vint64m1x8_t test_vluxseg8ei64_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x8_t 
test_vluxseg8ei64_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i64m1x8_tu(vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vluxseg8ei64_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x8_t test_vluxseg8ei64_v_u8mf8x8_tu(vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u8mf8x8_tu(vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vluxseg8ei64_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x8_t test_vluxseg8ei64_v_u8mf4x8_tu(vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u8mf4x8_tu(vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vluxseg8ei64_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x8_t test_vluxseg8ei64_v_u8mf2x8_tu(vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u8mf2x8_tu(vd, rs1, rs2, vl); } -vuint8m1x8_t test_vluxseg8ei64_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x8_t test_vluxseg8ei64_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, + vuint64m8_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u8m1x8_tu(vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vluxseg8ei64_v_u16mf4x8_tu(vuint16mf4x8_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x8_t test_vluxseg8ei64_v_u16mf4x8_tu(vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u16mf4x8_tu(vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vluxseg8ei64_v_u16mf2x8_tu(vuint16mf2x8_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x8_t test_vluxseg8ei64_v_u16mf2x8_tu(vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u16mf2x8_tu(vd, rs1, rs2, vl); } -vuint16m1x8_t test_vluxseg8ei64_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x8_t test_vluxseg8ei64_v_u16m1x8_tu(vuint16m1x8_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u16m1x8_tu(vd, rs1, rs2, vl); } -vuint32mf2x8_t test_vluxseg8ei64_v_u32mf2x8_tu(vuint32mf2x8_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint32mf2x8_t test_vluxseg8ei64_v_u32mf2x8_tu(vuint32mf2x8_t vd, + const uint32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u32mf2x8_tu(vd, rs1, rs2, vl); } -vuint32m1x8_t test_vluxseg8ei64_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint32m1x8_t test_vluxseg8ei64_v_u32m1x8_tu(vuint32m1x8_t vd, + const uint32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u32m1x8_tu(vd, rs1, rs2, vl); } -vuint64m1x8_t test_vluxseg8ei64_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint64m1x8_t test_vluxseg8ei64_v_u64m1x8_tu(vuint64m1x8_t vd, + const uint64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u64m1x8_tu(vd, rs1, rs2, vl); } -vfloat16mf4x8_t test_vluxseg8ei64_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) { +vfloat16mf4x8_t test_vluxseg8ei64_v_f16mf4x8_tum(vbool64_t vm, + vfloat16mf4x8_t vd, + const _Float16 *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_f16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16mf2x8_t test_vluxseg8ei64_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, 
const _Float16 *rs1, vuint64m2_t rs2, size_t vl) { +vfloat16mf2x8_t test_vluxseg8ei64_v_f16mf2x8_tum(vbool32_t vm, + vfloat16mf2x8_t vd, + const _Float16 *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_f16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat16m1x8_t test_vluxseg8ei64_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) { +vfloat16m1x8_t test_vluxseg8ei64_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, + const _Float16 *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_f16m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32mf2x8_t test_vluxseg8ei64_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) { +vfloat32mf2x8_t test_vluxseg8ei64_v_f32mf2x8_tum(vbool64_t vm, + vfloat32mf2x8_t vd, + const float *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_f32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vfloat32m1x8_t test_vluxseg8ei64_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) { +vfloat32m1x8_t test_vluxseg8ei64_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, + const float *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_f32m1x8_tum(vm, vd, rs1, rs2, vl); } -vfloat64m1x8_t test_vluxseg8ei64_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) { +vfloat64m1x8_t test_vluxseg8ei64_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, + const double *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_f64m1x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf8x8_t test_vluxseg8ei64_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) { +vint8mf8x8_t test_vluxseg8ei64_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, + const int8_t *rs1, vuint64m1_t rs2, + size_t vl) { return __riscv_vluxseg8ei64_v_i8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf4x8_t test_vluxseg8ei64_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) { +vint8mf4x8_t test_vluxseg8ei64_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, + const int8_t *rs1, vuint64m2_t rs2, + size_t vl) { return __riscv_vluxseg8ei64_v_i8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint8mf2x8_t test_vluxseg8ei64_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) { +vint8mf2x8_t test_vluxseg8ei64_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, + const int8_t *rs1, vuint64m4_t rs2, + size_t vl) { return __riscv_vluxseg8ei64_v_i8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint8m1x8_t test_vluxseg8ei64_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) { +vint8m1x8_t test_vluxseg8ei64_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, + const int8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg8ei64_v_i8m1x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf4x8_t test_vluxseg8ei64_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) { +vint16mf4x8_t test_vluxseg8ei64_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, + const int16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vint16mf2x8_t test_vluxseg8ei64_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) { +vint16mf2x8_t test_vluxseg8ei64_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, + const int16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint16m1x8_t 
test_vluxseg8ei64_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) { +vint16m1x8_t test_vluxseg8ei64_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, + const int16_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i16m1x8_tum(vm, vd, rs1, rs2, vl); } -vint32mf2x8_t test_vluxseg8ei64_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) { +vint32mf2x8_t test_vluxseg8ei64_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, + const int32_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i32mf2x8_tum(vm, vd, rs1, rs2, vl); } -vint32m1x8_t test_vluxseg8ei64_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) { +vint32m1x8_t test_vluxseg8ei64_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, + const int32_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i32m1x8_tum(vm, vd, rs1, rs2, vl); } -vint64m1x8_t test_vluxseg8ei64_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) { +vint64m1x8_t test_vluxseg8ei64_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, + const int64_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_i64m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf8x8_t test_vluxseg8ei64_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint8mf8x8_t test_vluxseg8ei64_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, + const uint8_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u8mf8x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf4x8_t test_vluxseg8ei64_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint8mf4x8_t test_vluxseg8ei64_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, + const uint8_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u8mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint8mf2x8_t test_vluxseg8ei64_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint8mf2x8_t test_vluxseg8ei64_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, + const uint8_t *rs1, + vuint64m4_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u8mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint8m1x8_t test_vluxseg8ei64_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) { +vuint8m1x8_t test_vluxseg8ei64_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, + const uint8_t *rs1, vuint64m8_t rs2, + size_t vl) { return __riscv_vluxseg8ei64_v_u8m1x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf4x8_t test_vluxseg8ei64_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) { +vuint16mf4x8_t test_vluxseg8ei64_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, + const uint16_t *rs1, + vuint64m1_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vuint16mf2x8_t test_vluxseg8ei64_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) { +vuint16mf2x8_t test_vluxseg8ei64_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, + const uint16_t *rs1, + vuint64m2_t rs2, size_t vl) { return __riscv_vluxseg8ei64_v_u16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vuint16m1x8_t test_vluxseg8ei64_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) { +vuint16m1x8_t test_vluxseg8ei64_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, + const uint16_t *rs1, + vuint64m4_t rs2, size_t vl) { return 
 }
-vuint32mf2x8_t test_vluxseg8ei64_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x8_t test_vluxseg8ei64_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd,
+ const uint32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u32mf2x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint32m1x8_t test_vluxseg8ei64_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x8_t test_vluxseg8ei64_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd,
+ const uint32_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u32m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint64m1x8_t test_vluxseg8ei64_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x8_t test_vluxseg8ei64_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd,
+ const uint64_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u64m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf4x8_t test_vluxseg8ei64_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4x8_t test_vluxseg8ei64_v_f16mf4x8_tumu(vbool64_t vm,
+ vfloat16mf4x8_t vd,
+ const _Float16 *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_f16mf4x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf2x8_t test_vluxseg8ei64_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2x8_t test_vluxseg8ei64_v_f16mf2x8_tumu(vbool32_t vm,
+ vfloat16mf2x8_t vd,
+ const _Float16 *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_f16mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat16m1x8_t test_vluxseg8ei64_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1x8_t test_vluxseg8ei64_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd,
+ const _Float16 *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_f16m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat32mf2x8_t test_vluxseg8ei64_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2x8_t test_vluxseg8ei64_v_f32mf2x8_tumu(vbool64_t vm,
+ vfloat32mf2x8_t vd,
+ const float *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_f32mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m1x8_t test_vluxseg8ei64_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1x8_t test_vluxseg8ei64_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd,
+ const float *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_f32m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m1x8_t test_vluxseg8ei64_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1x8_t test_vluxseg8ei64_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd,
+ const double *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_f64m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8mf8x8_t test_vluxseg8ei64_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint8mf8x8_t test_vluxseg8ei64_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd,
+ const int8_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i8mf8x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8mf4x8_t test_vluxseg8ei64_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint8mf4x8_t test_vluxseg8ei64_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd,
+ const int8_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i8mf4x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8mf2x8_t test_vluxseg8ei64_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint8mf2x8_t test_vluxseg8ei64_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd,
+ const int8_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i8mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8m1x8_t test_vluxseg8ei64_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint8m1x8_t test_vluxseg8ei64_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd,
+ const int8_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei64_v_i8m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16mf4x8_t test_vluxseg8ei64_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint16mf4x8_t test_vluxseg8ei64_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd,
+ const int16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i16mf4x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16mf2x8_t test_vluxseg8ei64_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint16mf2x8_t test_vluxseg8ei64_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd,
+ const int16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i16mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16m1x8_t test_vluxseg8ei64_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint16m1x8_t test_vluxseg8ei64_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd,
+ const int16_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i16m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint32mf2x8_t test_vluxseg8ei64_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint32mf2x8_t test_vluxseg8ei64_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd,
+ const int32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i32mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint32m1x8_t test_vluxseg8ei64_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint32m1x8_t test_vluxseg8ei64_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd,
+ const int32_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i32m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint64m1x8_t test_vluxseg8ei64_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint64m1x8_t test_vluxseg8ei64_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd,
+ const int64_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i64m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf8x8_t test_vluxseg8ei64_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint8mf8x8_t test_vluxseg8ei64_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd,
+ const uint8_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u8mf8x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf4x8_t test_vluxseg8ei64_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint8mf4x8_t test_vluxseg8ei64_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd,
+ const uint8_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u8mf4x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf2x8_t test_vluxseg8ei64_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint8mf2x8_t test_vluxseg8ei64_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd,
+ const uint8_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u8mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8m1x8_t test_vluxseg8ei64_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1x8_t test_vluxseg8ei64_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd,
+ const uint8_t *rs1,
+ vuint64m8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u8m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf4x8_t test_vluxseg8ei64_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4x8_t test_vluxseg8ei64_v_u16mf4x8_tumu(vbool64_t vm,
+ vuint16mf4x8_t vd,
+ const uint16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u16mf4x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf2x8_t test_vluxseg8ei64_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2x8_t test_vluxseg8ei64_v_u16mf2x8_tumu(vbool32_t vm,
+ vuint16mf2x8_t vd,
+ const uint16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u16mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16m1x8_t test_vluxseg8ei64_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1x8_t test_vluxseg8ei64_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd,
+ const uint16_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u16m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint32mf2x8_t test_vluxseg8ei64_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x8_t test_vluxseg8ei64_v_u32mf2x8_tumu(vbool64_t vm,
+ vuint32mf2x8_t vd,
+ const uint32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u32mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint32m1x8_t test_vluxseg8ei64_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x8_t test_vluxseg8ei64_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd,
+ const uint32_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u32m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint64m1x8_t test_vluxseg8ei64_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x8_t test_vluxseg8ei64_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd,
+ const uint64_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u64m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf4x8_t test_vluxseg8ei64_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat16mf4x8_t test_vluxseg8ei64_v_f16mf4x8_mu(vbool64_t vm,
+ vfloat16mf4x8_t vd,
+ const _Float16 *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_f16mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf2x8_t test_vluxseg8ei64_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat16mf2x8_t test_vluxseg8ei64_v_f16mf2x8_mu(vbool32_t vm,
+ vfloat16mf2x8_t vd,
+ const _Float16 *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_f16mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat16m1x8_t test_vluxseg8ei64_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint64m4_t rs2, size_t vl) {
+vfloat16m1x8_t test_vluxseg8ei64_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd,
+ const _Float16 *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_f16m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat32mf2x8_t test_vluxseg8ei64_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat32mf2x8_t test_vluxseg8ei64_v_f32mf2x8_mu(vbool64_t vm,
+ vfloat32mf2x8_t vd,
+ const float *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_f32mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m1x8_t test_vluxseg8ei64_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint64m2_t rs2, size_t vl) {
+vfloat32m1x8_t test_vluxseg8ei64_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd,
+ const float *rs1, vuint64m2_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei64_v_f32m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m1x8_t test_vluxseg8ei64_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint64m1_t rs2, size_t vl) {
+vfloat64m1x8_t test_vluxseg8ei64_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd,
+ const double *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_f64m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint8mf8x8_t test_vluxseg8ei64_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint8mf8x8_t test_vluxseg8ei64_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd,
+ const int8_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei64_v_i8mf8x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint8mf4x8_t test_vluxseg8ei64_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint8mf4x8_t test_vluxseg8ei64_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd,
+ const int8_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei64_v_i8mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint8mf2x8_t test_vluxseg8ei64_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint8mf2x8_t test_vluxseg8ei64_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd,
+ const int8_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei64_v_i8mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint8m1x8_t test_vluxseg8ei64_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vint8m1x8_t test_vluxseg8ei64_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd,
+ const int8_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei64_v_i8m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint16mf4x8_t test_vluxseg8ei64_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint16mf4x8_t test_vluxseg8ei64_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd,
+ const int16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i16mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint16mf2x8_t test_vluxseg8ei64_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint16mf2x8_t test_vluxseg8ei64_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd,
+ const int16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i16mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint16m1x8_t test_vluxseg8ei64_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vint16m1x8_t test_vluxseg8ei64_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd,
+ const int16_t *rs1, vuint64m4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei64_v_i16m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint32mf2x8_t test_vluxseg8ei64_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint32mf2x8_t test_vluxseg8ei64_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd,
+ const int32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_i32mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint32m1x8_t test_vluxseg8ei64_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vint32m1x8_t test_vluxseg8ei64_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd,
+ const int32_t *rs1, vuint64m2_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei64_v_i32m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint64m1x8_t test_vluxseg8ei64_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vint64m1x8_t test_vluxseg8ei64_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd,
+ const int64_t *rs1, vuint64m1_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei64_v_i64m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf8x8_t test_vluxseg8ei64_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint8mf8x8_t test_vluxseg8ei64_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd,
+ const uint8_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u8mf8x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf4x8_t test_vluxseg8ei64_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint8mf4x8_t test_vluxseg8ei64_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd,
+ const uint8_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u8mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf2x8_t test_vluxseg8ei64_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint8mf2x8_t test_vluxseg8ei64_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd,
+ const uint8_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u8mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8m1x8_t test_vluxseg8ei64_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint64m8_t rs2, size_t vl) {
+vuint8m1x8_t test_vluxseg8ei64_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd,
+ const uint8_t *rs1, vuint64m8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei64_v_u8m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf4x8_t test_vluxseg8ei64_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint16mf4x8_t test_vluxseg8ei64_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd,
+ const uint16_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u16mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf2x8_t test_vluxseg8ei64_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint16mf2x8_t test_vluxseg8ei64_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd,
+ const uint16_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u16mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16m1x8_t test_vluxseg8ei64_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint64m4_t rs2, size_t vl) {
+vuint16m1x8_t test_vluxseg8ei64_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd,
+ const uint16_t *rs1,
+ vuint64m4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u16m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint32mf2x8_t test_vluxseg8ei64_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint32mf2x8_t test_vluxseg8ei64_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd,
+ const uint32_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u32mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint32m1x8_t test_vluxseg8ei64_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint64m2_t rs2, size_t vl) {
+vuint32m1x8_t test_vluxseg8ei64_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd,
+ const uint32_t *rs1,
+ vuint64m2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u32m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint64m1x8_t test_vluxseg8ei64_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint64m1_t rs2, size_t vl) {
+vuint64m1x8_t test_vluxseg8ei64_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd,
+ const uint64_t *rs1,
+ vuint64m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei64_v_u64m1x8_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei8.c
index 3afc6a59a..62525eee0 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei8.c
@@ -6,418 +6,624 @@
 #include <riscv_vector.h>
-vfloat16mf4x8_t test_vluxseg8ei8_v_f16mf4x8_tu(vfloat16mf4x8_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x8_t test_vluxseg8ei8_v_f16mf4x8_tu(vfloat16mf4x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16mf4x8_tu(vd, rs1, rs2, vl);
 }
-vfloat16mf2x8_t test_vluxseg8ei8_v_f16mf2x8_tu(vfloat16mf2x8_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x8_t test_vluxseg8ei8_v_f16mf2x8_tu(vfloat16mf2x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16mf2x8_tu(vd, rs1, rs2, vl);
 }
-vfloat16m1x8_t test_vluxseg8ei8_v_f16m1x8_tu(vfloat16m1x8_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x8_t test_vluxseg8ei8_v_f16m1x8_tu(vfloat16m1x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16m1x8_tu(vd, rs1, rs2, vl);
 }
-vfloat32mf2x8_t test_vluxseg8ei8_v_f32mf2x8_tu(vfloat32mf2x8_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x8_t test_vluxseg8ei8_v_f32mf2x8_tu(vfloat32mf2x8_t vd,
+ const float *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f32mf2x8_tu(vd, rs1, rs2, vl);
 }
-vfloat32m1x8_t test_vluxseg8ei8_v_f32m1x8_tu(vfloat32m1x8_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x8_t test_vluxseg8ei8_v_f32m1x8_tu(vfloat32m1x8_t vd,
+ const float *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_f32m1x8_tu(vd, rs1, rs2, vl);
 }
-vfloat64m1x8_t test_vluxseg8ei8_v_f64m1x8_tu(vfloat64m1x8_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x8_t test_vluxseg8ei8_v_f64m1x8_tu(vfloat64m1x8_t vd,
+ const double *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_f64m1x8_tu(vd, rs1, rs2, vl);
 }
-vint8mf8x8_t test_vluxseg8ei8_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x8_t test_vluxseg8ei8_v_i8mf8x8_tu(vint8mf8x8_t vd, const int8_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf8x8_tu(vd, rs1, rs2, vl);
 }
-vint8mf4x8_t test_vluxseg8ei8_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x8_t test_vluxseg8ei8_v_i8mf4x8_tu(vint8mf4x8_t vd, const int8_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf4x8_tu(vd, rs1, rs2, vl);
 }
-vint8mf2x8_t test_vluxseg8ei8_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x8_t test_vluxseg8ei8_v_i8mf2x8_tu(vint8mf2x8_t vd, const int8_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf2x8_tu(vd, rs1, rs2, vl);
 }
-vint8m1x8_t test_vluxseg8ei8_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x8_t test_vluxseg8ei8_v_i8m1x8_tu(vint8m1x8_t vd, const int8_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i8m1x8_tu(vd, rs1, rs2, vl);
 }
-vint16mf4x8_t test_vluxseg8ei8_v_i16mf4x8_tu(vint16mf4x8_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x8_t test_vluxseg8ei8_v_i16mf4x8_tu(vint16mf4x8_t vd,
+ const int16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i16mf4x8_tu(vd, rs1, rs2, vl);
 }
-vint16mf2x8_t test_vluxseg8ei8_v_i16mf2x8_tu(vint16mf2x8_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x8_t test_vluxseg8ei8_v_i16mf2x8_tu(vint16mf2x8_t vd,
+ const int16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i16mf2x8_tu(vd, rs1, rs2, vl);
 }
-vint16m1x8_t test_vluxseg8ei8_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x8_t test_vluxseg8ei8_v_i16m1x8_tu(vint16m1x8_t vd, const int16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i16m1x8_tu(vd, rs1, rs2, vl);
 }
-vint32mf2x8_t test_vluxseg8ei8_v_i32mf2x8_tu(vint32mf2x8_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x8_t test_vluxseg8ei8_v_i32mf2x8_tu(vint32mf2x8_t vd,
+ const int32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i32mf2x8_tu(vd, rs1, rs2, vl);
 }
-vint32m1x8_t test_vluxseg8ei8_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x8_t test_vluxseg8ei8_v_i32m1x8_tu(vint32m1x8_t vd, const int32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i32m1x8_tu(vd, rs1, rs2, vl);
 }
-vint64m1x8_t test_vluxseg8ei8_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x8_t test_vluxseg8ei8_v_i64m1x8_tu(vint64m1x8_t vd, const int64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i64m1x8_tu(vd, rs1, rs2, vl);
 }
-vuint8mf8x8_t test_vluxseg8ei8_v_u8mf8x8_tu(vuint8mf8x8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x8_t test_vluxseg8ei8_v_u8mf8x8_tu(vuint8mf8x8_t vd,
+ const uint8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf8x8_tu(vd, rs1, rs2, vl);
 }
-vuint8mf4x8_t test_vluxseg8ei8_v_u8mf4x8_tu(vuint8mf4x8_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x8_t test_vluxseg8ei8_v_u8mf4x8_tu(vuint8mf4x8_t vd,
+ const uint8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf4x8_tu(vd, rs1, rs2, vl);
 }
-vuint8mf2x8_t test_vluxseg8ei8_v_u8mf2x8_tu(vuint8mf2x8_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x8_t test_vluxseg8ei8_v_u8mf2x8_tu(vuint8mf2x8_t vd,
+ const uint8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf2x8_tu(vd, rs1, rs2, vl);
 }
-vuint8m1x8_t test_vluxseg8ei8_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x8_t test_vluxseg8ei8_v_u8m1x8_tu(vuint8m1x8_t vd, const uint8_t *rs1,
+ vuint8m1_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u8m1x8_tu(vd, rs1, rs2, vl);
 }
-vuint16mf4x8_t test_vluxseg8ei8_v_u16mf4x8_tu(vuint16mf4x8_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x8_t test_vluxseg8ei8_v_u16mf4x8_tu(vuint16mf4x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16mf4x8_tu(vd, rs1, rs2, vl);
 }
-vuint16mf2x8_t test_vluxseg8ei8_v_u16mf2x8_tu(vuint16mf2x8_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x8_t test_vluxseg8ei8_v_u16mf2x8_tu(vuint16mf2x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16mf2x8_tu(vd, rs1, rs2, vl);
 }
-vuint16m1x8_t test_vluxseg8ei8_v_u16m1x8_tu(vuint16m1x8_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x8_t test_vluxseg8ei8_v_u16m1x8_tu(vuint16m1x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16m1x8_tu(vd, rs1, rs2, vl);
 }
-vuint32mf2x8_t test_vluxseg8ei8_v_u32mf2x8_tu(vuint32mf2x8_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x8_t test_vluxseg8ei8_v_u32mf2x8_tu(vuint32mf2x8_t vd,
+ const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u32mf2x8_tu(vd, rs1, rs2, vl);
 }
-vuint32m1x8_t test_vluxseg8ei8_v_u32m1x8_tu(vuint32m1x8_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x8_t test_vluxseg8ei8_v_u32m1x8_tu(vuint32m1x8_t vd,
+ const uint32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u32m1x8_tu(vd, rs1, rs2, vl);
 }
-vuint64m1x8_t test_vluxseg8ei8_v_u64m1x8_tu(vuint64m1x8_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x8_t test_vluxseg8ei8_v_u64m1x8_tu(vuint64m1x8_t vd,
+ const uint64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u64m1x8_tu(vd, rs1, rs2, vl);
 }
-vfloat16mf4x8_t test_vluxseg8ei8_v_f16mf4x8_tum(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x8_t test_vluxseg8ei8_v_f16mf4x8_tum(vbool64_t vm,
+ vfloat16mf4x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16mf4x8_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf2x8_t test_vluxseg8ei8_v_f16mf2x8_tum(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x8_t test_vluxseg8ei8_v_f16mf2x8_tum(vbool32_t vm,
+ vfloat16mf2x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16mf2x8_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat16m1x8_t test_vluxseg8ei8_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x8_t test_vluxseg8ei8_v_f16m1x8_tum(vbool16_t vm, vfloat16m1x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat32mf2x8_t test_vluxseg8ei8_v_f32mf2x8_tum(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x8_t test_vluxseg8ei8_v_f32mf2x8_tum(vbool64_t vm,
+ vfloat32mf2x8_t vd,
+ const float *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f32mf2x8_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat32m1x8_t test_vluxseg8ei8_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x8_t test_vluxseg8ei8_v_f32m1x8_tum(vbool32_t vm, vfloat32m1x8_t vd,
+ const float *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_f32m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat64m1x8_t test_vluxseg8ei8_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x8_t test_vluxseg8ei8_v_f64m1x8_tum(vbool64_t vm, vfloat64m1x8_t vd,
+ const double *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f64m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vint8mf8x8_t test_vluxseg8ei8_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x8_t test_vluxseg8ei8_v_i8mf8x8_tum(vbool64_t vm, vint8mf8x8_t vd,
+ const int8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf8x8_tum(vm, vd, rs1, rs2, vl);
 }
-vint8mf4x8_t test_vluxseg8ei8_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x8_t test_vluxseg8ei8_v_i8mf4x8_tum(vbool32_t vm, vint8mf4x8_t vd,
+ const int8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf4x8_tum(vm, vd, rs1, rs2, vl);
 }
-vint8mf2x8_t test_vluxseg8ei8_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x8_t test_vluxseg8ei8_v_i8mf2x8_tum(vbool16_t vm, vint8mf2x8_t vd,
+ const int8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf2x8_tum(vm, vd, rs1, rs2, vl);
 }
-vint8m1x8_t test_vluxseg8ei8_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x8_t test_vluxseg8ei8_v_i8m1x8_tum(vbool8_t vm, vint8m1x8_t vd,
+ const int8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vint16mf4x8_t test_vluxseg8ei8_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x8_t test_vluxseg8ei8_v_i16mf4x8_tum(vbool64_t vm, vint16mf4x8_t vd,
+ const int16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i16mf4x8_tum(vm, vd, rs1, rs2, vl);
 }
-vint16mf2x8_t test_vluxseg8ei8_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x8_t test_vluxseg8ei8_v_i16mf2x8_tum(vbool32_t vm, vint16mf2x8_t vd,
+ const int16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i16mf2x8_tum(vm, vd, rs1, rs2, vl);
 }
-vint16m1x8_t test_vluxseg8ei8_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x8_t test_vluxseg8ei8_v_i16m1x8_tum(vbool16_t vm, vint16m1x8_t vd,
+ const int16_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i16m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vint32mf2x8_t test_vluxseg8ei8_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x8_t test_vluxseg8ei8_v_i32mf2x8_tum(vbool64_t vm, vint32mf2x8_t vd,
+ const int32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i32mf2x8_tum(vm, vd, rs1, rs2, vl);
 }
-vint32m1x8_t test_vluxseg8ei8_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x8_t test_vluxseg8ei8_v_i32m1x8_tum(vbool32_t vm, vint32m1x8_t vd,
+ const int32_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i32m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vint64m1x8_t test_vluxseg8ei8_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x8_t test_vluxseg8ei8_v_i64m1x8_tum(vbool64_t vm, vint64m1x8_t vd,
+ const int64_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i64m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint8mf8x8_t test_vluxseg8ei8_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x8_t test_vluxseg8ei8_v_u8mf8x8_tum(vbool64_t vm, vuint8mf8x8_t vd,
+ const uint8_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf8x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint8mf4x8_t test_vluxseg8ei8_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x8_t test_vluxseg8ei8_v_u8mf4x8_tum(vbool32_t vm, vuint8mf4x8_t vd,
+ const uint8_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf4x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint8mf2x8_t test_vluxseg8ei8_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x8_t test_vluxseg8ei8_v_u8mf2x8_tum(vbool16_t vm, vuint8mf2x8_t vd,
+ const uint8_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf2x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint8m1x8_t test_vluxseg8ei8_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x8_t test_vluxseg8ei8_v_u8m1x8_tum(vbool8_t vm, vuint8m1x8_t vd,
+ const uint8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_u8m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint16mf4x8_t test_vluxseg8ei8_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x8_t test_vluxseg8ei8_v_u16mf4x8_tum(vbool64_t vm, vuint16mf4x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16mf4x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint16mf2x8_t test_vluxseg8ei8_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x8_t test_vluxseg8ei8_v_u16mf2x8_tum(vbool32_t vm, vuint16mf2x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16mf2x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint16m1x8_t test_vluxseg8ei8_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x8_t test_vluxseg8ei8_v_u16m1x8_tum(vbool16_t vm, vuint16m1x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint32mf2x8_t test_vluxseg8ei8_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x8_t test_vluxseg8ei8_v_u32mf2x8_tum(vbool64_t vm, vuint32mf2x8_t vd,
+ const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u32mf2x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint32m1x8_t test_vluxseg8ei8_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x8_t test_vluxseg8ei8_v_u32m1x8_tum(vbool32_t vm, vuint32m1x8_t vd,
+ const uint32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u32m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vuint64m1x8_t test_vluxseg8ei8_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x8_t test_vluxseg8ei8_v_u64m1x8_tum(vbool64_t vm, vuint64m1x8_t vd,
+ const uint64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u64m1x8_tum(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf4x8_t test_vluxseg8ei8_v_f16mf4x8_tumu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x8_t test_vluxseg8ei8_v_f16mf4x8_tumu(vbool64_t vm,
+ vfloat16mf4x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16mf4x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf2x8_t test_vluxseg8ei8_v_f16mf2x8_tumu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x8_t test_vluxseg8ei8_v_f16mf2x8_tumu(vbool32_t vm,
+ vfloat16mf2x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat16m1x8_t test_vluxseg8ei8_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x8_t test_vluxseg8ei8_v_f16m1x8_tumu(vbool16_t vm, vfloat16m1x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat32mf2x8_t test_vluxseg8ei8_v_f32mf2x8_tumu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x8_t test_vluxseg8ei8_v_f32mf2x8_tumu(vbool64_t vm,
+ vfloat32mf2x8_t vd,
+ const float *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f32mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m1x8_t test_vluxseg8ei8_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x8_t test_vluxseg8ei8_v_f32m1x8_tumu(vbool32_t vm, vfloat32m1x8_t vd,
+ const float *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f32m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m1x8_t test_vluxseg8ei8_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x8_t test_vluxseg8ei8_v_f64m1x8_tumu(vbool64_t vm, vfloat64m1x8_t vd,
+ const double *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f64m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8mf8x8_t test_vluxseg8ei8_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x8_t test_vluxseg8ei8_v_i8mf8x8_tumu(vbool64_t vm, vint8mf8x8_t vd,
+ const int8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf8x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8mf4x8_t test_vluxseg8ei8_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x8_t test_vluxseg8ei8_v_i8mf4x8_tumu(vbool32_t vm, vint8mf4x8_t vd,
+ const int8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf4x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8mf2x8_t test_vluxseg8ei8_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x8_t test_vluxseg8ei8_v_i8mf2x8_tumu(vbool16_t vm, vint8mf2x8_t vd,
+ const int8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint8m1x8_t test_vluxseg8ei8_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x8_t test_vluxseg8ei8_v_i8m1x8_tumu(vbool8_t vm, vint8m1x8_t vd,
+ const int8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16mf4x8_t test_vluxseg8ei8_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x8_t test_vluxseg8ei8_v_i16mf4x8_tumu(vbool64_t vm, vint16mf4x8_t vd,
+ const int16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i16mf4x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16mf2x8_t test_vluxseg8ei8_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x8_t test_vluxseg8ei8_v_i16mf2x8_tumu(vbool32_t vm, vint16mf2x8_t vd,
+ const int16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i16mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint16m1x8_t test_vluxseg8ei8_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x8_t test_vluxseg8ei8_v_i16m1x8_tumu(vbool16_t vm, vint16m1x8_t vd,
+ const int16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i16m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint32mf2x8_t test_vluxseg8ei8_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x8_t test_vluxseg8ei8_v_i32mf2x8_tumu(vbool64_t vm, vint32mf2x8_t vd,
+ const int32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i32mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint32m1x8_t test_vluxseg8ei8_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x8_t test_vluxseg8ei8_v_i32m1x8_tumu(vbool32_t vm, vint32m1x8_t vd,
+ const int32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i32m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vint64m1x8_t test_vluxseg8ei8_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x8_t test_vluxseg8ei8_v_i64m1x8_tumu(vbool64_t vm, vint64m1x8_t vd,
+ const int64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i64m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf8x8_t test_vluxseg8ei8_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x8_t test_vluxseg8ei8_v_u8mf8x8_tumu(vbool64_t vm, vuint8mf8x8_t vd,
+ const uint8_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf8x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf4x8_t test_vluxseg8ei8_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x8_t test_vluxseg8ei8_v_u8mf4x8_tumu(vbool32_t vm, vuint8mf4x8_t vd,
+ const uint8_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf4x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf2x8_t test_vluxseg8ei8_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x8_t test_vluxseg8ei8_v_u8mf2x8_tumu(vbool16_t vm, vuint8mf2x8_t vd,
+ const uint8_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint8m1x8_t test_vluxseg8ei8_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x8_t test_vluxseg8ei8_v_u8m1x8_tumu(vbool8_t vm, vuint8m1x8_t vd,
+ const uint8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_u8m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf4x8_t test_vluxseg8ei8_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x8_t test_vluxseg8ei8_v_u16mf4x8_tumu(vbool64_t vm, vuint16mf4x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16mf4x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf2x8_t test_vluxseg8ei8_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x8_t test_vluxseg8ei8_v_u16mf2x8_tumu(vbool32_t vm, vuint16mf2x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint16m1x8_t test_vluxseg8ei8_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x8_t test_vluxseg8ei8_v_u16m1x8_tumu(vbool16_t vm, vuint16m1x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint32mf2x8_t test_vluxseg8ei8_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x8_t test_vluxseg8ei8_v_u32mf2x8_tumu(vbool64_t vm, vuint32mf2x8_t vd,
+ const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u32mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint32m1x8_t test_vluxseg8ei8_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x8_t test_vluxseg8ei8_v_u32m1x8_tumu(vbool32_t vm, vuint32m1x8_t vd,
+ const uint32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u32m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vuint64m1x8_t test_vluxseg8ei8_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x8_t test_vluxseg8ei8_v_u64m1x8_tumu(vbool64_t vm, vuint64m1x8_t vd,
+ const uint64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u64m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf4x8_t test_vluxseg8ei8_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd, const _Float16 *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat16mf4x8_t test_vluxseg8ei8_v_f16mf4x8_mu(vbool64_t vm, vfloat16mf4x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat16mf2x8_t test_vluxseg8ei8_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd, const _Float16 *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat16mf2x8_t test_vluxseg8ei8_v_f16mf2x8_mu(vbool32_t vm, vfloat16mf2x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat16m1x8_t test_vluxseg8ei8_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd, const _Float16 *rs1, vuint8mf2_t rs2, size_t vl) {
+vfloat16m1x8_t test_vluxseg8ei8_v_f16m1x8_mu(vbool16_t vm, vfloat16m1x8_t vd,
+ const _Float16 *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f16m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat32mf2x8_t test_vluxseg8ei8_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd, const float *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat32mf2x8_t test_vluxseg8ei8_v_f32mf2x8_mu(vbool64_t vm, vfloat32mf2x8_t vd,
+ const float *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_f32mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat32m1x8_t test_vluxseg8ei8_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd, const float *rs1, vuint8mf4_t rs2, size_t vl) {
+vfloat32m1x8_t test_vluxseg8ei8_v_f32m1x8_mu(vbool32_t vm, vfloat32m1x8_t vd,
+ const float *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_f32m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vfloat64m1x8_t test_vluxseg8ei8_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd, const double *rs1, vuint8mf8_t rs2, size_t vl) {
+vfloat64m1x8_t test_vluxseg8ei8_v_f64m1x8_mu(vbool64_t vm, vfloat64m1x8_t vd,
+ const double *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_f64m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint8mf8x8_t test_vluxseg8ei8_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd, const int8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint8mf8x8_t test_vluxseg8ei8_v_i8mf8x8_mu(vbool64_t vm, vint8mf8x8_t vd,
+ const int8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf8x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint8mf4x8_t test_vluxseg8ei8_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd, const int8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint8mf4x8_t test_vluxseg8ei8_v_i8mf4x8_mu(vbool32_t vm, vint8mf4x8_t vd,
+ const int8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint8mf2x8_t test_vluxseg8ei8_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd, const int8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint8mf2x8_t test_vluxseg8ei8_v_i8mf2x8_mu(vbool16_t vm, vint8mf2x8_t vd,
+ const int8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint8m1x8_t test_vluxseg8ei8_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd, const int8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vint8m1x8_t test_vluxseg8ei8_v_i8m1x8_mu(vbool8_t vm, vint8m1x8_t vd,
+ const int8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i8m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint16mf4x8_t test_vluxseg8ei8_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd, const int16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint16mf4x8_t test_vluxseg8ei8_v_i16mf4x8_mu(vbool64_t vm, vint16mf4x8_t vd,
+ const int16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i16mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint16mf2x8_t test_vluxseg8ei8_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd, const int16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint16mf2x8_t test_vluxseg8ei8_v_i16mf2x8_mu(vbool32_t vm, vint16mf2x8_t vd,
+ const int16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i16mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint16m1x8_t test_vluxseg8ei8_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd, const int16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vint16m1x8_t test_vluxseg8ei8_v_i16m1x8_mu(vbool16_t vm, vint16m1x8_t vd,
+ const int16_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i16m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint32mf2x8_t test_vluxseg8ei8_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd, const int32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint32mf2x8_t test_vluxseg8ei8_v_i32mf2x8_mu(vbool64_t vm, vint32mf2x8_t vd,
+ const int32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_i32mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint32m1x8_t test_vluxseg8ei8_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd, const int32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vint32m1x8_t test_vluxseg8ei8_v_i32m1x8_mu(vbool32_t vm, vint32m1x8_t vd,
+ const int32_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i32m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vint64m1x8_t test_vluxseg8ei8_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd, const int64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vint64m1x8_t test_vluxseg8ei8_v_i64m1x8_mu(vbool64_t vm, vint64m1x8_t vd,
+ const int64_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_i64m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf8x8_t test_vluxseg8ei8_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd, const uint8_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint8mf8x8_t test_vluxseg8ei8_v_u8mf8x8_mu(vbool64_t vm, vuint8mf8x8_t vd,
+ const uint8_t *rs1, vuint8mf8_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf8x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf4x8_t test_vluxseg8ei8_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd, const uint8_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint8mf4x8_t test_vluxseg8ei8_v_u8mf4x8_mu(vbool32_t vm, vuint8mf4x8_t vd,
+ const uint8_t *rs1, vuint8mf4_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8mf2x8_t test_vluxseg8ei8_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd, const uint8_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint8mf2x8_t test_vluxseg8ei8_v_u8mf2x8_mu(vbool16_t vm, vuint8mf2x8_t vd,
+ const uint8_t *rs1, vuint8mf2_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_u8mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint8m1x8_t test_vluxseg8ei8_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd, const uint8_t *rs1, vuint8m1_t rs2, size_t vl) {
+vuint8m1x8_t test_vluxseg8ei8_v_u8m1x8_mu(vbool8_t vm, vuint8m1x8_t vd,
+ const uint8_t *rs1, vuint8m1_t rs2,
+ size_t vl) {
 return __riscv_vluxseg8ei8_v_u8m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf4x8_t test_vluxseg8ei8_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd, const uint16_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint16mf4x8_t test_vluxseg8ei8_v_u16mf4x8_mu(vbool64_t vm, vuint16mf4x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16mf2x8_t test_vluxseg8ei8_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd, const uint16_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint16mf2x8_t test_vluxseg8ei8_v_u16mf2x8_mu(vbool32_t vm, vuint16mf2x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint16m1x8_t test_vluxseg8ei8_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd, const uint16_t *rs1, vuint8mf2_t rs2, size_t vl) {
+vuint16m1x8_t test_vluxseg8ei8_v_u16m1x8_mu(vbool16_t vm, vuint16m1x8_t vd,
+ const uint16_t *rs1,
+ vuint8mf2_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u16m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint32mf2x8_t test_vluxseg8ei8_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd, const uint32_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint32mf2x8_t test_vluxseg8ei8_v_u32mf2x8_mu(vbool64_t vm, vuint32mf2x8_t vd,
+ const uint32_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u32mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint32m1x8_t test_vluxseg8ei8_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd, const uint32_t *rs1, vuint8mf4_t rs2, size_t vl) {
+vuint32m1x8_t test_vluxseg8ei8_v_u32m1x8_mu(vbool32_t vm, vuint32m1x8_t vd,
+ const uint32_t *rs1,
+ vuint8mf4_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u32m1x8_mu(vm, vd, rs1, rs2, vl);
 }
-vuint64m1x8_t test_vluxseg8ei8_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd, const uint64_t *rs1, vuint8mf8_t rs2, size_t vl) {
+vuint64m1x8_t test_vluxseg8ei8_v_u64m1x8_mu(vbool64_t vm, vuint64m1x8_t vd,
+ const uint64_t *rs1,
+ vuint8mf8_t rs2, size_t vl) {
 return __riscv_vluxseg8ei8_v_u64m1x8_mu(vm, vd, rs1, rs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vmacc.c
index 213a706e7..224679222 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmacc.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmacc.c
@@ -6,1410 +6,1828 @@
 #include <riscv_vector.h>
-vint8mf8_t test_vmacc_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) {
+vint8mf8_t test_vmacc_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i8mf8_tu(vd, vs1, vs2, vl);
 }
-vint8mf8_t test_vmacc_vx_i8mf8_tu(vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) {
+vint8mf8_t test_vmacc_vx_i8mf8_tu(vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i8mf8_tu(vd, rs1, vs2, vl);
 }
-vint8mf4_t test_vmacc_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) {
+vint8mf4_t test_vmacc_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i8mf4_tu(vd, vs1, vs2, vl);
 }
-vint8mf4_t test_vmacc_vx_i8mf4_tu(vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) {
+vint8mf4_t test_vmacc_vx_i8mf4_tu(vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i8mf4_tu(vd, rs1, vs2, vl);
 }
-vint8mf2_t test_vmacc_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) {
+vint8mf2_t test_vmacc_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i8mf2_tu(vd, vs1, vs2, vl);
 }
-vint8mf2_t test_vmacc_vx_i8mf2_tu(vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) {
+vint8mf2_t test_vmacc_vx_i8mf2_tu(vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i8mf2_tu(vd, rs1, vs2, vl);
 }
-vint8m1_t test_vmacc_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) {
+vint8m1_t test_vmacc_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i8m1_tu(vd, vs1, vs2, vl);
 }
-vint8m1_t test_vmacc_vx_i8m1_tu(vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) {
+vint8m1_t test_vmacc_vx_i8m1_tu(vint8m1_t vd, int8_t rs1, vint8m1_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i8m1_tu(vd, rs1, vs2, vl);
 }
-vint8m2_t test_vmacc_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) {
+vint8m2_t test_vmacc_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i8m2_tu(vd, vs1, vs2, vl);
 }
-vint8m2_t test_vmacc_vx_i8m2_tu(vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) {
+vint8m2_t test_vmacc_vx_i8m2_tu(vint8m2_t vd, int8_t rs1, vint8m2_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i8m2_tu(vd, rs1, vs2, vl);
 }
-vint8m4_t test_vmacc_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) {
+vint8m4_t test_vmacc_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i8m4_tu(vd, vs1, vs2, vl);
 }
-vint8m4_t test_vmacc_vx_i8m4_tu(vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) {
+vint8m4_t test_vmacc_vx_i8m4_tu(vint8m4_t vd, int8_t rs1, vint8m4_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i8m4_tu(vd, rs1, vs2, vl);
 }
-vint8m8_t test_vmacc_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) {
+vint8m8_t test_vmacc_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i8m8_tu(vd, vs1, vs2, vl);
 }
-vint8m8_t test_vmacc_vx_i8m8_tu(vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) {
+vint8m8_t test_vmacc_vx_i8m8_tu(vint8m8_t vd, int8_t rs1, vint8m8_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i8m8_tu(vd, rs1, vs2, vl);
 }
-vint16mf4_t test_vmacc_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) {
+vint16mf4_t test_vmacc_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs1,
+ vint16mf4_t vs2, size_t vl) {
 return __riscv_vmacc_vv_i16mf4_tu(vd, vs1, vs2, vl);
 }
-vint16mf4_t test_vmacc_vx_i16mf4_tu(vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) {
+vint16mf4_t test_vmacc_vx_i16mf4_tu(vint16mf4_t vd, int16_t rs1,
+ vint16mf4_t vs2, size_t vl) {
 return __riscv_vmacc_vx_i16mf4_tu(vd, rs1, vs2, vl);
 }
-vint16mf2_t test_vmacc_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) {
+vint16mf2_t test_vmacc_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs1,
+ vint16mf2_t vs2, size_t vl) {
 return __riscv_vmacc_vv_i16mf2_tu(vd, vs1, vs2, vl);
 }
-vint16mf2_t test_vmacc_vx_i16mf2_tu(vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) {
+vint16mf2_t test_vmacc_vx_i16mf2_tu(vint16mf2_t vd, int16_t rs1,
+ vint16mf2_t vs2, size_t vl) {
 return __riscv_vmacc_vx_i16mf2_tu(vd, rs1, vs2, vl);
 }
-vint16m1_t test_vmacc_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) {
+vint16m1_t test_vmacc_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i16m1_tu(vd, vs1, vs2, vl);
 }
-vint16m1_t test_vmacc_vx_i16m1_tu(vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) {
+vint16m1_t test_vmacc_vx_i16m1_tu(vint16m1_t vd, int16_t rs1, vint16m1_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i16m1_tu(vd, rs1, vs2, vl);
 }
-vint16m2_t test_vmacc_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) {
+vint16m2_t test_vmacc_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i16m2_tu(vd, vs1, vs2, vl);
 }
-vint16m2_t test_vmacc_vx_i16m2_tu(vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) {
+vint16m2_t test_vmacc_vx_i16m2_tu(vint16m2_t vd, int16_t rs1, vint16m2_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i16m2_tu(vd, rs1, vs2, vl);
 }
-vint16m4_t test_vmacc_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) {
+vint16m4_t test_vmacc_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i16m4_tu(vd, vs1, vs2, vl);
 }
-vint16m4_t test_vmacc_vx_i16m4_tu(vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) {
+vint16m4_t test_vmacc_vx_i16m4_tu(vint16m4_t vd, int16_t rs1, vint16m4_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i16m4_tu(vd, rs1, vs2, vl);
 }
-vint16m8_t test_vmacc_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) {
+vint16m8_t test_vmacc_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i16m8_tu(vd, vs1, vs2, vl);
 }
-vint16m8_t test_vmacc_vx_i16m8_tu(vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) {
+vint16m8_t test_vmacc_vx_i16m8_tu(vint16m8_t vd, int16_t rs1, vint16m8_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i16m8_tu(vd, rs1, vs2, vl);
 }
-vint32mf2_t test_vmacc_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) {
+vint32mf2_t test_vmacc_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs1,
+ vint32mf2_t vs2, size_t vl) {
 return __riscv_vmacc_vv_i32mf2_tu(vd, vs1, vs2, vl);
 }
-vint32mf2_t test_vmacc_vx_i32mf2_tu(vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) {
+vint32mf2_t test_vmacc_vx_i32mf2_tu(vint32mf2_t vd, int32_t rs1,
+ vint32mf2_t vs2, size_t vl) {
 return __riscv_vmacc_vx_i32mf2_tu(vd, rs1, vs2, vl);
 }
-vint32m1_t test_vmacc_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) {
+vint32m1_t test_vmacc_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i32m1_tu(vd, vs1, vs2, vl);
 }
-vint32m1_t test_vmacc_vx_i32m1_tu(vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) {
+vint32m1_t test_vmacc_vx_i32m1_tu(vint32m1_t vd, int32_t rs1, vint32m1_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i32m1_tu(vd, rs1, vs2, vl);
 }
-vint32m2_t test_vmacc_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) {
+vint32m2_t test_vmacc_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i32m2_tu(vd, vs1, vs2, vl);
 }
-vint32m2_t test_vmacc_vx_i32m2_tu(vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) {
+vint32m2_t test_vmacc_vx_i32m2_tu(vint32m2_t vd, int32_t rs1, vint32m2_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i32m2_tu(vd, rs1, vs2, vl);
 }
-vint32m4_t test_vmacc_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) {
+vint32m4_t test_vmacc_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vv_i32m4_tu(vd, vs1, vs2, vl);
 }
-vint32m4_t test_vmacc_vx_i32m4_tu(vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) {
+vint32m4_t test_vmacc_vx_i32m4_tu(vint32m4_t vd, int32_t rs1, vint32m4_t vs2,
+ size_t vl) {
 return __riscv_vmacc_vx_i32m4_tu(vd, rs1, vs2, vl);
 }
-vint32m8_t test_vmacc_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) {
size_t vl) { +vint32m8_t test_vmacc_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, + size_t vl) { return __riscv_vmacc_vv_i32m8_tu(vd, vs1, vs2, vl); } -vint32m8_t test_vmacc_vx_i32m8_tu(vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vmacc_vx_i32m8_tu(vint32m8_t vd, int32_t rs1, vint32m8_t vs2, + size_t vl) { return __riscv_vmacc_vx_i32m8_tu(vd, rs1, vs2, vl); } -vint64m1_t test_vmacc_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vmacc_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, + size_t vl) { return __riscv_vmacc_vv_i64m1_tu(vd, vs1, vs2, vl); } -vint64m1_t test_vmacc_vx_i64m1_tu(vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vmacc_vx_i64m1_tu(vint64m1_t vd, int64_t rs1, vint64m1_t vs2, + size_t vl) { return __riscv_vmacc_vx_i64m1_tu(vd, rs1, vs2, vl); } -vint64m2_t test_vmacc_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vmacc_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, + size_t vl) { return __riscv_vmacc_vv_i64m2_tu(vd, vs1, vs2, vl); } -vint64m2_t test_vmacc_vx_i64m2_tu(vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vmacc_vx_i64m2_tu(vint64m2_t vd, int64_t rs1, vint64m2_t vs2, + size_t vl) { return __riscv_vmacc_vx_i64m2_tu(vd, rs1, vs2, vl); } -vint64m4_t test_vmacc_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vmacc_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, + size_t vl) { return __riscv_vmacc_vv_i64m4_tu(vd, vs1, vs2, vl); } -vint64m4_t test_vmacc_vx_i64m4_tu(vint64m4_t vd, int64_t rs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vmacc_vx_i64m4_tu(vint64m4_t vd, int64_t rs1, vint64m4_t vs2, + size_t vl) { return __riscv_vmacc_vx_i64m4_tu(vd, rs1, vs2, vl); } -vint64m8_t test_vmacc_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vmacc_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, + size_t vl) { return __riscv_vmacc_vv_i64m8_tu(vd, vs1, vs2, vl); } -vint64m8_t test_vmacc_vx_i64m8_tu(vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vmacc_vx_i64m8_tu(vint64m8_t vd, int64_t rs1, vint64m8_t vs2, + size_t vl) { return __riscv_vmacc_vx_i64m8_tu(vd, rs1, vs2, vl); } -vuint8mf8_t test_vmacc_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vmacc_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vmacc_vv_u8mf8_tu(vd, vs1, vs2, vl); } -vuint8mf8_t test_vmacc_vx_u8mf8_tu(vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vmacc_vx_u8mf8_tu(vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vmacc_vx_u8mf8_tu(vd, rs1, vs2, vl); } -vuint8mf4_t test_vmacc_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vmacc_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vmacc_vv_u8mf4_tu(vd, vs1, vs2, vl); } -vuint8mf4_t test_vmacc_vx_u8mf4_tu(vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vmacc_vx_u8mf4_tu(vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vmacc_vx_u8mf4_tu(vd, rs1, vs2, vl); } -vuint8mf2_t test_vmacc_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vmacc_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs1, + vuint8mf2_t vs2, size_t vl) { return 
__riscv_vmacc_vv_u8mf2_tu(vd, vs1, vs2, vl); } -vuint8mf2_t test_vmacc_vx_u8mf2_tu(vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vmacc_vx_u8mf2_tu(vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vmacc_vx_u8mf2_tu(vd, rs1, vs2, vl); } -vuint8m1_t test_vmacc_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vmacc_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, + size_t vl) { return __riscv_vmacc_vv_u8m1_tu(vd, vs1, vs2, vl); } -vuint8m1_t test_vmacc_vx_u8m1_tu(vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vmacc_vx_u8m1_tu(vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, + size_t vl) { return __riscv_vmacc_vx_u8m1_tu(vd, rs1, vs2, vl); } -vuint8m2_t test_vmacc_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vmacc_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, + size_t vl) { return __riscv_vmacc_vv_u8m2_tu(vd, vs1, vs2, vl); } -vuint8m2_t test_vmacc_vx_u8m2_tu(vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vmacc_vx_u8m2_tu(vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, + size_t vl) { return __riscv_vmacc_vx_u8m2_tu(vd, rs1, vs2, vl); } -vuint8m4_t test_vmacc_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vmacc_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, + size_t vl) { return __riscv_vmacc_vv_u8m4_tu(vd, vs1, vs2, vl); } -vuint8m4_t test_vmacc_vx_u8m4_tu(vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vmacc_vx_u8m4_tu(vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, + size_t vl) { return __riscv_vmacc_vx_u8m4_tu(vd, rs1, vs2, vl); } -vuint8m8_t test_vmacc_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vmacc_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, + size_t vl) { return __riscv_vmacc_vv_u8m8_tu(vd, vs1, vs2, vl); } -vuint8m8_t test_vmacc_vx_u8m8_tu(vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vmacc_vx_u8m8_tu(vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, + size_t vl) { return __riscv_vmacc_vx_u8m8_tu(vd, rs1, vs2, vl); } -vuint16mf4_t test_vmacc_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vmacc_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs1, + vuint16mf4_t vs2, size_t vl) { return __riscv_vmacc_vv_u16mf4_tu(vd, vs1, vs2, vl); } -vuint16mf4_t test_vmacc_vx_u16mf4_tu(vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vmacc_vx_u16mf4_tu(vuint16mf4_t vd, uint16_t rs1, + vuint16mf4_t vs2, size_t vl) { return __riscv_vmacc_vx_u16mf4_tu(vd, rs1, vs2, vl); } -vuint16mf2_t test_vmacc_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vmacc_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vmacc_vv_u16mf2_tu(vd, vs1, vs2, vl); } -vuint16mf2_t test_vmacc_vx_u16mf2_tu(vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vmacc_vx_u16mf2_tu(vuint16mf2_t vd, uint16_t rs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vmacc_vx_u16mf2_tu(vd, rs1, vs2, vl); } -vuint16m1_t test_vmacc_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vmacc_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vmacc_vv_u16m1_tu(vd, vs1, vs2, vl); } -vuint16m1_t 
test_vmacc_vx_u16m1_tu(vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vmacc_vx_u16m1_tu(vuint16m1_t vd, uint16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vmacc_vx_u16m1_tu(vd, rs1, vs2, vl); } -vuint16m2_t test_vmacc_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vmacc_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vmacc_vv_u16m2_tu(vd, vs1, vs2, vl); } -vuint16m2_t test_vmacc_vx_u16m2_tu(vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vmacc_vx_u16m2_tu(vuint16m2_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vmacc_vx_u16m2_tu(vd, rs1, vs2, vl); } -vuint16m4_t test_vmacc_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vmacc_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vmacc_vv_u16m4_tu(vd, vs1, vs2, vl); } -vuint16m4_t test_vmacc_vx_u16m4_tu(vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vmacc_vx_u16m4_tu(vuint16m4_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vmacc_vx_u16m4_tu(vd, rs1, vs2, vl); } -vuint16m8_t test_vmacc_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vmacc_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vmacc_vv_u16m8_tu(vd, vs1, vs2, vl); } -vuint16m8_t test_vmacc_vx_u16m8_tu(vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vmacc_vx_u16m8_tu(vuint16m8_t vd, uint16_t rs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vmacc_vx_u16m8_tu(vd, rs1, vs2, vl); } -vuint32mf2_t test_vmacc_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vmacc_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vmacc_vv_u32mf2_tu(vd, vs1, vs2, vl); } -vuint32mf2_t test_vmacc_vx_u32mf2_tu(vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vmacc_vx_u32mf2_tu(vuint32mf2_t vd, uint32_t rs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vmacc_vx_u32mf2_tu(vd, rs1, vs2, vl); } -vuint32m1_t test_vmacc_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vmacc_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vmacc_vv_u32m1_tu(vd, vs1, vs2, vl); } -vuint32m1_t test_vmacc_vx_u32m1_tu(vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vmacc_vx_u32m1_tu(vuint32m1_t vd, uint32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vmacc_vx_u32m1_tu(vd, rs1, vs2, vl); } -vuint32m2_t test_vmacc_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vmacc_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vmacc_vv_u32m2_tu(vd, vs1, vs2, vl); } -vuint32m2_t test_vmacc_vx_u32m2_tu(vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vmacc_vx_u32m2_tu(vuint32m2_t vd, uint32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vmacc_vx_u32m2_tu(vd, rs1, vs2, vl); } -vuint32m4_t test_vmacc_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vmacc_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vmacc_vv_u32m4_tu(vd, vs1, vs2, vl); } -vuint32m4_t 
test_vmacc_vx_u32m4_tu(vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vmacc_vx_u32m4_tu(vuint32m4_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vmacc_vx_u32m4_tu(vd, rs1, vs2, vl); } -vuint32m8_t test_vmacc_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vmacc_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs1, + vuint32m8_t vs2, size_t vl) { return __riscv_vmacc_vv_u32m8_tu(vd, vs1, vs2, vl); } -vuint32m8_t test_vmacc_vx_u32m8_tu(vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vmacc_vx_u32m8_tu(vuint32m8_t vd, uint32_t rs1, + vuint32m8_t vs2, size_t vl) { return __riscv_vmacc_vx_u32m8_tu(vd, rs1, vs2, vl); } -vuint64m1_t test_vmacc_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vmacc_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs1, + vuint64m1_t vs2, size_t vl) { return __riscv_vmacc_vv_u64m1_tu(vd, vs1, vs2, vl); } -vuint64m1_t test_vmacc_vx_u64m1_tu(vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vmacc_vx_u64m1_tu(vuint64m1_t vd, uint64_t rs1, + vuint64m1_t vs2, size_t vl) { return __riscv_vmacc_vx_u64m1_tu(vd, rs1, vs2, vl); } -vuint64m2_t test_vmacc_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vmacc_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs1, + vuint64m2_t vs2, size_t vl) { return __riscv_vmacc_vv_u64m2_tu(vd, vs1, vs2, vl); } -vuint64m2_t test_vmacc_vx_u64m2_tu(vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vmacc_vx_u64m2_tu(vuint64m2_t vd, uint64_t rs1, + vuint64m2_t vs2, size_t vl) { return __riscv_vmacc_vx_u64m2_tu(vd, rs1, vs2, vl); } -vuint64m4_t test_vmacc_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vmacc_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs1, + vuint64m4_t vs2, size_t vl) { return __riscv_vmacc_vv_u64m4_tu(vd, vs1, vs2, vl); } -vuint64m4_t test_vmacc_vx_u64m4_tu(vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vmacc_vx_u64m4_tu(vuint64m4_t vd, uint64_t rs1, + vuint64m4_t vs2, size_t vl) { return __riscv_vmacc_vx_u64m4_tu(vd, rs1, vs2, vl); } -vuint64m8_t test_vmacc_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vmacc_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs1, + vuint64m8_t vs2, size_t vl) { return __riscv_vmacc_vv_u64m8_tu(vd, vs1, vs2, vl); } -vuint64m8_t test_vmacc_vx_u64m8_tu(vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vmacc_vx_u64m8_tu(vuint64m8_t vd, uint64_t rs1, + vuint64m8_t vs2, size_t vl) { return __riscv_vmacc_vx_u64m8_tu(vd, rs1, vs2, vl); } -vint8mf8_t test_vmacc_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vmacc_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vmacc_vv_i8mf8_tum(vm, vd, vs1, vs2, vl); } -vint8mf8_t test_vmacc_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vmacc_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, int8_t rs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vmacc_vx_i8mf8_tum(vm, vd, rs1, vs2, vl); } -vint8mf4_t test_vmacc_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vmacc_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, + vint8mf4_t vs2, size_t vl) { return 
__riscv_vmacc_vv_i8mf4_tum(vm, vd, vs1, vs2, vl); } -vint8mf4_t test_vmacc_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vmacc_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, int8_t rs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vmacc_vx_i8mf4_tum(vm, vd, rs1, vs2, vl); } -vint8mf2_t test_vmacc_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vmacc_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vmacc_vv_i8mf2_tum(vm, vd, vs1, vs2, vl); } -vint8mf2_t test_vmacc_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vmacc_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, int8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vmacc_vx_i8mf2_tum(vm, vd, rs1, vs2, vl); } -vint8m1_t test_vmacc_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vmacc_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, + vint8m1_t vs2, size_t vl) { return __riscv_vmacc_vv_i8m1_tum(vm, vd, vs1, vs2, vl); } -vint8m1_t test_vmacc_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vmacc_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, int8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vmacc_vx_i8m1_tum(vm, vd, rs1, vs2, vl); } -vint8m2_t test_vmacc_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vmacc_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, + vint8m2_t vs2, size_t vl) { return __riscv_vmacc_vv_i8m2_tum(vm, vd, vs1, vs2, vl); } -vint8m2_t test_vmacc_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vmacc_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, int8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vmacc_vx_i8m2_tum(vm, vd, rs1, vs2, vl); } -vint8m4_t test_vmacc_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vmacc_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, + vint8m4_t vs2, size_t vl) { return __riscv_vmacc_vv_i8m4_tum(vm, vd, vs1, vs2, vl); } -vint8m4_t test_vmacc_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vmacc_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, int8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vmacc_vx_i8m4_tum(vm, vd, rs1, vs2, vl); } -vint8m8_t test_vmacc_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vmacc_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, + vint8m8_t vs2, size_t vl) { return __riscv_vmacc_vv_i8m8_tum(vm, vd, vs1, vs2, vl); } -vint8m8_t test_vmacc_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vmacc_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, int8_t rs1, + vint8m8_t vs2, size_t vl) { return __riscv_vmacc_vx_i8m8_tum(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vmacc_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vmacc_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vmacc_vv_i16mf4_tum(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vmacc_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vmacc_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return 
__riscv_vmacc_vx_i16mf4_tum(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vmacc_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vmacc_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vmacc_vv_i16mf2_tum(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vmacc_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vmacc_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vmacc_vx_i16mf2_tum(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vmacc_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vmacc_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vmacc_vv_i16m1_tum(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vmacc_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vmacc_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, int16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vmacc_vx_i16m1_tum(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vmacc_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vmacc_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vmacc_vv_i16m2_tum(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vmacc_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vmacc_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, int16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vmacc_vx_i16m2_tum(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vmacc_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vmacc_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vmacc_vv_i16m4_tum(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vmacc_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vmacc_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, int16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vmacc_vx_i16m4_tum(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vmacc_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vmacc_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, + vint16m8_t vs2, size_t vl) { return __riscv_vmacc_vv_i16m8_tum(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vmacc_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vmacc_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, int16_t rs1, + vint16m8_t vs2, size_t vl) { return __riscv_vmacc_vx_i16m8_tum(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vmacc_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vmacc_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vmacc_vv_i32mf2_tum(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vmacc_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vmacc_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vmacc_vx_i32mf2_tum(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vmacc_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint32m1_t 
test_vmacc_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vmacc_vv_i32m1_tum(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vmacc_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vmacc_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, int32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vmacc_vx_i32m1_tum(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vmacc_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vmacc_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vmacc_vv_i32m2_tum(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vmacc_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vmacc_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, int32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vmacc_vx_i32m2_tum(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vmacc_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vmacc_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vmacc_vv_i32m4_tum(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vmacc_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vmacc_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, int32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vmacc_vx_i32m4_tum(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vmacc_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vmacc_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, + vint32m8_t vs2, size_t vl) { return __riscv_vmacc_vv_i32m8_tum(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vmacc_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vmacc_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, int32_t rs1, + vint32m8_t vs2, size_t vl) { return __riscv_vmacc_vx_i32m8_tum(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vmacc_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vmacc_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, + vint64m1_t vs2, size_t vl) { return __riscv_vmacc_vv_i64m1_tum(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vmacc_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vmacc_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, int64_t rs1, + vint64m1_t vs2, size_t vl) { return __riscv_vmacc_vx_i64m1_tum(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vmacc_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vmacc_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, + vint64m2_t vs2, size_t vl) { return __riscv_vmacc_vv_i64m2_tum(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vmacc_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vmacc_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, int64_t rs1, + vint64m2_t vs2, size_t vl) { return __riscv_vmacc_vx_i64m2_tum(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vmacc_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vmacc_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, + vint64m4_t vs2, size_t vl) { return __riscv_vmacc_vv_i64m4_tum(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vmacc_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, 
int64_t rs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vmacc_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, int64_t rs1, + vint64m4_t vs2, size_t vl) { return __riscv_vmacc_vx_i64m4_tum(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vmacc_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vmacc_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, + vint64m8_t vs2, size_t vl) { return __riscv_vmacc_vv_i64m8_tum(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vmacc_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vmacc_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, int64_t rs1, + vint64m8_t vs2, size_t vl) { return __riscv_vmacc_vx_i64m8_tum(vm, vd, rs1, vs2, vl); } -vuint8mf8_t test_vmacc_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vmacc_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vmacc_vv_u8mf8_tum(vm, vd, vs1, vs2, vl); } -vuint8mf8_t test_vmacc_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vmacc_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vmacc_vx_u8mf8_tum(vm, vd, rs1, vs2, vl); } -vuint8mf4_t test_vmacc_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vmacc_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vmacc_vv_u8mf4_tum(vm, vd, vs1, vs2, vl); } -vuint8mf4_t test_vmacc_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vmacc_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vmacc_vx_u8mf4_tum(vm, vd, rs1, vs2, vl); } -vuint8mf2_t test_vmacc_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vmacc_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vmacc_vv_u8mf2_tum(vm, vd, vs1, vs2, vl); } -vuint8mf2_t test_vmacc_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vmacc_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vmacc_vx_u8mf2_tum(vm, vd, rs1, vs2, vl); } -vuint8m1_t test_vmacc_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vmacc_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vmacc_vv_u8m1_tum(vm, vd, vs1, vs2, vl); } -vuint8m1_t test_vmacc_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vmacc_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vmacc_vx_u8m1_tum(vm, vd, rs1, vs2, vl); } -vuint8m2_t test_vmacc_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vmacc_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vmacc_vv_u8m2_tum(vm, vd, vs1, vs2, vl); } -vuint8m2_t test_vmacc_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vmacc_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vmacc_vx_u8m2_tum(vm, vd, 
rs1, vs2, vl); } -vuint8m4_t test_vmacc_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vmacc_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vmacc_vv_u8m4_tum(vm, vd, vs1, vs2, vl); } -vuint8m4_t test_vmacc_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vmacc_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vmacc_vx_u8m4_tum(vm, vd, rs1, vs2, vl); } -vuint8m8_t test_vmacc_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vmacc_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vmacc_vv_u8m8_tum(vm, vd, vs1, vs2, vl); } -vuint8m8_t test_vmacc_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vmacc_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vmacc_vx_u8m8_tum(vm, vd, rs1, vs2, vl); } -vuint16mf4_t test_vmacc_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vmacc_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vmacc_vv_u16mf4_tum(vm, vd, vs1, vs2, vl); } -vuint16mf4_t test_vmacc_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vmacc_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + uint16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vmacc_vx_u16mf4_tum(vm, vd, rs1, vs2, vl); } -vuint16mf2_t test_vmacc_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vmacc_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vmacc_vv_u16mf2_tum(vm, vd, vs1, vs2, vl); } -vuint16mf2_t test_vmacc_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vmacc_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + uint16_t rs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vmacc_vx_u16mf2_tum(vm, vd, rs1, vs2, vl); } -vuint16m1_t test_vmacc_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vmacc_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vmacc_vv_u16m1_tum(vm, vd, vs1, vs2, vl); } -vuint16m1_t test_vmacc_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vmacc_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vmacc_vx_u16m1_tum(vm, vd, rs1, vs2, vl); } -vuint16m2_t test_vmacc_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vmacc_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vmacc_vv_u16m2_tum(vm, vd, vs1, vs2, vl); } -vuint16m2_t test_vmacc_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vmacc_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vmacc_vx_u16m2_tum(vm, vd, rs1, vs2, vl); } -vuint16m4_t test_vmacc_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t 
vl) { +vuint16m4_t test_vmacc_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vmacc_vv_u16m4_tum(vm, vd, vs1, vs2, vl); } -vuint16m4_t test_vmacc_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vmacc_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vmacc_vx_u16m4_tum(vm, vd, rs1, vs2, vl); } -vuint16m8_t test_vmacc_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vmacc_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs1, vuint16m8_t vs2, + size_t vl) { return __riscv_vmacc_vv_u16m8_tum(vm, vd, vs1, vs2, vl); } -vuint16m8_t test_vmacc_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vmacc_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vmacc_vx_u16m8_tum(vm, vd, rs1, vs2, vl); } -vuint32mf2_t test_vmacc_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vmacc_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vmacc_vv_u32mf2_tum(vm, vd, vs1, vs2, vl); } -vuint32mf2_t test_vmacc_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vmacc_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + uint32_t rs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vmacc_vx_u32mf2_tum(vm, vd, rs1, vs2, vl); } -vuint32m1_t test_vmacc_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vmacc_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vmacc_vv_u32m1_tum(vm, vd, vs1, vs2, vl); } -vuint32m1_t test_vmacc_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vmacc_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vmacc_vx_u32m1_tum(vm, vd, rs1, vs2, vl); } -vuint32m2_t test_vmacc_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vmacc_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vmacc_vv_u32m2_tum(vm, vd, vs1, vs2, vl); } -vuint32m2_t test_vmacc_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vmacc_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vmacc_vx_u32m2_tum(vm, vd, rs1, vs2, vl); } -vuint32m4_t test_vmacc_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vmacc_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vmacc_vv_u32m4_tum(vm, vd, vs1, vs2, vl); } -vuint32m4_t test_vmacc_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vmacc_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vmacc_vx_u32m4_tum(vm, vd, rs1, vs2, vl); } -vuint32m8_t test_vmacc_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vmacc_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs1, 
vuint32m8_t vs2, + size_t vl) { return __riscv_vmacc_vv_u32m8_tum(vm, vd, vs1, vs2, vl); } -vuint32m8_t test_vmacc_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vmacc_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, + vuint32m8_t vs2, size_t vl) { return __riscv_vmacc_vx_u32m8_tum(vm, vd, rs1, vs2, vl); } -vuint64m1_t test_vmacc_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vmacc_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs1, vuint64m1_t vs2, + size_t vl) { return __riscv_vmacc_vv_u64m1_tum(vm, vd, vs1, vs2, vl); } -vuint64m1_t test_vmacc_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vmacc_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, + vuint64m1_t vs2, size_t vl) { return __riscv_vmacc_vx_u64m1_tum(vm, vd, rs1, vs2, vl); } -vuint64m2_t test_vmacc_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vmacc_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs1, vuint64m2_t vs2, + size_t vl) { return __riscv_vmacc_vv_u64m2_tum(vm, vd, vs1, vs2, vl); } -vuint64m2_t test_vmacc_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vmacc_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, + vuint64m2_t vs2, size_t vl) { return __riscv_vmacc_vx_u64m2_tum(vm, vd, rs1, vs2, vl); } -vuint64m4_t test_vmacc_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vmacc_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs1, vuint64m4_t vs2, + size_t vl) { return __riscv_vmacc_vv_u64m4_tum(vm, vd, vs1, vs2, vl); } -vuint64m4_t test_vmacc_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vmacc_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, + vuint64m4_t vs2, size_t vl) { return __riscv_vmacc_vx_u64m4_tum(vm, vd, rs1, vs2, vl); } -vuint64m8_t test_vmacc_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vmacc_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs1, vuint64m8_t vs2, + size_t vl) { return __riscv_vmacc_vv_u64m8_tum(vm, vd, vs1, vs2, vl); } -vuint64m8_t test_vmacc_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vmacc_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, + vuint64m8_t vs2, size_t vl) { return __riscv_vmacc_vx_u64m8_tum(vm, vd, rs1, vs2, vl); } -vint8mf8_t test_vmacc_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vmacc_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vmacc_vv_i8mf8_tumu(vm, vd, vs1, vs2, vl); } -vint8mf8_t test_vmacc_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vmacc_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vmacc_vx_i8mf8_tumu(vm, vd, rs1, vs2, vl); } -vint8mf4_t test_vmacc_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vmacc_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vmacc_vv_i8mf4_tumu(vm, vd, vs1, vs2, vl); } -vint8mf4_t test_vmacc_vx_i8mf4_tumu(vbool32_t 
vm, vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vmacc_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, int8_t rs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vmacc_vx_i8mf4_tumu(vm, vd, rs1, vs2, vl); } -vint8mf2_t test_vmacc_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vmacc_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vmacc_vv_i8mf2_tumu(vm, vd, vs1, vs2, vl); } -vint8mf2_t test_vmacc_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vmacc_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vmacc_vx_i8mf2_tumu(vm, vd, rs1, vs2, vl); } -vint8m1_t test_vmacc_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vmacc_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, + vint8m1_t vs2, size_t vl) { return __riscv_vmacc_vv_i8m1_tumu(vm, vd, vs1, vs2, vl); } -vint8m1_t test_vmacc_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vmacc_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, int8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vmacc_vx_i8m1_tumu(vm, vd, rs1, vs2, vl); } -vint8m2_t test_vmacc_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vmacc_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, + vint8m2_t vs2, size_t vl) { return __riscv_vmacc_vv_i8m2_tumu(vm, vd, vs1, vs2, vl); } -vint8m2_t test_vmacc_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vmacc_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, int8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vmacc_vx_i8m2_tumu(vm, vd, rs1, vs2, vl); } -vint8m4_t test_vmacc_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vmacc_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, + vint8m4_t vs2, size_t vl) { return __riscv_vmacc_vv_i8m4_tumu(vm, vd, vs1, vs2, vl); } -vint8m4_t test_vmacc_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vmacc_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, int8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vmacc_vx_i8m4_tumu(vm, vd, rs1, vs2, vl); } -vint8m8_t test_vmacc_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vmacc_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, + vint8m8_t vs2, size_t vl) { return __riscv_vmacc_vv_i8m8_tumu(vm, vd, vs1, vs2, vl); } -vint8m8_t test_vmacc_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vmacc_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, int8_t rs1, + vint8m8_t vs2, size_t vl) { return __riscv_vmacc_vx_i8m8_tumu(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vmacc_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vmacc_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vmacc_vv_i16mf4_tumu(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vmacc_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vmacc_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vmacc_vx_i16mf4_tumu(vm, vd, rs1, vs2, vl); } -vint16mf2_t 
test_vmacc_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vmacc_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vmacc_vv_i16mf2_tumu(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vmacc_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vmacc_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vmacc_vx_i16mf2_tumu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vmacc_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vmacc_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vmacc_vv_i16m1_tumu(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vmacc_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vmacc_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vmacc_vx_i16m1_tumu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vmacc_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vmacc_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vmacc_vv_i16m2_tumu(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vmacc_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vmacc_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vmacc_vx_i16m2_tumu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vmacc_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vmacc_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vmacc_vv_i16m4_tumu(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vmacc_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vmacc_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vmacc_vx_i16m4_tumu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vmacc_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vmacc_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, + vint16m8_t vs2, size_t vl) { return __riscv_vmacc_vv_i16m8_tumu(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vmacc_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vmacc_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int16_t rs1, + vint16m8_t vs2, size_t vl) { return __riscv_vmacc_vx_i16m8_tumu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vmacc_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vmacc_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vmacc_vv_i32mf2_tumu(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vmacc_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vmacc_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vmacc_vx_i32mf2_tumu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vmacc_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vmacc_vv_i32m1_tumu(vbool32_t vm, 
vint32m1_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vmacc_vv_i32m1_tumu(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vmacc_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vmacc_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vmacc_vx_i32m1_tumu(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vmacc_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vmacc_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vmacc_vv_i32m2_tumu(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vmacc_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vmacc_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vmacc_vx_i32m2_tumu(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vmacc_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vmacc_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vmacc_vv_i32m4_tumu(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vmacc_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vmacc_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vmacc_vx_i32m4_tumu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vmacc_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vmacc_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, + vint32m8_t vs2, size_t vl) { return __riscv_vmacc_vv_i32m8_tumu(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vmacc_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vmacc_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int32_t rs1, + vint32m8_t vs2, size_t vl) { return __riscv_vmacc_vx_i32m8_tumu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vmacc_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vmacc_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, + vint64m1_t vs2, size_t vl) { return __riscv_vmacc_vv_i64m1_tumu(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vmacc_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vmacc_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int64_t rs1, + vint64m1_t vs2, size_t vl) { return __riscv_vmacc_vx_i64m1_tumu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vmacc_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vmacc_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, + vint64m2_t vs2, size_t vl) { return __riscv_vmacc_vv_i64m2_tumu(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vmacc_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vmacc_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int64_t rs1, + vint64m2_t vs2, size_t vl) { return __riscv_vmacc_vx_i64m2_tumu(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vmacc_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vmacc_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, + vint64m4_t vs2, size_t vl) { return __riscv_vmacc_vv_i64m4_tumu(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vmacc_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, 
int64_t rs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vmacc_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, int64_t rs1, + vint64m4_t vs2, size_t vl) { return __riscv_vmacc_vx_i64m4_tumu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vmacc_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vmacc_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, + vint64m8_t vs2, size_t vl) { return __riscv_vmacc_vv_i64m8_tumu(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vmacc_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vmacc_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int64_t rs1, + vint64m8_t vs2, size_t vl) { return __riscv_vmacc_vx_i64m8_tumu(vm, vd, rs1, vs2, vl); } -vuint8mf8_t test_vmacc_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vmacc_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vmacc_vv_u8mf8_tumu(vm, vd, vs1, vs2, vl); } -vuint8mf8_t test_vmacc_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vmacc_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vmacc_vx_u8mf8_tumu(vm, vd, rs1, vs2, vl); } -vuint8mf4_t test_vmacc_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vmacc_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vmacc_vv_u8mf4_tumu(vm, vd, vs1, vs2, vl); } -vuint8mf4_t test_vmacc_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vmacc_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vmacc_vx_u8mf4_tumu(vm, vd, rs1, vs2, vl); } -vuint8mf2_t test_vmacc_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vmacc_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vmacc_vv_u8mf2_tumu(vm, vd, vs1, vs2, vl); } -vuint8mf2_t test_vmacc_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vmacc_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vmacc_vx_u8mf2_tumu(vm, vd, rs1, vs2, vl); } -vuint8m1_t test_vmacc_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vmacc_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vmacc_vv_u8m1_tumu(vm, vd, vs1, vs2, vl); } -vuint8m1_t test_vmacc_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vmacc_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vmacc_vx_u8m1_tumu(vm, vd, rs1, vs2, vl); } -vuint8m2_t test_vmacc_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vmacc_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vmacc_vv_u8m2_tumu(vm, vd, vs1, vs2, vl); } -vuint8m2_t test_vmacc_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vmacc_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { 
   return __riscv_vmacc_vx_u8m2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint8m4_t test_vmacc_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vmacc_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1,
+                                   vuint8m4_t vs2, size_t vl) {
   return __riscv_vmacc_vv_u8m4_tumu(vm, vd, vs1, vs2, vl);
 }
 
[ ... the remaining vmacc _tumu tests (u8m4 vx through u64m8 vx) and _mu tests (i8mf8 vv through u32m8 vv) are re-wrapped identically ... ]
 
-vuint32m8_t test_vmacc_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vmacc_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1,
+                                   vuint32m8_t vs2, size_t vl) {
   return __riscv_vmacc_vx_u32m8_mu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m1_t test_vmacc_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vmacc_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs1, vuint64m1_t vs2,
+                                   size_t vl) {
   return __riscv_vmacc_vv_u64m1_mu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m1_t test_vmacc_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vmacc_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1,
+                                   vuint64m1_t vs2, size_t vl) {
   return __riscv_vmacc_vx_u64m1_mu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m2_t test_vmacc_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vmacc_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs1, vuint64m2_t vs2,
+                                   size_t vl) {
   return __riscv_vmacc_vv_u64m2_mu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m2_t test_vmacc_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vmacc_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1,
+                                   vuint64m2_t vs2, size_t vl) {
   return __riscv_vmacc_vx_u64m2_mu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m4_t test_vmacc_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vmacc_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs1, vuint64m4_t vs2,
+                                   size_t vl) {
   return __riscv_vmacc_vv_u64m4_mu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m4_t test_vmacc_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vmacc_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1,
+                                   vuint64m4_t vs2, size_t vl) {
   return __riscv_vmacc_vx_u64m4_mu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m8_t test_vmacc_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vmacc_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1,
+                                   vuint64m8_t vs2, size_t vl) {
   return __riscv_vmacc_vv_u64m8_mu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m8_t test_vmacc_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vmacc_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1,
+                                   vuint64m8_t vs2, size_t vl) {
   return __riscv_vmacc_vx_u64m8_mu(vm, vd, rs1, vs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmadd.c b/auto-generated/policy_funcs/llvm-api-tests/vmadd.c
index f63e03c22..8cbd12ce1 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmadd.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmadd.c
@@ -6,1410 +6,1828 @@
 
 #include <riscv_vector.h>
 
-vint8mf8_t test_vmadd_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) {
+vint8mf8_t test_vmadd_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2,
+                                  size_t vl) {
   return __riscv_vmadd_vv_i8mf8_tu(vd, vs1, vs2, vl);
 }
 
-vint8mf8_t test_vmadd_vx_i8mf8_tu(vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) {
+vint8mf8_t test_vmadd_vx_i8mf8_tu(vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2,
+                                  size_t vl) {
   return __riscv_vmadd_vx_i8mf8_tu(vd, rs1, vs2, vl);
 }
 
-vint8mf4_t test_vmadd_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) {
+vint8mf4_t test_vmadd_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2,
+                                  size_t vl) {
   return __riscv_vmadd_vv_i8mf4_tu(vd, vs1, vs2, vl);
 }
 
-vint8mf4_t test_vmadd_vx_i8mf4_tu(vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) {
+vint8mf4_t test_vmadd_vx_i8mf4_tu(vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2,
+                                  size_t vl) {
   return __riscv_vmadd_vx_i8mf4_tu(vd, rs1, vs2, vl);
 }
 
[ ... the remaining vmadd _tu tests (i8mf2 vv through u64m8 vx) are re-wrapped identically ... ]
 
-vint8mf8_t test_vmadd_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) {
+vint8mf8_t test_vmadd_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1,
+                                   vint8mf8_t vs2, size_t vl) {
   return __riscv_vmadd_vv_i8mf8_tum(vm, vd, vs1, vs2, vl);
 }
 
[ ... the vmadd _tum tests (i8mf8 vx through u16m1 vx) are re-wrapped identically ... ]
 
-vuint16m2_t test_vmadd_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vmadd_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs1, vuint16m2_t vs2,
+                                    size_t vl) {
   return __riscv_vmadd_vv_u16m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint16m2_t test_vmadd_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, uint16_t rs1,
vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vmadd_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vmadd_vx_u16m2_tum(vm, vd, rs1, vs2, vl); } -vuint16m4_t test_vmadd_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vmadd_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vmadd_vv_u16m4_tum(vm, vd, vs1, vs2, vl); } -vuint16m4_t test_vmadd_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vmadd_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vmadd_vx_u16m4_tum(vm, vd, rs1, vs2, vl); } -vuint16m8_t test_vmadd_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vmadd_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs1, vuint16m8_t vs2, + size_t vl) { return __riscv_vmadd_vv_u16m8_tum(vm, vd, vs1, vs2, vl); } -vuint16m8_t test_vmadd_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vmadd_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vmadd_vx_u16m8_tum(vm, vd, rs1, vs2, vl); } -vuint32mf2_t test_vmadd_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vmadd_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32mf2_tum(vm, vd, vs1, vs2, vl); } -vuint32mf2_t test_vmadd_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vmadd_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + uint32_t rs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vmadd_vx_u32mf2_tum(vm, vd, rs1, vs2, vl); } -vuint32m1_t test_vmadd_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vmadd_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32m1_tum(vm, vd, vs1, vs2, vl); } -vuint32m1_t test_vmadd_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vmadd_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vmadd_vx_u32m1_tum(vm, vd, rs1, vs2, vl); } -vuint32m2_t test_vmadd_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vmadd_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32m2_tum(vm, vd, vs1, vs2, vl); } -vuint32m2_t test_vmadd_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vmadd_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vmadd_vx_u32m2_tum(vm, vd, rs1, vs2, vl); } -vuint32m4_t test_vmadd_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vmadd_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32m4_tum(vm, vd, vs1, vs2, vl); } -vuint32m4_t test_vmadd_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vmadd_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, 
uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vmadd_vx_u32m4_tum(vm, vd, rs1, vs2, vl); } -vuint32m8_t test_vmadd_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vmadd_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs1, vuint32m8_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32m8_tum(vm, vd, vs1, vs2, vl); } -vuint32m8_t test_vmadd_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vmadd_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, + vuint32m8_t vs2, size_t vl) { return __riscv_vmadd_vx_u32m8_tum(vm, vd, rs1, vs2, vl); } -vuint64m1_t test_vmadd_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vmadd_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs1, vuint64m1_t vs2, + size_t vl) { return __riscv_vmadd_vv_u64m1_tum(vm, vd, vs1, vs2, vl); } -vuint64m1_t test_vmadd_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vmadd_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, + vuint64m1_t vs2, size_t vl) { return __riscv_vmadd_vx_u64m1_tum(vm, vd, rs1, vs2, vl); } -vuint64m2_t test_vmadd_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vmadd_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs1, vuint64m2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u64m2_tum(vm, vd, vs1, vs2, vl); } -vuint64m2_t test_vmadd_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vmadd_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, + vuint64m2_t vs2, size_t vl) { return __riscv_vmadd_vx_u64m2_tum(vm, vd, rs1, vs2, vl); } -vuint64m4_t test_vmadd_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vmadd_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs1, vuint64m4_t vs2, + size_t vl) { return __riscv_vmadd_vv_u64m4_tum(vm, vd, vs1, vs2, vl); } -vuint64m4_t test_vmadd_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vmadd_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, + vuint64m4_t vs2, size_t vl) { return __riscv_vmadd_vx_u64m4_tum(vm, vd, rs1, vs2, vl); } -vuint64m8_t test_vmadd_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vmadd_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs1, vuint64m8_t vs2, + size_t vl) { return __riscv_vmadd_vv_u64m8_tum(vm, vd, vs1, vs2, vl); } -vuint64m8_t test_vmadd_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vmadd_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, + vuint64m8_t vs2, size_t vl) { return __riscv_vmadd_vx_u64m8_tum(vm, vd, rs1, vs2, vl); } -vint8mf8_t test_vmadd_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vmadd_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vmadd_vv_i8mf8_tumu(vm, vd, vs1, vs2, vl); } -vint8mf8_t test_vmadd_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vmadd_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vmadd_vx_i8mf8_tumu(vm, vd, rs1, vs2, vl); } -vint8mf4_t 
test_vmadd_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vmadd_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vmadd_vv_i8mf4_tumu(vm, vd, vs1, vs2, vl); } -vint8mf4_t test_vmadd_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vmadd_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, int8_t rs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vmadd_vx_i8mf4_tumu(vm, vd, rs1, vs2, vl); } -vint8mf2_t test_vmadd_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vmadd_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vmadd_vv_i8mf2_tumu(vm, vd, vs1, vs2, vl); } -vint8mf2_t test_vmadd_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vmadd_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vmadd_vx_i8mf2_tumu(vm, vd, rs1, vs2, vl); } -vint8m1_t test_vmadd_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vmadd_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, + vint8m1_t vs2, size_t vl) { return __riscv_vmadd_vv_i8m1_tumu(vm, vd, vs1, vs2, vl); } -vint8m1_t test_vmadd_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vmadd_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, int8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vmadd_vx_i8m1_tumu(vm, vd, rs1, vs2, vl); } -vint8m2_t test_vmadd_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vmadd_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, + vint8m2_t vs2, size_t vl) { return __riscv_vmadd_vv_i8m2_tumu(vm, vd, vs1, vs2, vl); } -vint8m2_t test_vmadd_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vmadd_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, int8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vmadd_vx_i8m2_tumu(vm, vd, rs1, vs2, vl); } -vint8m4_t test_vmadd_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vmadd_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, + vint8m4_t vs2, size_t vl) { return __riscv_vmadd_vv_i8m4_tumu(vm, vd, vs1, vs2, vl); } -vint8m4_t test_vmadd_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vmadd_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, int8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vmadd_vx_i8m4_tumu(vm, vd, rs1, vs2, vl); } -vint8m8_t test_vmadd_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vmadd_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, + vint8m8_t vs2, size_t vl) { return __riscv_vmadd_vv_i8m8_tumu(vm, vd, vs1, vs2, vl); } -vint8m8_t test_vmadd_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vmadd_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, int8_t rs1, + vint8m8_t vs2, size_t vl) { return __riscv_vmadd_vx_i8m8_tumu(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vmadd_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vmadd_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vmadd_vv_i16mf4_tumu(vm, vd, vs1, vs2, vl); } 
-vint16mf4_t test_vmadd_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vmadd_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vmadd_vx_i16mf4_tumu(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vmadd_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vmadd_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vmadd_vv_i16mf2_tumu(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vmadd_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vmadd_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vmadd_vx_i16mf2_tumu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vmadd_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vmadd_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vmadd_vv_i16m1_tumu(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vmadd_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vmadd_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vmadd_vx_i16m1_tumu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vmadd_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vmadd_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vmadd_vv_i16m2_tumu(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vmadd_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vmadd_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vmadd_vx_i16m2_tumu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vmadd_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vmadd_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vmadd_vv_i16m4_tumu(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vmadd_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vmadd_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vmadd_vx_i16m4_tumu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vmadd_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vmadd_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, + vint16m8_t vs2, size_t vl) { return __riscv_vmadd_vv_i16m8_tumu(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vmadd_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vmadd_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int16_t rs1, + vint16m8_t vs2, size_t vl) { return __riscv_vmadd_vx_i16m8_tumu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vmadd_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vmadd_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vmadd_vv_i32mf2_tumu(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vmadd_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t 
test_vmadd_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vmadd_vx_i32mf2_tumu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vmadd_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vmadd_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vmadd_vv_i32m1_tumu(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vmadd_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vmadd_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vmadd_vx_i32m1_tumu(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vmadd_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vmadd_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vmadd_vv_i32m2_tumu(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vmadd_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vmadd_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vmadd_vx_i32m2_tumu(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vmadd_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vmadd_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vmadd_vv_i32m4_tumu(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vmadd_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vmadd_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vmadd_vx_i32m4_tumu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vmadd_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vmadd_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, + vint32m8_t vs2, size_t vl) { return __riscv_vmadd_vv_i32m8_tumu(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vmadd_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vmadd_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int32_t rs1, + vint32m8_t vs2, size_t vl) { return __riscv_vmadd_vx_i32m8_tumu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vmadd_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vmadd_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, + vint64m1_t vs2, size_t vl) { return __riscv_vmadd_vv_i64m1_tumu(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vmadd_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vmadd_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int64_t rs1, + vint64m1_t vs2, size_t vl) { return __riscv_vmadd_vx_i64m1_tumu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vmadd_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vmadd_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, + vint64m2_t vs2, size_t vl) { return __riscv_vmadd_vv_i64m2_tumu(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vmadd_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vmadd_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int64_t rs1, + vint64m2_t vs2, size_t vl) { return __riscv_vmadd_vx_i64m2_tumu(vm, vd, rs1, vs2, vl); } -vint64m4_t 
test_vmadd_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vmadd_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, + vint64m4_t vs2, size_t vl) { return __riscv_vmadd_vv_i64m4_tumu(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vmadd_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, int64_t rs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vmadd_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, int64_t rs1, + vint64m4_t vs2, size_t vl) { return __riscv_vmadd_vx_i64m4_tumu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vmadd_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vmadd_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, + vint64m8_t vs2, size_t vl) { return __riscv_vmadd_vv_i64m8_tumu(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vmadd_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vmadd_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int64_t rs1, + vint64m8_t vs2, size_t vl) { return __riscv_vmadd_vx_i64m8_tumu(vm, vd, rs1, vs2, vl); } -vuint8mf8_t test_vmadd_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vmadd_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vmadd_vv_u8mf8_tumu(vm, vd, vs1, vs2, vl); } -vuint8mf8_t test_vmadd_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vmadd_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vmadd_vx_u8mf8_tumu(vm, vd, rs1, vs2, vl); } -vuint8mf4_t test_vmadd_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vmadd_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vmadd_vv_u8mf4_tumu(vm, vd, vs1, vs2, vl); } -vuint8mf4_t test_vmadd_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vmadd_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vmadd_vx_u8mf4_tumu(vm, vd, rs1, vs2, vl); } -vuint8mf2_t test_vmadd_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vmadd_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u8mf2_tumu(vm, vd, vs1, vs2, vl); } -vuint8mf2_t test_vmadd_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vmadd_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vmadd_vx_u8mf2_tumu(vm, vd, rs1, vs2, vl); } -vuint8m1_t test_vmadd_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vmadd_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vmadd_vv_u8m1_tumu(vm, vd, vs1, vs2, vl); } -vuint8m1_t test_vmadd_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vmadd_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vmadd_vx_u8m1_tumu(vm, vd, rs1, vs2, vl); } -vuint8m2_t test_vmadd_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vmadd_vv_u8m2_tumu(vbool4_t vm, 
vuint8m2_t vd, vuint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vmadd_vv_u8m2_tumu(vm, vd, vs1, vs2, vl); } -vuint8m2_t test_vmadd_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vmadd_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vmadd_vx_u8m2_tumu(vm, vd, rs1, vs2, vl); } -vuint8m4_t test_vmadd_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vmadd_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vmadd_vv_u8m4_tumu(vm, vd, vs1, vs2, vl); } -vuint8m4_t test_vmadd_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vmadd_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vmadd_vx_u8m4_tumu(vm, vd, rs1, vs2, vl); } -vuint8m8_t test_vmadd_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vmadd_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vmadd_vv_u8m8_tumu(vm, vd, vs1, vs2, vl); } -vuint8m8_t test_vmadd_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vmadd_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vmadd_vx_u8m8_tumu(vm, vd, rs1, vs2, vl); } -vuint16mf4_t test_vmadd_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vmadd_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vmadd_vv_u16mf4_tumu(vm, vd, vs1, vs2, vl); } -vuint16mf4_t test_vmadd_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vmadd_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + uint16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vmadd_vx_u16mf4_tumu(vm, vd, rs1, vs2, vl); } -vuint16mf2_t test_vmadd_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vmadd_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u16mf2_tumu(vm, vd, vs1, vs2, vl); } -vuint16mf2_t test_vmadd_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vmadd_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + uint16_t rs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vmadd_vx_u16mf2_tumu(vm, vd, rs1, vs2, vl); } -vuint16m1_t test_vmadd_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vmadd_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vmadd_vv_u16m1_tumu(vm, vd, vs1, vs2, vl); } -vuint16m1_t test_vmadd_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vmadd_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vmadd_vx_u16m1_tumu(vm, vd, rs1, vs2, vl); } -vuint16m2_t test_vmadd_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vmadd_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u16m2_tumu(vm, 
vd, vs1, vs2, vl); } -vuint16m2_t test_vmadd_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vmadd_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vmadd_vx_u16m2_tumu(vm, vd, rs1, vs2, vl); } -vuint16m4_t test_vmadd_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vmadd_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vmadd_vv_u16m4_tumu(vm, vd, vs1, vs2, vl); } -vuint16m4_t test_vmadd_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vmadd_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vmadd_vx_u16m4_tumu(vm, vd, rs1, vs2, vl); } -vuint16m8_t test_vmadd_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vmadd_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs1, vuint16m8_t vs2, + size_t vl) { return __riscv_vmadd_vv_u16m8_tumu(vm, vd, vs1, vs2, vl); } -vuint16m8_t test_vmadd_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vmadd_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vmadd_vx_u16m8_tumu(vm, vd, rs1, vs2, vl); } -vuint32mf2_t test_vmadd_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vmadd_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32mf2_tumu(vm, vd, vs1, vs2, vl); } -vuint32mf2_t test_vmadd_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vmadd_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + uint32_t rs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vmadd_vx_u32mf2_tumu(vm, vd, rs1, vs2, vl); } -vuint32m1_t test_vmadd_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vmadd_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32m1_tumu(vm, vd, vs1, vs2, vl); } -vuint32m1_t test_vmadd_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vmadd_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vmadd_vx_u32m1_tumu(vm, vd, rs1, vs2, vl); } -vuint32m2_t test_vmadd_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vmadd_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32m2_tumu(vm, vd, vs1, vs2, vl); } -vuint32m2_t test_vmadd_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vmadd_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vmadd_vx_u32m2_tumu(vm, vd, rs1, vs2, vl); } -vuint32m4_t test_vmadd_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vmadd_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32m4_tumu(vm, vd, vs1, vs2, vl); } -vuint32m4_t 
test_vmadd_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vmadd_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vmadd_vx_u32m4_tumu(vm, vd, rs1, vs2, vl); } -vuint32m8_t test_vmadd_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vmadd_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs1, vuint32m8_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32m8_tumu(vm, vd, vs1, vs2, vl); } -vuint32m8_t test_vmadd_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vmadd_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, + vuint32m8_t vs2, size_t vl) { return __riscv_vmadd_vx_u32m8_tumu(vm, vd, rs1, vs2, vl); } -vuint64m1_t test_vmadd_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vmadd_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs1, vuint64m1_t vs2, + size_t vl) { return __riscv_vmadd_vv_u64m1_tumu(vm, vd, vs1, vs2, vl); } -vuint64m1_t test_vmadd_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vmadd_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, + vuint64m1_t vs2, size_t vl) { return __riscv_vmadd_vx_u64m1_tumu(vm, vd, rs1, vs2, vl); } -vuint64m2_t test_vmadd_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vmadd_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs1, vuint64m2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u64m2_tumu(vm, vd, vs1, vs2, vl); } -vuint64m2_t test_vmadd_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vmadd_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, + vuint64m2_t vs2, size_t vl) { return __riscv_vmadd_vx_u64m2_tumu(vm, vd, rs1, vs2, vl); } -vuint64m4_t test_vmadd_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vmadd_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs1, vuint64m4_t vs2, + size_t vl) { return __riscv_vmadd_vv_u64m4_tumu(vm, vd, vs1, vs2, vl); } -vuint64m4_t test_vmadd_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vmadd_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, + vuint64m4_t vs2, size_t vl) { return __riscv_vmadd_vx_u64m4_tumu(vm, vd, rs1, vs2, vl); } -vuint64m8_t test_vmadd_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vmadd_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs1, vuint64m8_t vs2, + size_t vl) { return __riscv_vmadd_vv_u64m8_tumu(vm, vd, vs1, vs2, vl); } -vuint64m8_t test_vmadd_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vmadd_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, + vuint64m8_t vs2, size_t vl) { return __riscv_vmadd_vx_u64m8_tumu(vm, vd, rs1, vs2, vl); } -vint8mf8_t test_vmadd_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vmadd_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vmadd_vv_i8mf8_mu(vm, vd, vs1, vs2, vl); } -vint8mf8_t test_vmadd_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { 
+vint8mf8_t test_vmadd_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vmadd_vx_i8mf8_mu(vm, vd, rs1, vs2, vl); } -vint8mf4_t test_vmadd_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vmadd_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vmadd_vv_i8mf4_mu(vm, vd, vs1, vs2, vl); } -vint8mf4_t test_vmadd_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vmadd_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, int8_t rs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vmadd_vx_i8mf4_mu(vm, vd, rs1, vs2, vl); } -vint8mf2_t test_vmadd_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vmadd_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vmadd_vv_i8mf2_mu(vm, vd, vs1, vs2, vl); } -vint8mf2_t test_vmadd_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vmadd_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vmadd_vx_i8mf2_mu(vm, vd, rs1, vs2, vl); } -vint8m1_t test_vmadd_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vmadd_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, + vint8m1_t vs2, size_t vl) { return __riscv_vmadd_vv_i8m1_mu(vm, vd, vs1, vs2, vl); } -vint8m1_t test_vmadd_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vmadd_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, int8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vmadd_vx_i8m1_mu(vm, vd, rs1, vs2, vl); } -vint8m2_t test_vmadd_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vmadd_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, + vint8m2_t vs2, size_t vl) { return __riscv_vmadd_vv_i8m2_mu(vm, vd, vs1, vs2, vl); } -vint8m2_t test_vmadd_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vmadd_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, int8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vmadd_vx_i8m2_mu(vm, vd, rs1, vs2, vl); } -vint8m4_t test_vmadd_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vmadd_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, + vint8m4_t vs2, size_t vl) { return __riscv_vmadd_vv_i8m4_mu(vm, vd, vs1, vs2, vl); } -vint8m4_t test_vmadd_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vmadd_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, int8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vmadd_vx_i8m4_mu(vm, vd, rs1, vs2, vl); } -vint8m8_t test_vmadd_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vmadd_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, + vint8m8_t vs2, size_t vl) { return __riscv_vmadd_vv_i8m8_mu(vm, vd, vs1, vs2, vl); } -vint8m8_t test_vmadd_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vmadd_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, int8_t rs1, + vint8m8_t vs2, size_t vl) { return __riscv_vmadd_vx_i8m8_mu(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vmadd_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vmadd_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t 
vs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vmadd_vv_i16mf4_mu(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vmadd_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vmadd_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vmadd_vx_i16mf4_mu(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vmadd_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vmadd_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vmadd_vv_i16mf2_mu(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vmadd_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vmadd_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vmadd_vx_i16mf2_mu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vmadd_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vmadd_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vmadd_vv_i16m1_mu(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vmadd_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vmadd_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, int16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vmadd_vx_i16m1_mu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vmadd_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vmadd_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vmadd_vv_i16m2_mu(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vmadd_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vmadd_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, int16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vmadd_vx_i16m2_mu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vmadd_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vmadd_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vmadd_vv_i16m4_mu(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vmadd_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vmadd_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, int16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vmadd_vx_i16m4_mu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vmadd_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vmadd_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, + vint16m8_t vs2, size_t vl) { return __riscv_vmadd_vv_i16m8_mu(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vmadd_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vmadd_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, int16_t rs1, + vint16m8_t vs2, size_t vl) { return __riscv_vmadd_vx_i16m8_mu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vmadd_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vmadd_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vmadd_vv_i32mf2_mu(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vmadd_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t 
test_vmadd_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vmadd_vx_i32mf2_mu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vmadd_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vmadd_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vmadd_vv_i32m1_mu(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vmadd_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vmadd_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, int32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vmadd_vx_i32m1_mu(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vmadd_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vmadd_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vmadd_vv_i32m2_mu(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vmadd_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vmadd_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, int32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vmadd_vx_i32m2_mu(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vmadd_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vmadd_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vmadd_vv_i32m4_mu(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vmadd_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vmadd_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, int32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vmadd_vx_i32m4_mu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vmadd_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vmadd_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, + vint32m8_t vs2, size_t vl) { return __riscv_vmadd_vv_i32m8_mu(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vmadd_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vmadd_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, int32_t rs1, + vint32m8_t vs2, size_t vl) { return __riscv_vmadd_vx_i32m8_mu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vmadd_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vmadd_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, + vint64m1_t vs2, size_t vl) { return __riscv_vmadd_vv_i64m1_mu(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vmadd_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vmadd_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, int64_t rs1, + vint64m1_t vs2, size_t vl) { return __riscv_vmadd_vx_i64m1_mu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vmadd_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vmadd_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, + vint64m2_t vs2, size_t vl) { return __riscv_vmadd_vv_i64m2_mu(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vmadd_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vmadd_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, int64_t rs1, + vint64m2_t vs2, size_t vl) { return __riscv_vmadd_vx_i64m2_mu(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vmadd_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t 
vl) { +vint64m4_t test_vmadd_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, + vint64m4_t vs2, size_t vl) { return __riscv_vmadd_vv_i64m4_mu(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vmadd_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, int64_t rs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vmadd_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, int64_t rs1, + vint64m4_t vs2, size_t vl) { return __riscv_vmadd_vx_i64m4_mu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vmadd_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vmadd_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, + vint64m8_t vs2, size_t vl) { return __riscv_vmadd_vv_i64m8_mu(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vmadd_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vmadd_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, int64_t rs1, + vint64m8_t vs2, size_t vl) { return __riscv_vmadd_vx_i64m8_mu(vm, vd, rs1, vs2, vl); } -vuint8mf8_t test_vmadd_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vmadd_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vmadd_vv_u8mf8_mu(vm, vd, vs1, vs2, vl); } -vuint8mf8_t test_vmadd_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vmadd_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vmadd_vx_u8mf8_mu(vm, vd, rs1, vs2, vl); } -vuint8mf4_t test_vmadd_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vmadd_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vmadd_vv_u8mf4_mu(vm, vd, vs1, vs2, vl); } -vuint8mf4_t test_vmadd_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vmadd_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vmadd_vx_u8mf4_mu(vm, vd, rs1, vs2, vl); } -vuint8mf2_t test_vmadd_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vmadd_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u8mf2_mu(vm, vd, vs1, vs2, vl); } -vuint8mf2_t test_vmadd_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vmadd_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vmadd_vx_u8mf2_mu(vm, vd, rs1, vs2, vl); } -vuint8m1_t test_vmadd_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vmadd_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vmadd_vv_u8m1_mu(vm, vd, vs1, vs2, vl); } -vuint8m1_t test_vmadd_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vmadd_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vmadd_vx_u8m1_mu(vm, vd, rs1, vs2, vl); } -vuint8m2_t test_vmadd_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vmadd_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vmadd_vv_u8m2_mu(vm, vd, vs1, vs2, vl); } -vuint8m2_t test_vmadd_vx_u8m2_mu(vbool4_t vm, 
vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vmadd_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vmadd_vx_u8m2_mu(vm, vd, rs1, vs2, vl); } -vuint8m4_t test_vmadd_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vmadd_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vmadd_vv_u8m4_mu(vm, vd, vs1, vs2, vl); } -vuint8m4_t test_vmadd_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vmadd_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vmadd_vx_u8m4_mu(vm, vd, rs1, vs2, vl); } -vuint8m8_t test_vmadd_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vmadd_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vmadd_vv_u8m8_mu(vm, vd, vs1, vs2, vl); } -vuint8m8_t test_vmadd_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vmadd_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vmadd_vx_u8m8_mu(vm, vd, rs1, vs2, vl); } -vuint16mf4_t test_vmadd_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vmadd_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vmadd_vv_u16mf4_mu(vm, vd, vs1, vs2, vl); } -vuint16mf4_t test_vmadd_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vmadd_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + uint16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vmadd_vx_u16mf4_mu(vm, vd, rs1, vs2, vl); } -vuint16mf2_t test_vmadd_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vmadd_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u16mf2_mu(vm, vd, vs1, vs2, vl); } -vuint16mf2_t test_vmadd_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vmadd_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + uint16_t rs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vmadd_vx_u16mf2_mu(vm, vd, rs1, vs2, vl); } -vuint16m1_t test_vmadd_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vmadd_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vmadd_vv_u16m1_mu(vm, vd, vs1, vs2, vl); } -vuint16m1_t test_vmadd_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vmadd_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vmadd_vx_u16m1_mu(vm, vd, rs1, vs2, vl); } -vuint16m2_t test_vmadd_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vmadd_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vmadd_vv_u16m2_mu(vm, vd, vs1, vs2, vl); } -vuint16m2_t test_vmadd_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vmadd_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { 
return __riscv_vmadd_vx_u16m2_mu(vm, vd, rs1, vs2, vl); } -vuint16m4_t test_vmadd_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vmadd_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vmadd_vv_u16m4_mu(vm, vd, vs1, vs2, vl); } -vuint16m4_t test_vmadd_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vmadd_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vmadd_vx_u16m4_mu(vm, vd, rs1, vs2, vl); } -vuint16m8_t test_vmadd_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vmadd_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vmadd_vv_u16m8_mu(vm, vd, vs1, vs2, vl); } -vuint16m8_t test_vmadd_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vmadd_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vmadd_vx_u16m8_mu(vm, vd, rs1, vs2, vl); } -vuint32mf2_t test_vmadd_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vmadd_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32mf2_mu(vm, vd, vs1, vs2, vl); } -vuint32mf2_t test_vmadd_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vmadd_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + uint32_t rs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vmadd_vx_u32mf2_mu(vm, vd, rs1, vs2, vl); } -vuint32m1_t test_vmadd_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vmadd_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32m1_mu(vm, vd, vs1, vs2, vl); } -vuint32m1_t test_vmadd_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vmadd_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vmadd_vx_u32m1_mu(vm, vd, rs1, vs2, vl); } -vuint32m2_t test_vmadd_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vmadd_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vmadd_vv_u32m2_mu(vm, vd, vs1, vs2, vl); } -vuint32m2_t test_vmadd_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vmadd_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vmadd_vx_u32m2_mu(vm, vd, rs1, vs2, vl); } -vuint32m4_t test_vmadd_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vmadd_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vmadd_vv_u32m4_mu(vm, vd, vs1, vs2, vl); } -vuint32m4_t test_vmadd_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vmadd_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vmadd_vx_u32m4_mu(vm, vd, rs1, vs2, vl); } -vuint32m8_t test_vmadd_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs1, 
vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vmadd_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs1,
+                                   vuint32m8_t vs2, size_t vl) {
   return __riscv_vmadd_vv_u32m8_mu(vm, vd, vs1, vs2, vl);
 }
 
-vuint32m8_t test_vmadd_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vmadd_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1,
+                                   vuint32m8_t vs2, size_t vl) {
   return __riscv_vmadd_vx_u32m8_mu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m1_t test_vmadd_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vmadd_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs1, vuint64m1_t vs2,
+                                   size_t vl) {
   return __riscv_vmadd_vv_u64m1_mu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m1_t test_vmadd_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vmadd_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1,
+                                   vuint64m1_t vs2, size_t vl) {
   return __riscv_vmadd_vx_u64m1_mu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m2_t test_vmadd_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vmadd_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs1, vuint64m2_t vs2,
+                                   size_t vl) {
   return __riscv_vmadd_vv_u64m2_mu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m2_t test_vmadd_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vmadd_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1,
+                                   vuint64m2_t vs2, size_t vl) {
   return __riscv_vmadd_vx_u64m2_mu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m4_t test_vmadd_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vmadd_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs1, vuint64m4_t vs2,
+                                   size_t vl) {
   return __riscv_vmadd_vv_u64m4_mu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m4_t test_vmadd_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vmadd_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1,
+                                   vuint64m4_t vs2, size_t vl) {
   return __riscv_vmadd_vx_u64m4_mu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m8_t test_vmadd_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vmadd_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1,
+                                   vuint64m8_t vs2, size_t vl) {
   return __riscv_vmadd_vv_u64m8_mu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m8_t test_vmadd_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vmadd_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1,
+                                   vuint64m8_t vs2, size_t vl) {
   return __riscv_vmadd_vx_u64m8_mu(vm, vd, rs1, vs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmax.c b/auto-generated/policy_funcs/llvm-api-tests/vmax.c
index 2d4018a4f..48deb0d19 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmax.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmax.c
@@ -5,706 +5,891 @@
 
 #include <riscv_vector.h>
 
-vint8mf8_t test_vmax_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vmax_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1,
+                                 size_t vl) {
   return __riscv_vmax_vv_i8mf8_tu(vd, vs2, vs1, vl);
 }
 
-vint8mf8_t test_vmax_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint8mf8_t test_vmax_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1,
+                                 size_t vl) {
   return __riscv_vmax_vx_i8mf8_tu(vd, vs2, rs1, vl);
 }
 
-vint8mf4_t
test_vmax_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmax_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vmax_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vmax_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmax_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmax_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vmax_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmax_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vmax_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vmax_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmax_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmax_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vmax_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmax_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vmax_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vmax_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmax_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmax_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vmax_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmax_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vmax_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vmax_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmax_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmax_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vmax_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmax_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vmax_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vmax_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmax_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmax_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vmax_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmax_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vmax_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vmax_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmax_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmax_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vmax_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmax_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vmax_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vmax_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmax_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmax_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vmax_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmax_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vmax_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t 
test_vmax_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmax_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmax_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vmax_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmax_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vmax_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vmax_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmax_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmax_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vmax_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmax_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vmax_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vmax_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmax_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmax_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vmax_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmax_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vmax_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vmax_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmax_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmax_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vmax_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vmax_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vmax_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vmax_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmax_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmax_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vmax_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmax_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vmax_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vmax_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmax_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmax_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vmax_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmax_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vmax_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vmax_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmax_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmax_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vmax_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmax_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vmax_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vmax_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmax_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t 
vl) { return __riscv_vmax_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vmax_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmax_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vmax_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vmax_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmax_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmax_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vmax_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmax_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vmax_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vmax_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmax_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmax_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vmax_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmax_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vmax_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vmax_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmax_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmax_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vmax_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmax_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vmax_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vmax_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmax_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmax_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vmax_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmax_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vmax_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vmax_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmax_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmax_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vmax_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmax_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vmax_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vmax_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmax_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmax_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vmax_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmax_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmax_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmax_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmax_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmax_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, 
vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmax_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmax_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmax_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmax_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmax_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmax_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmax_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmax_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmax_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmax_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmax_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmax_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmax_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmax_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmax_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmax_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmax_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmax_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmax_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmax_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmax_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmax_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmax_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmax_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmax_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmax_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmax_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmax_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmax_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmax_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmax_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmax_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmax_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t 
test_vmax_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmax_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmax_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmax_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmax_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmax_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmax_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmax_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmax_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmax_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmax_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmax_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmax_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmax_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmax_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmax_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmax_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmax_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmax_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmax_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmax_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmax_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vmax_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmax_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmax_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmax_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmax_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmax_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmax_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmax_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmax_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmax_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmax_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, 
vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmax_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmax_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmax_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmax_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmax_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmax_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmax_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmax_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmax_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmax_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmax_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmax_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmax_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmax_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmax_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmax_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmax_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmax_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmax_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmax_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmax_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmax_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmax_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmax_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmax_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmax_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmax_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmax_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmax_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmax_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmax_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmax_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmax_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmax_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmax_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmax_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmax_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmax_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmax_vx_i64m4_tum(vbool16_t vm, 
vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmax_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmax_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vmax_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmax_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmax_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmax_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmax_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmax_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vmax_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmax_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmax_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmax_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmax_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmax_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmax_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmax_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmax_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmax_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmax_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmax_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmax_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmax_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmax_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmax_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmax_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmax_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmax_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmax_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmax_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmax_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmax_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmax_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmax_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmax_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, 
vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmax_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmax_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmax_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmax_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmax_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmax_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmax_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmax_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmax_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmax_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmax_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmax_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmax_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmax_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmax_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmax_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmax_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmax_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmax_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmax_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmax_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmax_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmax_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmax_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmax_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmax_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmax_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmax_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmax_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmax_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmax_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmax_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t 
test_vmax_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmax_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmax_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vmax_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmax_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmax_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmax_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmax_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmax_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmax_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmax_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmax_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmax_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmax_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmax_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmax_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmax_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmax_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmax_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmax_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmax_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmax_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmax_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmax_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmax_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmax_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmax_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmax_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmax_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmax_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmax_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmax_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmax_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmax_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmax_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmax_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return 
__riscv_vmax_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmax_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmax_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmax_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmax_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmax_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmax_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmax_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmax_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmax_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmax_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmax_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmax_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmax_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmax_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmax_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmax_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmax_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmax_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vmax_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmax_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmax_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmax_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmax_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmax_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vmax_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmax_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmax_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmax_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmax_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmax_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmax_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmax_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmax_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmax_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmax_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmax_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + 
vint8mf2_t vs1, size_t vl) { return __riscv_vmax_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmax_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmax_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmax_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmax_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmax_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmax_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmax_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmax_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmax_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmax_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmax_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmax_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmax_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmax_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmax_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmax_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmax_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmax_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmax_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmax_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmax_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmax_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmax_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmax_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmax_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmax_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmax_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmax_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmax_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmax_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmax_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmax_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmax_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); 
} -vint16m1_t test_vmax_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmax_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmax_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmax_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmax_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmax_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmax_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmax_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmax_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmax_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmax_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmax_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmax_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmax_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmax_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmax_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vmax_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmax_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmax_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmax_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmax_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmax_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmax_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmax_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmax_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmax_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmax_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmax_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmax_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmax_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmax_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmax_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmax_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmax_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmax_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmax_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } 
-vint32m2_t test_vmax_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) {
+vint32m2_t test_vmax_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2,
+                                 int32_t rs1, size_t vl) {
   return __riscv_vmax_vx_i32m2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vint32m4_t test_vmax_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) {
+vint32m4_t test_vmax_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2,
+                                 vint32m4_t vs1, size_t vl) {
   return __riscv_vmax_vv_i32m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vint32m4_t test_vmax_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) {
+vint32m4_t test_vmax_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2,
+                                 int32_t rs1, size_t vl) {
   return __riscv_vmax_vx_i32m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vint32m8_t test_vmax_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) {
+vint32m8_t test_vmax_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2,
+                                 vint32m8_t vs1, size_t vl) {
   return __riscv_vmax_vv_i32m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vint32m8_t test_vmax_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) {
+vint32m8_t test_vmax_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2,
+                                 int32_t rs1, size_t vl) {
   return __riscv_vmax_vx_i32m8_mu(vm, vd, vs2, rs1, vl);
 }
 
-vint64m1_t test_vmax_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vmax_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2,
+                                 vint64m1_t vs1, size_t vl) {
   return __riscv_vmax_vv_i64m1_mu(vm, vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vmax_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) {
+vint64m1_t test_vmax_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2,
+                                 int64_t rs1, size_t vl) {
   return __riscv_vmax_vx_i64m1_mu(vm, vd, vs2, rs1, vl);
 }
 
-vint64m2_t test_vmax_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) {
+vint64m2_t test_vmax_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2,
+                                 vint64m2_t vs1, size_t vl) {
   return __riscv_vmax_vv_i64m2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vint64m2_t test_vmax_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) {
+vint64m2_t test_vmax_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2,
+                                 int64_t rs1, size_t vl) {
   return __riscv_vmax_vx_i64m2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vint64m4_t test_vmax_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) {
+vint64m4_t test_vmax_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+                                 vint64m4_t vs1, size_t vl) {
   return __riscv_vmax_vv_i64m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vint64m4_t test_vmax_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) {
+vint64m4_t test_vmax_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+                                 int64_t rs1, size_t vl) {
   return __riscv_vmax_vx_i64m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vint64m8_t test_vmax_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) {
+vint64m8_t test_vmax_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+                                 vint64m8_t vs1, size_t vl) {
   return __riscv_vmax_vv_i64m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vint64m8_t test_vmax_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) {
+vint64m8_t test_vmax_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+                                 int64_t rs1, size_t vl) {
   return __riscv_vmax_vx_i64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmaxu.c b/auto-generated/policy_funcs/llvm-api-tests/vmaxu.c
index f4228c360..73be8387a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmaxu.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmaxu.c
@@ -5,706 +5,939 @@
 
 #include <riscv_vector.h>
 
-vuint8mf8_t test_vmaxu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vmaxu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+                                   vuint8mf8_t vs1, size_t vl) {
   return __riscv_vmaxu_vv_u8mf8_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf8_t test_vmaxu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf8_t test_vmaxu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1,
+                                   size_t vl) {
   return __riscv_vmaxu_vx_u8mf8_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf4_t test_vmaxu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vmaxu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+                                   vuint8mf4_t vs1, size_t vl) {
   return __riscv_vmaxu_vv_u8mf4_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf4_t test_vmaxu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf4_t test_vmaxu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1,
+                                   size_t vl) {
   return __riscv_vmaxu_vx_u8mf4_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf2_t test_vmaxu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vmaxu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+                                   vuint8mf2_t vs1, size_t vl) {
   return __riscv_vmaxu_vv_u8mf2_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf2_t test_vmaxu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf2_t test_vmaxu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1,
+                                   size_t vl) {
   return __riscv_vmaxu_vx_u8mf2_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m1_t test_vmaxu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vmaxu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1,
+                                 size_t vl) {
   return __riscv_vmaxu_vv_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vmaxu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+vuint8m1_t test_vmaxu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1,
+                                 size_t vl) {
   return __riscv_vmaxu_vx_u8m1_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m2_t test_vmaxu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint8m2_t test_vmaxu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1,
+                                 size_t vl) {
   return __riscv_vmaxu_vv_u8m2_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m2_t test_vmaxu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+vuint8m2_t test_vmaxu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1,
+                                 size_t vl) {
   return __riscv_vmaxu_vx_u8m2_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m4_t test_vmaxu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint8m4_t test_vmaxu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1,
+                                 size_t vl) {
   return __riscv_vmaxu_vv_u8m4_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m4_t test_vmaxu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+vuint8m4_t test_vmaxu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1,
+                                 size_t vl) {
   return __riscv_vmaxu_vx_u8m4_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m8_t test_vmaxu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vuint8m8_t test_vmaxu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1,
+                                 size_t vl) {
   return __riscv_vmaxu_vv_u8m8_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m8_t test_vmaxu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl)
{ +vuint8m8_t test_vmaxu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vmaxu_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vmaxu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vmaxu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vmaxu_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vmaxu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vmaxu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vmaxu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vmaxu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vmaxu_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vmaxu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vmaxu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vmaxu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vmaxu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vmaxu_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vmaxu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vmaxu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vmaxu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vmaxu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vmaxu_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vmaxu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vmaxu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vmaxu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vmaxu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmaxu_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vmaxu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vmaxu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vmaxu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vmaxu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmaxu_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vmaxu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vmaxu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vmaxu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vmaxu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vmaxu_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vmaxu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, 
size_t vl) { +vuint32mf2_t test_vmaxu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vmaxu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vmaxu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vmaxu_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vmaxu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vmaxu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vmaxu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vmaxu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vmaxu_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vmaxu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vmaxu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vmaxu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vmaxu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmaxu_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vmaxu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vmaxu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vmaxu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vmaxu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmaxu_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vmaxu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vmaxu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vmaxu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vmaxu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vmaxu_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vmaxu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vmaxu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vmaxu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vmaxu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vmaxu_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vmaxu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vmaxu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vmaxu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vmaxu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vmaxu_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vmaxu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t 
test_vmaxu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vmaxu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vmaxu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmaxu_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vmaxu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vmaxu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vmaxu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vmaxu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vmaxu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vmaxu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vmaxu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vmaxu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vmaxu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vmaxu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vmaxu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vmaxu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vmaxu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vmaxu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vmaxu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vmaxu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmaxu_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vmaxu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vmaxu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vmaxu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vmaxu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmaxu_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vmaxu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vmaxu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vmaxu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) 
{ +vuint8m4_t test_vmaxu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmaxu_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vmaxu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vmaxu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vmaxu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vmaxu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmaxu_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vmaxu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vmaxu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vmaxu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vmaxu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vmaxu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vmaxu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmaxu_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vmaxu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vmaxu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vmaxu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vmaxu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmaxu_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vmaxu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vmaxu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vmaxu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vmaxu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vmaxu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vmaxu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vmaxu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vmaxu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vmaxu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vmaxu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { 
return __riscv_vmaxu_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vmaxu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vmaxu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vmaxu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vmaxu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vmaxu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vmaxu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vmaxu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vmaxu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vmaxu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vmaxu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmaxu_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vmaxu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vmaxu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vmaxu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vmaxu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vmaxu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vmaxu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vmaxu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vmaxu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vmaxu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vmaxu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vmaxu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vmaxu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vmaxu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vmaxu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t 
test_vmaxu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vmaxu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vmaxu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vmaxu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vmaxu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vmaxu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vmaxu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vmaxu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vmaxu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vmaxu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vmaxu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vmaxu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vmaxu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vmaxu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vmaxu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vmaxu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vmaxu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vmaxu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vmaxu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vmaxu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vmaxu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vmaxu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vmaxu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vmaxu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vmaxu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { 
+vuint8mf4_t test_vmaxu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vmaxu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vmaxu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vmaxu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vmaxu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vmaxu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vmaxu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmaxu_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vmaxu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vmaxu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vmaxu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vmaxu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmaxu_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vmaxu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vmaxu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vmaxu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vmaxu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmaxu_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vmaxu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vmaxu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vmaxu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vmaxu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmaxu_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vmaxu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vmaxu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vmaxu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vmaxu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vmaxu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vmaxu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmaxu_vx_u16mf4_tumu(vm, vd, vs2, rs1, 
vl); } -vuint16mf2_t test_vmaxu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vmaxu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vmaxu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vmaxu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmaxu_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vmaxu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vmaxu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vmaxu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vmaxu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vmaxu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vmaxu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vmaxu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vmaxu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vmaxu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vmaxu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vmaxu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vmaxu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vmaxu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vmaxu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vmaxu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vmaxu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vmaxu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vmaxu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vmaxu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vmaxu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmaxu_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t 
test_vmaxu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vmaxu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vmaxu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vmaxu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vmaxu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vmaxu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vmaxu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vmaxu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vmaxu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vmaxu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vmaxu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vmaxu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vmaxu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vmaxu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vmaxu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vmaxu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vmaxu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vmaxu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vmaxu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vmaxu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vmaxu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vmaxu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vmaxu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vmaxu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vmaxu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, 
vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vmaxu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vmaxu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vmaxu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vmaxu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vmaxu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vmaxu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vmaxu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vmaxu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vmaxu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vmaxu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vmaxu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vmaxu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vmaxu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vmaxu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vmaxu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vmaxu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vmaxu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vmaxu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vmaxu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vmaxu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vmaxu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmaxu_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vmaxu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vmaxu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vmaxu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vmaxu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return 
__riscv_vmaxu_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vmaxu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vmaxu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vmaxu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vmaxu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmaxu_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vmaxu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vmaxu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vmaxu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vmaxu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmaxu_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vmaxu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vmaxu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmaxu_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vmaxu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vmaxu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vmaxu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vmaxu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmaxu_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vmaxu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vmaxu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vmaxu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vmaxu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmaxu_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vmaxu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vmaxu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vmaxu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vmaxu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vmaxu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vmaxu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vmaxu_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vmaxu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t 
test_vmaxu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vmaxu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vmaxu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmaxu_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vmaxu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vmaxu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vmaxu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vmaxu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmaxu_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vmaxu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vmaxu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmaxu_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vmaxu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vmaxu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vmaxu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vmaxu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmaxu_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vmaxu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vmaxu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vmaxu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vmaxu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vmaxu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vmaxu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vmaxu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vmaxu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vmaxu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vmaxu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmaxu_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vmaxu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vmaxu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m4_mu(vm, vd, 
vs2, rs1, vl); } -vuint32m8_t test_vmaxu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vmaxu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmaxu_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vmaxu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vmaxu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmaxu_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vmaxu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vmaxu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vmaxu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vmaxu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vmaxu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vmaxu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vmaxu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vmaxu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vmaxu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vmaxu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmaxu_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vmaxu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vmaxu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vmaxu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vmaxu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmaxu_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vmaxu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vmaxu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmaxu_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmerge.c b/auto-generated/policy_funcs/llvm-api-tests/vmerge.c index 4cb59d966..03a64d7ad 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmerge.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmerge.c @@ -6,414 +6,538 @@ #include -vint8mf8_t test_vmerge_vvm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, vbool64_t v0, size_t vl) { +vint8mf8_t test_vmerge_vvm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, vbool64_t v0, size_t vl) { return __riscv_vmerge_vvm_i8mf8_tu(vd, vs2, vs1, v0, vl); } -vint8mf8_t test_vmerge_vxm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, vbool64_t v0, size_t vl) { +vint8mf8_t 
test_vmerge_vxm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + vbool64_t v0, size_t vl) { return __riscv_vmerge_vxm_i8mf8_tu(vd, vs2, rs1, v0, vl); } -vint8mf4_t test_vmerge_vvm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, vbool32_t v0, size_t vl) { +vint8mf4_t test_vmerge_vvm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, vbool32_t v0, size_t vl) { return __riscv_vmerge_vvm_i8mf4_tu(vd, vs2, vs1, v0, vl); } -vint8mf4_t test_vmerge_vxm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, vbool32_t v0, size_t vl) { +vint8mf4_t test_vmerge_vxm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + vbool32_t v0, size_t vl) { return __riscv_vmerge_vxm_i8mf4_tu(vd, vs2, rs1, v0, vl); } -vint8mf2_t test_vmerge_vvm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, vbool16_t v0, size_t vl) { +vint8mf2_t test_vmerge_vvm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vvm_i8mf2_tu(vd, vs2, vs1, v0, vl); } -vint8mf2_t test_vmerge_vxm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, vbool16_t v0, size_t vl) { +vint8mf2_t test_vmerge_vxm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vmerge_vxm_i8mf2_tu(vd, vs2, rs1, v0, vl); } -vint8m1_t test_vmerge_vvm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, vbool8_t v0, size_t vl) { +vint8m1_t test_vmerge_vvm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + vbool8_t v0, size_t vl) { return __riscv_vmerge_vvm_i8m1_tu(vd, vs2, vs1, v0, vl); } -vint8m1_t test_vmerge_vxm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, vbool8_t v0, size_t vl) { +vint8m1_t test_vmerge_vxm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vmerge_vxm_i8m1_tu(vd, vs2, rs1, v0, vl); } -vint8m2_t test_vmerge_vvm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, vbool4_t v0, size_t vl) { +vint8m2_t test_vmerge_vvm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + vbool4_t v0, size_t vl) { return __riscv_vmerge_vvm_i8m2_tu(vd, vs2, vs1, v0, vl); } -vint8m2_t test_vmerge_vxm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, vbool4_t v0, size_t vl) { +vint8m2_t test_vmerge_vxm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + vbool4_t v0, size_t vl) { return __riscv_vmerge_vxm_i8m2_tu(vd, vs2, rs1, v0, vl); } -vint8m4_t test_vmerge_vvm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, vbool2_t v0, size_t vl) { +vint8m4_t test_vmerge_vvm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + vbool2_t v0, size_t vl) { return __riscv_vmerge_vvm_i8m4_tu(vd, vs2, vs1, v0, vl); } -vint8m4_t test_vmerge_vxm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, vbool2_t v0, size_t vl) { +vint8m4_t test_vmerge_vxm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + vbool2_t v0, size_t vl) { return __riscv_vmerge_vxm_i8m4_tu(vd, vs2, rs1, v0, vl); } -vint8m8_t test_vmerge_vvm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, vbool1_t v0, size_t vl) { +vint8m8_t test_vmerge_vvm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + vbool1_t v0, size_t vl) { return __riscv_vmerge_vvm_i8m8_tu(vd, vs2, vs1, v0, vl); } -vint8m8_t test_vmerge_vxm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, vbool1_t v0, size_t vl) { +vint8m8_t test_vmerge_vxm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + vbool1_t v0, size_t vl) { return __riscv_vmerge_vxm_i8m8_tu(vd, vs2, rs1, v0, vl); } -vint16mf4_t test_vmerge_vvm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, vbool64_t v0, size_t vl) { +vint16mf4_t 
test_vmerge_vvm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, vbool64_t v0, + size_t vl) { return __riscv_vmerge_vvm_i16mf4_tu(vd, vs2, vs1, v0, vl); } -vint16mf4_t test_vmerge_vxm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, vbool64_t v0, size_t vl) { +vint16mf4_t test_vmerge_vxm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, vbool64_t v0, size_t vl) { return __riscv_vmerge_vxm_i16mf4_tu(vd, vs2, rs1, v0, vl); } -vint16mf2_t test_vmerge_vvm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, vbool32_t v0, size_t vl) { +vint16mf2_t test_vmerge_vvm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, vbool32_t v0, + size_t vl) { return __riscv_vmerge_vvm_i16mf2_tu(vd, vs2, vs1, v0, vl); } -vint16mf2_t test_vmerge_vxm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, vbool32_t v0, size_t vl) { +vint16mf2_t test_vmerge_vxm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, vbool32_t v0, size_t vl) { return __riscv_vmerge_vxm_i16mf2_tu(vd, vs2, rs1, v0, vl); } -vint16m1_t test_vmerge_vvm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, vbool16_t v0, size_t vl) { +vint16m1_t test_vmerge_vvm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vvm_i16m1_tu(vd, vs2, vs1, v0, vl); } -vint16m1_t test_vmerge_vxm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, vbool16_t v0, size_t vl) { +vint16m1_t test_vmerge_vxm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vmerge_vxm_i16m1_tu(vd, vs2, rs1, v0, vl); } -vint16m2_t test_vmerge_vvm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, vbool8_t v0, size_t vl) { +vint16m2_t test_vmerge_vvm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, vbool8_t v0, size_t vl) { return __riscv_vmerge_vvm_i16m2_tu(vd, vs2, vs1, v0, vl); } -vint16m2_t test_vmerge_vxm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, vbool8_t v0, size_t vl) { +vint16m2_t test_vmerge_vxm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vmerge_vxm_i16m2_tu(vd, vs2, rs1, v0, vl); } -vint16m4_t test_vmerge_vvm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, vbool4_t v0, size_t vl) { +vint16m4_t test_vmerge_vvm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, vbool4_t v0, size_t vl) { return __riscv_vmerge_vvm_i16m4_tu(vd, vs2, vs1, v0, vl); } -vint16m4_t test_vmerge_vxm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, vbool4_t v0, size_t vl) { +vint16m4_t test_vmerge_vxm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + vbool4_t v0, size_t vl) { return __riscv_vmerge_vxm_i16m4_tu(vd, vs2, rs1, v0, vl); } -vint16m8_t test_vmerge_vvm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, vbool2_t v0, size_t vl) { +vint16m8_t test_vmerge_vvm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, vbool2_t v0, size_t vl) { return __riscv_vmerge_vvm_i16m8_tu(vd, vs2, vs1, v0, vl); } -vint16m8_t test_vmerge_vxm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, vbool2_t v0, size_t vl) { +vint16m8_t test_vmerge_vxm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + vbool2_t v0, size_t vl) { return __riscv_vmerge_vxm_i16m8_tu(vd, vs2, rs1, v0, vl); } -vint32mf2_t test_vmerge_vvm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, vbool64_t v0, size_t vl) { +vint32mf2_t test_vmerge_vvm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, vbool64_t v0, + size_t vl) { return __riscv_vmerge_vvm_i32mf2_tu(vd, 
vs2, vs1, v0, vl); } -vint32mf2_t test_vmerge_vxm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, vbool64_t v0, size_t vl) { +vint32mf2_t test_vmerge_vxm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, vbool64_t v0, size_t vl) { return __riscv_vmerge_vxm_i32mf2_tu(vd, vs2, rs1, v0, vl); } -vint32m1_t test_vmerge_vvm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, vbool32_t v0, size_t vl) { +vint32m1_t test_vmerge_vvm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, vbool32_t v0, size_t vl) { return __riscv_vmerge_vvm_i32m1_tu(vd, vs2, vs1, v0, vl); } -vint32m1_t test_vmerge_vxm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, vbool32_t v0, size_t vl) { +vint32m1_t test_vmerge_vxm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + vbool32_t v0, size_t vl) { return __riscv_vmerge_vxm_i32m1_tu(vd, vs2, rs1, v0, vl); } -vint32m2_t test_vmerge_vvm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, vbool16_t v0, size_t vl) { +vint32m2_t test_vmerge_vvm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vvm_i32m2_tu(vd, vs2, vs1, v0, vl); } -vint32m2_t test_vmerge_vxm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, vbool16_t v0, size_t vl) { +vint32m2_t test_vmerge_vxm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vmerge_vxm_i32m2_tu(vd, vs2, rs1, v0, vl); } -vint32m4_t test_vmerge_vvm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, vbool8_t v0, size_t vl) { +vint32m4_t test_vmerge_vvm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, vbool8_t v0, size_t vl) { return __riscv_vmerge_vvm_i32m4_tu(vd, vs2, vs1, v0, vl); } -vint32m4_t test_vmerge_vxm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, vbool8_t v0, size_t vl) { +vint32m4_t test_vmerge_vxm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vmerge_vxm_i32m4_tu(vd, vs2, rs1, v0, vl); } -vint32m8_t test_vmerge_vvm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, vbool4_t v0, size_t vl) { +vint32m8_t test_vmerge_vvm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, vbool4_t v0, size_t vl) { return __riscv_vmerge_vvm_i32m8_tu(vd, vs2, vs1, v0, vl); } -vint32m8_t test_vmerge_vxm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, vbool4_t v0, size_t vl) { +vint32m8_t test_vmerge_vxm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + vbool4_t v0, size_t vl) { return __riscv_vmerge_vxm_i32m8_tu(vd, vs2, rs1, v0, vl); } -vint64m1_t test_vmerge_vvm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, vbool64_t v0, size_t vl) { +vint64m1_t test_vmerge_vvm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, vbool64_t v0, size_t vl) { return __riscv_vmerge_vvm_i64m1_tu(vd, vs2, vs1, v0, vl); } -vint64m1_t test_vmerge_vxm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, vbool64_t v0, size_t vl) { +vint64m1_t test_vmerge_vxm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + vbool64_t v0, size_t vl) { return __riscv_vmerge_vxm_i64m1_tu(vd, vs2, rs1, v0, vl); } -vint64m2_t test_vmerge_vvm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, vbool32_t v0, size_t vl) { +vint64m2_t test_vmerge_vvm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, vbool32_t v0, size_t vl) { return __riscv_vmerge_vvm_i64m2_tu(vd, vs2, vs1, v0, vl); } -vint64m2_t test_vmerge_vxm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, vbool32_t v0, size_t vl) { +vint64m2_t test_vmerge_vxm_i64m2_tu(vint64m2_t vd, 
vint64m2_t vs2, int64_t rs1, + vbool32_t v0, size_t vl) { return __riscv_vmerge_vxm_i64m2_tu(vd, vs2, rs1, v0, vl); } -vint64m4_t test_vmerge_vvm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, vbool16_t v0, size_t vl) { +vint64m4_t test_vmerge_vvm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vvm_i64m4_tu(vd, vs2, vs1, v0, vl); } -vint64m4_t test_vmerge_vxm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, vbool16_t v0, size_t vl) { +vint64m4_t test_vmerge_vxm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vmerge_vxm_i64m4_tu(vd, vs2, rs1, v0, vl); } -vint64m8_t test_vmerge_vvm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, vbool8_t v0, size_t vl) { +vint64m8_t test_vmerge_vvm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, vbool8_t v0, size_t vl) { return __riscv_vmerge_vvm_i64m8_tu(vd, vs2, vs1, v0, vl); } -vint64m8_t test_vmerge_vxm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, vbool8_t v0, size_t vl) { +vint64m8_t test_vmerge_vxm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vmerge_vxm_i64m8_tu(vd, vs2, rs1, v0, vl); } -vuint8mf8_t test_vmerge_vvm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, vbool64_t v0, size_t vl) { +vuint8mf8_t test_vmerge_vvm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, vbool64_t v0, size_t vl) { return __riscv_vmerge_vvm_u8mf8_tu(vd, vs2, vs1, v0, vl); } -vuint8mf8_t test_vmerge_vxm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, vbool64_t v0, size_t vl) { +vuint8mf8_t test_vmerge_vxm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, vbool64_t v0, size_t vl) { return __riscv_vmerge_vxm_u8mf8_tu(vd, vs2, rs1, v0, vl); } -vuint8mf4_t test_vmerge_vvm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, vbool32_t v0, size_t vl) { +vuint8mf4_t test_vmerge_vvm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, vbool32_t v0, size_t vl) { return __riscv_vmerge_vvm_u8mf4_tu(vd, vs2, vs1, v0, vl); } -vuint8mf4_t test_vmerge_vxm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, vbool32_t v0, size_t vl) { +vuint8mf4_t test_vmerge_vxm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, vbool32_t v0, size_t vl) { return __riscv_vmerge_vxm_u8mf4_tu(vd, vs2, rs1, v0, vl); } -vuint8mf2_t test_vmerge_vvm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, vbool16_t v0, size_t vl) { +vuint8mf2_t test_vmerge_vvm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vvm_u8mf2_tu(vd, vs2, vs1, v0, vl); } -vuint8mf2_t test_vmerge_vxm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, vbool16_t v0, size_t vl) { +vuint8mf2_t test_vmerge_vxm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vxm_u8mf2_tu(vd, vs2, rs1, v0, vl); } -vuint8m1_t test_vmerge_vvm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, vbool8_t v0, size_t vl) { +vuint8m1_t test_vmerge_vvm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, vbool8_t v0, size_t vl) { return __riscv_vmerge_vvm_u8m1_tu(vd, vs2, vs1, v0, vl); } -vuint8m1_t test_vmerge_vxm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, vbool8_t v0, size_t vl) { +vuint8m1_t test_vmerge_vxm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vmerge_vxm_u8m1_tu(vd, vs2, rs1, v0, vl); } -vuint8m2_t 
test_vmerge_vvm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, vbool4_t v0, size_t vl) { +vuint8m2_t test_vmerge_vvm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, vbool4_t v0, size_t vl) { return __riscv_vmerge_vvm_u8m2_tu(vd, vs2, vs1, v0, vl); } -vuint8m2_t test_vmerge_vxm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, vbool4_t v0, size_t vl) { +vuint8m2_t test_vmerge_vxm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + vbool4_t v0, size_t vl) { return __riscv_vmerge_vxm_u8m2_tu(vd, vs2, rs1, v0, vl); } -vuint8m4_t test_vmerge_vvm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, vbool2_t v0, size_t vl) { +vuint8m4_t test_vmerge_vvm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, vbool2_t v0, size_t vl) { return __riscv_vmerge_vvm_u8m4_tu(vd, vs2, vs1, v0, vl); } -vuint8m4_t test_vmerge_vxm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, vbool2_t v0, size_t vl) { +vuint8m4_t test_vmerge_vxm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + vbool2_t v0, size_t vl) { return __riscv_vmerge_vxm_u8m4_tu(vd, vs2, rs1, v0, vl); } -vuint8m8_t test_vmerge_vvm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, vbool1_t v0, size_t vl) { +vuint8m8_t test_vmerge_vvm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, vbool1_t v0, size_t vl) { return __riscv_vmerge_vvm_u8m8_tu(vd, vs2, vs1, v0, vl); } -vuint8m8_t test_vmerge_vxm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, vbool1_t v0, size_t vl) { +vuint8m8_t test_vmerge_vxm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + vbool1_t v0, size_t vl) { return __riscv_vmerge_vxm_u8m8_tu(vd, vs2, rs1, v0, vl); } -vuint16mf4_t test_vmerge_vvm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, vbool64_t v0, size_t vl) { +vuint16mf4_t test_vmerge_vvm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, vbool64_t v0, + size_t vl) { return __riscv_vmerge_vvm_u16mf4_tu(vd, vs2, vs1, v0, vl); } -vuint16mf4_t test_vmerge_vxm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, vbool64_t v0, size_t vl) { +vuint16mf4_t test_vmerge_vxm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, vbool64_t v0, size_t vl) { return __riscv_vmerge_vxm_u16mf4_tu(vd, vs2, rs1, v0, vl); } -vuint16mf2_t test_vmerge_vvm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, vbool32_t v0, size_t vl) { +vuint16mf2_t test_vmerge_vvm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, vbool32_t v0, + size_t vl) { return __riscv_vmerge_vvm_u16mf2_tu(vd, vs2, vs1, v0, vl); } -vuint16mf2_t test_vmerge_vxm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, vbool32_t v0, size_t vl) { +vuint16mf2_t test_vmerge_vxm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, vbool32_t v0, size_t vl) { return __riscv_vmerge_vxm_u16mf2_tu(vd, vs2, rs1, v0, vl); } -vuint16m1_t test_vmerge_vvm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, vbool16_t v0, size_t vl) { +vuint16m1_t test_vmerge_vvm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vvm_u16m1_tu(vd, vs2, vs1, v0, vl); } -vuint16m1_t test_vmerge_vxm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, vbool16_t v0, size_t vl) { +vuint16m1_t test_vmerge_vxm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vxm_u16m1_tu(vd, vs2, rs1, v0, vl); } -vuint16m2_t test_vmerge_vvm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, vbool8_t v0, size_t vl) { +vuint16m2_t 
test_vmerge_vvm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, vbool8_t v0, size_t vl) { return __riscv_vmerge_vvm_u16m2_tu(vd, vs2, vs1, v0, vl); } -vuint16m2_t test_vmerge_vxm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, vbool8_t v0, size_t vl) { +vuint16m2_t test_vmerge_vxm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, vbool8_t v0, size_t vl) { return __riscv_vmerge_vxm_u16m2_tu(vd, vs2, rs1, v0, vl); } -vuint16m4_t test_vmerge_vvm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, vbool4_t v0, size_t vl) { +vuint16m4_t test_vmerge_vvm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, vbool4_t v0, size_t vl) { return __riscv_vmerge_vvm_u16m4_tu(vd, vs2, vs1, v0, vl); } -vuint16m4_t test_vmerge_vxm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, vbool4_t v0, size_t vl) { +vuint16m4_t test_vmerge_vxm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, vbool4_t v0, size_t vl) { return __riscv_vmerge_vxm_u16m4_tu(vd, vs2, rs1, v0, vl); } -vuint16m8_t test_vmerge_vvm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, vbool2_t v0, size_t vl) { +vuint16m8_t test_vmerge_vvm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, vbool2_t v0, size_t vl) { return __riscv_vmerge_vvm_u16m8_tu(vd, vs2, vs1, v0, vl); } -vuint16m8_t test_vmerge_vxm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, vbool2_t v0, size_t vl) { +vuint16m8_t test_vmerge_vxm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, vbool2_t v0, size_t vl) { return __riscv_vmerge_vxm_u16m8_tu(vd, vs2, rs1, v0, vl); } -vuint32mf2_t test_vmerge_vvm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, vbool64_t v0, size_t vl) { +vuint32mf2_t test_vmerge_vvm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, vbool64_t v0, + size_t vl) { return __riscv_vmerge_vvm_u32mf2_tu(vd, vs2, vs1, v0, vl); } -vuint32mf2_t test_vmerge_vxm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, vbool64_t v0, size_t vl) { +vuint32mf2_t test_vmerge_vxm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, vbool64_t v0, size_t vl) { return __riscv_vmerge_vxm_u32mf2_tu(vd, vs2, rs1, v0, vl); } -vuint32m1_t test_vmerge_vvm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, vbool32_t v0, size_t vl) { +vuint32m1_t test_vmerge_vvm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, vbool32_t v0, size_t vl) { return __riscv_vmerge_vvm_u32m1_tu(vd, vs2, vs1, v0, vl); } -vuint32m1_t test_vmerge_vxm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, vbool32_t v0, size_t vl) { +vuint32m1_t test_vmerge_vxm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, vbool32_t v0, size_t vl) { return __riscv_vmerge_vxm_u32m1_tu(vd, vs2, rs1, v0, vl); } -vuint32m2_t test_vmerge_vvm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, vbool16_t v0, size_t vl) { +vuint32m2_t test_vmerge_vvm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vvm_u32m2_tu(vd, vs2, vs1, v0, vl); } -vuint32m2_t test_vmerge_vxm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, vbool16_t v0, size_t vl) { +vuint32m2_t test_vmerge_vxm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vxm_u32m2_tu(vd, vs2, rs1, v0, vl); } -vuint32m4_t test_vmerge_vvm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, vbool8_t v0, size_t vl) { +vuint32m4_t test_vmerge_vvm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, 
vbool8_t v0, size_t vl) { return __riscv_vmerge_vvm_u32m4_tu(vd, vs2, vs1, v0, vl); } -vuint32m4_t test_vmerge_vxm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, vbool8_t v0, size_t vl) { +vuint32m4_t test_vmerge_vxm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, vbool8_t v0, size_t vl) { return __riscv_vmerge_vxm_u32m4_tu(vd, vs2, rs1, v0, vl); } -vuint32m8_t test_vmerge_vvm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, vbool4_t v0, size_t vl) { +vuint32m8_t test_vmerge_vvm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, vbool4_t v0, size_t vl) { return __riscv_vmerge_vvm_u32m8_tu(vd, vs2, vs1, v0, vl); } -vuint32m8_t test_vmerge_vxm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, vbool4_t v0, size_t vl) { +vuint32m8_t test_vmerge_vxm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, vbool4_t v0, size_t vl) { return __riscv_vmerge_vxm_u32m8_tu(vd, vs2, rs1, v0, vl); } -vuint64m1_t test_vmerge_vvm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, vbool64_t v0, size_t vl) { +vuint64m1_t test_vmerge_vvm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, vbool64_t v0, size_t vl) { return __riscv_vmerge_vvm_u64m1_tu(vd, vs2, vs1, v0, vl); } -vuint64m1_t test_vmerge_vxm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, vbool64_t v0, size_t vl) { +vuint64m1_t test_vmerge_vxm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, vbool64_t v0, size_t vl) { return __riscv_vmerge_vxm_u64m1_tu(vd, vs2, rs1, v0, vl); } -vuint64m2_t test_vmerge_vvm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, vbool32_t v0, size_t vl) { +vuint64m2_t test_vmerge_vvm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, vbool32_t v0, size_t vl) { return __riscv_vmerge_vvm_u64m2_tu(vd, vs2, vs1, v0, vl); } -vuint64m2_t test_vmerge_vxm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, vbool32_t v0, size_t vl) { +vuint64m2_t test_vmerge_vxm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, vbool32_t v0, size_t vl) { return __riscv_vmerge_vxm_u64m2_tu(vd, vs2, rs1, v0, vl); } -vuint64m4_t test_vmerge_vvm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, vbool16_t v0, size_t vl) { +vuint64m4_t test_vmerge_vvm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vvm_u64m4_tu(vd, vs2, vs1, v0, vl); } -vuint64m4_t test_vmerge_vxm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, vbool16_t v0, size_t vl) { +vuint64m4_t test_vmerge_vxm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, vbool16_t v0, size_t vl) { return __riscv_vmerge_vxm_u64m4_tu(vd, vs2, rs1, v0, vl); } -vuint64m8_t test_vmerge_vvm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, vbool8_t v0, size_t vl) { +vuint64m8_t test_vmerge_vvm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, vbool8_t v0, size_t vl) { return __riscv_vmerge_vvm_u64m8_tu(vd, vs2, vs1, v0, vl); } -vuint64m8_t test_vmerge_vxm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, vbool8_t v0, size_t vl) { +vuint64m8_t test_vmerge_vxm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, vbool8_t v0, size_t vl) { return __riscv_vmerge_vxm_u64m8_tu(vd, vs2, rs1, v0, vl); } -vfloat16mf4_t test_vmerge_vvm_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, vbool64_t v0, size_t vl) { +vfloat16mf4_t test_vmerge_vvm_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vfloat16mf4_t vs1, vbool64_t v0, + size_t vl) { return __riscv_vmerge_vvm_f16mf4_tu(vd, vs2, vs1, 
v0, vl); } -vfloat16mf2_t test_vmerge_vvm_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, vbool32_t v0, size_t vl) { +vfloat16mf2_t test_vmerge_vvm_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vfloat16mf2_t vs1, vbool32_t v0, + size_t vl) { return __riscv_vmerge_vvm_f16mf2_tu(vd, vs2, vs1, v0, vl); } -vfloat16m1_t test_vmerge_vvm_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, vbool16_t v0, size_t vl) { +vfloat16m1_t test_vmerge_vvm_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vfloat16m1_t vs1, vbool16_t v0, + size_t vl) { return __riscv_vmerge_vvm_f16m1_tu(vd, vs2, vs1, v0, vl); } -vfloat16m2_t test_vmerge_vvm_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, vbool8_t v0, size_t vl) { +vfloat16m2_t test_vmerge_vvm_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, vbool8_t v0, + size_t vl) { return __riscv_vmerge_vvm_f16m2_tu(vd, vs2, vs1, v0, vl); } -vfloat16m4_t test_vmerge_vvm_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, vbool4_t v0, size_t vl) { +vfloat16m4_t test_vmerge_vvm_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, vbool4_t v0, + size_t vl) { return __riscv_vmerge_vvm_f16m4_tu(vd, vs2, vs1, v0, vl); } -vfloat16m8_t test_vmerge_vvm_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, vbool2_t v0, size_t vl) { +vfloat16m8_t test_vmerge_vvm_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, vbool2_t v0, + size_t vl) { return __riscv_vmerge_vvm_f16m8_tu(vd, vs2, vs1, v0, vl); } -vfloat32mf2_t test_vmerge_vvm_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, vbool64_t v0, size_t vl) { +vfloat32mf2_t test_vmerge_vvm_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vfloat32mf2_t vs1, vbool64_t v0, + size_t vl) { return __riscv_vmerge_vvm_f32mf2_tu(vd, vs2, vs1, v0, vl); } -vfloat32m1_t test_vmerge_vvm_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, vbool32_t v0, size_t vl) { +vfloat32m1_t test_vmerge_vvm_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vfloat32m1_t vs1, vbool32_t v0, + size_t vl) { return __riscv_vmerge_vvm_f32m1_tu(vd, vs2, vs1, v0, vl); } -vfloat32m2_t test_vmerge_vvm_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, vbool16_t v0, size_t vl) { +vfloat32m2_t test_vmerge_vvm_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vfloat32m2_t vs1, vbool16_t v0, + size_t vl) { return __riscv_vmerge_vvm_f32m2_tu(vd, vs2, vs1, v0, vl); } -vfloat32m4_t test_vmerge_vvm_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, vbool8_t v0, size_t vl) { +vfloat32m4_t test_vmerge_vvm_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, vbool8_t v0, + size_t vl) { return __riscv_vmerge_vvm_f32m4_tu(vd, vs2, vs1, v0, vl); } -vfloat32m8_t test_vmerge_vvm_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, vbool4_t v0, size_t vl) { +vfloat32m8_t test_vmerge_vvm_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, vbool4_t v0, + size_t vl) { return __riscv_vmerge_vvm_f32m8_tu(vd, vs2, vs1, v0, vl); } -vfloat64m1_t test_vmerge_vvm_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, vbool64_t v0, size_t vl) { +vfloat64m1_t test_vmerge_vvm_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vfloat64m1_t vs1, vbool64_t v0, + size_t vl) { return __riscv_vmerge_vvm_f64m1_tu(vd, vs2, vs1, v0, vl); } -vfloat64m2_t test_vmerge_vvm_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, vbool32_t v0, size_t vl) { +vfloat64m2_t test_vmerge_vvm_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vfloat64m2_t 
vs1, vbool32_t v0, + size_t vl) { return __riscv_vmerge_vvm_f64m2_tu(vd, vs2, vs1, v0, vl); } -vfloat64m4_t test_vmerge_vvm_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, vbool16_t v0, size_t vl) { +vfloat64m4_t test_vmerge_vvm_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vfloat64m4_t vs1, vbool16_t v0, + size_t vl) { return __riscv_vmerge_vvm_f64m4_tu(vd, vs2, vs1, v0, vl); } -vfloat64m8_t test_vmerge_vvm_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, vbool8_t v0, size_t vl) { +vfloat64m8_t test_vmerge_vvm_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, vbool8_t v0, + size_t vl) { return __riscv_vmerge_vvm_f64m8_tu(vd, vs2, vs1, v0, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmfeq.c b/auto-generated/policy_funcs/llvm-api-tests/vmfeq.c index 73a0660d9..9fcddbdcc 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmfeq.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmfeq.c @@ -6,122 +6,164 @@ #include -vbool64_t test_vmfeq_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vbool64_t test_vmfeq_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vmfeq_vv_f16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfeq_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vbool64_t test_vmfeq_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfeq_vf_f16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfeq_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vbool32_t test_vmfeq_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vmfeq_vv_f16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfeq_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vbool32_t test_vmfeq_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfeq_vf_f16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfeq_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vbool16_t test_vmfeq_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vmfeq_vv_f16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfeq_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vbool16_t test_vmfeq_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfeq_vf_f16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfeq_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vbool8_t test_vmfeq_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vmfeq_vv_f16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfeq_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vbool8_t test_vmfeq_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfeq_vf_f16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmfeq_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vbool4_t test_vmfeq_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { 
return __riscv_vmfeq_vv_f16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmfeq_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vbool4_t test_vmfeq_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfeq_vf_f16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmfeq_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vbool2_t test_vmfeq_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vmfeq_vv_f16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmfeq_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vbool2_t test_vmfeq_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfeq_vf_f16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmfeq_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vbool64_t test_vmfeq_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vmfeq_vv_f32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfeq_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vbool64_t test_vmfeq_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vmfeq_vf_f32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfeq_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vbool32_t test_vmfeq_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vmfeq_vv_f32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfeq_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vbool32_t test_vmfeq_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vmfeq_vf_f32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfeq_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vbool16_t test_vmfeq_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vmfeq_vv_f32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfeq_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vbool16_t test_vmfeq_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vmfeq_vf_f32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfeq_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vbool8_t test_vmfeq_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vmfeq_vv_f32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfeq_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vbool8_t test_vmfeq_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vmfeq_vf_f32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmfeq_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vbool4_t test_vmfeq_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vmfeq_vv_f32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmfeq_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, 
vfloat32m8_t vs2, float rs1, size_t vl) { +vbool4_t test_vmfeq_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vmfeq_vf_f32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmfeq_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vbool64_t test_vmfeq_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vmfeq_vv_f64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfeq_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vbool64_t test_vmfeq_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vmfeq_vf_f64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfeq_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vbool32_t test_vmfeq_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vmfeq_vv_f64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfeq_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vbool32_t test_vmfeq_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vmfeq_vf_f64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfeq_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vbool16_t test_vmfeq_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vmfeq_vv_f64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfeq_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vbool16_t test_vmfeq_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vmfeq_vf_f64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfeq_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vbool8_t test_vmfeq_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vmfeq_vv_f64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfeq_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vbool8_t test_vmfeq_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vmfeq_vf_f64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmfge.c b/auto-generated/policy_funcs/llvm-api-tests/vmfge.c index b4b373374..b4e0f8d02 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmfge.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmfge.c @@ -6,122 +6,164 @@ #include -vbool64_t test_vmfge_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vbool64_t test_vmfge_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vmfge_vv_f16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfge_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vbool64_t test_vmfge_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfge_vf_f16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfge_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vbool32_t 
test_vmfge_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vmfge_vv_f16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfge_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vbool32_t test_vmfge_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfge_vf_f16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfge_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vbool16_t test_vmfge_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vmfge_vv_f16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfge_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vbool16_t test_vmfge_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfge_vf_f16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfge_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vbool8_t test_vmfge_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vmfge_vv_f16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfge_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vbool8_t test_vmfge_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfge_vf_f16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmfge_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vbool4_t test_vmfge_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vmfge_vv_f16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmfge_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vbool4_t test_vmfge_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfge_vf_f16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmfge_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vbool2_t test_vmfge_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vmfge_vv_f16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmfge_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vbool2_t test_vmfge_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfge_vf_f16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmfge_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vbool64_t test_vmfge_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vmfge_vv_f32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfge_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vbool64_t test_vmfge_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vmfge_vf_f32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfge_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vbool32_t test_vmfge_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t 
vl) { return __riscv_vmfge_vv_f32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfge_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vbool32_t test_vmfge_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vmfge_vf_f32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfge_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vbool16_t test_vmfge_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vmfge_vv_f32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfge_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vbool16_t test_vmfge_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vmfge_vf_f32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfge_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vbool8_t test_vmfge_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vmfge_vv_f32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfge_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vbool8_t test_vmfge_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vmfge_vf_f32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmfge_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vbool4_t test_vmfge_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vmfge_vv_f32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmfge_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vbool4_t test_vmfge_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vmfge_vf_f32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmfge_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vbool64_t test_vmfge_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vmfge_vv_f64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfge_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vbool64_t test_vmfge_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vmfge_vf_f64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfge_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vbool32_t test_vmfge_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vmfge_vv_f64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfge_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vbool32_t test_vmfge_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vmfge_vf_f64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfge_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vbool16_t test_vmfge_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vmfge_vv_f64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfge_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, 
vfloat64m4_t vs2, double rs1, size_t vl) { +vbool16_t test_vmfge_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vmfge_vf_f64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfge_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vbool8_t test_vmfge_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vmfge_vv_f64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfge_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vbool8_t test_vmfge_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vmfge_vf_f64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmfgt.c b/auto-generated/policy_funcs/llvm-api-tests/vmfgt.c index 30366708f..21595f6ef 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmfgt.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmfgt.c @@ -6,122 +6,164 @@ #include -vbool64_t test_vmfgt_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vbool64_t test_vmfgt_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vmfgt_vv_f16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfgt_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vbool64_t test_vmfgt_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfgt_vf_f16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfgt_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vbool32_t test_vmfgt_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vmfgt_vv_f16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfgt_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vbool32_t test_vmfgt_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfgt_vf_f16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfgt_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vbool16_t test_vmfgt_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vmfgt_vv_f16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfgt_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vbool16_t test_vmfgt_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfgt_vf_f16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfgt_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vbool8_t test_vmfgt_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vmfgt_vv_f16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfgt_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vbool8_t test_vmfgt_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfgt_vf_f16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmfgt_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vbool4_t 
test_vmfgt_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vmfgt_vv_f16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmfgt_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vbool4_t test_vmfgt_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfgt_vf_f16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmfgt_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vbool2_t test_vmfgt_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vmfgt_vv_f16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmfgt_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vbool2_t test_vmfgt_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfgt_vf_f16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmfgt_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vbool64_t test_vmfgt_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vmfgt_vv_f32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfgt_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vbool64_t test_vmfgt_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vmfgt_vf_f32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfgt_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vbool32_t test_vmfgt_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vmfgt_vv_f32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfgt_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vbool32_t test_vmfgt_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vmfgt_vf_f32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfgt_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vbool16_t test_vmfgt_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vmfgt_vv_f32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfgt_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vbool16_t test_vmfgt_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vmfgt_vf_f32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfgt_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vbool8_t test_vmfgt_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vmfgt_vv_f32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfgt_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vbool8_t test_vmfgt_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vmfgt_vf_f32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmfgt_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vbool4_t test_vmfgt_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return 
__riscv_vmfgt_vv_f32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmfgt_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vbool4_t test_vmfgt_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vmfgt_vf_f32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmfgt_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vbool64_t test_vmfgt_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vmfgt_vv_f64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfgt_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vbool64_t test_vmfgt_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vmfgt_vf_f64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfgt_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vbool32_t test_vmfgt_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vmfgt_vv_f64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfgt_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vbool32_t test_vmfgt_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vmfgt_vf_f64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfgt_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vbool16_t test_vmfgt_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vmfgt_vv_f64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfgt_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vbool16_t test_vmfgt_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vmfgt_vf_f64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfgt_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vbool8_t test_vmfgt_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vmfgt_vv_f64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfgt_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vbool8_t test_vmfgt_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vmfgt_vf_f64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmfle.c b/auto-generated/policy_funcs/llvm-api-tests/vmfle.c index c1900e963..1b1f8e1eb 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmfle.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmfle.c @@ -6,122 +6,164 @@ #include -vbool64_t test_vmfle_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vbool64_t test_vmfle_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vmfle_vv_f16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfle_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vbool64_t test_vmfle_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfle_vf_f16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t 
test_vmfle_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vbool32_t test_vmfle_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vmfle_vv_f16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfle_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vbool32_t test_vmfle_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfle_vf_f16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfle_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vbool16_t test_vmfle_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vmfle_vv_f16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfle_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vbool16_t test_vmfle_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfle_vf_f16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfle_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vbool8_t test_vmfle_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vmfle_vv_f16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfle_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vbool8_t test_vmfle_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfle_vf_f16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmfle_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vbool4_t test_vmfle_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vmfle_vv_f16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmfle_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vbool4_t test_vmfle_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfle_vf_f16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmfle_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vbool2_t test_vmfle_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vmfle_vv_f16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmfle_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vbool2_t test_vmfle_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfle_vf_f16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmfle_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vbool64_t test_vmfle_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vmfle_vv_f32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfle_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vbool64_t test_vmfle_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vmfle_vf_f32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfle_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t 
vl) { +vbool32_t test_vmfle_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vmfle_vv_f32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfle_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vbool32_t test_vmfle_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vmfle_vf_f32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfle_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vbool16_t test_vmfle_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vmfle_vv_f32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfle_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vbool16_t test_vmfle_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vmfle_vf_f32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfle_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vbool8_t test_vmfle_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vmfle_vv_f32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfle_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vbool8_t test_vmfle_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vmfle_vf_f32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmfle_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vbool4_t test_vmfle_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vmfle_vv_f32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmfle_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vbool4_t test_vmfle_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vmfle_vf_f32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmfle_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vbool64_t test_vmfle_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vmfle_vv_f64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfle_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vbool64_t test_vmfle_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vmfle_vf_f64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfle_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vbool32_t test_vmfle_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vmfle_vv_f64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfle_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vbool32_t test_vmfle_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vmfle_vf_f64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfle_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vbool16_t test_vmfle_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return 
__riscv_vmfle_vv_f64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfle_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vbool16_t test_vmfle_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vmfle_vf_f64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfle_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vbool8_t test_vmfle_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vmfle_vv_f64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfle_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vbool8_t test_vmfle_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vmfle_vf_f64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmflt.c b/auto-generated/policy_funcs/llvm-api-tests/vmflt.c index d02294919..121587e64 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmflt.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmflt.c @@ -6,122 +6,164 @@ #include -vbool64_t test_vmflt_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vbool64_t test_vmflt_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vmflt_vv_f16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmflt_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vbool64_t test_vmflt_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmflt_vf_f16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmflt_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vbool32_t test_vmflt_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vmflt_vv_f16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmflt_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vbool32_t test_vmflt_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmflt_vf_f16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmflt_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vbool16_t test_vmflt_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vmflt_vv_f16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmflt_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vbool16_t test_vmflt_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmflt_vf_f16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmflt_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vbool8_t test_vmflt_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vmflt_vv_f16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmflt_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vbool8_t test_vmflt_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmflt_vf_f16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t 
test_vmflt_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vbool4_t test_vmflt_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vmflt_vv_f16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmflt_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vbool4_t test_vmflt_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmflt_vf_f16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmflt_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vbool2_t test_vmflt_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vmflt_vv_f16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmflt_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vbool2_t test_vmflt_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmflt_vf_f16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmflt_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vbool64_t test_vmflt_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vmflt_vv_f32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmflt_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vbool64_t test_vmflt_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vmflt_vf_f32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmflt_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vbool32_t test_vmflt_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vmflt_vv_f32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmflt_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vbool32_t test_vmflt_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vmflt_vf_f32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmflt_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vbool16_t test_vmflt_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vmflt_vv_f32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmflt_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vbool16_t test_vmflt_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vmflt_vf_f32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmflt_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vbool8_t test_vmflt_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vmflt_vv_f32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmflt_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vbool8_t test_vmflt_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vmflt_vf_f32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmflt_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vbool4_t 
test_vmflt_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vmflt_vv_f32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmflt_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vbool4_t test_vmflt_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vmflt_vf_f32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmflt_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vbool64_t test_vmflt_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vmflt_vv_f64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmflt_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vbool64_t test_vmflt_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vmflt_vf_f64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmflt_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vbool32_t test_vmflt_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vmflt_vv_f64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmflt_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vbool32_t test_vmflt_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vmflt_vf_f64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmflt_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vbool16_t test_vmflt_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vmflt_vv_f64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmflt_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vbool16_t test_vmflt_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vmflt_vf_f64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmflt_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vbool8_t test_vmflt_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vmflt_vv_f64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmflt_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vbool8_t test_vmflt_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vmflt_vf_f64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmfne.c b/auto-generated/policy_funcs/llvm-api-tests/vmfne.c index 142d24a82..8529ce921 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmfne.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmfne.c @@ -6,122 +6,164 @@ #include -vbool64_t test_vmfne_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, vfloat16mf4_t vs1, size_t vl) { +vbool64_t test_vmfne_vv_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vmfne_vv_f16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfne_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vfloat16mf4_t vs2, _Float16 rs1, size_t vl) { +vbool64_t test_vmfne_vf_f16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat16mf4_t vs2, _Float16 rs1, + size_t vl) { 
return __riscv_vmfne_vf_f16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfne_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, vfloat16mf2_t vs1, size_t vl) { +vbool32_t test_vmfne_vv_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vmfne_vv_f16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfne_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat16mf2_t vs2, _Float16 rs1, size_t vl) { +vbool32_t test_vmfne_vf_f16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat16mf2_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfne_vf_f16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfne_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, vfloat16m1_t vs1, size_t vl) { +vbool16_t test_vmfne_vv_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, vfloat16m1_t vs1, + size_t vl) { return __riscv_vmfne_vv_f16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfne_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, vfloat16m1_t vs2, _Float16 rs1, size_t vl) { +vbool16_t test_vmfne_vf_f16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat16m1_t vs2, _Float16 rs1, + size_t vl) { return __riscv_vmfne_vf_f16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfne_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, vfloat16m2_t vs1, size_t vl) { +vbool8_t test_vmfne_vv_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + vfloat16m2_t vs1, size_t vl) { return __riscv_vmfne_vv_f16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfne_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, _Float16 rs1, size_t vl) { +vbool8_t test_vmfne_vf_f16m2_b8_mu(vbool8_t vm, vbool8_t vd, vfloat16m2_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfne_vf_f16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmfne_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, vfloat16m4_t vs1, size_t vl) { +vbool4_t test_vmfne_vv_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + vfloat16m4_t vs1, size_t vl) { return __riscv_vmfne_vv_f16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmfne_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, _Float16 rs1, size_t vl) { +vbool4_t test_vmfne_vf_f16m4_b4_mu(vbool4_t vm, vbool4_t vd, vfloat16m4_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfne_vf_f16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmfne_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, vfloat16m8_t vs1, size_t vl) { +vbool2_t test_vmfne_vv_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + vfloat16m8_t vs1, size_t vl) { return __riscv_vmfne_vv_f16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmfne_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, _Float16 rs1, size_t vl) { +vbool2_t test_vmfne_vf_f16m8_b2_mu(vbool2_t vm, vbool2_t vd, vfloat16m8_t vs2, + _Float16 rs1, size_t vl) { return __riscv_vmfne_vf_f16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmfne_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, vfloat32mf2_t vs1, size_t vl) { +vbool64_t test_vmfne_vv_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vmfne_vv_f32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfne_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vfloat32mf2_t vs2, float rs1, size_t vl) { +vbool64_t test_vmfne_vf_f32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat32mf2_t vs2, float rs1, size_t vl) { return __riscv_vmfne_vf_f32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t 
test_vmfne_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, vfloat32m1_t vs1, size_t vl) { +vbool32_t test_vmfne_vv_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, vfloat32m1_t vs1, + size_t vl) { return __riscv_vmfne_vv_f32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfne_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, vfloat32m1_t vs2, float rs1, size_t vl) { +vbool32_t test_vmfne_vf_f32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat32m1_t vs2, float rs1, size_t vl) { return __riscv_vmfne_vf_f32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfne_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, vfloat32m2_t vs1, size_t vl) { +vbool16_t test_vmfne_vv_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, vfloat32m2_t vs1, + size_t vl) { return __riscv_vmfne_vv_f32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfne_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, vfloat32m2_t vs2, float rs1, size_t vl) { +vbool16_t test_vmfne_vf_f32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat32m2_t vs2, float rs1, size_t vl) { return __riscv_vmfne_vf_f32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfne_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, vfloat32m4_t vs1, size_t vl) { +vbool8_t test_vmfne_vv_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + vfloat32m4_t vs1, size_t vl) { return __riscv_vmfne_vv_f32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfne_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, float rs1, size_t vl) { +vbool8_t test_vmfne_vf_f32m4_b8_mu(vbool8_t vm, vbool8_t vd, vfloat32m4_t vs2, + float rs1, size_t vl) { return __riscv_vmfne_vf_f32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmfne_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, vfloat32m8_t vs1, size_t vl) { +vbool4_t test_vmfne_vv_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + vfloat32m8_t vs1, size_t vl) { return __riscv_vmfne_vv_f32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmfne_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, float rs1, size_t vl) { +vbool4_t test_vmfne_vf_f32m8_b4_mu(vbool4_t vm, vbool4_t vd, vfloat32m8_t vs2, + float rs1, size_t vl) { return __riscv_vmfne_vf_f32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmfne_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, vfloat64m1_t vs1, size_t vl) { +vbool64_t test_vmfne_vv_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, vfloat64m1_t vs1, + size_t vl) { return __riscv_vmfne_vv_f64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmfne_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, vfloat64m1_t vs2, double rs1, size_t vl) { +vbool64_t test_vmfne_vf_f64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vfloat64m1_t vs2, double rs1, size_t vl) { return __riscv_vmfne_vf_f64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmfne_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, vfloat64m2_t vs1, size_t vl) { +vbool32_t test_vmfne_vv_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, vfloat64m2_t vs1, + size_t vl) { return __riscv_vmfne_vv_f64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmfne_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, vfloat64m2_t vs2, double rs1, size_t vl) { +vbool32_t test_vmfne_vf_f64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vfloat64m2_t vs2, double rs1, size_t vl) { return __riscv_vmfne_vf_f64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmfne_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, vfloat64m4_t vs2, vfloat64m4_t vs1, size_t vl) { +vbool16_t 
test_vmfne_vv_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, vfloat64m4_t vs1, + size_t vl) { return __riscv_vmfne_vv_f64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmfne_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, vfloat64m4_t vs2, double rs1, size_t vl) { +vbool16_t test_vmfne_vf_f64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vfloat64m4_t vs2, double rs1, size_t vl) { return __riscv_vmfne_vf_f64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmfne_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, vfloat64m8_t vs1, size_t vl) { +vbool8_t test_vmfne_vv_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + vfloat64m8_t vs1, size_t vl) { return __riscv_vmfne_vv_f64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmfne_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, double rs1, size_t vl) { +vbool8_t test_vmfne_vf_f64m8_b8_mu(vbool8_t vm, vbool8_t vd, vfloat64m8_t vs2, + double rs1, size_t vl) { return __riscv_vmfne_vf_f64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmin.c b/auto-generated/policy_funcs/llvm-api-tests/vmin.c index 1b3d1ed4e..9f9987f03 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmin.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmin.c @@ -5,706 +5,891 @@ #include <riscv_vector.h> -vint8mf8_t test_vmin_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmin_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vmin_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vmin_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmin_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmin_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vmin_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmin_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vmin_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vmin_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmin_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmin_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vmin_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmin_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vmin_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vmin_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmin_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmin_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vmin_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmin_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vmin_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vmin_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmin_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmin_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vmin_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmin_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vmin_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vmin_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmin_vx_i8m2_tu(vint8m2_t
vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmin_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vmin_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmin_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vmin_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vmin_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmin_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmin_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vmin_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmin_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vmin_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vmin_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmin_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmin_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vmin_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmin_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vmin_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vmin_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmin_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmin_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vmin_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmin_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vmin_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vmin_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmin_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmin_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vmin_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmin_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vmin_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vmin_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmin_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmin_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vmin_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmin_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vmin_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vmin_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmin_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmin_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vmin_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmin_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vmin_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vmin_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmin_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmin_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vmin_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { 
+vint16m8_t test_vmin_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vmin_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vmin_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmin_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmin_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vmin_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmin_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vmin_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vmin_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmin_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmin_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vmin_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmin_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vmin_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vmin_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmin_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmin_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vmin_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmin_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vmin_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vmin_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmin_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmin_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vmin_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmin_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vmin_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vmin_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmin_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmin_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vmin_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmin_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vmin_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vmin_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmin_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmin_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vmin_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmin_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vmin_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vmin_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmin_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmin_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vmin_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmin_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vmin_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t 
test_vmin_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmin_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmin_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vmin_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmin_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vmin_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vmin_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmin_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmin_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vmin_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmin_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vmin_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vmin_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmin_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmin_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vmin_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmin_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmin_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmin_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmin_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmin_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmin_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmin_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmin_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmin_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmin_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmin_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmin_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmin_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmin_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmin_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmin_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmin_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmin_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmin_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmin_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmin_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t 
vl) { return __riscv_vmin_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmin_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmin_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmin_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmin_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmin_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmin_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmin_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmin_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmin_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmin_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmin_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmin_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmin_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmin_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmin_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmin_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmin_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmin_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmin_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmin_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmin_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmin_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmin_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmin_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmin_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmin_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmin_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmin_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmin_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmin_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmin_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmin_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { 
return __riscv_vmin_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmin_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmin_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmin_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmin_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmin_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmin_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vmin_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmin_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmin_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmin_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmin_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmin_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmin_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmin_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmin_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmin_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmin_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmin_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmin_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmin_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmin_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmin_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmin_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmin_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmin_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmin_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmin_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmin_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmin_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmin_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmin_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmin_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, 
+ vint32m8_t vs1, size_t vl) { return __riscv_vmin_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmin_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmin_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmin_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmin_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmin_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmin_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmin_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmin_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmin_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmin_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmin_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmin_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmin_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmin_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmin_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmin_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmin_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vmin_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmin_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmin_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmin_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmin_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vmin_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmin_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmin_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmin_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmin_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmin_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmin_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmin_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmin_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmin_vx_i8mf4_tumu(vbool32_t vm, 
vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmin_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmin_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmin_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmin_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmin_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmin_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmin_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmin_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmin_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmin_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmin_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmin_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmin_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmin_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmin_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmin_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmin_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmin_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmin_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmin_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmin_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmin_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmin_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmin_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmin_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmin_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmin_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmin_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmin_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmin_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmin_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmin_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + 
vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmin_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmin_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmin_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmin_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmin_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmin_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmin_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmin_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmin_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmin_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmin_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmin_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmin_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmin_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmin_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmin_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmin_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmin_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmin_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vmin_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmin_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmin_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmin_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmin_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmin_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmin_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmin_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmin_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmin_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmin_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmin_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmin_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t 
rs1, size_t vl) { +vint32m1_t test_vmin_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmin_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmin_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmin_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmin_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmin_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmin_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmin_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmin_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmin_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmin_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmin_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmin_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmin_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmin_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmin_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmin_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmin_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmin_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmin_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmin_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmin_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmin_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmin_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmin_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmin_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmin_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmin_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmin_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmin_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmin_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t 
test_vmin_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmin_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmin_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmin_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmin_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vmin_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmin_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmin_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmin_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmin_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmin_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmin_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmin_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmin_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmin_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmin_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmin_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmin_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmin_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmin_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmin_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmin_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmin_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmin_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmin_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmin_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmin_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmin_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmin_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmin_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmin_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmin_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmin_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmin_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, 
size_t vl) { +vint8m4_t test_vmin_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmin_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmin_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmin_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmin_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmin_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmin_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmin_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmin_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmin_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmin_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmin_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmin_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmin_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmin_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmin_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmin_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmin_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmin_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmin_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmin_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmin_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmin_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmin_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmin_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmin_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmin_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmin_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmin_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmin_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmin_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmin_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmin_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { 
+vint16m8_t test_vmin_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmin_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmin_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmin_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmin_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmin_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmin_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmin_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmin_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmin_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmin_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmin_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmin_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmin_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmin_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmin_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmin_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmin_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmin_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmin_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmin_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmin_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmin_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmin_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmin_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmin_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmin_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmin_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmin_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmin_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmin_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmin_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmin_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmin_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmin_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { 
+vint64m1_t test_vmin_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmin_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmin_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmin_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmin_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmin_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmin_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmin_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmin_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmin_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmin_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vmin_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmin_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmin_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmin_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmin_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmin_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vminu.c b/auto-generated/policy_funcs/llvm-api-tests/vminu.c index 224966b85..adce43777 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vminu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vminu.c @@ -5,706 +5,939 @@ #include <riscv_vector.h> -vuint8mf8_t test_vminu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vminu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vminu_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vminu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vminu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vminu_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vminu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vminu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vminu_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vminu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vminu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vminu_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vminu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vminu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vminu_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vminu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vminu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return
__riscv_vminu_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vminu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vminu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vminu_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vminu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vminu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vminu_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vminu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vminu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vminu_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vminu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vminu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vminu_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vminu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vminu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vminu_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vminu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vminu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vminu_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vminu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vminu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vminu_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vminu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vminu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vminu_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vminu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vminu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vminu_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vminu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vminu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vminu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vminu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vminu_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vminu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vminu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vminu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vminu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vminu_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vminu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vminu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t 
test_vminu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vminu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vminu_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vminu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vminu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vminu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vminu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vminu_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vminu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vminu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vminu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vminu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vminu_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vminu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vminu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vminu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vminu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vminu_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vminu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vminu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vminu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vminu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vminu_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vminu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vminu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vminu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vminu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vminu_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vminu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vminu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vminu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vminu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vminu_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vminu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vminu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t 
test_vminu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vminu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vminu_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vminu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vminu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vminu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vminu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vminu_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vminu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vminu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vminu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vminu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vminu_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vminu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vminu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vminu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vminu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vminu_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vminu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vminu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vminu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vminu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vminu_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vminu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vminu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vminu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vminu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vminu_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vminu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vminu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vminu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vminu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vminu_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vminu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vminu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t 
vs2, uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vminu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vminu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vminu_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vminu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vminu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vminu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vminu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vminu_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vminu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vminu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vminu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vminu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vminu_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vminu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vminu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vminu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vminu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vminu_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vminu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vminu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vminu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vminu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vminu_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vminu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vminu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vminu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vminu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vminu_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vminu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vminu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vminu_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vminu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t 
vl) { +vuint16mf2_t test_vminu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vminu_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vminu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vminu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vminu_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vminu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vminu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vminu_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vminu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vminu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vminu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vminu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vminu_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vminu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vminu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vminu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vminu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vminu_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vminu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vminu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vminu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vminu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vminu_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vminu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vminu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vminu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vminu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vminu_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vminu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vminu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vminu_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vminu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vminu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + 
vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vminu_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vminu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vminu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vminu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vminu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vminu_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vminu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vminu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vminu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vminu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vminu_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vminu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vminu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vminu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vminu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vminu_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vminu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vminu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vminu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vminu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vminu_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vminu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vminu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vminu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vminu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vminu_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vminu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vminu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vminu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vminu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vminu_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } 
-vuint64m4_t test_vminu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vminu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vminu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vminu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vminu_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vminu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vminu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vminu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vminu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vminu_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vminu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vminu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vminu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vminu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vminu_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vminu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vminu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vminu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vminu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vminu_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vminu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vminu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vminu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vminu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vminu_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vminu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vminu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vminu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vminu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vminu_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vminu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t 
test_vminu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vminu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vminu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vminu_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vminu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vminu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vminu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vminu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vminu_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vminu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vminu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vminu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vminu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vminu_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vminu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vminu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vminu_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vminu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vminu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vminu_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vminu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vminu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vminu_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vminu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vminu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vminu_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vminu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vminu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vminu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vminu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vminu_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vminu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vminu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t 
vl) { return __riscv_vminu_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vminu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vminu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vminu_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vminu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vminu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vminu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vminu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vminu_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vminu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vminu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vminu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vminu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vminu_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vminu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vminu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vminu_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vminu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vminu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vminu_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vminu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vminu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vminu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vminu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vminu_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vminu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vminu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vminu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vminu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vminu_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vminu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vminu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); 
} -vuint32m8_t test_vminu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vminu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vminu_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vminu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vminu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vminu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vminu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vminu_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vminu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vminu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vminu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vminu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vminu_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vminu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vminu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vminu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vminu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vminu_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vminu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vminu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vminu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vminu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vminu_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vminu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vminu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vminu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vminu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vminu_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vminu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vminu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vminu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t 
vs1, size_t vl) { +vuint8mf4_t test_vminu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vminu_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vminu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vminu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vminu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vminu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vminu_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vminu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vminu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vminu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vminu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vminu_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vminu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vminu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vminu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vminu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vminu_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vminu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vminu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vminu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vminu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vminu_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vminu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vminu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vminu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vminu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vminu_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vminu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vminu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vminu_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vminu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vminu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vminu_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vminu_vx_u16mf4_mu(vbool64_t vm, 
vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vminu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vminu_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vminu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vminu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vminu_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vminu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vminu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vminu_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vminu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vminu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vminu_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vminu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vminu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vminu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vminu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vminu_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vminu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vminu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vminu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vminu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vminu_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vminu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vminu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vminu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vminu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vminu_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vminu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vminu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vminu_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vminu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vminu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vminu_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vminu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vminu_vx_u32mf2_mu(vbool64_t vm, 
vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vminu_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vminu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vminu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vminu_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vminu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vminu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vminu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vminu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vminu_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vminu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vminu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vminu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vminu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vminu_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vminu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vminu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vminu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vminu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vminu_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vminu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vminu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vminu_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vminu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vminu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vminu_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vminu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vminu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vminu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vminu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vminu_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vminu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vminu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t 
test_vminu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vminu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vminu_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vminu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vminu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vminu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vminu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vminu_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vminu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vminu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vminu_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsbf.c b/auto-generated/policy_funcs/llvm-api-tests/vmsbf.c index e8fed2f23..a6e4bf118 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmsbf.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmsbf.c @@ -21,14 +21,17 @@ vbool8_t test_vmsbf_m_b8_mu(vbool8_t vm, vbool8_t vd, vbool8_t vs2, size_t vl) { return __riscv_vmsbf_m_b8_mu(vm, vd, vs2, vl); } -vbool16_t test_vmsbf_m_b16_mu(vbool16_t vm, vbool16_t vd, vbool16_t vs2, size_t vl) { +vbool16_t test_vmsbf_m_b16_mu(vbool16_t vm, vbool16_t vd, vbool16_t vs2, + size_t vl) { return __riscv_vmsbf_m_b16_mu(vm, vd, vs2, vl); } -vbool32_t test_vmsbf_m_b32_mu(vbool32_t vm, vbool32_t vd, vbool32_t vs2, size_t vl) { +vbool32_t test_vmsbf_m_b32_mu(vbool32_t vm, vbool32_t vd, vbool32_t vs2, + size_t vl) { return __riscv_vmsbf_m_b32_mu(vm, vd, vs2, vl); } -vbool64_t test_vmsbf_m_b64_mu(vbool64_t vm, vbool64_t vd, vbool64_t vs2, size_t vl) { +vbool64_t test_vmsbf_m_b64_mu(vbool64_t vm, vbool64_t vd, vbool64_t vs2, + size_t vl) { return __riscv_vmsbf_m_b64_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmseq.c b/auto-generated/policy_funcs/llvm-api-tests/vmseq.c index 70952eaba..7bd8df825 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmseq.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmseq.c @@ -6,354 +6,460 @@ #include <riscv_vector.h> -vbool64_t test_vmseq_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vbool64_t test_vmseq_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmseq_vv_i8mf8_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmseq_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vbool64_t test_vmseq_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmseq_vx_i8mf8_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmseq_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vbool32_t test_vmseq_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmseq_vv_i8mf4_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmseq_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vbool32_t test_vmseq_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + int8_t
rs1, size_t vl) { return __riscv_vmseq_vx_i8mf4_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmseq_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vbool16_t test_vmseq_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmseq_vv_i8mf2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmseq_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vbool16_t test_vmseq_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmseq_vx_i8mf2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmseq_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vbool8_t test_vmseq_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmseq_vv_i8m1_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmseq_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vbool8_t test_vmseq_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmseq_vx_i8m1_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmseq_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vbool4_t test_vmseq_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmseq_vv_i8m2_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmseq_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vbool4_t test_vmseq_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmseq_vx_i8m2_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmseq_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vbool2_t test_vmseq_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmseq_vv_i8m4_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmseq_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vbool2_t test_vmseq_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmseq_vx_i8m4_b2_mu(vm, vd, vs2, rs1, vl); } -vbool1_t test_vmseq_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vbool1_t test_vmseq_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmseq_vv_i8m8_b1_mu(vm, vd, vs2, vs1, vl); } -vbool1_t test_vmseq_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vbool1_t test_vmseq_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmseq_vx_i8m8_b1_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmseq_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vbool64_t test_vmseq_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmseq_vv_i16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmseq_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vbool64_t test_vmseq_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmseq_vx_i16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmseq_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vbool32_t test_vmseq_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t 
vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmseq_vv_i16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmseq_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vbool32_t test_vmseq_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmseq_vx_i16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmseq_vv_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vbool16_t test_vmseq_vv_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmseq_vv_i16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmseq_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vbool16_t test_vmseq_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmseq_vx_i16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmseq_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vbool8_t test_vmseq_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmseq_vv_i16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmseq_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vbool8_t test_vmseq_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmseq_vx_i16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmseq_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vbool4_t test_vmseq_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmseq_vv_i16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmseq_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vbool4_t test_vmseq_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmseq_vx_i16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmseq_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vbool2_t test_vmseq_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmseq_vv_i16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmseq_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vbool2_t test_vmseq_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmseq_vx_i16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmseq_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vbool64_t test_vmseq_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmseq_vv_i32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmseq_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vbool64_t test_vmseq_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmseq_vx_i32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmseq_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vbool32_t test_vmseq_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmseq_vv_i32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmseq_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, int32_t 
rs1, size_t vl) { +vbool32_t test_vmseq_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmseq_vx_i32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmseq_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vbool16_t test_vmseq_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmseq_vv_i32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmseq_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vbool16_t test_vmseq_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmseq_vx_i32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmseq_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vbool8_t test_vmseq_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmseq_vv_i32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmseq_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vbool8_t test_vmseq_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmseq_vx_i32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmseq_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vbool4_t test_vmseq_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmseq_vv_i32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmseq_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vbool4_t test_vmseq_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmseq_vx_i32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmseq_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vbool64_t test_vmseq_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmseq_vv_i64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmseq_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vbool64_t test_vmseq_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmseq_vx_i64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmseq_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vbool32_t test_vmseq_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmseq_vv_i64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmseq_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vbool32_t test_vmseq_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmseq_vx_i64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmseq_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vbool16_t test_vmseq_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmseq_vv_i64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmseq_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vbool16_t test_vmseq_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmseq_vx_i64m4_b16_mu(vm, vd, vs2, rs1, vl); } 
-vbool8_t test_vmseq_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vbool8_t test_vmseq_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmseq_vv_i64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmseq_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vbool8_t test_vmseq_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmseq_vx_i64m8_b8_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmseq_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vbool64_t test_vmseq_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmseq_vv_u8mf8_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmseq_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vbool64_t test_vmseq_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmseq_vx_u8mf8_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmseq_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vbool32_t test_vmseq_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmseq_vv_u8mf4_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmseq_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vbool32_t test_vmseq_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmseq_vx_u8mf4_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmseq_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vbool16_t test_vmseq_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmseq_vv_u8mf2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmseq_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vbool16_t test_vmseq_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmseq_vx_u8mf2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmseq_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vbool8_t test_vmseq_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmseq_vv_u8m1_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmseq_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vbool8_t test_vmseq_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmseq_vx_u8m1_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmseq_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vbool4_t test_vmseq_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmseq_vv_u8m2_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmseq_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vbool4_t test_vmseq_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmseq_vx_u8m2_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmseq_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vbool2_t test_vmseq_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, 
size_t vl) { return __riscv_vmseq_vv_u8m4_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmseq_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vbool2_t test_vmseq_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmseq_vx_u8m4_b2_mu(vm, vd, vs2, rs1, vl); } -vbool1_t test_vmseq_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vbool1_t test_vmseq_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmseq_vv_u8m8_b1_mu(vm, vd, vs2, vs1, vl); } -vbool1_t test_vmseq_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vbool1_t test_vmseq_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmseq_vx_u8m8_b1_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmseq_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vbool64_t test_vmseq_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmseq_vv_u16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmseq_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vbool64_t test_vmseq_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmseq_vx_u16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmseq_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vbool32_t test_vmseq_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmseq_vv_u16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmseq_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vbool32_t test_vmseq_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmseq_vx_u16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmseq_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vbool16_t test_vmseq_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmseq_vv_u16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmseq_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vbool16_t test_vmseq_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmseq_vx_u16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmseq_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vbool8_t test_vmseq_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vmseq_vv_u16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmseq_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vbool8_t test_vmseq_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmseq_vx_u16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmseq_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vbool4_t test_vmseq_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmseq_vv_u16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmseq_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, 
uint16_t rs1, size_t vl) { +vbool4_t test_vmseq_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmseq_vx_u16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmseq_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vbool2_t test_vmseq_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmseq_vv_u16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmseq_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vbool2_t test_vmseq_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmseq_vx_u16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmseq_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vbool64_t test_vmseq_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmseq_vv_u32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmseq_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vbool64_t test_vmseq_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmseq_vx_u32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmseq_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vbool32_t test_vmseq_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmseq_vv_u32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmseq_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vbool32_t test_vmseq_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmseq_vx_u32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmseq_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vbool16_t test_vmseq_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmseq_vv_u32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmseq_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vbool16_t test_vmseq_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmseq_vx_u32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmseq_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vbool8_t test_vmseq_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmseq_vv_u32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmseq_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vbool8_t test_vmseq_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmseq_vx_u32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmseq_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vbool4_t test_vmseq_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmseq_vv_u32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmseq_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vbool4_t test_vmseq_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return 
__riscv_vmseq_vx_u32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmseq_vv_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vbool64_t test_vmseq_vv_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmseq_vv_u64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmseq_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vbool64_t test_vmseq_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmseq_vx_u64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmseq_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vbool32_t test_vmseq_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmseq_vv_u64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmseq_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vbool32_t test_vmseq_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmseq_vx_u64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmseq_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vbool16_t test_vmseq_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmseq_vv_u64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmseq_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vbool16_t test_vmseq_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmseq_vx_u64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmseq_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vbool8_t test_vmseq_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmseq_vv_u64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmseq_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vbool8_t test_vmseq_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmseq_vx_u64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsge.c b/auto-generated/policy_funcs/llvm-api-tests/vmsge.c index 38c42dd59..3a35b27de 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmsge.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmsge.c @@ -6,178 +6,225 @@ #include -vbool64_t test_vmsge_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vbool64_t test_vmsge_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmsge_vv_i8mf8_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsge_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vbool64_t test_vmsge_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsge_vx_i8mf8_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsge_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vbool32_t test_vmsge_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmsge_vv_i8mf4_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsge_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, 
vint8mf4_t vs2, int8_t rs1, size_t vl) { +vbool32_t test_vmsge_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsge_vx_i8mf4_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsge_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vbool16_t test_vmsge_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmsge_vv_i8mf2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsge_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vbool16_t test_vmsge_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsge_vx_i8mf2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsge_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vbool8_t test_vmsge_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmsge_vv_i8m1_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsge_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vbool8_t test_vmsge_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsge_vx_i8m1_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsge_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vbool4_t test_vmsge_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmsge_vv_i8m2_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsge_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vbool4_t test_vmsge_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsge_vx_i8m2_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsge_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vbool2_t test_vmsge_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmsge_vv_i8m4_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsge_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vbool2_t test_vmsge_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsge_vx_i8m4_b2_mu(vm, vd, vs2, rs1, vl); } -vbool1_t test_vmsge_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vbool1_t test_vmsge_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmsge_vv_i8m8_b1_mu(vm, vd, vs2, vs1, vl); } -vbool1_t test_vmsge_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vbool1_t test_vmsge_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsge_vx_i8m8_b1_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsge_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vbool64_t test_vmsge_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmsge_vv_i16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsge_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vbool64_t test_vmsge_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmsge_vx_i16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsge_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t 
vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vbool32_t test_vmsge_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmsge_vv_i16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsge_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vbool32_t test_vmsge_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmsge_vx_i16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsge_vv_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vbool16_t test_vmsge_vv_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmsge_vv_i16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsge_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vbool16_t test_vmsge_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsge_vx_i16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsge_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vbool8_t test_vmsge_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmsge_vv_i16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsge_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vbool8_t test_vmsge_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsge_vx_i16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsge_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vbool4_t test_vmsge_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmsge_vv_i16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsge_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vbool4_t test_vmsge_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsge_vx_i16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsge_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vbool2_t test_vmsge_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmsge_vv_i16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsge_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vbool2_t test_vmsge_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsge_vx_i16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsge_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vbool64_t test_vmsge_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmsge_vv_i32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsge_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vbool64_t test_vmsge_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmsge_vx_i32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsge_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vbool32_t test_vmsge_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return 
__riscv_vmsge_vv_i32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsge_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vbool32_t test_vmsge_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsge_vx_i32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsge_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vbool16_t test_vmsge_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmsge_vv_i32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsge_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vbool16_t test_vmsge_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsge_vx_i32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsge_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vbool8_t test_vmsge_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmsge_vv_i32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsge_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vbool8_t test_vmsge_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsge_vx_i32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsge_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vbool4_t test_vmsge_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmsge_vv_i32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsge_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vbool4_t test_vmsge_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsge_vx_i32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsge_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vbool64_t test_vmsge_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmsge_vv_i64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsge_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vbool64_t test_vmsge_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsge_vx_i64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsge_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vbool32_t test_vmsge_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmsge_vv_i64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsge_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vbool32_t test_vmsge_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsge_vx_i64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsge_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vbool16_t test_vmsge_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmsge_vv_i64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsge_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vbool16_t 
test_vmsge_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsge_vx_i64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsge_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vbool8_t test_vmsge_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmsge_vv_i64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsge_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vbool8_t test_vmsge_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsge_vx_i64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsgeu.c b/auto-generated/policy_funcs/llvm-api-tests/vmsgeu.c index c5ebbb932..10a6b2f82 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmsgeu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmsgeu.c @@ -6,178 +6,243 @@ #include -vbool64_t test_vmsgeu_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vbool64_t test_vmsgeu_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u8mf8_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsgeu_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vbool64_t test_vmsgeu_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u8mf8_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgeu_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vbool32_t test_vmsgeu_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u8mf4_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgeu_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vbool32_t test_vmsgeu_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u8mf4_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgeu_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vbool16_t test_vmsgeu_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u8mf2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgeu_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vbool16_t test_vmsgeu_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u8mf2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgeu_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vbool8_t test_vmsgeu_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmsgeu_vv_u8m1_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgeu_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vbool8_t test_vmsgeu_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u8m1_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsgeu_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vbool4_t test_vmsgeu_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmsgeu_vv_u8m2_b4_mu(vm, vd, 
vs2, vs1, vl); } -vbool4_t test_vmsgeu_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vbool4_t test_vmsgeu_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u8m2_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsgeu_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vbool2_t test_vmsgeu_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmsgeu_vv_u8m4_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsgeu_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vbool2_t test_vmsgeu_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u8m4_b2_mu(vm, vd, vs2, rs1, vl); } -vbool1_t test_vmsgeu_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vbool1_t test_vmsgeu_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmsgeu_vv_u8m8_b1_mu(vm, vd, vs2, vs1, vl); } -vbool1_t test_vmsgeu_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vbool1_t test_vmsgeu_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u8m8_b1_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsgeu_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vbool64_t test_vmsgeu_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsgeu_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vbool64_t test_vmsgeu_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsgeu_vx_u16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgeu_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vbool32_t test_vmsgeu_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgeu_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vbool32_t test_vmsgeu_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsgeu_vx_u16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgeu_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vbool16_t test_vmsgeu_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgeu_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vbool16_t test_vmsgeu_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsgeu_vx_u16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgeu_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vbool8_t test_vmsgeu_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vmsgeu_vv_u16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgeu_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { 
+vbool8_t test_vmsgeu_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsgeu_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vbool4_t test_vmsgeu_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmsgeu_vv_u16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsgeu_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vbool4_t test_vmsgeu_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsgeu_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vbool2_t test_vmsgeu_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmsgeu_vv_u16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsgeu_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vbool2_t test_vmsgeu_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsgeu_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vbool64_t test_vmsgeu_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsgeu_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vbool64_t test_vmsgeu_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsgeu_vx_u32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgeu_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vbool32_t test_vmsgeu_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgeu_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vbool32_t test_vmsgeu_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsgeu_vx_u32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgeu_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vbool16_t test_vmsgeu_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgeu_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vbool16_t test_vmsgeu_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsgeu_vx_u32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgeu_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vbool8_t test_vmsgeu_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmsgeu_vv_u32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgeu_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vbool8_t test_vmsgeu_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t 
vl) { return __riscv_vmsgeu_vx_u32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsgeu_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vbool4_t test_vmsgeu_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmsgeu_vv_u32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsgeu_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vbool4_t test_vmsgeu_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsgeu_vv_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vbool64_t test_vmsgeu_vv_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsgeu_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vbool64_t test_vmsgeu_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsgeu_vx_u64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgeu_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vbool32_t test_vmsgeu_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgeu_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vbool32_t test_vmsgeu_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsgeu_vx_u64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgeu_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vbool16_t test_vmsgeu_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmsgeu_vv_u64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgeu_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vbool16_t test_vmsgeu_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsgeu_vx_u64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgeu_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vbool8_t test_vmsgeu_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmsgeu_vv_u64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgeu_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vbool8_t test_vmsgeu_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmsgeu_vx_u64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsgt.c b/auto-generated/policy_funcs/llvm-api-tests/vmsgt.c index 62b84314f..6ae082474 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmsgt.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmsgt.c @@ -6,178 +6,225 @@ #include -vbool64_t test_vmsgt_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vbool64_t test_vmsgt_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmsgt_vv_i8mf8_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t 
test_vmsgt_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vbool64_t test_vmsgt_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsgt_vx_i8mf8_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgt_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vbool32_t test_vmsgt_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmsgt_vv_i8mf4_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgt_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vbool32_t test_vmsgt_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsgt_vx_i8mf4_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgt_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vbool16_t test_vmsgt_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmsgt_vv_i8mf2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgt_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vbool16_t test_vmsgt_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsgt_vx_i8mf2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgt_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vbool8_t test_vmsgt_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmsgt_vv_i8m1_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgt_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vbool8_t test_vmsgt_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsgt_vx_i8m1_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsgt_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vbool4_t test_vmsgt_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmsgt_vv_i8m2_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsgt_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vbool4_t test_vmsgt_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsgt_vx_i8m2_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsgt_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vbool2_t test_vmsgt_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmsgt_vv_i8m4_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsgt_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vbool2_t test_vmsgt_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsgt_vx_i8m4_b2_mu(vm, vd, vs2, rs1, vl); } -vbool1_t test_vmsgt_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vbool1_t test_vmsgt_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmsgt_vv_i8m8_b1_mu(vm, vd, vs2, vs1, vl); } -vbool1_t test_vmsgt_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vbool1_t test_vmsgt_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsgt_vx_i8m8_b1_mu(vm, vd, vs2, rs1, vl); } -vbool64_t 
test_vmsgt_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vbool64_t test_vmsgt_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmsgt_vv_i16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsgt_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vbool64_t test_vmsgt_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmsgt_vx_i16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgt_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vbool32_t test_vmsgt_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmsgt_vv_i16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgt_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vbool32_t test_vmsgt_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmsgt_vx_i16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgt_vv_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vbool16_t test_vmsgt_vv_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmsgt_vv_i16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgt_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vbool16_t test_vmsgt_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsgt_vx_i16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgt_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vbool8_t test_vmsgt_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmsgt_vv_i16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgt_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vbool8_t test_vmsgt_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsgt_vx_i16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsgt_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vbool4_t test_vmsgt_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmsgt_vv_i16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsgt_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vbool4_t test_vmsgt_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsgt_vx_i16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsgt_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vbool2_t test_vmsgt_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmsgt_vv_i16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsgt_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vbool2_t test_vmsgt_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsgt_vx_i16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsgt_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vbool64_t test_vmsgt_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t 
vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmsgt_vv_i32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsgt_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vbool64_t test_vmsgt_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmsgt_vx_i32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgt_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vbool32_t test_vmsgt_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmsgt_vv_i32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgt_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vbool32_t test_vmsgt_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsgt_vx_i32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgt_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vbool16_t test_vmsgt_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmsgt_vv_i32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgt_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vbool16_t test_vmsgt_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsgt_vx_i32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgt_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vbool8_t test_vmsgt_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmsgt_vv_i32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgt_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vbool8_t test_vmsgt_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsgt_vx_i32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsgt_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vbool4_t test_vmsgt_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmsgt_vv_i32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsgt_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vbool4_t test_vmsgt_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsgt_vx_i32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsgt_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vbool64_t test_vmsgt_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmsgt_vv_i64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsgt_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vbool64_t test_vmsgt_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsgt_vx_i64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgt_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vbool32_t test_vmsgt_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmsgt_vv_i64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgt_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, int64_t 
rs1, size_t vl) { +vbool32_t test_vmsgt_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsgt_vx_i64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgt_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vbool16_t test_vmsgt_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmsgt_vv_i64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgt_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vbool16_t test_vmsgt_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsgt_vx_i64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgt_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vbool8_t test_vmsgt_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmsgt_vv_i64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgt_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vbool8_t test_vmsgt_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsgt_vx_i64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsgtu.c b/auto-generated/policy_funcs/llvm-api-tests/vmsgtu.c index af5c1969b..1bd0be85d 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmsgtu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmsgtu.c @@ -6,178 +6,243 @@ #include -vbool64_t test_vmsgtu_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vbool64_t test_vmsgtu_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u8mf8_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsgtu_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vbool64_t test_vmsgtu_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u8mf8_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgtu_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vbool32_t test_vmsgtu_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u8mf4_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgtu_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vbool32_t test_vmsgtu_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u8mf4_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgtu_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vbool16_t test_vmsgtu_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u8mf2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgtu_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vbool16_t test_vmsgtu_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u8mf2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgtu_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vbool8_t test_vmsgtu_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t 
vl) { return __riscv_vmsgtu_vv_u8m1_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgtu_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vbool8_t test_vmsgtu_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u8m1_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsgtu_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vbool4_t test_vmsgtu_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmsgtu_vv_u8m2_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsgtu_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vbool4_t test_vmsgtu_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u8m2_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsgtu_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vbool2_t test_vmsgtu_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmsgtu_vv_u8m4_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsgtu_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vbool2_t test_vmsgtu_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u8m4_b2_mu(vm, vd, vs2, rs1, vl); } -vbool1_t test_vmsgtu_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vbool1_t test_vmsgtu_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmsgtu_vv_u8m8_b1_mu(vm, vd, vs2, vs1, vl); } -vbool1_t test_vmsgtu_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vbool1_t test_vmsgtu_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u8m8_b1_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsgtu_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vbool64_t test_vmsgtu_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsgtu_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vbool64_t test_vmsgtu_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsgtu_vx_u16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgtu_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vbool32_t test_vmsgtu_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgtu_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vbool32_t test_vmsgtu_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsgtu_vx_u16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgtu_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vbool16_t test_vmsgtu_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgtu_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, 
uint16_t rs1, size_t vl) { +vbool16_t test_vmsgtu_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsgtu_vx_u16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgtu_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vbool8_t test_vmsgtu_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vmsgtu_vv_u16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgtu_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vbool8_t test_vmsgtu_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsgtu_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vbool4_t test_vmsgtu_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmsgtu_vv_u16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsgtu_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vbool4_t test_vmsgtu_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsgtu_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vbool2_t test_vmsgtu_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmsgtu_vv_u16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsgtu_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vbool2_t test_vmsgtu_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsgtu_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vbool64_t test_vmsgtu_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsgtu_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vbool64_t test_vmsgtu_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsgtu_vx_u32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgtu_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vbool32_t test_vmsgtu_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgtu_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vbool32_t test_vmsgtu_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsgtu_vx_u32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgtu_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vbool16_t test_vmsgtu_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgtu_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vbool16_t test_vmsgtu_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, + 
vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsgtu_vx_u32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgtu_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vbool8_t test_vmsgtu_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmsgtu_vv_u32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgtu_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vbool8_t test_vmsgtu_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsgtu_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vbool4_t test_vmsgtu_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmsgtu_vv_u32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsgtu_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vbool4_t test_vmsgtu_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsgtu_vv_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vbool64_t test_vmsgtu_vv_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsgtu_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vbool64_t test_vmsgtu_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsgtu_vx_u64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsgtu_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vbool32_t test_vmsgtu_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsgtu_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vbool32_t test_vmsgtu_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsgtu_vx_u64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsgtu_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vbool16_t test_vmsgtu_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmsgtu_vv_u64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsgtu_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vbool16_t test_vmsgtu_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsgtu_vx_u64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsgtu_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vbool8_t test_vmsgtu_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmsgtu_vv_u64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsgtu_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vbool8_t test_vmsgtu_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmsgtu_vx_u64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff 
--git a/auto-generated/policy_funcs/llvm-api-tests/vmsif.c b/auto-generated/policy_funcs/llvm-api-tests/vmsif.c index 429e35992..d91e7391f 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmsif.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmsif.c @@ -21,14 +21,17 @@ vbool8_t test_vmsif_m_b8_mu(vbool8_t vm, vbool8_t vd, vbool8_t vs2, size_t vl) { return __riscv_vmsif_m_b8_mu(vm, vd, vs2, vl); } -vbool16_t test_vmsif_m_b16_mu(vbool16_t vm, vbool16_t vd, vbool16_t vs2, size_t vl) { +vbool16_t test_vmsif_m_b16_mu(vbool16_t vm, vbool16_t vd, vbool16_t vs2, + size_t vl) { return __riscv_vmsif_m_b16_mu(vm, vd, vs2, vl); } -vbool32_t test_vmsif_m_b32_mu(vbool32_t vm, vbool32_t vd, vbool32_t vs2, size_t vl) { +vbool32_t test_vmsif_m_b32_mu(vbool32_t vm, vbool32_t vd, vbool32_t vs2, + size_t vl) { return __riscv_vmsif_m_b32_mu(vm, vd, vs2, vl); } -vbool64_t test_vmsif_m_b64_mu(vbool64_t vm, vbool64_t vd, vbool64_t vs2, size_t vl) { +vbool64_t test_vmsif_m_b64_mu(vbool64_t vm, vbool64_t vd, vbool64_t vs2, + size_t vl) { return __riscv_vmsif_m_b64_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsle.c b/auto-generated/policy_funcs/llvm-api-tests/vmsle.c index 33832a5b2..24285d266 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmsle.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmsle.c @@ -6,178 +6,225 @@ #include -vbool64_t test_vmsle_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vbool64_t test_vmsle_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmsle_vv_i8mf8_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsle_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vbool64_t test_vmsle_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsle_vx_i8mf8_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsle_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vbool32_t test_vmsle_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmsle_vv_i8mf4_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsle_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vbool32_t test_vmsle_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsle_vx_i8mf4_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsle_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vbool16_t test_vmsle_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmsle_vv_i8mf2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsle_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vbool16_t test_vmsle_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsle_vx_i8mf2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsle_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vbool8_t test_vmsle_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmsle_vv_i8m1_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsle_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vbool8_t test_vmsle_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return 
__riscv_vmsle_vx_i8m1_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsle_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vbool4_t test_vmsle_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmsle_vv_i8m2_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsle_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vbool4_t test_vmsle_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsle_vx_i8m2_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsle_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vbool2_t test_vmsle_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmsle_vv_i8m4_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsle_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vbool2_t test_vmsle_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsle_vx_i8m4_b2_mu(vm, vd, vs2, rs1, vl); } -vbool1_t test_vmsle_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vbool1_t test_vmsle_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmsle_vv_i8m8_b1_mu(vm, vd, vs2, vs1, vl); } -vbool1_t test_vmsle_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vbool1_t test_vmsle_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsle_vx_i8m8_b1_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsle_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vbool64_t test_vmsle_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmsle_vv_i16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsle_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vbool64_t test_vmsle_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmsle_vx_i16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsle_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vbool32_t test_vmsle_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmsle_vv_i16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsle_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vbool32_t test_vmsle_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmsle_vx_i16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsle_vv_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vbool16_t test_vmsle_vv_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmsle_vv_i16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsle_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vbool16_t test_vmsle_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsle_vx_i16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsle_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vbool8_t test_vmsle_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, 
vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmsle_vv_i16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsle_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vbool8_t test_vmsle_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsle_vx_i16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsle_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vbool4_t test_vmsle_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmsle_vv_i16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsle_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vbool4_t test_vmsle_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsle_vx_i16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsle_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vbool2_t test_vmsle_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmsle_vv_i16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsle_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vbool2_t test_vmsle_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsle_vx_i16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsle_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vbool64_t test_vmsle_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmsle_vv_i32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsle_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vbool64_t test_vmsle_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmsle_vx_i32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsle_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vbool32_t test_vmsle_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmsle_vv_i32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsle_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vbool32_t test_vmsle_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsle_vx_i32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsle_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vbool16_t test_vmsle_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmsle_vv_i32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsle_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vbool16_t test_vmsle_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsle_vx_i32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsle_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vbool8_t test_vmsle_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmsle_vv_i32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsle_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, int32_t rs1, 
size_t vl) { +vbool8_t test_vmsle_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsle_vx_i32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsle_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vbool4_t test_vmsle_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmsle_vv_i32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsle_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vbool4_t test_vmsle_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmsle_vx_i32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsle_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vbool64_t test_vmsle_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmsle_vv_i64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsle_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vbool64_t test_vmsle_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsle_vx_i64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsle_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vbool32_t test_vmsle_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmsle_vv_i64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsle_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vbool32_t test_vmsle_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsle_vx_i64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsle_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vbool16_t test_vmsle_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmsle_vv_i64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsle_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vbool16_t test_vmsle_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsle_vx_i64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsle_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vbool8_t test_vmsle_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmsle_vv_i64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsle_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vbool8_t test_vmsle_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmsle_vx_i64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsleu.c b/auto-generated/policy_funcs/llvm-api-tests/vmsleu.c index 689d3c8a9..98b7c7af0 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmsleu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmsleu.c @@ -6,178 +6,243 @@ #include -vbool64_t test_vmsleu_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vbool64_t test_vmsleu_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u8mf8_b64_mu(vm, vd, vs2, 
vs1, vl); } -vbool64_t test_vmsleu_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vbool64_t test_vmsleu_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsleu_vx_u8mf8_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsleu_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vbool32_t test_vmsleu_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u8mf4_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsleu_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vbool32_t test_vmsleu_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsleu_vx_u8mf4_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsleu_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vbool16_t test_vmsleu_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u8mf2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsleu_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vbool16_t test_vmsleu_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsleu_vx_u8mf2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsleu_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vbool8_t test_vmsleu_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmsleu_vv_u8m1_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsleu_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vbool8_t test_vmsleu_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsleu_vx_u8m1_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsleu_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vbool4_t test_vmsleu_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmsleu_vv_u8m2_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsleu_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vbool4_t test_vmsleu_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsleu_vx_u8m2_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsleu_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vbool2_t test_vmsleu_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmsleu_vv_u8m4_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsleu_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vbool2_t test_vmsleu_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsleu_vx_u8m4_b2_mu(vm, vd, vs2, rs1, vl); } -vbool1_t test_vmsleu_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vbool1_t test_vmsleu_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmsleu_vv_u8m8_b1_mu(vm, vd, vs2, vs1, vl); } -vbool1_t test_vmsleu_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vbool1_t test_vmsleu_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t 
vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsleu_vx_u8m8_b1_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsleu_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vbool64_t test_vmsleu_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsleu_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vbool64_t test_vmsleu_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsleu_vx_u16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsleu_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vbool32_t test_vmsleu_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsleu_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vbool32_t test_vmsleu_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsleu_vx_u16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsleu_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vbool16_t test_vmsleu_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsleu_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vbool16_t test_vmsleu_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsleu_vx_u16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsleu_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vbool8_t test_vmsleu_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vmsleu_vv_u16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsleu_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vbool8_t test_vmsleu_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmsleu_vx_u16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsleu_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vbool4_t test_vmsleu_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmsleu_vv_u16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsleu_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vbool4_t test_vmsleu_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmsleu_vx_u16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsleu_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vbool2_t test_vmsleu_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmsleu_vv_u16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsleu_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vbool2_t test_vmsleu_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmsleu_vx_u16m8_b2_mu(vm, vd, vs2, rs1, vl); 
} -vbool64_t test_vmsleu_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vbool64_t test_vmsleu_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsleu_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vbool64_t test_vmsleu_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsleu_vx_u32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsleu_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vbool32_t test_vmsleu_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsleu_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vbool32_t test_vmsleu_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsleu_vx_u32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsleu_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vbool16_t test_vmsleu_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsleu_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vbool16_t test_vmsleu_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsleu_vx_u32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsleu_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vbool8_t test_vmsleu_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmsleu_vv_u32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsleu_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vbool8_t test_vmsleu_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmsleu_vx_u32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsleu_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vbool4_t test_vmsleu_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmsleu_vv_u32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsleu_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vbool4_t test_vmsleu_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmsleu_vx_u32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsleu_vv_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vbool64_t test_vmsleu_vv_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsleu_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vbool64_t test_vmsleu_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsleu_vx_u64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsleu_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t 
vs2, vuint64m2_t vs1, size_t vl) { +vbool32_t test_vmsleu_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsleu_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vbool32_t test_vmsleu_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsleu_vx_u64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsleu_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vbool16_t test_vmsleu_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmsleu_vv_u64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsleu_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vbool16_t test_vmsleu_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsleu_vx_u64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsleu_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vbool8_t test_vmsleu_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmsleu_vv_u64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsleu_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vbool8_t test_vmsleu_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmsleu_vx_u64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmslt.c b/auto-generated/policy_funcs/llvm-api-tests/vmslt.c index ecf8118e3..d21a5009f 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmslt.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmslt.c @@ -6,178 +6,225 @@ #include -vbool64_t test_vmslt_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vbool64_t test_vmslt_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmslt_vv_i8mf8_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmslt_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vbool64_t test_vmslt_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmslt_vx_i8mf8_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmslt_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vbool32_t test_vmslt_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmslt_vv_i8mf4_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmslt_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vbool32_t test_vmslt_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmslt_vx_i8mf4_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmslt_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vbool16_t test_vmslt_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmslt_vv_i8mf2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmslt_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vbool16_t test_vmslt_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + 
int8_t rs1, size_t vl) { return __riscv_vmslt_vx_i8mf2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmslt_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vbool8_t test_vmslt_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmslt_vv_i8m1_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmslt_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vbool8_t test_vmslt_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmslt_vx_i8m1_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmslt_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vbool4_t test_vmslt_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmslt_vv_i8m2_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmslt_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vbool4_t test_vmslt_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmslt_vx_i8m2_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmslt_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vbool2_t test_vmslt_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmslt_vv_i8m4_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmslt_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vbool2_t test_vmslt_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmslt_vx_i8m4_b2_mu(vm, vd, vs2, rs1, vl); } -vbool1_t test_vmslt_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vbool1_t test_vmslt_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmslt_vv_i8m8_b1_mu(vm, vd, vs2, vs1, vl); } -vbool1_t test_vmslt_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vbool1_t test_vmslt_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmslt_vx_i8m8_b1_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmslt_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vbool64_t test_vmslt_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmslt_vv_i16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmslt_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vbool64_t test_vmslt_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmslt_vx_i16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmslt_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vbool32_t test_vmslt_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmslt_vv_i16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmslt_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vbool32_t test_vmslt_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmslt_vx_i16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmslt_vv_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vbool16_t test_vmslt_vv_i16m1_b16_mu(vbool16_t vm, 
vbool16_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmslt_vv_i16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmslt_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vbool16_t test_vmslt_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmslt_vx_i16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmslt_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vbool8_t test_vmslt_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmslt_vv_i16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmslt_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vbool8_t test_vmslt_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmslt_vx_i16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmslt_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vbool4_t test_vmslt_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmslt_vv_i16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmslt_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vbool4_t test_vmslt_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmslt_vx_i16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmslt_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vbool2_t test_vmslt_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmslt_vv_i16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmslt_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vbool2_t test_vmslt_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmslt_vx_i16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmslt_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vbool64_t test_vmslt_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmslt_vv_i32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmslt_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vbool64_t test_vmslt_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmslt_vx_i32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmslt_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vbool32_t test_vmslt_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmslt_vv_i32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmslt_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vbool32_t test_vmslt_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmslt_vx_i32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmslt_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vbool16_t test_vmslt_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmslt_vv_i32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmslt_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, 
vint32m2_t vs2, int32_t rs1, size_t vl) { +vbool16_t test_vmslt_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmslt_vx_i32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmslt_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vbool8_t test_vmslt_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmslt_vv_i32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmslt_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vbool8_t test_vmslt_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmslt_vx_i32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmslt_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vbool4_t test_vmslt_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmslt_vv_i32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmslt_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vbool4_t test_vmslt_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmslt_vx_i32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmslt_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vbool64_t test_vmslt_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmslt_vv_i64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmslt_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vbool64_t test_vmslt_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmslt_vx_i64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmslt_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vbool32_t test_vmslt_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmslt_vv_i64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmslt_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vbool32_t test_vmslt_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmslt_vx_i64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmslt_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vbool16_t test_vmslt_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmslt_vv_i64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmslt_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vbool16_t test_vmslt_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmslt_vx_i64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmslt_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vbool8_t test_vmslt_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmslt_vv_i64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmslt_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vbool8_t test_vmslt_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmslt_vx_i64m8_b8_mu(vm, vd, vs2, rs1, vl); 
} diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsltu.c b/auto-generated/policy_funcs/llvm-api-tests/vmsltu.c index 583f88f0d..717394b85 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmsltu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmsltu.c @@ -6,178 +6,243 @@ #include -vbool64_t test_vmsltu_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vbool64_t test_vmsltu_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u8mf8_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsltu_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vbool64_t test_vmsltu_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsltu_vx_u8mf8_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsltu_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vbool32_t test_vmsltu_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u8mf4_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsltu_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vbool32_t test_vmsltu_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsltu_vx_u8mf4_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsltu_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vbool16_t test_vmsltu_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u8mf2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsltu_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vbool16_t test_vmsltu_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmsltu_vx_u8mf2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsltu_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vbool8_t test_vmsltu_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmsltu_vv_u8m1_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsltu_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vbool8_t test_vmsltu_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsltu_vx_u8m1_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsltu_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vbool4_t test_vmsltu_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmsltu_vv_u8m2_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsltu_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vbool4_t test_vmsltu_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsltu_vx_u8m2_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsltu_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vbool2_t test_vmsltu_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmsltu_vv_u8m4_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsltu_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vbool2_t 
test_vmsltu_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsltu_vx_u8m4_b2_mu(vm, vd, vs2, rs1, vl); } -vbool1_t test_vmsltu_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vbool1_t test_vmsltu_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmsltu_vv_u8m8_b1_mu(vm, vd, vs2, vs1, vl); } -vbool1_t test_vmsltu_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vbool1_t test_vmsltu_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmsltu_vx_u8m8_b1_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsltu_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vbool64_t test_vmsltu_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsltu_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vbool64_t test_vmsltu_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsltu_vx_u16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsltu_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vbool32_t test_vmsltu_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsltu_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vbool32_t test_vmsltu_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsltu_vx_u16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsltu_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vbool16_t test_vmsltu_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsltu_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vbool16_t test_vmsltu_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmsltu_vx_u16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsltu_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vbool8_t test_vmsltu_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vmsltu_vv_u16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsltu_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vbool8_t test_vmsltu_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmsltu_vx_u16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsltu_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vbool4_t test_vmsltu_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmsltu_vv_u16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsltu_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vbool4_t test_vmsltu_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return 
__riscv_vmsltu_vx_u16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsltu_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vbool2_t test_vmsltu_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmsltu_vv_u16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsltu_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vbool2_t test_vmsltu_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmsltu_vx_u16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsltu_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vbool64_t test_vmsltu_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsltu_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vbool64_t test_vmsltu_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsltu_vx_u32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsltu_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vbool32_t test_vmsltu_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsltu_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vbool32_t test_vmsltu_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsltu_vx_u32m1_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsltu_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vbool16_t test_vmsltu_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u32m2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsltu_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vbool16_t test_vmsltu_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmsltu_vx_u32m2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsltu_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vbool8_t test_vmsltu_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmsltu_vv_u32m4_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsltu_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vbool8_t test_vmsltu_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmsltu_vx_u32m4_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsltu_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vbool4_t test_vmsltu_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmsltu_vv_u32m8_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsltu_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vbool4_t test_vmsltu_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmsltu_vx_u32m8_b4_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsltu_vv_u64m1_b64_mu(vbool64_t 
vm, vbool64_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vbool64_t test_vmsltu_vv_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u64m1_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsltu_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vbool64_t test_vmsltu_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsltu_vx_u64m1_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsltu_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vbool32_t test_vmsltu_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u64m2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsltu_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vbool32_t test_vmsltu_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsltu_vx_u64m2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsltu_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vbool16_t test_vmsltu_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmsltu_vv_u64m4_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsltu_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vbool16_t test_vmsltu_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmsltu_vx_u64m4_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsltu_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vbool8_t test_vmsltu_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmsltu_vv_u64m8_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsltu_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vbool8_t test_vmsltu_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmsltu_vx_u64m8_b8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsne.c b/auto-generated/policy_funcs/llvm-api-tests/vmsne.c index 7fae96175..59479abb2 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmsne.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmsne.c @@ -6,354 +6,460 @@ #include -vbool64_t test_vmsne_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vbool64_t test_vmsne_vv_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmsne_vv_i8mf8_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsne_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vbool64_t test_vmsne_vx_i8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsne_vx_i8mf8_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsne_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vbool32_t test_vmsne_vv_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmsne_vv_i8mf4_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsne_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vbool32_t 
test_vmsne_vx_i8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsne_vx_i8mf4_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsne_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vbool16_t test_vmsne_vv_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmsne_vv_i8mf2_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsne_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vbool16_t test_vmsne_vx_i8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsne_vx_i8mf2_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsne_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vbool8_t test_vmsne_vv_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmsne_vv_i8m1_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsne_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vbool8_t test_vmsne_vx_i8m1_b8_mu(vbool8_t vm, vbool8_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsne_vx_i8m1_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsne_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vbool4_t test_vmsne_vv_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmsne_vv_i8m2_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsne_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vbool4_t test_vmsne_vx_i8m2_b4_mu(vbool4_t vm, vbool4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsne_vx_i8m2_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsne_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vbool2_t test_vmsne_vv_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmsne_vv_i8m4_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsne_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vbool2_t test_vmsne_vx_i8m4_b2_mu(vbool2_t vm, vbool2_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsne_vx_i8m4_b2_mu(vm, vd, vs2, rs1, vl); } -vbool1_t test_vmsne_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vbool1_t test_vmsne_vv_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmsne_vv_i8m8_b1_mu(vm, vd, vs2, vs1, vl); } -vbool1_t test_vmsne_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vbool1_t test_vmsne_vx_i8m8_b1_mu(vbool1_t vm, vbool1_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmsne_vx_i8m8_b1_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsne_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vbool64_t test_vmsne_vv_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmsne_vv_i16mf4_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsne_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vbool64_t test_vmsne_vx_i16mf4_b64_mu(vbool64_t vm, vbool64_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmsne_vx_i16mf4_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsne_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { 
+vbool32_t test_vmsne_vv_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmsne_vv_i16mf2_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t test_vmsne_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vbool32_t test_vmsne_vx_i16mf2_b32_mu(vbool32_t vm, vbool32_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmsne_vx_i16mf2_b32_mu(vm, vd, vs2, rs1, vl); } -vbool16_t test_vmsne_vv_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vbool16_t test_vmsne_vv_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmsne_vv_i16m1_b16_mu(vm, vd, vs2, vs1, vl); } -vbool16_t test_vmsne_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vbool16_t test_vmsne_vx_i16m1_b16_mu(vbool16_t vm, vbool16_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsne_vx_i16m1_b16_mu(vm, vd, vs2, rs1, vl); } -vbool8_t test_vmsne_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vbool8_t test_vmsne_vv_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmsne_vv_i16m2_b8_mu(vm, vd, vs2, vs1, vl); } -vbool8_t test_vmsne_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vbool8_t test_vmsne_vx_i16m2_b8_mu(vbool8_t vm, vbool8_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsne_vx_i16m2_b8_mu(vm, vd, vs2, rs1, vl); } -vbool4_t test_vmsne_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vbool4_t test_vmsne_vv_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmsne_vv_i16m4_b4_mu(vm, vd, vs2, vs1, vl); } -vbool4_t test_vmsne_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vbool4_t test_vmsne_vx_i16m4_b4_mu(vbool4_t vm, vbool4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsne_vx_i16m4_b4_mu(vm, vd, vs2, rs1, vl); } -vbool2_t test_vmsne_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vbool2_t test_vmsne_vv_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmsne_vv_i16m8_b2_mu(vm, vd, vs2, vs1, vl); } -vbool2_t test_vmsne_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vbool2_t test_vmsne_vx_i16m8_b2_mu(vbool2_t vm, vbool2_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmsne_vx_i16m8_b2_mu(vm, vd, vs2, rs1, vl); } -vbool64_t test_vmsne_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vbool64_t test_vmsne_vv_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmsne_vv_i32mf2_b64_mu(vm, vd, vs2, vs1, vl); } -vbool64_t test_vmsne_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vbool64_t test_vmsne_vx_i32mf2_b64_mu(vbool64_t vm, vbool64_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmsne_vx_i32mf2_b64_mu(vm, vd, vs2, rs1, vl); } -vbool32_t test_vmsne_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vbool32_t test_vmsne_vv_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmsne_vv_i32m1_b32_mu(vm, vd, vs2, vs1, vl); } -vbool32_t 
test_vmsne_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) {
+vbool32_t test_vmsne_vx_i32m1_b32_mu(vbool32_t vm, vbool32_t vd, vint32m1_t vs2,
+                                     int32_t rs1, size_t vl) {
   return __riscv_vmsne_vx_i32m1_b32_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool16_t test_vmsne_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) {
+vbool16_t test_vmsne_vv_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2,
+                                     vint32m2_t vs1, size_t vl) {
   return __riscv_vmsne_vv_i32m2_b16_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool16_t test_vmsne_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) {
+vbool16_t test_vmsne_vx_i32m2_b16_mu(vbool16_t vm, vbool16_t vd, vint32m2_t vs2,
+                                     int32_t rs1, size_t vl) {
   return __riscv_vmsne_vx_i32m2_b16_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool8_t test_vmsne_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) {
+vbool8_t test_vmsne_vv_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2,
+                                   vint32m4_t vs1, size_t vl) {
   return __riscv_vmsne_vv_i32m4_b8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool8_t test_vmsne_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) {
+vbool8_t test_vmsne_vx_i32m4_b8_mu(vbool8_t vm, vbool8_t vd, vint32m4_t vs2,
+                                   int32_t rs1, size_t vl) {
   return __riscv_vmsne_vx_i32m4_b8_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool4_t test_vmsne_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) {
+vbool4_t test_vmsne_vv_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2,
+                                   vint32m8_t vs1, size_t vl) {
   return __riscv_vmsne_vv_i32m8_b4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool4_t test_vmsne_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) {
+vbool4_t test_vmsne_vx_i32m8_b4_mu(vbool4_t vm, vbool4_t vd, vint32m8_t vs2,
+                                   int32_t rs1, size_t vl) {
   return __riscv_vmsne_vx_i32m8_b4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool64_t test_vmsne_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vbool64_t test_vmsne_vv_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2,
+                                     vint64m1_t vs1, size_t vl) {
   return __riscv_vmsne_vv_i64m1_b64_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool64_t test_vmsne_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) {
+vbool64_t test_vmsne_vx_i64m1_b64_mu(vbool64_t vm, vbool64_t vd, vint64m1_t vs2,
+                                     int64_t rs1, size_t vl) {
   return __riscv_vmsne_vx_i64m1_b64_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool32_t test_vmsne_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) {
+vbool32_t test_vmsne_vv_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2,
+                                     vint64m2_t vs1, size_t vl) {
   return __riscv_vmsne_vv_i64m2_b32_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool32_t test_vmsne_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) {
+vbool32_t test_vmsne_vx_i64m2_b32_mu(vbool32_t vm, vbool32_t vd, vint64m2_t vs2,
+                                     int64_t rs1, size_t vl) {
   return __riscv_vmsne_vx_i64m2_b32_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool16_t test_vmsne_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) {
+vbool16_t test_vmsne_vv_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2,
+                                     vint64m4_t vs1, size_t vl) {
   return __riscv_vmsne_vv_i64m4_b16_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool16_t test_vmsne_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) {
+vbool16_t test_vmsne_vx_i64m4_b16_mu(vbool16_t vm, vbool16_t vd, vint64m4_t vs2,
+                                     int64_t rs1, size_t vl) {
   return __riscv_vmsne_vx_i64m4_b16_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool8_t test_vmsne_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) {
+vbool8_t test_vmsne_vv_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2,
+                                   vint64m8_t vs1, size_t vl) {
   return __riscv_vmsne_vv_i64m8_b8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool8_t test_vmsne_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) {
+vbool8_t test_vmsne_vx_i64m8_b8_mu(vbool8_t vm, vbool8_t vd, vint64m8_t vs2,
+                                   int64_t rs1, size_t vl) {
   return __riscv_vmsne_vx_i64m8_b8_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool64_t test_vmsne_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vbool64_t test_vmsne_vv_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd,
+                                     vuint8mf8_t vs2, vuint8mf8_t vs1,
+                                     size_t vl) {
   return __riscv_vmsne_vv_u8mf8_b64_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool64_t test_vmsne_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+vbool64_t test_vmsne_vx_u8mf8_b64_mu(vbool64_t vm, vbool64_t vd,
+                                     vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u8mf8_b64_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool32_t test_vmsne_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vbool32_t test_vmsne_vv_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd,
+                                     vuint8mf4_t vs2, vuint8mf4_t vs1,
+                                     size_t vl) {
   return __riscv_vmsne_vv_u8mf4_b32_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool32_t test_vmsne_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vbool32_t test_vmsne_vx_u8mf4_b32_mu(vbool32_t vm, vbool32_t vd,
+                                     vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u8mf4_b32_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool16_t test_vmsne_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vbool16_t test_vmsne_vv_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd,
+                                     vuint8mf2_t vs2, vuint8mf2_t vs1,
+                                     size_t vl) {
   return __riscv_vmsne_vv_u8mf2_b16_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool16_t test_vmsne_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+vbool16_t test_vmsne_vx_u8mf2_b16_mu(vbool16_t vm, vbool16_t vd,
+                                     vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u8mf2_b16_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool8_t test_vmsne_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vbool8_t test_vmsne_vv_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2,
+                                  vuint8m1_t vs1, size_t vl) {
   return __riscv_vmsne_vv_u8m1_b8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool8_t test_vmsne_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+vbool8_t test_vmsne_vx_u8m1_b8_mu(vbool8_t vm, vbool8_t vd, vuint8m1_t vs2,
+                                  uint8_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u8m1_b8_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool4_t test_vmsne_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vbool4_t test_vmsne_vv_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2,
+                                  vuint8m2_t vs1, size_t vl) {
   return __riscv_vmsne_vv_u8m2_b4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool4_t test_vmsne_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+vbool4_t test_vmsne_vx_u8m2_b4_mu(vbool4_t vm, vbool4_t vd, vuint8m2_t vs2,
+                                  uint8_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u8m2_b4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool2_t test_vmsne_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vbool2_t test_vmsne_vv_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2,
+                                  vuint8m4_t vs1, size_t vl) {
   return __riscv_vmsne_vv_u8m4_b2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool2_t test_vmsne_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+vbool2_t test_vmsne_vx_u8m4_b2_mu(vbool2_t vm, vbool2_t vd, vuint8m4_t vs2,
+                                  uint8_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u8m4_b2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool1_t test_vmsne_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vbool1_t test_vmsne_vv_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2,
+                                  vuint8m8_t vs1, size_t vl) {
   return __riscv_vmsne_vv_u8m8_b1_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool1_t test_vmsne_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+vbool1_t test_vmsne_vx_u8m8_b1_mu(vbool1_t vm, vbool1_t vd, vuint8m8_t vs2,
+                                  uint8_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u8m8_b1_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool64_t test_vmsne_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vbool64_t test_vmsne_vv_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd,
+                                      vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                      size_t vl) {
   return __riscv_vmsne_vv_u16mf4_b64_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool64_t test_vmsne_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+vbool64_t test_vmsne_vx_u16mf4_b64_mu(vbool64_t vm, vbool64_t vd,
+                                      vuint16mf4_t vs2, uint16_t rs1,
+                                      size_t vl) {
   return __riscv_vmsne_vx_u16mf4_b64_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool32_t test_vmsne_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vbool32_t test_vmsne_vv_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd,
+                                      vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                      size_t vl) {
   return __riscv_vmsne_vv_u16mf2_b32_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool32_t test_vmsne_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+vbool32_t test_vmsne_vx_u16mf2_b32_mu(vbool32_t vm, vbool32_t vd,
+                                      vuint16mf2_t vs2, uint16_t rs1,
+                                      size_t vl) {
   return __riscv_vmsne_vx_u16mf2_b32_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool16_t test_vmsne_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vbool16_t test_vmsne_vv_u16m1_b16_mu(vbool16_t vm, vbool16_t vd,
+                                     vuint16m1_t vs2, vuint16m1_t vs1,
+                                     size_t vl) {
   return __riscv_vmsne_vv_u16m1_b16_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool16_t test_vmsne_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+vbool16_t test_vmsne_vx_u16m1_b16_mu(vbool16_t vm, vbool16_t vd,
+                                     vuint16m1_t vs2, uint16_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u16m1_b16_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool8_t test_vmsne_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vbool8_t test_vmsne_vv_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2,
+                                   vuint16m2_t vs1, size_t vl) {
   return __riscv_vmsne_vv_u16m2_b8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool8_t test_vmsne_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+vbool8_t test_vmsne_vx_u16m2_b8_mu(vbool8_t vm, vbool8_t vd, vuint16m2_t vs2,
+                                   uint16_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u16m2_b8_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool4_t test_vmsne_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vbool4_t test_vmsne_vv_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2,
+                                   vuint16m4_t vs1, size_t vl) {
   return __riscv_vmsne_vv_u16m4_b4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool4_t test_vmsne_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+vbool4_t test_vmsne_vx_u16m4_b4_mu(vbool4_t vm, vbool4_t vd, vuint16m4_t vs2,
+                                   uint16_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u16m4_b4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool2_t test_vmsne_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+vbool2_t test_vmsne_vv_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2,
+                                   vuint16m8_t vs1, size_t vl) {
   return __riscv_vmsne_vv_u16m8_b2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool2_t test_vmsne_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+vbool2_t test_vmsne_vx_u16m8_b2_mu(vbool2_t vm, vbool2_t vd, vuint16m8_t vs2,
+                                   uint16_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u16m8_b2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool64_t test_vmsne_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vbool64_t test_vmsne_vv_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd,
+                                      vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                      size_t vl) {
   return __riscv_vmsne_vv_u32mf2_b64_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool64_t test_vmsne_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+vbool64_t test_vmsne_vx_u32mf2_b64_mu(vbool64_t vm, vbool64_t vd,
+                                      vuint32mf2_t vs2, uint32_t rs1,
+                                      size_t vl) {
   return __riscv_vmsne_vx_u32mf2_b64_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool32_t test_vmsne_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vbool32_t test_vmsne_vv_u32m1_b32_mu(vbool32_t vm, vbool32_t vd,
+                                     vuint32m1_t vs2, vuint32m1_t vs1,
+                                     size_t vl) {
   return __riscv_vmsne_vv_u32m1_b32_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool32_t test_vmsne_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+vbool32_t test_vmsne_vx_u32m1_b32_mu(vbool32_t vm, vbool32_t vd,
+                                     vuint32m1_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u32m1_b32_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool16_t test_vmsne_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vbool16_t test_vmsne_vv_u32m2_b16_mu(vbool16_t vm, vbool16_t vd,
+                                     vuint32m2_t vs2, vuint32m2_t vs1,
+                                     size_t vl) {
   return __riscv_vmsne_vv_u32m2_b16_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool16_t test_vmsne_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+vbool16_t test_vmsne_vx_u32m2_b16_mu(vbool16_t vm, vbool16_t vd,
+                                     vuint32m2_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u32m2_b16_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool8_t test_vmsne_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vbool8_t test_vmsne_vv_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2,
+                                   vuint32m4_t vs1, size_t vl) {
   return __riscv_vmsne_vv_u32m4_b8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool8_t test_vmsne_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+vbool8_t test_vmsne_vx_u32m4_b8_mu(vbool8_t vm, vbool8_t vd, vuint32m4_t vs2,
+                                   uint32_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u32m4_b8_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool4_t test_vmsne_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vbool4_t test_vmsne_vv_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2,
+                                   vuint32m8_t vs1, size_t vl) {
   return __riscv_vmsne_vv_u32m8_b4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool4_t test_vmsne_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+vbool4_t test_vmsne_vx_u32m8_b4_mu(vbool4_t vm, vbool4_t vd, vuint32m8_t vs2,
+                                   uint32_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u32m8_b4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool64_t test_vmsne_vv_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vbool64_t test_vmsne_vv_u64m1_b64_mu(vbool64_t vm, vbool64_t vd,
+                                     vuint64m1_t vs2, vuint64m1_t vs1,
+                                     size_t vl) {
   return __riscv_vmsne_vv_u64m1_b64_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool64_t test_vmsne_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vbool64_t test_vmsne_vx_u64m1_b64_mu(vbool64_t vm, vbool64_t vd,
+                                     vuint64m1_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u64m1_b64_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool32_t test_vmsne_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vbool32_t test_vmsne_vv_u64m2_b32_mu(vbool32_t vm, vbool32_t vd,
+                                     vuint64m2_t vs2, vuint64m2_t vs1,
+                                     size_t vl) {
   return __riscv_vmsne_vv_u64m2_b32_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool32_t test_vmsne_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vbool32_t test_vmsne_vx_u64m2_b32_mu(vbool32_t vm, vbool32_t vd,
+                                     vuint64m2_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u64m2_b32_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool16_t test_vmsne_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vbool16_t test_vmsne_vv_u64m4_b16_mu(vbool16_t vm, vbool16_t vd,
+                                     vuint64m4_t vs2, vuint64m4_t vs1,
+                                     size_t vl) {
   return __riscv_vmsne_vv_u64m4_b16_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool16_t test_vmsne_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vbool16_t test_vmsne_vx_u64m4_b16_mu(vbool16_t vm, vbool16_t vd,
+                                     vuint64m4_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u64m4_b16_mu(vm, vd, vs2, rs1, vl);
 }
 
-vbool8_t test_vmsne_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vbool8_t test_vmsne_vv_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2,
+                                   vuint64m8_t vs1, size_t vl) {
   return __riscv_vmsne_vv_u64m8_b8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vbool8_t test_vmsne_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vbool8_t test_vmsne_vx_u64m8_b8_mu(vbool8_t vm, vbool8_t vd, vuint64m8_t vs2,
+                                   uint64_t rs1, size_t vl) {
   return __riscv_vmsne_vx_u64m8_b8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsof.c b/auto-generated/policy_funcs/llvm-api-tests/vmsof.c
index f16b44c18..8dd9f96ab 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmsof.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmsof.c
@@ -21,14 +21,17 @@ vbool8_t test_vmsof_m_b8_mu(vbool8_t vm, vbool8_t vd, vbool8_t vs2, size_t vl) {
   return __riscv_vmsof_m_b8_mu(vm, vd, vs2, vl);
 }
 
-vbool16_t test_vmsof_m_b16_mu(vbool16_t vm, vbool16_t vd, vbool16_t vs2, size_t vl) {
+vbool16_t test_vmsof_m_b16_mu(vbool16_t vm, vbool16_t vd, vbool16_t vs2,
+                              size_t vl) {
   return __riscv_vmsof_m_b16_mu(vm, vd, vs2, vl);
 }
 
-vbool32_t test_vmsof_m_b32_mu(vbool32_t vm, vbool32_t vd, vbool32_t vs2, size_t vl) {
+vbool32_t test_vmsof_m_b32_mu(vbool32_t vm, vbool32_t vd, vbool32_t vs2,
+                              size_t vl) {
   return __riscv_vmsof_m_b32_mu(vm, vd, vs2, vl);
 }
 
-vbool64_t test_vmsof_m_b64_mu(vbool64_t vm, vbool64_t vd, vbool64_t vs2, size_t vl) {
+vbool64_t test_vmsof_m_b64_mu(vbool64_t vm, vbool64_t vd, vbool64_t vs2,
+                              size_t vl) {
   return __riscv_vmsof_m_b64_mu(vm, vd, vs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmul.c b/auto-generated/policy_funcs/llvm-api-tests/vmul.c
index edb9cd306..1e155c454 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmul.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmul.c
@@ -5,1410 +5,1810 @@
 #include <riscv_vector.h>
 
-vint8mf8_t test_vmul_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vmul_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i8mf8_tu(vd, vs2, vs1, vl);
 }
 
-vint8mf8_t test_vmul_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint8mf8_t test_vmul_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i8mf8_tu(vd, vs2, rs1, vl);
 }
 
-vint8mf4_t test_vmul_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vmul_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i8mf4_tu(vd, vs2, vs1, vl);
 }
 
-vint8mf4_t test_vmul_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) {
+vint8mf4_t test_vmul_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i8mf4_tu(vd, vs2, rs1, vl);
 }
 
-vint8mf2_t test_vmul_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vmul_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i8mf2_tu(vd, vs2, vs1, vl);
 }
 
-vint8mf2_t test_vmul_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) {
+vint8mf2_t test_vmul_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i8mf2_tu(vd, vs2, rs1, vl);
 }
 
-vint8m1_t test_vmul_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vmul_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1,
+                               size_t vl) {
   return __riscv_vmul_vv_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vmul_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) {
+vint8m1_t test_vmul_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1,
+                               size_t vl) {
   return __riscv_vmul_vx_i8m1_tu(vd, vs2, rs1, vl);
 }
 
-vint8m2_t test_vmul_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) {
+vint8m2_t test_vmul_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1,
+                               size_t vl) {
   return __riscv_vmul_vv_i8m2_tu(vd, vs2, vs1, vl);
 }
 
-vint8m2_t test_vmul_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) {
+vint8m2_t test_vmul_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1,
+                               size_t vl) {
   return __riscv_vmul_vx_i8m2_tu(vd, vs2, rs1, vl);
 }
 
-vint8m4_t test_vmul_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) {
+vint8m4_t test_vmul_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1,
+                               size_t vl) {
   return __riscv_vmul_vv_i8m4_tu(vd, vs2, vs1, vl);
 }
 
-vint8m4_t test_vmul_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) {
+vint8m4_t test_vmul_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1,
+                               size_t vl) {
   return __riscv_vmul_vx_i8m4_tu(vd, vs2, rs1, vl);
 }
 
-vint8m8_t test_vmul_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) {
+vint8m8_t test_vmul_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1,
+                               size_t vl) {
   return __riscv_vmul_vv_i8m8_tu(vd, vs2, vs1, vl);
 }
 
-vint8m8_t test_vmul_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) {
+vint8m8_t test_vmul_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1,
+                               size_t vl) {
   return __riscv_vmul_vx_i8m8_tu(vd, vs2, rs1, vl);
 }
 
-vint16mf4_t test_vmul_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) {
+vint16mf4_t test_vmul_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2,
+                                   vint16mf4_t vs1, size_t vl) {
   return __riscv_vmul_vv_i16mf4_tu(vd, vs2, vs1, vl);
 }
 
-vint16mf4_t test_vmul_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) {
+vint16mf4_t test_vmul_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1,
+                                   size_t vl) {
   return __riscv_vmul_vx_i16mf4_tu(vd, vs2, rs1, vl);
 }
 
-vint16mf2_t test_vmul_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) {
+vint16mf2_t test_vmul_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2,
+                                   vint16mf2_t vs1, size_t vl) {
   return __riscv_vmul_vv_i16mf2_tu(vd, vs2, vs1, vl);
 }
 
-vint16mf2_t test_vmul_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) {
+vint16mf2_t test_vmul_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1,
+                                   size_t vl) {
   return __riscv_vmul_vx_i16mf2_tu(vd, vs2, rs1, vl);
 }
 
-vint16m1_t test_vmul_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vmul_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vmul_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) {
+vint16m1_t test_vmul_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i16m1_tu(vd, vs2, rs1, vl);
 }
 
-vint16m2_t test_vmul_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) {
+vint16m2_t test_vmul_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i16m2_tu(vd, vs2, vs1, vl);
 }
 
-vint16m2_t test_vmul_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) {
+vint16m2_t test_vmul_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i16m2_tu(vd, vs2, rs1, vl);
 }
 
-vint16m4_t test_vmul_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) {
+vint16m4_t test_vmul_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i16m4_tu(vd, vs2, vs1, vl);
 }
 
-vint16m4_t test_vmul_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) {
+vint16m4_t test_vmul_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i16m4_tu(vd, vs2, rs1, vl);
 }
 
-vint16m8_t test_vmul_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) {
+vint16m8_t test_vmul_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i16m8_tu(vd, vs2, vs1, vl);
 }
 
-vint16m8_t test_vmul_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) {
+vint16m8_t test_vmul_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i16m8_tu(vd, vs2, rs1, vl);
 }
 
-vint32mf2_t test_vmul_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) {
+vint32mf2_t test_vmul_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2,
+                                   vint32mf2_t vs1, size_t vl) {
   return __riscv_vmul_vv_i32mf2_tu(vd, vs2, vs1, vl);
 }
 
-vint32mf2_t test_vmul_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) {
+vint32mf2_t test_vmul_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1,
+                                   size_t vl) {
   return __riscv_vmul_vx_i32mf2_tu(vd, vs2, rs1, vl);
 }
 
-vint32m1_t test_vmul_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vmul_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vmul_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) {
+vint32m1_t test_vmul_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i32m1_tu(vd, vs2, rs1, vl);
 }
 
-vint32m2_t test_vmul_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) {
+vint32m2_t test_vmul_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i32m2_tu(vd, vs2, vs1, vl);
 }
 
-vint32m2_t test_vmul_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) {
+vint32m2_t test_vmul_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i32m2_tu(vd, vs2, rs1, vl);
 }
 
-vint32m4_t test_vmul_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) {
+vint32m4_t test_vmul_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i32m4_tu(vd, vs2, vs1, vl);
 }
 
-vint32m4_t test_vmul_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) {
+vint32m4_t test_vmul_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i32m4_tu(vd, vs2, rs1, vl);
 }
 
-vint32m8_t test_vmul_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) {
+vint32m8_t test_vmul_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i32m8_tu(vd, vs2, vs1, vl);
 }
 
-vint32m8_t test_vmul_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) {
+vint32m8_t test_vmul_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i32m8_tu(vd, vs2, rs1, vl);
 }
 
-vint64m1_t test_vmul_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vmul_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vmul_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) {
+vint64m1_t test_vmul_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i64m1_tu(vd, vs2, rs1, vl);
 }
 
-vint64m2_t test_vmul_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) {
+vint64m2_t test_vmul_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i64m2_tu(vd, vs2, vs1, vl);
 }
 
-vint64m2_t test_vmul_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) {
+vint64m2_t test_vmul_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i64m2_tu(vd, vs2, rs1, vl);
 }
 
-vint64m4_t test_vmul_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) {
+vint64m4_t test_vmul_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i64m4_tu(vd, vs2, vs1, vl);
 }
 
-vint64m4_t test_vmul_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) {
+vint64m4_t test_vmul_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i64m4_tu(vd, vs2, rs1, vl);
 }
 
-vint64m8_t test_vmul_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) {
+vint64m8_t test_vmul_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1,
+                                 size_t vl) {
   return __riscv_vmul_vv_i64m8_tu(vd, vs2, vs1, vl);
 }
 
-vint64m8_t test_vmul_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) {
+vint64m8_t test_vmul_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1,
+                                 size_t vl) {
   return __riscv_vmul_vx_i64m8_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf8_t test_vmul_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vmul_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+                                  vuint8mf8_t vs1, size_t vl) {
   return __riscv_vmul_vv_u8mf8_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf8_t test_vmul_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf8_t test_vmul_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u8mf8_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf4_t test_vmul_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vmul_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+                                  vuint8mf4_t vs1, size_t vl) {
   return __riscv_vmul_vv_u8mf4_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf4_t test_vmul_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf4_t test_vmul_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u8mf4_tu(vd, vs2, rs1, vl);
 }
 
-vuint8mf2_t test_vmul_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vmul_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+                                  vuint8mf2_t vs1, size_t vl) {
   return __riscv_vmul_vv_u8mf2_tu(vd, vs2, vs1, vl);
 }
 
-vuint8mf2_t test_vmul_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf2_t test_vmul_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u8mf2_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m1_t test_vmul_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vmul_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1,
+                                size_t vl) {
   return __riscv_vmul_vv_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vmul_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+vuint8m1_t test_vmul_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1,
+                                size_t vl) {
   return __riscv_vmul_vx_u8m1_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m2_t test_vmul_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint8m2_t test_vmul_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1,
+                                size_t vl) {
   return __riscv_vmul_vv_u8m2_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m2_t test_vmul_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+vuint8m2_t test_vmul_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1,
+                                size_t vl) {
   return __riscv_vmul_vx_u8m2_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m4_t test_vmul_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint8m4_t test_vmul_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1,
+                                size_t vl) {
   return __riscv_vmul_vv_u8m4_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m4_t test_vmul_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+vuint8m4_t test_vmul_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1,
+                                size_t vl) {
   return __riscv_vmul_vx_u8m4_tu(vd, vs2, rs1, vl);
 }
 
-vuint8m8_t test_vmul_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vuint8m8_t test_vmul_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1,
+                                size_t vl) {
   return __riscv_vmul_vv_u8m8_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m8_t test_vmul_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+vuint8m8_t test_vmul_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1,
+                                size_t vl) {
   return __riscv_vmul_vx_u8m8_tu(vd, vs2, rs1, vl);
 }
 
-vuint16mf4_t test_vmul_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint16mf4_t test_vmul_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    vuint16mf4_t vs1, size_t vl) {
   return __riscv_vmul_vv_u16mf4_tu(vd, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vmul_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf4_t test_vmul_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                    uint16_t rs1, size_t vl) {
   return __riscv_vmul_vx_u16mf4_tu(vd, vs2, rs1, vl);
 }
 
-vuint16mf2_t test_vmul_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint16mf2_t test_vmul_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    vuint16mf2_t vs1, size_t vl) {
   return __riscv_vmul_vv_u16mf2_tu(vd, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vmul_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf2_t test_vmul_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                    uint16_t rs1, size_t vl) {
   return __riscv_vmul_vx_u16mf2_tu(vd, vs2, rs1, vl);
 }
 
-vuint16m1_t test_vmul_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vmul_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+                                  vuint16m1_t vs1, size_t vl) {
   return __riscv_vmul_vv_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vmul_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+vuint16m1_t test_vmul_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u16m1_tu(vd, vs2, rs1, vl);
 }
 
-vuint16m2_t test_vmul_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint16m2_t test_vmul_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2,
+                                  vuint16m2_t vs1, size_t vl) {
   return __riscv_vmul_vv_u16m2_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vmul_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+vuint16m2_t test_vmul_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u16m2_tu(vd, vs2, rs1, vl);
 }
 
-vuint16m4_t test_vmul_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint16m4_t test_vmul_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2,
+                                  vuint16m4_t vs1, size_t vl) {
   return __riscv_vmul_vv_u16m4_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vmul_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+vuint16m4_t test_vmul_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u16m4_tu(vd, vs2, rs1, vl);
 }
 
-vuint16m8_t test_vmul_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+vuint16m8_t test_vmul_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2,
+                                  vuint16m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_u16m8_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m8_t test_vmul_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+vuint16m8_t test_vmul_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u16m8_tu(vd, vs2, rs1, vl);
 }
 
-vuint32mf2_t test_vmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vmul_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    vuint32mf2_t vs1, size_t vl) {
   return __riscv_vmul_vv_u32mf2_tu(vd, vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vmul_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+vuint32mf2_t test_vmul_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                    uint32_t rs1, size_t vl) {
   return __riscv_vmul_vx_u32mf2_tu(vd, vs2, rs1, vl);
 }
 
-vuint32m1_t test_vmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vmul_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                  vuint32m1_t vs1, size_t vl) {
   return __riscv_vmul_vv_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vmul_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+vuint32m1_t test_vmul_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u32m1_tu(vd, vs2, rs1, vl);
 }
 
-vuint32m2_t test_vmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vmul_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                  vuint32m2_t vs1, size_t vl) {
   return __riscv_vmul_vv_u32m2_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vmul_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+vuint32m2_t test_vmul_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u32m2_tu(vd, vs2, rs1, vl);
 }
 
-vuint32m4_t test_vmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vmul_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                  vuint32m4_t vs1, size_t vl) {
   return __riscv_vmul_vv_u32m4_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vmul_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+vuint32m4_t test_vmul_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u32m4_tu(vd, vs2, rs1, vl);
 }
 
-vuint32m8_t test_vmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vmul_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                  vuint32m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_u32m8_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vmul_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+vuint32m8_t test_vmul_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u32m8_tu(vd, vs2, rs1, vl);
 }
 
-vuint64m1_t test_vmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vmul_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                  vuint64m1_t vs1, size_t vl) {
   return __riscv_vmul_vv_u64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vmul_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u64m1_tu(vd, vs2, rs1, vl);
 }
 
-vuint64m2_t test_vmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vmul_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                  vuint64m2_t vs1, size_t vl) {
   return __riscv_vmul_vv_u64m2_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vmul_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u64m2_tu(vd, vs2, rs1, vl);
 }
 
-vuint64m4_t test_vmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vmul_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                  vuint64m4_t vs1, size_t vl) {
   return __riscv_vmul_vv_u64m4_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vmul_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u64m4_tu(vd, vs2, rs1, vl);
 }
 
-vuint64m8_t test_vmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vmul_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                  vuint64m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_u64m8_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vmul_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vmul_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1,
+                                  size_t vl) {
   return __riscv_vmul_vx_u64m8_tu(vd, vs2, rs1, vl);
 }
 
-vint8mf8_t test_vmul_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vmul_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2,
+                                  vint8mf8_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8mf8_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8mf8_t test_vmul_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint8mf8_t test_vmul_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2,
+                                  int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8mf8_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint8mf4_t test_vmul_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vmul_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2,
+                                  vint8mf4_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8mf4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8mf4_t test_vmul_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) {
+vint8mf4_t test_vmul_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2,
+                                  int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8mf4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint8mf2_t test_vmul_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vmul_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2,
+                                  vint8mf2_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8mf2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8mf2_t test_vmul_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) {
+vint8mf2_t test_vmul_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2,
+                                  int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8mf2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint8m1_t test_vmul_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vmul_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+                                vint8m1_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vmul_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) {
+vint8m1_t test_vmul_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+                                int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8m1_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint8m2_t test_vmul_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) {
+vint8m2_t test_vmul_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2,
+                                vint8m2_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8m2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m2_t test_vmul_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) {
+vint8m2_t test_vmul_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2,
+                                int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8m2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint8m4_t test_vmul_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) {
+vint8m4_t test_vmul_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2,
+                                vint8m4_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8m4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m4_t test_vmul_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) {
+vint8m4_t test_vmul_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2,
+                                int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8m4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint8m8_t test_vmul_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) {
+vint8m8_t test_vmul_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2,
+                                vint8m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8m8_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m8_t test_vmul_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) {
+vint8m8_t test_vmul_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2,
+                                int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8m8_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint16mf4_t test_vmul_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) {
+vint16mf4_t test_vmul_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd,
+                                    vint16mf4_t vs2, vint16mf4_t vs1,
+                                    size_t vl) {
   return __riscv_vmul_vv_i16mf4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16mf4_t test_vmul_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) {
+vint16mf4_t test_vmul_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd,
+                                    vint16mf4_t vs2, int16_t rs1, size_t vl) {
   return __riscv_vmul_vx_i16mf4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint16mf2_t test_vmul_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) {
+vint16mf2_t test_vmul_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd,
+                                    vint16mf2_t vs2, vint16mf2_t vs1,
+                                    size_t vl) {
   return __riscv_vmul_vv_i16mf2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16mf2_t test_vmul_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) {
+vint16mf2_t test_vmul_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd,
+                                    vint16mf2_t vs2, int16_t rs1, size_t vl) {
   return __riscv_vmul_vx_i16mf2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint16m1_t test_vmul_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vmul_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2,
+                                  vint16m1_t vs1, size_t vl) {
   return __riscv_vmul_vv_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vmul_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) {
+vint16m1_t test_vmul_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2,
+                                  int16_t rs1, size_t vl) {
   return __riscv_vmul_vx_i16m1_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint16m2_t test_vmul_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) {
+vint16m2_t test_vmul_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2,
+                                  vint16m2_t vs1, size_t vl) {
   return __riscv_vmul_vv_i16m2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m2_t test_vmul_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) {
+vint16m2_t test_vmul_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2,
+                                  int16_t rs1, size_t vl) {
   return __riscv_vmul_vx_i16m2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint16m4_t test_vmul_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) {
+vint16m4_t test_vmul_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2,
+                                  vint16m4_t vs1, size_t vl) {
   return __riscv_vmul_vv_i16m4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m4_t test_vmul_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) {
+vint16m4_t test_vmul_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2,
+                                  int16_t rs1, size_t vl) {
   return __riscv_vmul_vx_i16m4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint16m8_t test_vmul_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) {
+vint16m8_t test_vmul_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2,
+                                  vint16m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_i16m8_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m8_t test_vmul_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) {
+vint16m8_t test_vmul_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2,
+                                  int16_t rs1, size_t vl) {
   return __riscv_vmul_vx_i16m8_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint32mf2_t test_vmul_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) {
+vint32mf2_t test_vmul_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd,
+                                    vint32mf2_t vs2, vint32mf2_t vs1,
+                                    size_t vl) {
   return __riscv_vmul_vv_i32mf2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32mf2_t test_vmul_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) {
+vint32mf2_t test_vmul_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd,
+                                    vint32mf2_t vs2, int32_t rs1, size_t vl) {
   return __riscv_vmul_vx_i32mf2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint32m1_t test_vmul_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vmul_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2,
+                                  vint32m1_t vs1, size_t vl) {
   return __riscv_vmul_vv_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vmul_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) {
+vint32m1_t test_vmul_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2,
+                                  int32_t rs1, size_t vl) {
   return __riscv_vmul_vx_i32m1_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint32m2_t test_vmul_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) {
+vint32m2_t test_vmul_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2,
+                                  vint32m2_t vs1, size_t vl) {
   return __riscv_vmul_vv_i32m2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m2_t test_vmul_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) {
+vint32m2_t test_vmul_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2,
+                                  int32_t rs1, size_t vl) {
   return __riscv_vmul_vx_i32m2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint32m4_t test_vmul_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) {
+vint32m4_t test_vmul_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2,
+                                  vint32m4_t vs1, size_t vl) {
   return __riscv_vmul_vv_i32m4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m4_t test_vmul_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) {
+vint32m4_t test_vmul_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2,
+                                  int32_t rs1, size_t vl) {
   return __riscv_vmul_vx_i32m4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint32m8_t test_vmul_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) {
+vint32m8_t test_vmul_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2,
+                                  vint32m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_i32m8_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m8_t test_vmul_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) {
+vint32m8_t test_vmul_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2,
+                                  int32_t rs1, size_t vl) {
   return __riscv_vmul_vx_i32m8_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint64m1_t test_vmul_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vmul_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2,
+                                  vint64m1_t vs1, size_t vl) {
   return __riscv_vmul_vv_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vmul_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) {
+vint64m1_t test_vmul_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2,
+                                  int64_t rs1, size_t vl) {
   return __riscv_vmul_vx_i64m1_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint64m2_t test_vmul_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) {
+vint64m2_t test_vmul_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2,
+                                  vint64m2_t vs1, size_t vl) {
   return __riscv_vmul_vv_i64m2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m2_t test_vmul_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) {
+vint64m2_t test_vmul_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2,
+                                  int64_t rs1, size_t vl) {
   return __riscv_vmul_vx_i64m2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint64m4_t test_vmul_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) {
+vint64m4_t test_vmul_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+                                  vint64m4_t vs1, size_t vl) {
   return __riscv_vmul_vv_i64m4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m4_t test_vmul_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) {
+vint64m4_t test_vmul_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+                                  int64_t rs1, size_t vl) {
   return __riscv_vmul_vx_i64m4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint64m8_t test_vmul_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) {
+vint64m8_t test_vmul_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+                                  vint64m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_i64m8_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m8_t test_vmul_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) {
+vint64m8_t test_vmul_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+                                  int64_t rs1, size_t vl) {
   return __riscv_vmul_vx_i64m8_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8mf8_t test_vmul_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vmul_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, vuint8mf8_t vs1,
+                                   size_t vl) {
   return __riscv_vmul_vv_u8mf8_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8mf8_t test_vmul_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf8_t test_vmul_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
   return __riscv_vmul_vx_u8mf8_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8mf4_t test_vmul_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vmul_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, vuint8mf4_t vs1,
+                                   size_t vl) {
   return __riscv_vmul_vv_u8mf4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8mf4_t test_vmul_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf4_t test_vmul_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
  return __riscv_vmul_vx_u8mf4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8mf2_t test_vmul_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vmul_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, vuint8mf2_t vs1,
+                                   size_t vl) {
   return __riscv_vmul_vv_u8mf2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8mf2_t test_vmul_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf2_t test_vmul_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
   return __riscv_vmul_vx_u8mf2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m1_t test_vmul_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vmul_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 vuint8m1_t vs1, size_t vl) {
   return __riscv_vmul_vv_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vmul_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+vuint8m1_t test_vmul_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                 uint8_t rs1, size_t vl) {
   return __riscv_vmul_vx_u8m1_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m2_t test_vmul_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint8m2_t test_vmul_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 vuint8m2_t vs1, size_t vl) {
   return __riscv_vmul_vv_u8m2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m2_t test_vmul_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+vuint8m2_t test_vmul_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                 uint8_t rs1, size_t vl) {
   return __riscv_vmul_vx_u8m2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m4_t test_vmul_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint8m4_t test_vmul_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 vuint8m4_t vs1, size_t vl) {
   return __riscv_vmul_vv_u8m4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m4_t test_vmul_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+vuint8m4_t test_vmul_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                 uint8_t rs1, size_t vl) {
   return __riscv_vmul_vx_u8m4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m8_t test_vmul_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vuint8m8_t test_vmul_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 vuint8m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_u8m8_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m8_t test_vmul_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+vuint8m8_t test_vmul_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                 uint8_t rs1, size_t vl) {
   return __riscv_vmul_vx_u8m8_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16mf4_t test_vmul_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint16mf4_t test_vmul_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                     size_t vl) {
   return __riscv_vmul_vv_u16mf4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vmul_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf4_t test_vmul_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint16mf4_t vs2, uint16_t rs1,
+                                     size_t vl) {
   return __riscv_vmul_vx_u16mf4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16mf2_t test_vmul_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint16mf2_t test_vmul_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                     size_t vl) {
   return __riscv_vmul_vv_u16mf2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vmul_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf2_t test_vmul_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint16mf2_t vs2, uint16_t rs1,
+                                     size_t vl) {
   return __riscv_vmul_vx_u16mf2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m1_t test_vmul_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vmul_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, vuint16m1_t vs1,
+                                   size_t vl) {
   return __riscv_vmul_vv_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vmul_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+vuint16m1_t test_vmul_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                   vuint16m1_t vs2, uint16_t rs1, size_t vl) {
   return __riscv_vmul_vx_u16m1_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m2_t test_vmul_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint16m2_t test_vmul_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   vuint16m2_t vs1, size_t vl) {
   return __riscv_vmul_vv_u16m2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vmul_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+vuint16m2_t test_vmul_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+                                   uint16_t rs1, size_t vl) {
   return __riscv_vmul_vx_u16m2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m4_t test_vmul_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint16m4_t test_vmul_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   vuint16m4_t vs1, size_t vl) {
  return __riscv_vmul_vv_u16m4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vmul_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+vuint16m4_t test_vmul_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+                                   uint16_t rs1, size_t vl) {
   return __riscv_vmul_vx_u16m4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m8_t test_vmul_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+vuint16m8_t test_vmul_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   vuint16m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_u16m8_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m8_t test_vmul_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+vuint16m8_t test_vmul_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+                                   uint16_t rs1, size_t vl) {
   return __riscv_vmul_vx_u16m8_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint32mf2_t test_vmul_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vmul_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                     size_t vl) {
   return __riscv_vmul_vv_u32mf2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vmul_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+vuint32mf2_t test_vmul_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint32mf2_t vs2, uint32_t rs1,
+                                     size_t vl) {
   return __riscv_vmul_vx_u32mf2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m1_t test_vmul_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vmul_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, vuint32m1_t vs1,
+                                   size_t vl) {
   return __riscv_vmul_vv_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vmul_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+vuint32m1_t test_vmul_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                   vuint32m1_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vmul_vx_u32m1_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m2_t test_vmul_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vmul_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, vuint32m2_t vs1,
+                                   size_t vl) {
   return __riscv_vmul_vv_u32m2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vmul_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+vuint32m2_t test_vmul_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                   vuint32m2_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vmul_vx_u32m2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m4_t test_vmul_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vmul_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   vuint32m4_t vs1, size_t vl) {
   return __riscv_vmul_vv_u32m4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vmul_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+vuint32m4_t test_vmul_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+                                   uint32_t rs1, size_t vl) {
   return __riscv_vmul_vx_u32m4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m8_t test_vmul_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vmul_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   vuint32m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_u32m8_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m8_t test_vmul_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+vuint32m8_t test_vmul_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+                                   uint32_t rs1, size_t vl) {
   return __riscv_vmul_vx_u32m8_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m1_t test_vmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vmul_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, vuint64m1_t vs1,
+                                   size_t vl) {
   return __riscv_vmul_vv_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vmul_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                   vuint64m1_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vmul_vx_u64m1_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m2_t test_vmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vmul_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, vuint64m2_t vs1,
+                                   size_t vl) {
   return __riscv_vmul_vv_u64m2_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m2_t test_vmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vmul_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                   vuint64m2_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vmul_vx_u64m2_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m4_t test_vmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vmul_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, vuint64m4_t vs1,
+                                   size_t vl) {
   return __riscv_vmul_vv_u64m4_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vmul_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                   vuint64m4_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vmul_vx_u64m4_tum(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m8_t test_vmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vmul_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   vuint64m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_u64m8_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vmul_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                   uint64_t rs1, size_t vl) {
   return __riscv_vmul_vx_u64m8_tum(vm, vd, vs2, rs1, vl);
 }
 
-vint8mf8_t test_vmul_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vmul_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2,
+                                   vint8mf8_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint8mf8_t test_vmul_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint8mf8_t test_vmul_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2,
+                                   int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint8mf4_t test_vmul_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vmul_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2,
+                                   vint8mf4_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint8mf4_t test_vmul_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) {
+vint8mf4_t test_vmul_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2,
+                                   int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint8mf2_t test_vmul_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vmul_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2,
+                                   vint8mf2_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint8mf2_t test_vmul_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) {
+vint8mf2_t test_vmul_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2,
+                                   int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint8m1_t test_vmul_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vmul_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+                                 vint8m1_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8m1_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vmul_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) {
+vint8m1_t test_vmul_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+                                 int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8m1_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint8m2_t test_vmul_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) {
+vint8m2_t test_vmul_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2,
+                                 vint8m2_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8m2_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint8m2_t test_vmul_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) {
+vint8m2_t test_vmul_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2,
+                                 int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8m2_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint8m4_t test_vmul_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) {
+vint8m4_t test_vmul_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2,
+                                 vint8m4_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8m4_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint8m4_t test_vmul_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) {
+vint8m4_t test_vmul_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2,
+                                 int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8m4_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint8m8_t test_vmul_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) {
+vint8m8_t test_vmul_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2,
+                                 vint8m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_i8m8_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint8m8_t test_vmul_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) {
+vint8m8_t test_vmul_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2,
+                                 int8_t rs1, size_t vl) {
   return __riscv_vmul_vx_i8m8_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint16mf4_t test_vmul_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) {
+vint16mf4_t test_vmul_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd,
+                                     vint16mf4_t vs2, vint16mf4_t vs1,
+                                     size_t vl) {
   return __riscv_vmul_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint16mf4_t test_vmul_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) {
+vint16mf4_t test_vmul_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd,
+                                     vint16mf4_t vs2, int16_t rs1, size_t vl) {
   return __riscv_vmul_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint16mf2_t test_vmul_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) {
+vint16mf2_t test_vmul_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd,
+                                     vint16mf2_t vs2, vint16mf2_t vs1,
+                                     size_t vl) {
   return __riscv_vmul_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint16mf2_t test_vmul_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) {
+vint16mf2_t test_vmul_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd,
+                                     vint16mf2_t vs2, int16_t rs1, size_t vl) {
   return __riscv_vmul_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint16m1_t test_vmul_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vmul_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2,
+                                   vint16m1_t vs1, size_t vl) {
   return __riscv_vmul_vv_i16m1_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vmul_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) {
+vint16m1_t test_vmul_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2,
+                                   int16_t rs1, size_t vl) {
   return __riscv_vmul_vx_i16m1_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint16m2_t test_vmul_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) {
+vint16m2_t test_vmul_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2,
+                                   vint16m2_t vs1, size_t vl) {
   return __riscv_vmul_vv_i16m2_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint16m2_t test_vmul_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) {
+vint16m2_t test_vmul_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2,
+                                   int16_t rs1, size_t vl) {
   return __riscv_vmul_vx_i16m2_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint16m4_t test_vmul_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) {
+vint16m4_t test_vmul_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2,
+                                   vint16m4_t vs1, size_t vl) {
   return __riscv_vmul_vv_i16m4_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint16m4_t test_vmul_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) {
+vint16m4_t test_vmul_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2,
+                                   int16_t rs1, size_t vl) {
   return __riscv_vmul_vx_i16m4_tumu(vm, vd, vs2, rs1, vl);
 }
 
-vint16m8_t test_vmul_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) {
+vint16m8_t test_vmul_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2,
+                                   vint16m8_t vs1, size_t vl) {
   return __riscv_vmul_vv_i16m8_tumu(vm, vd, vs2, vs1, vl);
 }
 
-vint16m8_t test_vmul_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) {
+vint16m8_t test_vmul_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t
vs2, + int16_t rs1, size_t vl) { return __riscv_vmul_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmul_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmul_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmul_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmul_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmul_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmul_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmul_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmul_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmul_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmul_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmul_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmul_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmul_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmul_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmul_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmul_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmul_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmul_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmul_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmul_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmul_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmul_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmul_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmul_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmul_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmul_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmul_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmul_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmul_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmul_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmul_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmul_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmul_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmul_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmul_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmul_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmul_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { 
+vint64m2_t test_vmul_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmul_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmul_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmul_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmul_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmul_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmul_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmul_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmul_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmul_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmul_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vmul_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmul_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmul_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmul_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmul_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmul_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vmul_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vmul_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmul_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vmul_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vmul_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vmul_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vmul_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmul_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vmul_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vmul_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vmul_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vmul_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmul_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vmul_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vmul_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vmul_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vmul_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmul_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } 
-vuint8m1_t test_vmul_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vmul_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vmul_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vmul_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmul_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vmul_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vmul_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vmul_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vmul_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmul_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vmul_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vmul_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vmul_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vmul_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmul_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vmul_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vmul_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vmul_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vmul_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmul_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vmul_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vmul_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmul_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vmul_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vmul_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmul_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vmul_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vmul_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmul_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vmul_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vmul_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmul_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vmul_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vmul_vx_u16m1_tumu(vbool16_t vm, 
vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmul_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vmul_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vmul_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vmul_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vmul_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vmul_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmul_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vmul_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vmul_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vmul_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vmul_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vmul_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmul_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vmul_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vmul_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vmul_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vmul_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vmul_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmul_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vmul_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vmul_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmul_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vmul_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vmul_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmul_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vmul_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vmul_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmul_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vmul_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vmul_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmul_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vmul_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vmul_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmul_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vmul_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vmul_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmul_vx_u32m2_tumu(vm, 
vd, vs2, rs1, vl); } -vuint32m4_t test_vmul_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vmul_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vmul_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vmul_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vmul_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmul_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vmul_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vmul_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vmul_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vmul_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vmul_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmul_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vmul_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmul_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vmul_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmul_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vmul_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmul_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vmul_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmul_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vmul_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmul_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vmul_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmul_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vmul_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vmul_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vmul_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmul_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vmul_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t 
vl) { +vint8mf8_t test_vmul_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmul_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmul_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmul_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmul_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmul_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmul_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmul_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmul_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmul_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmul_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmul_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmul_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmul_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmul_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmul_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmul_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmul_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmul_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmul_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmul_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmul_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmul_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmul_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmul_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmul_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmul_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmul_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmul_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmul_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmul_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmul_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmul_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmul_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmul_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmul_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmul_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmul_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmul_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmul_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return 
__riscv_vmul_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmul_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmul_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmul_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmul_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmul_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmul_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmul_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmul_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmul_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmul_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmul_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmul_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmul_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmul_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmul_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmul_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmul_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmul_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmul_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmul_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmul_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmul_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmul_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmul_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmul_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmul_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmul_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmul_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmul_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmul_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmul_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vmul_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmul_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmul_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmul_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmul_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmul_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmul_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t 
vs1, + size_t vl) { return __riscv_vmul_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmul_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmul_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmul_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmul_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmul_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmul_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmul_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmul_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmul_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmul_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmul_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmul_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmul_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmul_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmul_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmul_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmul_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmul_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmul_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmul_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmul_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmul_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmul_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmul_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmul_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmul_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmul_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmul_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmul_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmul_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmul_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmul_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmul_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmul_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmul_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmul_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmul_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmul_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { 
return __riscv_vmul_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmul_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmul_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmul_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmul_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmul_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmul_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vmul_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmul_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmul_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmul_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmul_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmul_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vmul_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vmul_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vmul_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vmul_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vmul_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vmul_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vmul_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vmul_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vmul_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vmul_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vmul_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vmul_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vmul_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vmul_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vmul_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vmul_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vmul_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmul_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vmul_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vmul_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vmul_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vmul_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, 
size_t vl) { return __riscv_vmul_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vmul_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vmul_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vmul_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vmul_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmul_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vmul_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vmul_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vmul_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vmul_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmul_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vmul_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vmul_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmul_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vmul_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vmul_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmul_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vmul_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vmul_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmul_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vmul_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vmul_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmul_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vmul_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vmul_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmul_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vmul_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vmul_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vmul_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vmul_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vmul_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmul_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vmul_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vmul_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vmul_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vmul_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t 
test_vmul_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmul_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vmul_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vmul_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmul_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vmul_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vmul_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmul_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vmul_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vmul_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmul_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vmul_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vmul_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmul_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vmul_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vmul_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmul_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vmul_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vmul_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmul_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vmul_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vmul_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vmul_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vmul_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vmul_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmul_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vmul_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vmul_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vmul_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vmul_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vmul_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmul_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vmul_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vmul_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmul_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vmul_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vmul_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmul_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t 
test_vmul_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vmul_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmul_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vmul_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vmul_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmul_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vmul_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vmul_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vmul_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmul_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vmul_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vmul_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vmul_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmul_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vmul_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vmul_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vmul_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmul_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vmul_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmul_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vmul_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmul_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmulh.c b/auto-generated/policy_funcs/llvm-api-tests/vmulh.c index 51bab4943..4e947a571 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmulh.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmulh.c @@ -5,706 +5,891 @@ #include -vint8mf8_t test_vmulh_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmulh_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vmulh_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vmulh_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmulh_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmulh_vx_i8mf8_tu(vd, vs2, rs1, vl); 
} -vint8mf4_t test_vmulh_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmulh_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vmulh_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vmulh_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmulh_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmulh_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vmulh_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmulh_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vmulh_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vmulh_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmulh_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmulh_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vmulh_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmulh_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vmulh_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vmulh_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmulh_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmulh_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vmulh_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmulh_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vmulh_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vmulh_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmulh_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmulh_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vmulh_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmulh_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vmulh_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vmulh_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmulh_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmulh_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vmulh_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmulh_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vmulh_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vmulh_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmulh_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vmulh_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vmulh_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmulh_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vmulh_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vmulh_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmulh_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vmulh_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmulh_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return 
__riscv_vmulh_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vmulh_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmulh_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vmulh_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmulh_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vmulh_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vmulh_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmulh_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmulh_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vmulh_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmulh_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vmulh_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vmulh_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmulh_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmulh_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vmulh_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmulh_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vmulh_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vmulh_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmulh_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmulh_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vmulh_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vmulh_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vmulh_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vmulh_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmulh_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vmulh_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vmulh_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmulh_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vmulh_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vmulh_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmulh_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vmulh_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmulh_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vmulh_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vmulh_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmulh_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmulh_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vmulh_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmulh_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vmulh_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vmulh_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t 
rs1, size_t vl) { +vint32m2_t test_vmulh_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmulh_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vmulh_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmulh_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vmulh_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vmulh_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmulh_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmulh_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vmulh_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmulh_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vmulh_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vmulh_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmulh_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vmulh_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vmulh_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmulh_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vmulh_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vmulh_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmulh_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmulh_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vmulh_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmulh_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vmulh_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vmulh_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmulh_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmulh_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vmulh_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmulh_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vmulh_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vmulh_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmulh_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmulh_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vmulh_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmulh_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vmulh_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vmulh_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmulh_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vmulh_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vmulh_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmulh_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmulh_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmulh_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmulh_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + 
int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmulh_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmulh_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmulh_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmulh_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmulh_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmulh_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmulh_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmulh_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmulh_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmulh_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmulh_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmulh_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmulh_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmulh_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmulh_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmulh_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmulh_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmulh_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmulh_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmulh_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmulh_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmulh_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmulh_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmulh_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmulh_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmulh_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmulh_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmulh_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return 
__riscv_vmulh_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmulh_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmulh_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmulh_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmulh_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmulh_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmulh_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmulh_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmulh_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmulh_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmulh_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmulh_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmulh_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmulh_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmulh_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmulh_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmulh_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmulh_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmulh_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmulh_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmulh_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vmulh_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmulh_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmulh_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmulh_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmulh_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmulh_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmulh_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t 
test_vmulh_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmulh_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmulh_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmulh_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmulh_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmulh_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmulh_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmulh_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmulh_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmulh_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmulh_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmulh_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmulh_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmulh_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmulh_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmulh_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmulh_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmulh_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmulh_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmulh_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmulh_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmulh_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmulh_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmulh_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmulh_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmulh_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, 
vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmulh_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmulh_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmulh_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vmulh_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmulh_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmulh_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmulh_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vmulh_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmulh_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmulh_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmulh_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmulh_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmulh_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmulh_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmulh_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmulh_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmulh_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmulh_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmulh_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmulh_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmulh_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmulh_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmulh_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmulh_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmulh_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmulh_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmulh_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmulh_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t 
test_vmulh_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmulh_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmulh_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmulh_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmulh_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmulh_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmulh_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmulh_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmulh_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmulh_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmulh_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmulh_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmulh_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmulh_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmulh_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmulh_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmulh_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmulh_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmulh_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmulh_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmulh_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmulh_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmulh_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmulh_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmulh_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmulh_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmulh_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmulh_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { 
return __riscv_vmulh_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmulh_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmulh_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmulh_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmulh_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmulh_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vmulh_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmulh_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmulh_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmulh_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmulh_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmulh_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmulh_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmulh_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmulh_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmulh_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmulh_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmulh_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmulh_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmulh_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmulh_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmulh_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmulh_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmulh_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmulh_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmulh_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmulh_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { 
+vint32m8_t test_vmulh_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmulh_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmulh_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmulh_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmulh_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmulh_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmulh_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmulh_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmulh_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmulh_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmulh_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmulh_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmulh_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmulh_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmulh_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vmulh_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmulh_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmulh_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmulh_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vmulh_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmulh_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vmulh_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmulh_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vmulh_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmulh_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmulh_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vmulh_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t 
test_vmulh_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vmulh_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmulh_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmulh_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vmulh_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmulh_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vmulh_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmulh_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vmulh_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmulh_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vmulh_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmulh_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vmulh_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmulh_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vmulh_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmulh_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vmulh_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmulh_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vmulh_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmulh_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vmulh_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmulh_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vmulh_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vmulh_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmulh_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmulh_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vmulh_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmulh_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vmulh_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmulh_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t 
vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmulh_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vmulh_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmulh_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vmulh_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmulh_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vmulh_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmulh_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vmulh_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmulh_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vmulh_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmulh_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vmulh_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmulh_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vmulh_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmulh_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vmulh_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmulh_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vmulh_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmulh_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vmulh_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vmulh_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmulh_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmulh_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vmulh_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmulh_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vmulh_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmulh_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vmulh_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t 
test_vmulh_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vmulh_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmulh_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vmulh_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmulh_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vmulh_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmulh_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vmulh_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmulh_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vmulh_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmulh_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vmulh_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmulh_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vmulh_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vmulh_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmulh_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vmulh_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmulh_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vmulh_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmulh_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vmulh_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmulh_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vmulh_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmulh_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vmulh_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmulh_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vmulh_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } 
-vint64m8_t test_vmulh_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vmulh_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vmulh_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmulh_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vmulh_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vmulh_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmulhsu.c b/auto-generated/policy_funcs/llvm-api-tests/vmulhsu.c index 0380bbc60..f6636ff5a 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmulhsu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmulhsu.c @@ -5,706 +5,924 @@ #include -vint8mf8_t test_vmulhsu_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmulhsu_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vmulhsu_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, uint8_t rs1, size_t vl) { +vint8mf8_t test_vmulhsu_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vmulhsu_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmulhsu_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vmulhsu_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, uint8_t rs1, size_t vl) { +vint8mf4_t test_vmulhsu_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vmulhsu_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmulhsu_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vmulhsu_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, uint8_t rs1, size_t vl) { +vint8mf2_t test_vmulhsu_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vmulhsu_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vmulhsu_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vmulhsu_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, uint8_t rs1, size_t vl) { +vint8m1_t test_vmulhsu_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vmulhsu_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vmulhsu_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vmulhsu_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, uint8_t rs1, size_t vl) { +vint8m2_t test_vmulhsu_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vmulhsu_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vmulhsu_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i8m4_tu(vd, vs2, vs1, vl); } 
-vint8m4_t test_vmulhsu_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, uint8_t rs1, size_t vl) { +vint8m4_t test_vmulhsu_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vmulhsu_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vmulhsu_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vmulhsu_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, uint8_t rs1, size_t vl) { +vint8m8_t test_vmulhsu_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vmulhsu_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmulhsu_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vmulhsu_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, uint16_t rs1, size_t vl) { +vint16mf4_t test_vmulhsu_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vmulhsu_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmulhsu_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vmulhsu_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, uint16_t rs1, size_t vl) { +vint16mf2_t test_vmulhsu_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vmulhsu_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vmulhsu_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vmulhsu_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, uint16_t rs1, size_t vl) { +vint16m1_t test_vmulhsu_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vmulhsu_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vmulhsu_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vmulhsu_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, uint16_t rs1, size_t vl) { +vint16m2_t test_vmulhsu_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vmulhsu_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vmulhsu_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vmulhsu_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, uint16_t rs1, size_t vl) { +vint16m4_t test_vmulhsu_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vmulhsu_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vmulhsu_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t 
test_vmulhsu_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, uint16_t rs1, size_t vl) { +vint16m8_t test_vmulhsu_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vmulhsu_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmulhsu_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vmulhsu_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, uint32_t rs1, size_t vl) { +vint32mf2_t test_vmulhsu_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vmulhsu_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vmulhsu_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vmulhsu_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, uint32_t rs1, size_t vl) { +vint32m1_t test_vmulhsu_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vmulhsu_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vmulhsu_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vmulhsu_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, uint32_t rs1, size_t vl) { +vint32m2_t test_vmulhsu_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vmulhsu_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vmulhsu_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vmulhsu_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, uint32_t rs1, size_t vl) { +vint32m4_t test_vmulhsu_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vmulhsu_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vmulhsu_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vmulhsu_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, uint32_t rs1, size_t vl) { +vint32m8_t test_vmulhsu_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vmulhsu_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vmulhsu_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vmulhsu_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, uint64_t rs1, size_t vl) { +vint64m1_t test_vmulhsu_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vmulhsu_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vmulhsu_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t 
test_vmulhsu_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, uint64_t rs1, size_t vl) { +vint64m2_t test_vmulhsu_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vmulhsu_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vmulhsu_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vmulhsu_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, uint64_t rs1, size_t vl) { +vint64m4_t test_vmulhsu_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vmulhsu_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vmulhsu_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vmulhsu_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, uint64_t rs1, size_t vl) { +vint64m8_t test_vmulhsu_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vmulhsu_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmulhsu_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmulhsu_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, uint8_t rs1, size_t vl) { +vint8mf8_t test_vmulhsu_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmulhsu_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmulhsu_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmulhsu_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, uint8_t rs1, size_t vl) { +vint8mf4_t test_vmulhsu_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmulhsu_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmulhsu_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmulhsu_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, uint8_t rs1, size_t vl) { +vint8mf2_t test_vmulhsu_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmulhsu_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vmulhsu_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmulhsu_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, uint8_t rs1, size_t vl) { +vint8m1_t test_vmulhsu_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmulhsu_vv_i8m2_tum(vbool4_t vm, 
vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vmulhsu_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmulhsu_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, uint8_t rs1, size_t vl) { +vint8m2_t test_vmulhsu_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmulhsu_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vmulhsu_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmulhsu_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, uint8_t rs1, size_t vl) { +vint8m4_t test_vmulhsu_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmulhsu_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vmulhsu_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmulhsu_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, uint8_t rs1, size_t vl) { +vint8m8_t test_vmulhsu_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmulhsu_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmulhsu_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmulhsu_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, uint16_t rs1, size_t vl) { +vint16mf4_t test_vmulhsu_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmulhsu_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmulhsu_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmulhsu_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, uint16_t rs1, size_t vl) { +vint16mf2_t test_vmulhsu_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmulhsu_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vmulhsu_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmulhsu_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, uint16_t rs1, size_t vl) { +vint16m1_t test_vmulhsu_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmulhsu_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vmulhsu_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + 
vuint16m2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmulhsu_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, uint16_t rs1, size_t vl) { +vint16m2_t test_vmulhsu_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmulhsu_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vmulhsu_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmulhsu_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, uint16_t rs1, size_t vl) { +vint16m4_t test_vmulhsu_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmulhsu_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vmulhsu_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmulhsu_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, uint16_t rs1, size_t vl) { +vint16m8_t test_vmulhsu_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmulhsu_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmulhsu_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmulhsu_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, uint32_t rs1, size_t vl) { +vint32mf2_t test_vmulhsu_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmulhsu_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vmulhsu_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmulhsu_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, uint32_t rs1, size_t vl) { +vint32m1_t test_vmulhsu_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmulhsu_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vmulhsu_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmulhsu_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, uint32_t rs1, size_t vl) { +vint32m2_t test_vmulhsu_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmulhsu_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vmulhsu_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t 
test_vmulhsu_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, uint32_t rs1, size_t vl) { +vint32m4_t test_vmulhsu_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmulhsu_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vmulhsu_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmulhsu_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, uint32_t rs1, size_t vl) { +vint32m8_t test_vmulhsu_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmulhsu_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vmulhsu_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmulhsu_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, uint64_t rs1, size_t vl) { +vint64m1_t test_vmulhsu_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmulhsu_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vmulhsu_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmulhsu_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, uint64_t rs1, size_t vl) { +vint64m2_t test_vmulhsu_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmulhsu_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vmulhsu_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmulhsu_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, uint64_t rs1, size_t vl) { +vint64m4_t test_vmulhsu_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vmulhsu_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vmulhsu_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmulhsu_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, uint64_t rs1, size_t vl) { +vint64m8_t test_vmulhsu_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vmulhsu_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmulhsu_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmulhsu_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, uint8_t rs1, size_t vl) { +vint8mf8_t 
test_vmulhsu_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmulhsu_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmulhsu_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmulhsu_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, uint8_t rs1, size_t vl) { +vint8mf4_t test_vmulhsu_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmulhsu_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmulhsu_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmulhsu_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, uint8_t rs1, size_t vl) { +vint8mf2_t test_vmulhsu_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmulhsu_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vmulhsu_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmulhsu_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, uint8_t rs1, size_t vl) { +vint8m1_t test_vmulhsu_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmulhsu_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vmulhsu_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmulhsu_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, uint8_t rs1, size_t vl) { +vint8m2_t test_vmulhsu_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmulhsu_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vmulhsu_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmulhsu_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, uint8_t rs1, size_t vl) { +vint8m4_t test_vmulhsu_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmulhsu_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vmulhsu_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmulhsu_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, uint8_t rs1, size_t vl) { +vint8m8_t test_vmulhsu_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t 
test_vmulhsu_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmulhsu_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmulhsu_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, uint16_t rs1, size_t vl) { +vint16mf4_t test_vmulhsu_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmulhsu_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmulhsu_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmulhsu_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, uint16_t rs1, size_t vl) { +vint16mf2_t test_vmulhsu_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmulhsu_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vmulhsu_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmulhsu_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, uint16_t rs1, size_t vl) { +vint16m1_t test_vmulhsu_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmulhsu_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vmulhsu_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmulhsu_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, uint16_t rs1, size_t vl) { +vint16m2_t test_vmulhsu_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmulhsu_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vmulhsu_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmulhsu_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, uint16_t rs1, size_t vl) { +vint16m4_t test_vmulhsu_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmulhsu_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vmulhsu_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmulhsu_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, uint16_t rs1, size_t vl) { +vint16m8_t test_vmulhsu_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmulhsu_vv_i32mf2_tumu(vbool64_t vm, 
vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmulhsu_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmulhsu_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, uint32_t rs1, size_t vl) { +vint32mf2_t test_vmulhsu_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmulhsu_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vmulhsu_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmulhsu_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, uint32_t rs1, size_t vl) { +vint32m1_t test_vmulhsu_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmulhsu_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vmulhsu_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmulhsu_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, uint32_t rs1, size_t vl) { +vint32m2_t test_vmulhsu_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vmulhsu_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vmulhsu_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmulhsu_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, uint32_t rs1, size_t vl) { +vint32m4_t test_vmulhsu_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmulhsu_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vmulhsu_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmulhsu_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, uint32_t rs1, size_t vl) { +vint32m8_t test_vmulhsu_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmulhsu_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vmulhsu_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmulhsu_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, uint64_t rs1, size_t vl) { +vint64m1_t test_vmulhsu_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmulhsu_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { 
+vint64m2_t test_vmulhsu_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmulhsu_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, uint64_t rs1, size_t vl) { +vint64m2_t test_vmulhsu_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmulhsu_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vmulhsu_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmulhsu_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, uint64_t rs1, size_t vl) { +vint64m4_t test_vmulhsu_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vmulhsu_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vmulhsu_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmulhsu_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, uint64_t rs1, size_t vl) { +vint64m8_t test_vmulhsu_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vmulhsu_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vmulhsu_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vmulhsu_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, uint8_t rs1, size_t vl) { +vint8mf8_t test_vmulhsu_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vmulhsu_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vmulhsu_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vmulhsu_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, uint8_t rs1, size_t vl) { +vint8mf4_t test_vmulhsu_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vmulhsu_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vmulhsu_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vmulhsu_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, uint8_t rs1, size_t vl) { +vint8mf2_t test_vmulhsu_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vmulhsu_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vmulhsu_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return 
__riscv_vmulhsu_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vmulhsu_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, uint8_t rs1, size_t vl) { +vint8m1_t test_vmulhsu_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vmulhsu_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vmulhsu_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vmulhsu_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, uint8_t rs1, size_t vl) { +vint8m2_t test_vmulhsu_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vmulhsu_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vmulhsu_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vmulhsu_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, uint8_t rs1, size_t vl) { +vint8m4_t test_vmulhsu_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vmulhsu_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vmulhsu_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vmulhsu_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, uint8_t rs1, size_t vl) { +vint8m8_t test_vmulhsu_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vmulhsu_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vmulhsu_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vmulhsu_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, uint16_t rs1, size_t vl) { +vint16mf4_t test_vmulhsu_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vmulhsu_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vmulhsu_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vmulhsu_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, uint16_t rs1, size_t vl) { +vint16mf2_t test_vmulhsu_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vmulhsu_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vmulhsu_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vmulhsu_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, uint16_t rs1, size_t vl) { +vint16m1_t 
test_vmulhsu_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vmulhsu_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vmulhsu_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vmulhsu_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, uint16_t rs1, size_t vl) { +vint16m2_t test_vmulhsu_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vmulhsu_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vmulhsu_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vmulhsu_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, uint16_t rs1, size_t vl) { +vint16m4_t test_vmulhsu_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vmulhsu_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vmulhsu_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vmulhsu_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, uint16_t rs1, size_t vl) { +vint16m8_t test_vmulhsu_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vmulhsu_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vmulhsu_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmulhsu_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vmulhsu_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, uint32_t rs1, size_t vl) { +vint32mf2_t test_vmulhsu_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhsu_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vmulhsu_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vmulhsu_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vmulhsu_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, uint32_t rs1, size_t vl) { +vint32m1_t test_vmulhsu_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vmulhsu_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vmulhsu_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vmulhsu_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, uint32_t rs1, size_t vl) { +vint32m2_t test_vmulhsu_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m2_mu(vm, vd, vs2, 
rs1, vl); } -vint32m4_t test_vmulhsu_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vmulhsu_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vmulhsu_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, uint32_t rs1, size_t vl) { +vint32m4_t test_vmulhsu_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vmulhsu_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vmulhsu_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vmulhsu_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, uint32_t rs1, size_t vl) { +vint32m8_t test_vmulhsu_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vmulhsu_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vmulhsu_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vmulhsu_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, uint64_t rs1, size_t vl) { +vint64m1_t test_vmulhsu_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vmulhsu_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vmulhsu_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vmulhsu_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, uint64_t rs1, size_t vl) { +vint64m2_t test_vmulhsu_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vmulhsu_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vmulhsu_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vmulhsu_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, uint64_t rs1, size_t vl) { +vint64m4_t test_vmulhsu_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vmulhsu_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vmulhsu_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmulhsu_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vmulhsu_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, uint64_t rs1, size_t vl) { +vint64m8_t test_vmulhsu_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmulhsu_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmulhu.c b/auto-generated/policy_funcs/llvm-api-tests/vmulhu.c index 7ec40fbbe..3cefc3a1e 100644 ---
a/auto-generated/policy_funcs/llvm-api-tests/vmulhu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmulhu.c @@ -5,706 +5,957 @@ #include <riscv_vector.h> -vuint8mf8_t test_vmulhu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vmulhu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vmulhu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vmulhu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vmulhu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vmulhu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vmulhu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vmulhu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vmulhu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vmulhu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vmulhu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vmulhu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vmulhu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vmulhu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vmulhu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vmulhu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vmulhu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vmulhu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vmulhu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vmulhu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vmulhu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vmulhu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vmulhu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vmulhu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vmulhu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vmulhu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vmulhu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vmulhu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) {
return __riscv_vmulhu_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vmulhu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vmulhu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vmulhu_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vmulhu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vmulhu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vmulhu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vmulhu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vmulhu_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vmulhu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vmulhu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vmulhu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vmulhu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vmulhu_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vmulhu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vmulhu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vmulhu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vmulhu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vmulhu_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vmulhu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vmulhu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vmulhu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vmulhu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vmulhu_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vmulhu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vmulhu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vmulhu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vmulhu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vmulhu_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vmulhu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vmulhu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vmulhu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vmulhu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vmulhu_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vmulhu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t 
test_vmulhu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vmulhu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vmulhu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vmulhu_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vmulhu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vmulhu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vmulhu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vmulhu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vmulhu_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vmulhu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vmulhu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vmulhu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vmulhu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vmulhu_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vmulhu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vmulhu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vmulhu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vmulhu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vmulhu_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vmulhu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vmulhu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vmulhu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vmulhu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vmulhu_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vmulhu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vmulhu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vmulhu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vmulhu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vmulhu_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vmulhu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vmulhu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vmulhu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vmulhu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vmulhu_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vmulhu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { 
+vuint64m4_t test_vmulhu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vmulhu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vmulhu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vmulhu_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vmulhu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vmulhu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vmulhu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vmulhu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vmulhu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vmulhu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vmulhu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vmulhu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vmulhu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vmulhu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vmulhu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vmulhu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vmulhu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vmulhu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vmulhu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vmulhu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vmulhu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vmulhu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vmulhu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vmulhu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vmulhu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vmulhu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vmulhu_vv_u8m4_tum(vbool2_t vm, 
vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vmulhu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vmulhu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vmulhu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vmulhu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vmulhu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vmulhu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vmulhu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vmulhu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vmulhu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vmulhu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vmulhu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vmulhu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vmulhu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vmulhu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vmulhu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vmulhu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vmulhu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vmulhu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vmulhu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vmulhu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vmulhu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vmulhu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vmulhu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vmulhu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t 
test_vmulhu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vmulhu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vmulhu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vmulhu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vmulhu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vmulhu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vmulhu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vmulhu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vmulhu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vmulhu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vmulhu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vmulhu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vmulhu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vmulhu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vmulhu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vmulhu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vmulhu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vmulhu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vmulhu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vmulhu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vmulhu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vmulhu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vmulhu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vmulhu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vmulhu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + 
vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vmulhu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vmulhu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vmulhu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vmulhu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vmulhu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vmulhu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vmulhu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vmulhu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vmulhu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vmulhu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vmulhu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vmulhu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vmulhu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vmulhu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vmulhu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vmulhu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vmulhu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vmulhu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vmulhu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vmulhu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vmulhu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vmulhu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vmulhu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vmulhu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return 
__riscv_vmulhu_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vmulhu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vmulhu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vmulhu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vmulhu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vmulhu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vmulhu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vmulhu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vmulhu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vmulhu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vmulhu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vmulhu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vmulhu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vmulhu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vmulhu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vmulhu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vmulhu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vmulhu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vmulhu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vmulhu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vmulhu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vmulhu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vmulhu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vmulhu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vmulhu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vmulhu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, 
uint16_t rs1, size_t vl) { +vuint16mf4_t test_vmulhu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vmulhu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vmulhu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vmulhu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vmulhu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vmulhu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vmulhu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vmulhu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vmulhu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vmulhu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vmulhu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vmulhu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vmulhu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vmulhu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vmulhu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vmulhu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vmulhu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vmulhu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vmulhu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vmulhu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vmulhu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vmulhu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vmulhu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vmulhu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t 
vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vmulhu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vmulhu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vmulhu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vmulhu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vmulhu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vmulhu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vmulhu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vmulhu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vmulhu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vmulhu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vmulhu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vmulhu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vmulhu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vmulhu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vmulhu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vmulhu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vmulhu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vmulhu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vmulhu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vmulhu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vmulhu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vmulhu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vmulhu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vmulhu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) 
{ +vuint64m2_t test_vmulhu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vmulhu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vmulhu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vmulhu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vmulhu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vmulhu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vmulhu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vmulhu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vmulhu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vmulhu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vmulhu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vmulhu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vmulhu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vmulhu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vmulhu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vmulhu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vmulhu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vmulhu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vmulhu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vmulhu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vmulhu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vmulhu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vmulhu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vmulhu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vmulhu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) 
{ return __riscv_vmulhu_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vmulhu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vmulhu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vmulhu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vmulhu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vmulhu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vmulhu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vmulhu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vmulhu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vmulhu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vmulhu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vmulhu_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vmulhu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vmulhu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vmulhu_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vmulhu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vmulhu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vmulhu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vmulhu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vmulhu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vmulhu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vmulhu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vmulhu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vmulhu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vmulhu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vmulhu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vmulhu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vmulhu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, 
vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vmulhu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vmulhu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vmulhu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vmulhu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vmulhu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vmulhu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vmulhu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vmulhu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vmulhu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vmulhu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vmulhu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vmulhu_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vmulhu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vmulhu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vmulhu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vmulhu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vmulhu_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vmulhu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vmulhu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vmulhu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vmulhu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vmulhu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vmulhu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vmulhu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vmulhu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vmulhu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vmulhu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + 
vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vmulhu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vmulhu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vmulhu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vmulhu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vmulhu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vmulhu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vmulhu_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vmulhu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vmulhu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vmulhu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vmulhu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vmulhu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vmulhu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vmulhu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vmulhu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vmulhu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vmulhu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vmulhu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vmulhu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vmulhu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vmulhu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vmulhu_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vmulhu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vmulhu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vmulhu_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmv.c b/auto-generated/policy_funcs/llvm-api-tests/vmv.c index c5ef5a6d2..bc366e749 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmv.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmv.c @@ -238,7 +238,8 @@ vuint8m8_t 
test_vmv_v_x_u8m8_tu(vuint8m8_t vd, uint8_t rs1, size_t vl) { return __riscv_vmv_v_x_u8m8_tu(vd, rs1, vl); } -vuint16mf4_t test_vmv_v_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vmv_v_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs1, + size_t vl) { return __riscv_vmv_v_v_u16mf4_tu(vd, vs1, vl); } @@ -246,7 +247,8 @@ vuint16mf4_t test_vmv_v_x_u16mf4_tu(vuint16mf4_t vd, uint16_t rs1, size_t vl) { return __riscv_vmv_v_x_u16mf4_tu(vd, rs1, vl); } -vuint16mf2_t test_vmv_v_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vmv_v_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs1, + size_t vl) { return __riscv_vmv_v_v_u16mf2_tu(vd, vs1, vl); } @@ -286,7 +288,8 @@ vuint16m8_t test_vmv_v_x_u16m8_tu(vuint16m8_t vd, uint16_t rs1, size_t vl) { return __riscv_vmv_v_x_u16m8_tu(vd, rs1, vl); } -vuint32mf2_t test_vmv_v_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vmv_v_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs1, + size_t vl) { return __riscv_vmv_v_v_u32mf2_tu(vd, vs1, vl); } @@ -358,63 +361,78 @@ vuint64m8_t test_vmv_v_x_u64m8_tu(vuint64m8_t vd, uint64_t rs1, size_t vl) { return __riscv_vmv_v_x_u64m8_tu(vd, rs1, vl); } -vfloat16mf4_t test_vmv_v_v_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vmv_v_v_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs1, + size_t vl) { return __riscv_vmv_v_v_f16mf4_tu(vd, vs1, vl); } -vfloat16mf2_t test_vmv_v_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vmv_v_v_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs1, + size_t vl) { return __riscv_vmv_v_v_f16mf2_tu(vd, vs1, vl); } -vfloat16m1_t test_vmv_v_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, size_t vl) { +vfloat16m1_t test_vmv_v_v_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs1, + size_t vl) { return __riscv_vmv_v_v_f16m1_tu(vd, vs1, vl); } -vfloat16m2_t test_vmv_v_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, size_t vl) { +vfloat16m2_t test_vmv_v_v_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs1, + size_t vl) { return __riscv_vmv_v_v_f16m2_tu(vd, vs1, vl); } -vfloat16m4_t test_vmv_v_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, size_t vl) { +vfloat16m4_t test_vmv_v_v_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs1, + size_t vl) { return __riscv_vmv_v_v_f16m4_tu(vd, vs1, vl); } -vfloat16m8_t test_vmv_v_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, size_t vl) { +vfloat16m8_t test_vmv_v_v_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs1, + size_t vl) { return __riscv_vmv_v_v_f16m8_tu(vd, vs1, vl); } -vfloat32mf2_t test_vmv_v_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vmv_v_v_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs1, + size_t vl) { return __riscv_vmv_v_v_f32mf2_tu(vd, vs1, vl); } -vfloat32m1_t test_vmv_v_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, size_t vl) { +vfloat32m1_t test_vmv_v_v_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs1, + size_t vl) { return __riscv_vmv_v_v_f32m1_tu(vd, vs1, vl); } -vfloat32m2_t test_vmv_v_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, size_t vl) { +vfloat32m2_t test_vmv_v_v_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs1, + size_t vl) { return __riscv_vmv_v_v_f32m2_tu(vd, vs1, vl); } -vfloat32m4_t test_vmv_v_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, size_t vl) { +vfloat32m4_t test_vmv_v_v_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs1, + size_t vl) { return __riscv_vmv_v_v_f32m4_tu(vd, vs1, vl); } -vfloat32m8_t test_vmv_v_v_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs1, size_t vl) { +vfloat32m8_t test_vmv_v_v_f32m8_tu(vfloat32m8_t 
vd, vfloat32m8_t vs1, + size_t vl) { return __riscv_vmv_v_v_f32m8_tu(vd, vs1, vl); } -vfloat64m1_t test_vmv_v_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, size_t vl) { +vfloat64m1_t test_vmv_v_v_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs1, + size_t vl) { return __riscv_vmv_v_v_f64m1_tu(vd, vs1, vl); } -vfloat64m2_t test_vmv_v_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, size_t vl) { +vfloat64m2_t test_vmv_v_v_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs1, + size_t vl) { return __riscv_vmv_v_v_f64m2_tu(vd, vs1, vl); } -vfloat64m4_t test_vmv_v_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, size_t vl) { +vfloat64m4_t test_vmv_v_v_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs1, + size_t vl) { return __riscv_vmv_v_v_f64m4_tu(vd, vs1, vl); } -vfloat64m8_t test_vmv_v_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, size_t vl) { +vfloat64m8_t test_vmv_v_v_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs1, + size_t vl) { return __riscv_vmv_v_v_f64m8_tu(vd, vs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vnclip.c b/auto-generated/policy_funcs/llvm-api-tests/vnclip.c index 531e73482..6073448dc 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vnclip.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vnclip.c @@ -5,482 +5,619 @@ #include -vint8mf8_t test_vnclip_wv_i8mf8_tu(vint8mf8_t vd, vint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vnclip_wv_i8mf8_tu(vint8mf8_t vd, vint16mf4_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vnclip_wv_i8mf8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vnclip_wx_i8mf8_tu(vint8mf8_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vnclip_wx_i8mf8_tu(vint8mf8_t vd, vint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i8mf8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vnclip_wv_i8mf4_tu(vint8mf4_t vd, vint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vnclip_wv_i8mf4_tu(vint8mf4_t vd, vint16mf2_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vnclip_wv_i8mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vnclip_wx_i8mf4_tu(vint8mf4_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vnclip_wx_i8mf4_tu(vint8mf4_t vd, vint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i8mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vnclip_wv_i8mf2_tu(vint8mf2_t vd, vint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vnclip_wv_i8mf2_tu(vint8mf2_t vd, vint16m1_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vnclip_wv_i8mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vnclip_wx_i8mf2_tu(vint8mf2_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vnclip_wx_i8mf2_tu(vint8mf2_t vd, vint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i8mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vnclip_wv_i8m1_tu(vint8m1_t vd, vint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vnclip_wv_i8m1_tu(vint8m1_t vd, vint16m2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vnclip_wv_i8m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vnclip_wx_i8m1_tu(vint8m1_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vnclip_wx_i8m1_tu(vint8m1_t vd, vint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i8m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vnclip_wv_i8m2_tu(vint8m2_t vd, vint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vnclip_wv_i8m2_tu(vint8m2_t vd, vint16m4_t vs2, vuint8m2_t vs1, + size_t vl) { return 
__riscv_vnclip_wv_i8m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vnclip_wx_i8m2_tu(vint8m2_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vnclip_wx_i8m2_tu(vint8m2_t vd, vint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i8m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vnclip_wv_i8m4_tu(vint8m4_t vd, vint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vnclip_wv_i8m4_tu(vint8m4_t vd, vint16m8_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vnclip_wv_i8m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vnclip_wx_i8m4_tu(vint8m4_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vnclip_wx_i8m4_tu(vint8m4_t vd, vint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i8m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vnclip_wv_i16mf4_tu(vint16mf4_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vnclip_wv_i16mf4_tu(vint16mf4_t vd, vint32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vnclip_wv_i16mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vnclip_wx_i16mf4_tu(vint16mf4_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vnclip_wx_i16mf4_tu(vint16mf4_t vd, vint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vnclip_wv_i16mf2_tu(vint16mf2_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vnclip_wv_i16mf2_tu(vint16mf2_t vd, vint32m1_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vnclip_wv_i16mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vnclip_wx_i16mf2_tu(vint16mf2_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vnclip_wx_i16mf2_tu(vint16mf2_t vd, vint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i16mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vnclip_wv_i16m1_tu(vint16m1_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vnclip_wv_i16m1_tu(vint16m1_t vd, vint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vnclip_wv_i16m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vnclip_wx_i16m1_tu(vint16m1_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vnclip_wx_i16m1_tu(vint16m1_t vd, vint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i16m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vnclip_wv_i16m2_tu(vint16m2_t vd, vint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vnclip_wv_i16m2_tu(vint16m2_t vd, vint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vnclip_wv_i16m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vnclip_wx_i16m2_tu(vint16m2_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vnclip_wx_i16m2_tu(vint16m2_t vd, vint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i16m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vnclip_wv_i16m4_tu(vint16m4_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vnclip_wv_i16m4_tu(vint16m4_t vd, vint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vnclip_wv_i16m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vnclip_wx_i16m4_tu(vint16m4_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vnclip_wx_i16m4_tu(vint16m4_t vd, vint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i16m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vnclip_wv_i32mf2_tu(vint32mf2_t 
vd, vint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vnclip_wv_i32mf2_tu(vint32mf2_t vd, vint64m1_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vnclip_wv_i32mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vnclip_wx_i32mf2_tu(vint32mf2_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vnclip_wx_i32mf2_tu(vint32mf2_t vd, vint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i32mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vnclip_wv_i32m1_tu(vint32m1_t vd, vint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vnclip_wv_i32m1_tu(vint32m1_t vd, vint64m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vnclip_wv_i32m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vnclip_wx_i32m1_tu(vint32m1_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vnclip_wx_i32m1_tu(vint32m1_t vd, vint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i32m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vnclip_wv_i32m2_tu(vint32m2_t vd, vint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vnclip_wv_i32m2_tu(vint32m2_t vd, vint64m4_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vnclip_wv_i32m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vnclip_wx_i32m2_tu(vint32m2_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vnclip_wx_i32m2_tu(vint32m2_t vd, vint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i32m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vnclip_wv_i32m4_tu(vint32m4_t vd, vint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vnclip_wv_i32m4_tu(vint32m4_t vd, vint64m8_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vnclip_wv_i32m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vnclip_wx_i32m4_tu(vint32m4_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vnclip_wx_i32m4_tu(vint32m4_t vd, vint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclip_wx_i32m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vnclip_wv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vnclip_wv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vnclip_wv_i8mf8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vnclip_wx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vnclip_wx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8mf8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vnclip_wv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vnclip_wv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vnclip_wv_i8mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vnclip_wx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vnclip_wx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vnclip_wv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vnclip_wv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vnclip_wv_i8mf2_tum(vm, vd, 
vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vnclip_wx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vnclip_wx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vnclip_wv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vnclip_wv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vnclip_wv_i8m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vnclip_wx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vnclip_wx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vnclip_wv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vnclip_wv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vnclip_wv_i8m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vnclip_wx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vnclip_wx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vnclip_wv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vnclip_wv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vnclip_wv_i8m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vnclip_wx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vnclip_wx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vnclip_wv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vnclip_wv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vnclip_wv_i16mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vnclip_wx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vnclip_wx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vnclip_wv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vnclip_wv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vnclip_wv_i16mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vnclip_wx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vnclip_wx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vnclip_wv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vnclip_wv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return 
__riscv_vnclip_wv_i16m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vnclip_wx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vnclip_wx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vnclip_wv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vnclip_wv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vnclip_wv_i16m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vnclip_wx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vnclip_wx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vnclip_wv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vnclip_wv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vnclip_wv_i16m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vnclip_wx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vnclip_wx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vnclip_wv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vnclip_wv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vnclip_wv_i32mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vnclip_wx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vnclip_wx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vnclip_wv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vnclip_wv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vnclip_wv_i32m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vnclip_wx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vnclip_wx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vnclip_wv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vnclip_wv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vnclip_wv_i32m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vnclip_wx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vnclip_wx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vnclip_wv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vnclip_wv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint64m8_t 
vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vnclip_wv_i32m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vnclip_wx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vnclip_wx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vnclip_wv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vnclip_wv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vnclip_wv_i8mf8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vnclip_wx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vnclip_wx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8mf8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vnclip_wv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vnclip_wv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vnclip_wv_i8mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vnclip_wx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vnclip_wx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vnclip_wv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vnclip_wv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vnclip_wv_i8mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vnclip_wx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vnclip_wx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vnclip_wv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vnclip_wv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vnclip_wv_i8m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vnclip_wx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vnclip_wx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vnclip_wv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vnclip_wv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vnclip_wv_i8m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vnclip_wx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vnclip_wx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vnclip_wv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vnclip_wv_i8m4_tumu(vbool2_t 
vm, vint8m4_t vd, vint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vnclip_wv_i8m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vnclip_wx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vnclip_wx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vnclip_wv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vnclip_wv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vnclip_wv_i16mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vnclip_wx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vnclip_wx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vnclip_wv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vnclip_wv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vnclip_wv_i16mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vnclip_wx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vnclip_wx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vnclip_wv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vnclip_wv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vnclip_wv_i16m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vnclip_wx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vnclip_wx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vnclip_wv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vnclip_wv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vnclip_wv_i16m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vnclip_wx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vnclip_wx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vnclip_wv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vnclip_wv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vnclip_wv_i16m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vnclip_wx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vnclip_wx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vnclip_wv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, 
vint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vnclip_wv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vnclip_wv_i32mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vnclip_wx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vnclip_wx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vnclip_wv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vnclip_wv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint64m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vnclip_wv_i32m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vnclip_wx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vnclip_wx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vnclip_wv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vnclip_wv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint64m4_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vnclip_wv_i32m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vnclip_wx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vnclip_wx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vnclip_wv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vnclip_wv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vnclip_wv_i32m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vnclip_wx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vnclip_wx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vnclip_wv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vnclip_wv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vnclip_wv_i8mf8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vnclip_wx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vnclip_wx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8mf8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vnclip_wv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vnclip_wv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vnclip_wv_i8mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vnclip_wx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vnclip_wx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); 
} -vint8mf2_t test_vnclip_wv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vnclip_wv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vnclip_wv_i8mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vnclip_wx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vnclip_wx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vnclip_wv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vnclip_wv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vnclip_wv_i8m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vnclip_wx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vnclip_wx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vnclip_wv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vnclip_wv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vnclip_wv_i8m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vnclip_wx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vnclip_wx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vnclip_wv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vnclip_wv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vnclip_wv_i8m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vnclip_wx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vnclip_wx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i8m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vnclip_wv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vnclip_wv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vnclip_wv_i16mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vnclip_wx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vnclip_wx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vnclip_wv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vnclip_wv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vnclip_wv_i16mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vnclip_wx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vnclip_wx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t 
test_vnclip_wv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vnclip_wv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vnclip_wv_i16m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vnclip_wx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vnclip_wx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vnclip_wv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vnclip_wv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vnclip_wv_i16m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vnclip_wx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vnclip_wx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vnclip_wv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vnclip_wv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vnclip_wv_i16m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vnclip_wx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vnclip_wx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i16m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vnclip_wv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vnclip_wv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vnclip_wv_i32mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vnclip_wx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vnclip_wx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vnclip_wv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vnclip_wv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vnclip_wv_i32m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vnclip_wx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vnclip_wx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vnclip_wv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vnclip_wv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vnclip_wv_i32m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vnclip_wx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vnclip_wx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } 
-vint32m4_t test_vnclip_wv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vnclip_wv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vnclip_wv_i32m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vnclip_wx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vnclip_wx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclip_wx_i32m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vnclipu.c b/auto-generated/policy_funcs/llvm-api-tests/vnclipu.c index 037f6d52e..539781050 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vnclipu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vnclipu.c @@ -5,482 +5,650 @@ #include -vuint8mf8_t test_vnclipu_wv_u8mf8_tu(vuint8mf8_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vnclipu_wv_u8mf8_tu(vuint8mf8_t vd, vuint16mf4_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8mf8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vnclipu_wx_u8mf8_tu(vuint8mf8_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vnclipu_wx_u8mf8_tu(vuint8mf8_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8mf8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vnclipu_wv_u8mf4_tu(vuint8mf4_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vnclipu_wv_u8mf4_tu(vuint8mf4_t vd, vuint16mf2_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vnclipu_wx_u8mf4_tu(vuint8mf4_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vnclipu_wx_u8mf4_tu(vuint8mf4_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vnclipu_wv_u8mf2_tu(vuint8mf2_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vnclipu_wv_u8mf2_tu(vuint8mf2_t vd, vuint16m1_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vnclipu_wx_u8mf2_tu(vuint8mf2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vnclipu_wx_u8mf2_tu(vuint8mf2_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vnclipu_wv_u8m1_tu(vuint8m1_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vnclipu_wv_u8m1_tu(vuint8m1_t vd, vuint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vnclipu_wx_u8m1_tu(vuint8m1_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vnclipu_wx_u8m1_tu(vuint8m1_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclipu_wx_u8m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vnclipu_wv_u8m2_tu(vuint8m2_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vnclipu_wv_u8m2_tu(vuint8m2_t vd, vuint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vnclipu_wx_u8m2_tu(vuint8m2_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vnclipu_wx_u8m2_tu(vuint8m2_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { return 
__riscv_vnclipu_wx_u8m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vnclipu_wv_u8m4_tu(vuint8m4_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vnclipu_wv_u8m4_tu(vuint8m4_t vd, vuint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vnclipu_wx_u8m4_tu(vuint8m4_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vnclipu_wx_u8m4_tu(vuint8m4_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclipu_wx_u8m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vnclipu_wv_u16mf4_tu(vuint16mf4_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vnclipu_wv_u16mf4_tu(vuint16mf4_t vd, vuint32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vnclipu_wv_u16mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vnclipu_wx_u16mf4_tu(vuint16mf4_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vnclipu_wx_u16mf4_tu(vuint16mf4_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vnclipu_wv_u16mf2_tu(vuint16mf2_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vnclipu_wv_u16mf2_tu(vuint16mf2_t vd, vuint32m1_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vnclipu_wv_u16mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vnclipu_wx_u16mf2_tu(vuint16mf2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vnclipu_wx_u16mf2_tu(vuint16mf2_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vnclipu_wv_u16m1_tu(vuint16m1_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vnclipu_wv_u16m1_tu(vuint16m1_t vd, vuint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vnclipu_wv_u16m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vnclipu_wx_u16m1_tu(vuint16m1_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vnclipu_wx_u16m1_tu(vuint16m1_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vnclipu_wv_u16m2_tu(vuint16m2_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vnclipu_wv_u16m2_tu(vuint16m2_t vd, vuint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vnclipu_wv_u16m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vnclipu_wx_u16m2_tu(vuint16m2_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vnclipu_wx_u16m2_tu(vuint16m2_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vnclipu_wv_u16m4_tu(vuint16m4_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vnclipu_wv_u16m4_tu(vuint16m4_t vd, vuint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vnclipu_wv_u16m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vnclipu_wx_u16m4_tu(vuint16m4_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vnclipu_wx_u16m4_tu(vuint16m4_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vnclipu_wv_u32mf2_tu(vuint32mf2_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vnclipu_wv_u32mf2_tu(vuint32mf2_t vd, vuint64m1_t vs2, + 
vuint32mf2_t vs1, size_t vl) { return __riscv_vnclipu_wv_u32mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vnclipu_wx_u32mf2_tu(vuint32mf2_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vnclipu_wx_u32mf2_tu(vuint32mf2_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u32mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vnclipu_wv_u32m1_tu(vuint32m1_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vnclipu_wv_u32m1_tu(vuint32m1_t vd, vuint64m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vnclipu_wv_u32m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vnclipu_wx_u32m1_tu(vuint32m1_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vnclipu_wx_u32m1_tu(vuint32m1_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u32m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vnclipu_wv_u32m2_tu(vuint32m2_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vnclipu_wv_u32m2_tu(vuint32m2_t vd, vuint64m4_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vnclipu_wv_u32m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vnclipu_wx_u32m2_tu(vuint32m2_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vnclipu_wx_u32m2_tu(vuint32m2_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u32m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vnclipu_wv_u32m4_tu(vuint32m4_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vnclipu_wv_u32m4_tu(vuint32m4_t vd, vuint64m8_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vnclipu_wv_u32m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vnclipu_wx_u32m4_tu(vuint32m4_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vnclipu_wx_u32m4_tu(vuint32m4_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u32m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vnclipu_wv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vnclipu_wv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8mf8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vnclipu_wx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vnclipu_wx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8mf8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vnclipu_wv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vnclipu_wv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vnclipu_wx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vnclipu_wx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vnclipu_wv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vnclipu_wv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8mf2_tum(vm, vd, vs2, vs1, 
__RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vnclipu_wx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vnclipu_wx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vnclipu_wv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vnclipu_wv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vnclipu_wx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vnclipu_wx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vnclipu_wv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vnclipu_wv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vnclipu_wx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vnclipu_wx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vnclipu_wv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vnclipu_wv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vnclipu_wx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vnclipu_wx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vnclipu_wv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vnclipu_wv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vnclipu_wx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vnclipu_wx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclipu_wx_u16mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vnclipu_wv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vnclipu_wv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vnclipu_wx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vnclipu_wx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclipu_wx_u16mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vnclipu_wv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t 
test_vnclipu_wv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vnclipu_wx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vnclipu_wx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vnclipu_wv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vnclipu_wv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vnclipu_wx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vnclipu_wx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vnclipu_wv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vnclipu_wv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vnclipu_wx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vnclipu_wx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vnclipu_wv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vnclipu_wv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u32mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vnclipu_wx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vnclipu_wx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclipu_wx_u32mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vnclipu_wv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vnclipu_wv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint64m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u32m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vnclipu_wx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vnclipu_wx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u32m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vnclipu_wv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vnclipu_wv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint64m4_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u32m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vnclipu_wx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vnclipu_wx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u32m2_tum(vm, vd, 
vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vnclipu_wv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vnclipu_wv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint64m8_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u32m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vnclipu_wx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vnclipu_wx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u32m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vnclipu_wv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vnclipu_wv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8mf8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vnclipu_wx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vnclipu_wx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclipu_wx_u8mf8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vnclipu_wv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vnclipu_wv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vnclipu_wx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vnclipu_wx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclipu_wx_u8mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vnclipu_wv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vnclipu_wv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vnclipu_wx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vnclipu_wx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vnclipu_wv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vnclipu_wv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint16m2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vnclipu_wx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vnclipu_wx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vnclipu_wv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vnclipu_wv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint16m4_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vnclipu_wx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { 
+vuint8m2_t test_vnclipu_wx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vnclipu_wv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vnclipu_wv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint16m8_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vnclipu_wx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vnclipu_wx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vnclipu_wv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vnclipu_wv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vnclipu_wx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vnclipu_wx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclipu_wx_u16mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vnclipu_wv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vnclipu_wv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vnclipu_wx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vnclipu_wx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclipu_wx_u16mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vnclipu_wv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vnclipu_wv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vnclipu_wx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vnclipu_wx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vnclipu_wv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vnclipu_wv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vnclipu_wx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vnclipu_wx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vnclipu_wv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vnclipu_wv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t 
vl) { return __riscv_vnclipu_wv_u16m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vnclipu_wx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vnclipu_wx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vnclipu_wv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vnclipu_wv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u32mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vnclipu_wx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vnclipu_wx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vnclipu_wx_u32mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vnclipu_wv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vnclipu_wv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint64m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u32m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vnclipu_wx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vnclipu_wx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u32m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vnclipu_wv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vnclipu_wv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint64m4_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u32m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vnclipu_wx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vnclipu_wx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u32m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vnclipu_wv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vnclipu_wv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint64m8_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u32m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vnclipu_wx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vnclipu_wx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u32m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vnclipu_wv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vnclipu_wv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8mf8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vnclipu_wx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vnclipu_wx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8mf8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t 
test_vnclipu_wv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vnclipu_wv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vnclipu_wx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vnclipu_wx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vnclipu_wv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vnclipu_wv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u8mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vnclipu_wx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vnclipu_wx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vnclipu_wv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vnclipu_wv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vnclipu_wx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vnclipu_wx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vnclipu_wv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vnclipu_wv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vnclipu_wx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vnclipu_wx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vnclipu_wv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vnclipu_wv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vnclipu_wv_u8m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vnclipu_wx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vnclipu_wx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u8m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vnclipu_wv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vnclipu_wv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vnclipu_wx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vnclipu_wx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl) { return 
__riscv_vnclipu_wx_u16mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vnclipu_wv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vnclipu_wv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vnclipu_wx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vnclipu_wx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vnclipu_wv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vnclipu_wv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vnclipu_wx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vnclipu_wx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vnclipu_wv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vnclipu_wv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vnclipu_wx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vnclipu_wx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vnclipu_wv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vnclipu_wv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u16m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vnclipu_wx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vnclipu_wx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u16m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vnclipu_wv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vnclipu_wv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u32mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vnclipu_wx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vnclipu_wx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnclipu_wx_u32mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vnclipu_wv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vnclipu_wv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint64m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vnclipu_wv_u32m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vnclipu_wx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, 
size_t rs1, size_t vl) {
+vuint32m1_t test_vnclipu_wx_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+ vuint64m2_t vs2, size_t rs1, size_t vl) {
 return __riscv_vnclipu_wx_u32m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
-vuint32m2_t test_vnclipu_wv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vnclipu_wv_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+ vuint64m4_t vs2, vuint32m2_t vs1,
+ size_t vl) {
 return __riscv_vnclipu_wv_u32m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
-vuint32m2_t test_vnclipu_wx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vnclipu_wx_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+ vuint64m4_t vs2, size_t rs1, size_t vl) {
 return __riscv_vnclipu_wx_u32m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
-vuint32m4_t test_vnclipu_wv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vnclipu_wv_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+ vuint64m8_t vs2, vuint32m4_t vs1,
+ size_t vl) {
 return __riscv_vnclipu_wv_u32m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
-vuint32m4_t test_vnclipu_wx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vnclipu_wx_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+ vuint64m8_t vs2, size_t rs1, size_t vl) {
 return __riscv_vnclipu_wx_u32m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vncvt.c b/auto-generated/policy_funcs/llvm-api-tests/vncvt.c
index f2e94c366..fcaa4fa59 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vncvt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vncvt.c
@@ -5,11 +5,13 @@
 #include <riscv_vector.h>
-vint8mf8_t test_vncvt_x_x_w_i8mf8_tu(vint8mf8_t vd, vint16mf4_t vs2, size_t vl) {
+vint8mf8_t test_vncvt_x_x_w_i8mf8_tu(vint8mf8_t vd, vint16mf4_t vs2,
+ size_t vl) {
 return __riscv_vncvt_x_x_w_i8mf8_tu(vd, vs2, vl);
 }
-vint8mf4_t test_vncvt_x_x_w_i8mf4_tu(vint8mf4_t vd, vint16mf2_t vs2, size_t vl) {
+vint8mf4_t test_vncvt_x_x_w_i8mf4_tu(vint8mf4_t vd, vint16mf2_t vs2,
+ size_t vl) {
 return __riscv_vncvt_x_x_w_i8mf4_tu(vd, vs2, vl);
 }
@@ -29,15 +31,18 @@ vint8m4_t test_vncvt_x_x_w_i8m4_tu(vint8m4_t vd, vint16m8_t vs2, size_t vl) {
 return __riscv_vncvt_x_x_w_i8m4_tu(vd, vs2, vl);
 }
-vuint8mf8_t test_vncvt_x_x_w_u8mf8_tu(vuint8mf8_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint8mf8_t test_vncvt_x_x_w_u8mf8_tu(vuint8mf8_t vd, vuint16mf4_t vs2,
+ size_t vl) {
 return __riscv_vncvt_x_x_w_u8mf8_tu(vd, vs2, vl);
 }
-vuint8mf4_t test_vncvt_x_x_w_u8mf4_tu(vuint8mf4_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint8mf4_t test_vncvt_x_x_w_u8mf4_tu(vuint8mf4_t vd, vuint16mf2_t vs2,
+ size_t vl) {
 return __riscv_vncvt_x_x_w_u8mf4_tu(vd, vs2, vl);
 }
-vuint8mf2_t test_vncvt_x_x_w_u8mf2_tu(vuint8mf2_t vd, vuint16m1_t vs2, size_t vl) {
+vuint8mf2_t test_vncvt_x_x_w_u8mf2_tu(vuint8mf2_t vd, vuint16m1_t vs2,
+ size_t vl) {
 return __riscv_vncvt_x_x_w_u8mf2_tu(vd, vs2, vl);
 }
@@ -53,11 +58,13 @@ vuint8m4_t test_vncvt_x_x_w_u8m4_tu(vuint8m4_t vd, vuint16m8_t vs2, size_t vl) {
 return __riscv_vncvt_x_x_w_u8m4_tu(vd, vs2, vl);
 }
-vint16mf4_t test_vncvt_x_x_w_i16mf4_tu(vint16mf4_t vd, vint32mf2_t vs2, size_t vl) {
+vint16mf4_t test_vncvt_x_x_w_i16mf4_tu(vint16mf4_t vd, vint32mf2_t vs2,
+ size_t vl) {
 return __riscv_vncvt_x_x_w_i16mf4_tu(vd, vs2, vl);
 }
-vint16mf2_t test_vncvt_x_x_w_i16mf2_tu(vint16mf2_t vd, vint32m1_t vs2, size_t vl) {
+vint16mf2_t test_vncvt_x_x_w_i16mf2_tu(vint16mf2_t vd, vint32m1_t vs2,
+ size_t vl) {
 return
__riscv_vncvt_x_x_w_i16mf2_tu(vd, vs2, vl); } @@ -73,27 +80,33 @@ vint16m4_t test_vncvt_x_x_w_i16m4_tu(vint16m4_t vd, vint32m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16m4_tu(vd, vs2, vl); } -vuint16mf4_t test_vncvt_x_x_w_u16mf4_tu(vuint16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vncvt_x_x_w_u16mf4_tu(vuint16mf4_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vncvt_x_x_w_u16mf2_tu(vuint16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vuint16mf2_t test_vncvt_x_x_w_u16mf2_tu(vuint16mf2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u16mf2_tu(vd, vs2, vl); } -vuint16m1_t test_vncvt_x_x_w_u16m1_tu(vuint16m1_t vd, vuint32m2_t vs2, size_t vl) { +vuint16m1_t test_vncvt_x_x_w_u16m1_tu(vuint16m1_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u16m1_tu(vd, vs2, vl); } -vuint16m2_t test_vncvt_x_x_w_u16m2_tu(vuint16m2_t vd, vuint32m4_t vs2, size_t vl) { +vuint16m2_t test_vncvt_x_x_w_u16m2_tu(vuint16m2_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u16m2_tu(vd, vs2, vl); } -vuint16m4_t test_vncvt_x_x_w_u16m4_tu(vuint16m4_t vd, vuint32m8_t vs2, size_t vl) { +vuint16m4_t test_vncvt_x_x_w_u16m4_tu(vuint16m4_t vd, vuint32m8_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u16m4_tu(vd, vs2, vl); } -vint32mf2_t test_vncvt_x_x_w_i32mf2_tu(vint32mf2_t vd, vint64m1_t vs2, size_t vl) { +vint32mf2_t test_vncvt_x_x_w_i32mf2_tu(vint32mf2_t vd, vint64m1_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i32mf2_tu(vd, vs2, vl); } @@ -109,378 +122,472 @@ vint32m4_t test_vncvt_x_x_w_i32m4_tu(vint32m4_t vd, vint64m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32m4_tu(vd, vs2, vl); } -vuint32mf2_t test_vncvt_x_x_w_u32mf2_tu(vuint32mf2_t vd, vuint64m1_t vs2, size_t vl) { +vuint32mf2_t test_vncvt_x_x_w_u32mf2_tu(vuint32mf2_t vd, vuint64m1_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vncvt_x_x_w_u32m1_tu(vuint32m1_t vd, vuint64m2_t vs2, size_t vl) { +vuint32m1_t test_vncvt_x_x_w_u32m1_tu(vuint32m1_t vd, vuint64m2_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vncvt_x_x_w_u32m2_tu(vuint32m2_t vd, vuint64m4_t vs2, size_t vl) { +vuint32m2_t test_vncvt_x_x_w_u32m2_tu(vuint32m2_t vd, vuint64m4_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vncvt_x_x_w_u32m4_tu(vuint32m4_t vd, vuint64m8_t vs2, size_t vl) { +vuint32m4_t test_vncvt_x_x_w_u32m4_tu(vuint32m4_t vd, vuint64m8_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u32m4_tu(vd, vs2, vl); } -vint8mf8_t test_vncvt_x_x_w_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, size_t vl) { +vint8mf8_t test_vncvt_x_x_w_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i8mf8_tum(vm, vd, vs2, vl); } -vint8mf4_t test_vncvt_x_x_w_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, size_t vl) { +vint8mf4_t test_vncvt_x_x_w_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i8mf4_tum(vm, vd, vs2, vl); } -vint8mf2_t test_vncvt_x_x_w_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, size_t vl) { +vint8mf2_t test_vncvt_x_x_w_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i8mf2_tum(vm, vd, vs2, vl); } -vint8m1_t test_vncvt_x_x_w_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, size_t vl) { +vint8m1_t test_vncvt_x_x_w_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint16m2_t 
vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i8m1_tum(vm, vd, vs2, vl); } -vint8m2_t test_vncvt_x_x_w_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, size_t vl) { +vint8m2_t test_vncvt_x_x_w_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i8m2_tum(vm, vd, vs2, vl); } -vint8m4_t test_vncvt_x_x_w_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, size_t vl) { +vint8m4_t test_vncvt_x_x_w_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i8m4_tum(vm, vd, vs2, vl); } -vuint8mf8_t test_vncvt_x_x_w_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vncvt_x_x_w_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8mf8_tum(vm, vd, vs2, vl); } -vuint8mf4_t test_vncvt_x_x_w_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vncvt_x_x_w_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8mf4_tum(vm, vd, vs2, vl); } -vuint8mf2_t test_vncvt_x_x_w_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, size_t vl) { +vuint8mf2_t test_vncvt_x_x_w_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8mf2_tum(vm, vd, vs2, vl); } -vuint8m1_t test_vncvt_x_x_w_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, size_t vl) { +vuint8m1_t test_vncvt_x_x_w_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8m1_tum(vm, vd, vs2, vl); } -vuint8m2_t test_vncvt_x_x_w_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, size_t vl) { +vuint8m2_t test_vncvt_x_x_w_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8m2_tum(vm, vd, vs2, vl); } -vuint8m4_t test_vncvt_x_x_w_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, size_t vl) { +vuint8m4_t test_vncvt_x_x_w_u8m4_tum(vbool2_t vm, vuint8m4_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8m4_tum(vm, vd, vs2, vl); } -vint16mf4_t test_vncvt_x_x_w_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, size_t vl) { +vint16mf4_t test_vncvt_x_x_w_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16mf4_tum(vm, vd, vs2, vl); } -vint16mf2_t test_vncvt_x_x_w_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, size_t vl) { +vint16mf2_t test_vncvt_x_x_w_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16mf2_tum(vm, vd, vs2, vl); } -vint16m1_t test_vncvt_x_x_w_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, size_t vl) { +vint16m1_t test_vncvt_x_x_w_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16m1_tum(vm, vd, vs2, vl); } -vint16m2_t test_vncvt_x_x_w_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, size_t vl) { +vint16m2_t test_vncvt_x_x_w_i16m2_tum(vbool8_t vm, vint16m2_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16m2_tum(vm, vd, vs2, vl); } -vint16m4_t test_vncvt_x_x_w_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, size_t vl) { +vint16m4_t test_vncvt_x_x_w_i16m4_tum(vbool4_t vm, vint16m4_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16m4_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vncvt_x_x_w_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vncvt_x_x_w_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + 
vuint32mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vncvt_x_x_w_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vuint16mf2_t test_vncvt_x_x_w_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vncvt_x_x_w_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, size_t vl) { +vuint16m1_t test_vncvt_x_x_w_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vncvt_x_x_w_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, size_t vl) { +vuint16m2_t test_vncvt_x_x_w_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vncvt_x_x_w_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, size_t vl) { +vuint16m4_t test_vncvt_x_x_w_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16m4_tum(vm, vd, vs2, vl); } -vint32mf2_t test_vncvt_x_x_w_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, size_t vl) { +vint32mf2_t test_vncvt_x_x_w_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32mf2_tum(vm, vd, vs2, vl); } -vint32m1_t test_vncvt_x_x_w_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, size_t vl) { +vint32m1_t test_vncvt_x_x_w_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32m1_tum(vm, vd, vs2, vl); } -vint32m2_t test_vncvt_x_x_w_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, size_t vl) { +vint32m2_t test_vncvt_x_x_w_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32m2_tum(vm, vd, vs2, vl); } -vint32m4_t test_vncvt_x_x_w_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, size_t vl) { +vint32m4_t test_vncvt_x_x_w_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32m4_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vncvt_x_x_w_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, size_t vl) { +vuint32mf2_t test_vncvt_x_x_w_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vncvt_x_x_w_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, size_t vl) { +vuint32m1_t test_vncvt_x_x_w_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vncvt_x_x_w_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, size_t vl) { +vuint32m2_t test_vncvt_x_x_w_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint64m4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vncvt_x_x_w_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, size_t vl) { +vuint32m4_t test_vncvt_x_x_w_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u32m4_tum(vm, vd, vs2, vl); } -vint8mf8_t test_vncvt_x_x_w_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, size_t vl) { +vint8mf8_t test_vncvt_x_x_w_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i8mf8_tumu(vm, vd, vs2, vl); } -vint8mf4_t test_vncvt_x_x_w_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, size_t vl) { +vint8mf4_t 
test_vncvt_x_x_w_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i8mf4_tumu(vm, vd, vs2, vl); } -vint8mf2_t test_vncvt_x_x_w_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, size_t vl) { +vint8mf2_t test_vncvt_x_x_w_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i8mf2_tumu(vm, vd, vs2, vl); } -vint8m1_t test_vncvt_x_x_w_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, size_t vl) { +vint8m1_t test_vncvt_x_x_w_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i8m1_tumu(vm, vd, vs2, vl); } -vint8m2_t test_vncvt_x_x_w_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, size_t vl) { +vint8m2_t test_vncvt_x_x_w_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i8m2_tumu(vm, vd, vs2, vl); } -vint8m4_t test_vncvt_x_x_w_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, size_t vl) { +vint8m4_t test_vncvt_x_x_w_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i8m4_tumu(vm, vd, vs2, vl); } -vuint8mf8_t test_vncvt_x_x_w_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vncvt_x_x_w_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8mf8_tumu(vm, vd, vs2, vl); } -vuint8mf4_t test_vncvt_x_x_w_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vncvt_x_x_w_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8mf4_tumu(vm, vd, vs2, vl); } -vuint8mf2_t test_vncvt_x_x_w_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, size_t vl) { +vuint8mf2_t test_vncvt_x_x_w_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8mf2_tumu(vm, vd, vs2, vl); } -vuint8m1_t test_vncvt_x_x_w_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, size_t vl) { +vuint8m1_t test_vncvt_x_x_w_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8m1_tumu(vm, vd, vs2, vl); } -vuint8m2_t test_vncvt_x_x_w_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, size_t vl) { +vuint8m2_t test_vncvt_x_x_w_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8m2_tumu(vm, vd, vs2, vl); } -vuint8m4_t test_vncvt_x_x_w_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, size_t vl) { +vuint8m4_t test_vncvt_x_x_w_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint16m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8m4_tumu(vm, vd, vs2, vl); } -vint16mf4_t test_vncvt_x_x_w_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, size_t vl) { +vint16mf4_t test_vncvt_x_x_w_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16mf4_tumu(vm, vd, vs2, vl); } -vint16mf2_t test_vncvt_x_x_w_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, size_t vl) { +vint16mf2_t test_vncvt_x_x_w_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16mf2_tumu(vm, vd, vs2, vl); } -vint16m1_t test_vncvt_x_x_w_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, size_t vl) { +vint16m1_t test_vncvt_x_x_w_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16m1_tumu(vm, vd, vs2, vl); } -vint16m2_t test_vncvt_x_x_w_i16m2_tumu(vbool8_t vm, vint16m2_t vd, 
vint32m4_t vs2, size_t vl) { +vint16m2_t test_vncvt_x_x_w_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16m2_tumu(vm, vd, vs2, vl); } -vint16m4_t test_vncvt_x_x_w_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, size_t vl) { +vint16m4_t test_vncvt_x_x_w_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vint32m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16m4_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vncvt_x_x_w_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vncvt_x_x_w_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vncvt_x_x_w_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vuint16mf2_t test_vncvt_x_x_w_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vncvt_x_x_w_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, size_t vl) { +vuint16m1_t test_vncvt_x_x_w_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vncvt_x_x_w_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, size_t vl) { +vuint16m2_t test_vncvt_x_x_w_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vncvt_x_x_w_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, size_t vl) { +vuint16m4_t test_vncvt_x_x_w_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16m4_tumu(vm, vd, vs2, vl); } -vint32mf2_t test_vncvt_x_x_w_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, size_t vl) { +vint32mf2_t test_vncvt_x_x_w_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32mf2_tumu(vm, vd, vs2, vl); } -vint32m1_t test_vncvt_x_x_w_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, size_t vl) { +vint32m1_t test_vncvt_x_x_w_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32m1_tumu(vm, vd, vs2, vl); } -vint32m2_t test_vncvt_x_x_w_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, size_t vl) { +vint32m2_t test_vncvt_x_x_w_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32m2_tumu(vm, vd, vs2, vl); } -vint32m4_t test_vncvt_x_x_w_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, size_t vl) { +vint32m4_t test_vncvt_x_x_w_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint64m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32m4_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vncvt_x_x_w_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, size_t vl) { +vuint32mf2_t test_vncvt_x_x_w_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vncvt_x_x_w_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, size_t vl) { +vuint32m1_t test_vncvt_x_x_w_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint64m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vncvt_x_x_w_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, size_t vl) { +vuint32m2_t test_vncvt_x_x_w_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint64m4_t vs2, size_t vl) { return 
__riscv_vncvt_x_x_w_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vncvt_x_x_w_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, size_t vl) { +vuint32m4_t test_vncvt_x_x_w_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint64m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u32m4_tumu(vm, vd, vs2, vl); } -vint8mf8_t test_vncvt_x_x_w_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, size_t vl) { +vint8mf8_t test_vncvt_x_x_w_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i8mf8_mu(vm, vd, vs2, vl); } -vint8mf4_t test_vncvt_x_x_w_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, size_t vl) { +vint8mf4_t test_vncvt_x_x_w_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i8mf4_mu(vm, vd, vs2, vl); } -vint8mf2_t test_vncvt_x_x_w_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, size_t vl) { +vint8mf2_t test_vncvt_x_x_w_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i8mf2_mu(vm, vd, vs2, vl); } -vint8m1_t test_vncvt_x_x_w_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, size_t vl) { +vint8m1_t test_vncvt_x_x_w_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i8m1_mu(vm, vd, vs2, vl); } -vint8m2_t test_vncvt_x_x_w_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, size_t vl) { +vint8m2_t test_vncvt_x_x_w_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i8m2_mu(vm, vd, vs2, vl); } -vint8m4_t test_vncvt_x_x_w_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, size_t vl) { +vint8m4_t test_vncvt_x_x_w_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i8m4_mu(vm, vd, vs2, vl); } -vuint8mf8_t test_vncvt_x_x_w_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, size_t vl) { +vuint8mf8_t test_vncvt_x_x_w_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8mf8_mu(vm, vd, vs2, vl); } -vuint8mf4_t test_vncvt_x_x_w_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, size_t vl) { +vuint8mf4_t test_vncvt_x_x_w_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8mf4_mu(vm, vd, vs2, vl); } -vuint8mf2_t test_vncvt_x_x_w_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, size_t vl) { +vuint8mf2_t test_vncvt_x_x_w_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u8mf2_mu(vm, vd, vs2, vl); } -vuint8m1_t test_vncvt_x_x_w_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, size_t vl) { +vuint8m1_t test_vncvt_x_x_w_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u8m1_mu(vm, vd, vs2, vl); } -vuint8m2_t test_vncvt_x_x_w_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, size_t vl) { +vuint8m2_t test_vncvt_x_x_w_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u8m2_mu(vm, vd, vs2, vl); } -vuint8m4_t test_vncvt_x_x_w_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, size_t vl) { +vuint8m4_t test_vncvt_x_x_w_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_u8m4_mu(vm, vd, vs2, vl); } -vint16mf4_t test_vncvt_x_x_w_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, size_t vl) { +vint16mf4_t test_vncvt_x_x_w_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16mf4_mu(vm, vd, vs2, 
vl); } -vint16mf2_t test_vncvt_x_x_w_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, size_t vl) { +vint16mf2_t test_vncvt_x_x_w_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16mf2_mu(vm, vd, vs2, vl); } -vint16m1_t test_vncvt_x_x_w_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, size_t vl) { +vint16m1_t test_vncvt_x_x_w_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i16m1_mu(vm, vd, vs2, vl); } -vint16m2_t test_vncvt_x_x_w_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, size_t vl) { +vint16m2_t test_vncvt_x_x_w_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i16m2_mu(vm, vd, vs2, vl); } -vint16m4_t test_vncvt_x_x_w_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, size_t vl) { +vint16m4_t test_vncvt_x_x_w_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i16m4_mu(vm, vd, vs2, vl); } -vuint16mf4_t test_vncvt_x_x_w_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, size_t vl) { +vuint16mf4_t test_vncvt_x_x_w_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vncvt_x_x_w_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, size_t vl) { +vuint16mf2_t test_vncvt_x_x_w_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vncvt_x_x_w_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, size_t vl) { +vuint16m1_t test_vncvt_x_x_w_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vncvt_x_x_w_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, size_t vl) { +vuint16m2_t test_vncvt_x_x_w_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vncvt_x_x_w_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, size_t vl) { +vuint16m4_t test_vncvt_x_x_w_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u16m4_mu(vm, vd, vs2, vl); } -vint32mf2_t test_vncvt_x_x_w_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, size_t vl) { +vint32mf2_t test_vncvt_x_x_w_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32mf2_mu(vm, vd, vs2, vl); } -vint32m1_t test_vncvt_x_x_w_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, size_t vl) { +vint32m1_t test_vncvt_x_x_w_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vint64m2_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32m1_mu(vm, vd, vs2, vl); } -vint32m2_t test_vncvt_x_x_w_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, size_t vl) { +vint32m2_t test_vncvt_x_x_w_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vint64m4_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_i32m2_mu(vm, vd, vs2, vl); } -vint32m4_t test_vncvt_x_x_w_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, size_t vl) { +vint32m4_t test_vncvt_x_x_w_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + size_t vl) { return __riscv_vncvt_x_x_w_i32m4_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vncvt_x_x_w_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, size_t vl) { +vuint32mf2_t test_vncvt_x_x_w_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, size_t vl) { return __riscv_vncvt_x_x_w_u32mf2_mu(vm, 
vd, vs2, vl);
 }
-vuint32m1_t test_vncvt_x_x_w_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, size_t vl) {
+vuint32m1_t test_vncvt_x_x_w_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+ vuint64m2_t vs2, size_t vl) {
 return __riscv_vncvt_x_x_w_u32m1_mu(vm, vd, vs2, vl);
 }
-vuint32m2_t test_vncvt_x_x_w_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, size_t vl) {
+vuint32m2_t test_vncvt_x_x_w_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+ vuint64m4_t vs2, size_t vl) {
 return __riscv_vncvt_x_x_w_u32m2_mu(vm, vd, vs2, vl);
 }
-vuint32m4_t test_vncvt_x_x_w_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, size_t vl) {
+vuint32m4_t test_vncvt_x_x_w_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+ vuint64m8_t vs2, size_t vl) {
 return __riscv_vncvt_x_x_w_u32m4_mu(vm, vd, vs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vneg.c b/auto-generated/policy_funcs/llvm-api-tests/vneg.c
index 0f5824e56..a2c70eb7c 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vneg.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vneg.c
@@ -94,266 +94,332 @@ vint64m8_t test_vneg_v_i64m8_tu(vint64m8_t vd, vint64m8_t vs, size_t vl) {
 return __riscv_vneg_v_i64m8_tu(vd, vs, vl);
 }
-vint8mf8_t test_vneg_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs, size_t vl) {
+vint8mf8_t test_vneg_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs,
+ size_t vl) {
 return __riscv_vneg_v_i8mf8_tum(vm, vd, vs, vl);
 }
-vint8mf4_t test_vneg_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs, size_t vl) {
+vint8mf4_t test_vneg_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs,
+ size_t vl) {
 return __riscv_vneg_v_i8mf4_tum(vm, vd, vs, vl);
 }
-vint8mf2_t test_vneg_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs, size_t vl) {
+vint8mf2_t test_vneg_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs,
+ size_t vl) {
 return __riscv_vneg_v_i8mf2_tum(vm, vd, vs, vl);
 }
-vint8m1_t test_vneg_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs, size_t vl) {
+vint8m1_t test_vneg_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs,
+ size_t vl) {
 return __riscv_vneg_v_i8m1_tum(vm, vd, vs, vl);
 }
-vint8m2_t test_vneg_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs, size_t vl) {
+vint8m2_t test_vneg_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs,
+ size_t vl) {
 return __riscv_vneg_v_i8m2_tum(vm, vd, vs, vl);
 }
-vint8m4_t test_vneg_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs, size_t vl) {
+vint8m4_t test_vneg_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs,
+ size_t vl) {
 return __riscv_vneg_v_i8m4_tum(vm, vd, vs, vl);
 }
-vint8m8_t test_vneg_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs, size_t vl) {
+vint8m8_t test_vneg_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs,
+ size_t vl) {
 return __riscv_vneg_v_i8m8_tum(vm, vd, vs, vl);
 }
-vint16mf4_t test_vneg_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs, size_t vl) {
+vint16mf4_t test_vneg_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs,
+ size_t vl) {
 return __riscv_vneg_v_i16mf4_tum(vm, vd, vs, vl);
 }
-vint16mf2_t test_vneg_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs, size_t vl) {
+vint16mf2_t test_vneg_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs,
+ size_t vl) {
 return __riscv_vneg_v_i16mf2_tum(vm, vd, vs, vl);
 }
-vint16m1_t test_vneg_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs, size_t vl) {
+vint16m1_t test_vneg_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs,
+ size_t vl) {
 return __riscv_vneg_v_i16m1_tum(vm, vd, vs, vl);
 }
-vint16m2_t test_vneg_v_i16m2_tum(vbool8_t
vm, vint16m2_t vd, vint16m2_t vs, size_t vl) { +vint16m2_t test_vneg_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs, + size_t vl) { return __riscv_vneg_v_i16m2_tum(vm, vd, vs, vl); } -vint16m4_t test_vneg_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs, size_t vl) { +vint16m4_t test_vneg_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs, + size_t vl) { return __riscv_vneg_v_i16m4_tum(vm, vd, vs, vl); } -vint16m8_t test_vneg_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, size_t vl) { +vint16m8_t test_vneg_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, + size_t vl) { return __riscv_vneg_v_i16m8_tum(vm, vd, vs, vl); } -vint32mf2_t test_vneg_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs, size_t vl) { +vint32mf2_t test_vneg_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs, + size_t vl) { return __riscv_vneg_v_i32mf2_tum(vm, vd, vs, vl); } -vint32m1_t test_vneg_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, size_t vl) { +vint32m1_t test_vneg_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, + size_t vl) { return __riscv_vneg_v_i32m1_tum(vm, vd, vs, vl); } -vint32m2_t test_vneg_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, size_t vl) { +vint32m2_t test_vneg_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, + size_t vl) { return __riscv_vneg_v_i32m2_tum(vm, vd, vs, vl); } -vint32m4_t test_vneg_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, size_t vl) { +vint32m4_t test_vneg_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, + size_t vl) { return __riscv_vneg_v_i32m4_tum(vm, vd, vs, vl); } -vint32m8_t test_vneg_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs, size_t vl) { +vint32m8_t test_vneg_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs, + size_t vl) { return __riscv_vneg_v_i32m8_tum(vm, vd, vs, vl); } -vint64m1_t test_vneg_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs, size_t vl) { +vint64m1_t test_vneg_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs, + size_t vl) { return __riscv_vneg_v_i64m1_tum(vm, vd, vs, vl); } -vint64m2_t test_vneg_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, size_t vl) { +vint64m2_t test_vneg_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, + size_t vl) { return __riscv_vneg_v_i64m2_tum(vm, vd, vs, vl); } -vint64m4_t test_vneg_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, size_t vl) { +vint64m4_t test_vneg_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, + size_t vl) { return __riscv_vneg_v_i64m4_tum(vm, vd, vs, vl); } -vint64m8_t test_vneg_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, size_t vl) { +vint64m8_t test_vneg_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, + size_t vl) { return __riscv_vneg_v_i64m8_tum(vm, vd, vs, vl); } -vint8mf8_t test_vneg_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs, size_t vl) { +vint8mf8_t test_vneg_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs, + size_t vl) { return __riscv_vneg_v_i8mf8_tumu(vm, vd, vs, vl); } -vint8mf4_t test_vneg_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs, size_t vl) { +vint8mf4_t test_vneg_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs, + size_t vl) { return __riscv_vneg_v_i8mf4_tumu(vm, vd, vs, vl); } -vint8mf2_t test_vneg_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs, size_t vl) { +vint8mf2_t test_vneg_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs, + size_t vl) { return __riscv_vneg_v_i8mf2_tumu(vm, vd, vs, vl); } -vint8m1_t test_vneg_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, 
vint8m1_t vs, size_t vl) { +vint8m1_t test_vneg_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs, + size_t vl) { return __riscv_vneg_v_i8m1_tumu(vm, vd, vs, vl); } -vint8m2_t test_vneg_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs, size_t vl) { +vint8m2_t test_vneg_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs, + size_t vl) { return __riscv_vneg_v_i8m2_tumu(vm, vd, vs, vl); } -vint8m4_t test_vneg_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs, size_t vl) { +vint8m4_t test_vneg_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs, + size_t vl) { return __riscv_vneg_v_i8m4_tumu(vm, vd, vs, vl); } -vint8m8_t test_vneg_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs, size_t vl) { +vint8m8_t test_vneg_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs, + size_t vl) { return __riscv_vneg_v_i8m8_tumu(vm, vd, vs, vl); } -vint16mf4_t test_vneg_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs, size_t vl) { +vint16mf4_t test_vneg_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs, size_t vl) { return __riscv_vneg_v_i16mf4_tumu(vm, vd, vs, vl); } -vint16mf2_t test_vneg_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs, size_t vl) { +vint16mf2_t test_vneg_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs, size_t vl) { return __riscv_vneg_v_i16mf2_tumu(vm, vd, vs, vl); } -vint16m1_t test_vneg_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs, size_t vl) { +vint16m1_t test_vneg_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs, + size_t vl) { return __riscv_vneg_v_i16m1_tumu(vm, vd, vs, vl); } -vint16m2_t test_vneg_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs, size_t vl) { +vint16m2_t test_vneg_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs, + size_t vl) { return __riscv_vneg_v_i16m2_tumu(vm, vd, vs, vl); } -vint16m4_t test_vneg_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs, size_t vl) { +vint16m4_t test_vneg_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs, + size_t vl) { return __riscv_vneg_v_i16m4_tumu(vm, vd, vs, vl); } -vint16m8_t test_vneg_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, size_t vl) { +vint16m8_t test_vneg_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, + size_t vl) { return __riscv_vneg_v_i16m8_tumu(vm, vd, vs, vl); } -vint32mf2_t test_vneg_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs, size_t vl) { +vint32mf2_t test_vneg_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs, size_t vl) { return __riscv_vneg_v_i32mf2_tumu(vm, vd, vs, vl); } -vint32m1_t test_vneg_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, size_t vl) { +vint32m1_t test_vneg_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, + size_t vl) { return __riscv_vneg_v_i32m1_tumu(vm, vd, vs, vl); } -vint32m2_t test_vneg_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, size_t vl) { +vint32m2_t test_vneg_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, + size_t vl) { return __riscv_vneg_v_i32m2_tumu(vm, vd, vs, vl); } -vint32m4_t test_vneg_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, size_t vl) { +vint32m4_t test_vneg_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, + size_t vl) { return __riscv_vneg_v_i32m4_tumu(vm, vd, vs, vl); } -vint32m8_t test_vneg_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs, size_t vl) { +vint32m8_t test_vneg_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs, + size_t vl) { return __riscv_vneg_v_i32m8_tumu(vm, vd, vs, vl); } -vint64m1_t test_vneg_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, 
vint64m1_t vs, size_t vl) { +vint64m1_t test_vneg_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs, + size_t vl) { return __riscv_vneg_v_i64m1_tumu(vm, vd, vs, vl); } -vint64m2_t test_vneg_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, size_t vl) { +vint64m2_t test_vneg_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, + size_t vl) { return __riscv_vneg_v_i64m2_tumu(vm, vd, vs, vl); } -vint64m4_t test_vneg_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, size_t vl) { +vint64m4_t test_vneg_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, + size_t vl) { return __riscv_vneg_v_i64m4_tumu(vm, vd, vs, vl); } -vint64m8_t test_vneg_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, size_t vl) { +vint64m8_t test_vneg_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, + size_t vl) { return __riscv_vneg_v_i64m8_tumu(vm, vd, vs, vl); } -vint8mf8_t test_vneg_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs, size_t vl) { +vint8mf8_t test_vneg_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs, + size_t vl) { return __riscv_vneg_v_i8mf8_mu(vm, vd, vs, vl); } -vint8mf4_t test_vneg_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs, size_t vl) { +vint8mf4_t test_vneg_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs, + size_t vl) { return __riscv_vneg_v_i8mf4_mu(vm, vd, vs, vl); } -vint8mf2_t test_vneg_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs, size_t vl) { +vint8mf2_t test_vneg_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs, + size_t vl) { return __riscv_vneg_v_i8mf2_mu(vm, vd, vs, vl); } -vint8m1_t test_vneg_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs, size_t vl) { +vint8m1_t test_vneg_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs, + size_t vl) { return __riscv_vneg_v_i8m1_mu(vm, vd, vs, vl); } -vint8m2_t test_vneg_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs, size_t vl) { +vint8m2_t test_vneg_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs, + size_t vl) { return __riscv_vneg_v_i8m2_mu(vm, vd, vs, vl); } -vint8m4_t test_vneg_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs, size_t vl) { +vint8m4_t test_vneg_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs, + size_t vl) { return __riscv_vneg_v_i8m4_mu(vm, vd, vs, vl); } -vint8m8_t test_vneg_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs, size_t vl) { +vint8m8_t test_vneg_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs, + size_t vl) { return __riscv_vneg_v_i8m8_mu(vm, vd, vs, vl); } -vint16mf4_t test_vneg_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs, size_t vl) { +vint16mf4_t test_vneg_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs, + size_t vl) { return __riscv_vneg_v_i16mf4_mu(vm, vd, vs, vl); } -vint16mf2_t test_vneg_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs, size_t vl) { +vint16mf2_t test_vneg_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs, + size_t vl) { return __riscv_vneg_v_i16mf2_mu(vm, vd, vs, vl); } -vint16m1_t test_vneg_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs, size_t vl) { +vint16m1_t test_vneg_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs, + size_t vl) { return __riscv_vneg_v_i16m1_mu(vm, vd, vs, vl); } -vint16m2_t test_vneg_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs, size_t vl) { +vint16m2_t test_vneg_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs, + size_t vl) { return __riscv_vneg_v_i16m2_mu(vm, vd, vs, vl); } -vint16m4_t test_vneg_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs, size_t vl) { +vint16m4_t test_vneg_v_i16m4_mu(vbool4_t vm, 
vint16m4_t vd, vint16m4_t vs, + size_t vl) { return __riscv_vneg_v_i16m4_mu(vm, vd, vs, vl); } -vint16m8_t test_vneg_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, size_t vl) { +vint16m8_t test_vneg_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, + size_t vl) { return __riscv_vneg_v_i16m8_mu(vm, vd, vs, vl); } -vint32mf2_t test_vneg_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs, size_t vl) { +vint32mf2_t test_vneg_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs, + size_t vl) { return __riscv_vneg_v_i32mf2_mu(vm, vd, vs, vl); } -vint32m1_t test_vneg_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, size_t vl) { +vint32m1_t test_vneg_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, + size_t vl) { return __riscv_vneg_v_i32m1_mu(vm, vd, vs, vl); } -vint32m2_t test_vneg_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, size_t vl) { +vint32m2_t test_vneg_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, + size_t vl) { return __riscv_vneg_v_i32m2_mu(vm, vd, vs, vl); } -vint32m4_t test_vneg_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, size_t vl) { +vint32m4_t test_vneg_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, + size_t vl) { return __riscv_vneg_v_i32m4_mu(vm, vd, vs, vl); } -vint32m8_t test_vneg_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs, size_t vl) { +vint32m8_t test_vneg_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs, + size_t vl) { return __riscv_vneg_v_i32m8_mu(vm, vd, vs, vl); } -vint64m1_t test_vneg_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs, size_t vl) { +vint64m1_t test_vneg_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs, + size_t vl) { return __riscv_vneg_v_i64m1_mu(vm, vd, vs, vl); } -vint64m2_t test_vneg_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, size_t vl) { +vint64m2_t test_vneg_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, + size_t vl) { return __riscv_vneg_v_i64m2_mu(vm, vd, vs, vl); } -vint64m4_t test_vneg_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, size_t vl) { +vint64m4_t test_vneg_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, + size_t vl) { return __riscv_vneg_v_i64m4_mu(vm, vd, vs, vl); } -vint64m8_t test_vneg_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, size_t vl) { +vint64m8_t test_vneg_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, + size_t vl) { return __riscv_vneg_v_i64m8_mu(vm, vd, vs, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vnmsac.c b/auto-generated/policy_funcs/llvm-api-tests/vnmsac.c index 4303fe450..0efa87093 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vnmsac.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vnmsac.c @@ -6,1410 +6,1852 @@ #include -vint8mf8_t test_vnmsac_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vnmsac_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8mf8_tu(vd, vs1, vs2, vl); } -vint8mf8_t test_vnmsac_vx_i8mf8_tu(vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vnmsac_vx_i8mf8_tu(vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i8mf8_tu(vd, rs1, vs2, vl); } -vint8mf4_t test_vnmsac_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vnmsac_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8mf4_tu(vd, vs1, vs2, vl); } -vint8mf4_t test_vnmsac_vx_i8mf4_tu(vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { 
+vint8mf4_t test_vnmsac_vx_i8mf4_tu(vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i8mf4_tu(vd, rs1, vs2, vl); } -vint8mf2_t test_vnmsac_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vnmsac_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8mf2_tu(vd, vs1, vs2, vl); } -vint8mf2_t test_vnmsac_vx_i8mf2_tu(vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vnmsac_vx_i8mf2_tu(vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i8mf2_tu(vd, rs1, vs2, vl); } -vint8m1_t test_vnmsac_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsac_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i8m1_tu(vd, vs1, vs2, vl); } -vint8m1_t test_vnmsac_vx_i8m1_tu(vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsac_vx_i8m1_tu(vint8m1_t vd, int8_t rs1, vint8m1_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i8m1_tu(vd, rs1, vs2, vl); } -vint8m2_t test_vnmsac_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsac_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i8m2_tu(vd, vs1, vs2, vl); } -vint8m2_t test_vnmsac_vx_i8m2_tu(vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsac_vx_i8m2_tu(vint8m2_t vd, int8_t rs1, vint8m2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i8m2_tu(vd, rs1, vs2, vl); } -vint8m4_t test_vnmsac_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsac_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i8m4_tu(vd, vs1, vs2, vl); } -vint8m4_t test_vnmsac_vx_i8m4_tu(vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsac_vx_i8m4_tu(vint8m4_t vd, int8_t rs1, vint8m4_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i8m4_tu(vd, rs1, vs2, vl); } -vint8m8_t test_vnmsac_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vnmsac_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i8m8_tu(vd, vs1, vs2, vl); } -vint8m8_t test_vnmsac_vx_i8m8_tu(vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vnmsac_vx_i8m8_tu(vint8m8_t vd, int8_t rs1, vint8m8_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i8m8_tu(vd, rs1, vs2, vl); } -vint16mf4_t test_vnmsac_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsac_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16mf4_tu(vd, vs1, vs2, vl); } -vint16mf4_t test_vnmsac_vx_i16mf4_tu(vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsac_vx_i16mf4_tu(vint16mf4_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16mf4_tu(vd, rs1, vs2, vl); } -vint16mf2_t test_vnmsac_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsac_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16mf2_tu(vd, vs1, vs2, vl); } -vint16mf2_t test_vnmsac_vx_i16mf2_tu(vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsac_vx_i16mf2_tu(vint16mf2_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16mf2_tu(vd, rs1, vs2, vl); } -vint16m1_t 
test_vnmsac_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsac_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m1_tu(vd, vs1, vs2, vl); } -vint16m1_t test_vnmsac_vx_i16m1_tu(vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsac_vx_i16m1_tu(vint16m1_t vd, int16_t rs1, vint16m1_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i16m1_tu(vd, rs1, vs2, vl); } -vint16m2_t test_vnmsac_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsac_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m2_tu(vd, vs1, vs2, vl); } -vint16m2_t test_vnmsac_vx_i16m2_tu(vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsac_vx_i16m2_tu(vint16m2_t vd, int16_t rs1, vint16m2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i16m2_tu(vd, rs1, vs2, vl); } -vint16m4_t test_vnmsac_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsac_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m4_tu(vd, vs1, vs2, vl); } -vint16m4_t test_vnmsac_vx_i16m4_tu(vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsac_vx_i16m4_tu(vint16m4_t vd, int16_t rs1, vint16m4_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i16m4_tu(vd, rs1, vs2, vl); } -vint16m8_t test_vnmsac_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vnmsac_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs1, + vint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m8_tu(vd, vs1, vs2, vl); } -vint16m8_t test_vnmsac_vx_i16m8_tu(vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vnmsac_vx_i16m8_tu(vint16m8_t vd, int16_t rs1, vint16m8_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i16m8_tu(vd, rs1, vs2, vl); } -vint32mf2_t test_vnmsac_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsac_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32mf2_tu(vd, vs1, vs2, vl); } -vint32mf2_t test_vnmsac_vx_i32mf2_tu(vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsac_vx_i32mf2_tu(vint32mf2_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32mf2_tu(vd, rs1, vs2, vl); } -vint32m1_t test_vnmsac_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsac_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m1_tu(vd, vs1, vs2, vl); } -vint32m1_t test_vnmsac_vx_i32m1_tu(vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsac_vx_i32m1_tu(vint32m1_t vd, int32_t rs1, vint32m1_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i32m1_tu(vd, rs1, vs2, vl); } -vint32m2_t test_vnmsac_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsac_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m2_tu(vd, vs1, vs2, vl); } -vint32m2_t test_vnmsac_vx_i32m2_tu(vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsac_vx_i32m2_tu(vint32m2_t vd, int32_t rs1, vint32m2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i32m2_tu(vd, rs1, vs2, vl); } -vint32m4_t test_vnmsac_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { 
+vint32m4_t test_vnmsac_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m4_tu(vd, vs1, vs2, vl); } -vint32m4_t test_vnmsac_vx_i32m4_tu(vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vnmsac_vx_i32m4_tu(vint32m4_t vd, int32_t rs1, vint32m4_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i32m4_tu(vd, rs1, vs2, vl); } -vint32m8_t test_vnmsac_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsac_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs1, + vint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m8_tu(vd, vs1, vs2, vl); } -vint32m8_t test_vnmsac_vx_i32m8_tu(vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsac_vx_i32m8_tu(vint32m8_t vd, int32_t rs1, vint32m8_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i32m8_tu(vd, rs1, vs2, vl); } -vint64m1_t test_vnmsac_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsac_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs1, + vint64m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m1_tu(vd, vs1, vs2, vl); } -vint64m1_t test_vnmsac_vx_i64m1_tu(vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsac_vx_i64m1_tu(vint64m1_t vd, int64_t rs1, vint64m1_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i64m1_tu(vd, rs1, vs2, vl); } -vint64m2_t test_vnmsac_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsac_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs1, + vint64m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m2_tu(vd, vs1, vs2, vl); } -vint64m2_t test_vnmsac_vx_i64m2_tu(vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsac_vx_i64m2_tu(vint64m2_t vd, int64_t rs1, vint64m2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i64m2_tu(vd, rs1, vs2, vl); } -vint64m4_t test_vnmsac_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsac_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs1, + vint64m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m4_tu(vd, vs1, vs2, vl); } -vint64m4_t test_vnmsac_vx_i64m4_tu(vint64m4_t vd, int64_t rs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsac_vx_i64m4_tu(vint64m4_t vd, int64_t rs1, vint64m4_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i64m4_tu(vd, rs1, vs2, vl); } -vint64m8_t test_vnmsac_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsac_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs1, + vint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m8_tu(vd, vs1, vs2, vl); } -vint64m8_t test_vnmsac_vx_i64m8_tu(vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsac_vx_i64m8_tu(vint64m8_t vd, int64_t rs1, vint64m8_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i64m8_tu(vd, rs1, vs2, vl); } -vuint8mf8_t test_vnmsac_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsac_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8mf8_tu(vd, vs1, vs2, vl); } -vuint8mf8_t test_vnmsac_vx_u8mf8_tu(vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsac_vx_u8mf8_tu(vuint8mf8_t vd, uint8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf8_tu(vd, rs1, vs2, vl); } -vuint8mf4_t test_vnmsac_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsac_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs1, + vuint8mf4_t vs2, 
size_t vl) { return __riscv_vnmsac_vv_u8mf4_tu(vd, vs1, vs2, vl); } -vuint8mf4_t test_vnmsac_vx_u8mf4_tu(vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsac_vx_u8mf4_tu(vuint8mf4_t vd, uint8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf4_tu(vd, rs1, vs2, vl); } -vuint8mf2_t test_vnmsac_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsac_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8mf2_tu(vd, vs1, vs2, vl); } -vuint8mf2_t test_vnmsac_vx_u8mf2_tu(vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsac_vx_u8mf2_tu(vuint8mf2_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf2_tu(vd, rs1, vs2, vl); } -vuint8m1_t test_vnmsac_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vnmsac_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8m1_tu(vd, vs1, vs2, vl); } -vuint8m1_t test_vnmsac_vx_u8m1_tu(vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vnmsac_vx_u8m1_tu(vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u8m1_tu(vd, rs1, vs2, vl); } -vuint8m2_t test_vnmsac_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsac_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8m2_tu(vd, vs1, vs2, vl); } -vuint8m2_t test_vnmsac_vx_u8m2_tu(vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsac_vx_u8m2_tu(vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u8m2_tu(vd, rs1, vs2, vl); } -vuint8m4_t test_vnmsac_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vnmsac_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8m4_tu(vd, vs1, vs2, vl); } -vuint8m4_t test_vnmsac_vx_u8m4_tu(vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vnmsac_vx_u8m4_tu(vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u8m4_tu(vd, rs1, vs2, vl); } -vuint8m8_t test_vnmsac_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vnmsac_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8m8_tu(vd, vs1, vs2, vl); } -vuint8m8_t test_vnmsac_vx_u8m8_tu(vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vnmsac_vx_u8m8_tu(vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u8m8_tu(vd, rs1, vs2, vl); } -vuint16mf4_t test_vnmsac_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vnmsac_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs1, + vuint16mf4_t vs2, size_t vl) { return __riscv_vnmsac_vv_u16mf4_tu(vd, vs1, vs2, vl); } -vuint16mf4_t test_vnmsac_vx_u16mf4_tu(vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vnmsac_vx_u16mf4_tu(vuint16mf4_t vd, uint16_t rs1, + vuint16mf4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16mf4_tu(vd, rs1, vs2, vl); } -vuint16mf2_t test_vnmsac_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vnmsac_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vnmsac_vv_u16mf2_tu(vd, vs1, 
vs2, vl); } -vuint16mf2_t test_vnmsac_vx_u16mf2_tu(vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vnmsac_vx_u16mf2_tu(vuint16mf2_t vd, uint16_t rs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16mf2_tu(vd, rs1, vs2, vl); } -vuint16m1_t test_vnmsac_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vnmsac_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_u16m1_tu(vd, vs1, vs2, vl); } -vuint16m1_t test_vnmsac_vx_u16m1_tu(vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vnmsac_vx_u16m1_tu(vuint16m1_t vd, uint16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m1_tu(vd, rs1, vs2, vl); } -vuint16m2_t test_vnmsac_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vnmsac_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_u16m2_tu(vd, vs1, vs2, vl); } -vuint16m2_t test_vnmsac_vx_u16m2_tu(vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vnmsac_vx_u16m2_tu(vuint16m2_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m2_tu(vd, rs1, vs2, vl); } -vuint16m4_t test_vnmsac_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vnmsac_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_u16m4_tu(vd, vs1, vs2, vl); } -vuint16m4_t test_vnmsac_vx_u16m4_tu(vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vnmsac_vx_u16m4_tu(vuint16m4_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m4_tu(vd, rs1, vs2, vl); } -vuint16m8_t test_vnmsac_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vnmsac_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_u16m8_tu(vd, vs1, vs2, vl); } -vuint16m8_t test_vnmsac_vx_u16m8_tu(vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vnmsac_vx_u16m8_tu(vuint16m8_t vd, uint16_t rs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m8_tu(vd, rs1, vs2, vl); } -vuint32mf2_t test_vnmsac_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vnmsac_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vnmsac_vv_u32mf2_tu(vd, vs1, vs2, vl); } -vuint32mf2_t test_vnmsac_vx_u32mf2_tu(vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vnmsac_vx_u32mf2_tu(vuint32mf2_t vd, uint32_t rs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32mf2_tu(vd, rs1, vs2, vl); } -vuint32m1_t test_vnmsac_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vnmsac_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_u32m1_tu(vd, vs1, vs2, vl); } -vuint32m1_t test_vnmsac_vx_u32m1_tu(vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vnmsac_vx_u32m1_tu(vuint32m1_t vd, uint32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m1_tu(vd, rs1, vs2, vl); } -vuint32m2_t test_vnmsac_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vnmsac_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs1, + vuint32m2_t vs2, size_t vl) { return 
__riscv_vnmsac_vv_u32m2_tu(vd, vs1, vs2, vl); } -vuint32m2_t test_vnmsac_vx_u32m2_tu(vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vnmsac_vx_u32m2_tu(vuint32m2_t vd, uint32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m2_tu(vd, rs1, vs2, vl); } -vuint32m4_t test_vnmsac_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vnmsac_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_u32m4_tu(vd, vs1, vs2, vl); } -vuint32m4_t test_vnmsac_vx_u32m4_tu(vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vnmsac_vx_u32m4_tu(vuint32m4_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m4_tu(vd, rs1, vs2, vl); } -vuint32m8_t test_vnmsac_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vnmsac_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs1, + vuint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_u32m8_tu(vd, vs1, vs2, vl); } -vuint32m8_t test_vnmsac_vx_u32m8_tu(vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vnmsac_vx_u32m8_tu(vuint32m8_t vd, uint32_t rs1, + vuint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m8_tu(vd, rs1, vs2, vl); } -vuint64m1_t test_vnmsac_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vnmsac_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs1, + vuint64m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_u64m1_tu(vd, vs1, vs2, vl); } -vuint64m1_t test_vnmsac_vx_u64m1_tu(vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vnmsac_vx_u64m1_tu(vuint64m1_t vd, uint64_t rs1, + vuint64m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m1_tu(vd, rs1, vs2, vl); } -vuint64m2_t test_vnmsac_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vnmsac_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs1, + vuint64m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_u64m2_tu(vd, vs1, vs2, vl); } -vuint64m2_t test_vnmsac_vx_u64m2_tu(vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vnmsac_vx_u64m2_tu(vuint64m2_t vd, uint64_t rs1, + vuint64m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m2_tu(vd, rs1, vs2, vl); } -vuint64m4_t test_vnmsac_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vnmsac_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs1, + vuint64m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_u64m4_tu(vd, vs1, vs2, vl); } -vuint64m4_t test_vnmsac_vx_u64m4_tu(vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vnmsac_vx_u64m4_tu(vuint64m4_t vd, uint64_t rs1, + vuint64m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m4_tu(vd, rs1, vs2, vl); } -vuint64m8_t test_vnmsac_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vnmsac_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs1, + vuint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_u64m8_tu(vd, vs1, vs2, vl); } -vuint64m8_t test_vnmsac_vx_u64m8_tu(vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vnmsac_vx_u64m8_tu(vuint64m8_t vd, uint64_t rs1, + vuint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m8_tu(vd, rs1, vs2, vl); } -vint8mf8_t test_vnmsac_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vnmsac_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, + vint8mf8_t 
vs2, size_t vl) { return __riscv_vnmsac_vv_i8mf8_tum(vm, vd, vs1, vs2, vl); } -vint8mf8_t test_vnmsac_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vnmsac_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, int8_t rs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8mf8_tum(vm, vd, rs1, vs2, vl); } -vint8mf4_t test_vnmsac_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vnmsac_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8mf4_tum(vm, vd, vs1, vs2, vl); } -vint8mf4_t test_vnmsac_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vnmsac_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, int8_t rs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8mf4_tum(vm, vd, rs1, vs2, vl); } -vint8mf2_t test_vnmsac_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vnmsac_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8mf2_tum(vm, vd, vs1, vs2, vl); } -vint8mf2_t test_vnmsac_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vnmsac_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, int8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8mf2_tum(vm, vd, rs1, vs2, vl); } -vint8m1_t test_vnmsac_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsac_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, + vint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m1_tum(vm, vd, vs1, vs2, vl); } -vint8m1_t test_vnmsac_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsac_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, int8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8m1_tum(vm, vd, rs1, vs2, vl); } -vint8m2_t test_vnmsac_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsac_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, + vint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m2_tum(vm, vd, vs1, vs2, vl); } -vint8m2_t test_vnmsac_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsac_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, int8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8m2_tum(vm, vd, rs1, vs2, vl); } -vint8m4_t test_vnmsac_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsac_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, + vint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m4_tum(vm, vd, vs1, vs2, vl); } -vint8m4_t test_vnmsac_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsac_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, int8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8m4_tum(vm, vd, rs1, vs2, vl); } -vint8m8_t test_vnmsac_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vnmsac_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, + vint8m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m8_tum(vm, vd, vs1, vs2, vl); } -vint8m8_t test_vnmsac_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vnmsac_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, int8_t rs1, + vint8m8_t vs2, size_t 
vl) { return __riscv_vnmsac_vx_i8m8_tum(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vnmsac_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsac_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i16mf4_tum(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vnmsac_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsac_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16mf4_tum(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vnmsac_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsac_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i16mf2_tum(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vnmsac_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsac_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16mf2_tum(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vnmsac_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsac_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m1_tum(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vnmsac_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsac_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, int16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m1_tum(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vnmsac_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsac_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m2_tum(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vnmsac_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsac_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, int16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m2_tum(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vnmsac_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsac_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m4_tum(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vnmsac_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsac_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, int16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m4_tum(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vnmsac_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vnmsac_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, + vint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m8_tum(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vnmsac_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vnmsac_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, int16_t rs1, + vint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m8_tum(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vnmsac_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs1, 
vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsac_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i32mf2_tum(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vnmsac_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsac_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32mf2_tum(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vnmsac_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsac_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m1_tum(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vnmsac_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsac_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, int32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m1_tum(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vnmsac_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsac_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m2_tum(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vnmsac_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsac_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, int32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m2_tum(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vnmsac_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vnmsac_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m4_tum(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vnmsac_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vnmsac_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, int32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m4_tum(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vnmsac_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsac_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, + vint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m8_tum(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vnmsac_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsac_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, int32_t rs1, + vint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m8_tum(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vnmsac_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsac_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, + vint64m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m1_tum(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vnmsac_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsac_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, int64_t rs1, + vint64m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_i64m1_tum(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vnmsac_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsac_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, + vint64m2_t vs2, size_t vl) { return 
__riscv_vnmsac_vv_i64m2_tum(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vnmsac_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsac_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, int64_t rs1, + vint64m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i64m2_tum(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vnmsac_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsac_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, + vint64m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m4_tum(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vnmsac_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, int64_t rs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsac_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, int64_t rs1, + vint64m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i64m4_tum(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vnmsac_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsac_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, + vint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m8_tum(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vnmsac_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsac_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, int64_t rs1, + vint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i64m8_tum(vm, vd, rs1, vs2, vl); } -vuint8mf8_t test_vnmsac_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsac_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8mf8_tum(vm, vd, vs1, vs2, vl); } -vuint8mf8_t test_vnmsac_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsac_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf8_tum(vm, vd, rs1, vs2, vl); } -vuint8mf4_t test_vnmsac_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsac_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8mf4_tum(vm, vd, vs1, vs2, vl); } -vuint8mf4_t test_vnmsac_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsac_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf4_tum(vm, vd, rs1, vs2, vl); } -vuint8mf2_t test_vnmsac_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsac_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8mf2_tum(vm, vd, vs1, vs2, vl); } -vuint8mf2_t test_vnmsac_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsac_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf2_tum(vm, vd, rs1, vs2, vl); } -vuint8m1_t test_vnmsac_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vnmsac_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m1_tum(vm, vd, vs1, vs2, vl); } -vuint8m1_t test_vnmsac_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, vuint8m1_t 
vs2, size_t vl) { +vuint8m1_t test_vnmsac_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m1_tum(vm, vd, rs1, vs2, vl); } -vuint8m2_t test_vnmsac_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsac_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m2_tum(vm, vd, vs1, vs2, vl); } -vuint8m2_t test_vnmsac_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsac_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m2_tum(vm, vd, rs1, vs2, vl); } -vuint8m4_t test_vnmsac_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vnmsac_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m4_tum(vm, vd, vs1, vs2, vl); } -vuint8m4_t test_vnmsac_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vnmsac_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m4_tum(vm, vd, rs1, vs2, vl); } -vuint8m8_t test_vnmsac_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vnmsac_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m8_tum(vm, vd, vs1, vs2, vl); } -vuint8m8_t test_vnmsac_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vnmsac_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m8_tum(vm, vd, rs1, vs2, vl); } -vuint16mf4_t test_vnmsac_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vnmsac_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16mf4_tum(vm, vd, vs1, vs2, vl); } -vuint16mf4_t test_vnmsac_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vnmsac_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + uint16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u16mf4_tum(vm, vd, rs1, vs2, vl); } -vuint16mf2_t test_vnmsac_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vnmsac_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16mf2_tum(vm, vd, vs1, vs2, vl); } -vuint16mf2_t test_vnmsac_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vnmsac_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + uint16_t rs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u16mf2_tum(vm, vd, rs1, vs2, vl); } -vuint16m1_t test_vnmsac_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vnmsac_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m1_tum(vm, vd, vs1, vs2, vl); } -vuint16m1_t test_vnmsac_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vnmsac_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, + vuint16m1_t vs2, 
size_t vl) { return __riscv_vnmsac_vx_u16m1_tum(vm, vd, rs1, vs2, vl); } -vuint16m2_t test_vnmsac_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vnmsac_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m2_tum(vm, vd, vs1, vs2, vl); } -vuint16m2_t test_vnmsac_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vnmsac_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m2_tum(vm, vd, rs1, vs2, vl); } -vuint16m4_t test_vnmsac_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vnmsac_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m4_tum(vm, vd, vs1, vs2, vl); } -vuint16m4_t test_vnmsac_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vnmsac_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m4_tum(vm, vd, rs1, vs2, vl); } -vuint16m8_t test_vnmsac_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vnmsac_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs1, vuint16m8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m8_tum(vm, vd, vs1, vs2, vl); } -vuint16m8_t test_vnmsac_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vnmsac_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m8_tum(vm, vd, rs1, vs2, vl); } -vuint32mf2_t test_vnmsac_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vnmsac_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32mf2_tum(vm, vd, vs1, vs2, vl); } -vuint32mf2_t test_vnmsac_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vnmsac_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + uint32_t rs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u32mf2_tum(vm, vd, rs1, vs2, vl); } -vuint32m1_t test_vnmsac_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vnmsac_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m1_tum(vm, vd, vs1, vs2, vl); } -vuint32m1_t test_vnmsac_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vnmsac_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m1_tum(vm, vd, rs1, vs2, vl); } -vuint32m2_t test_vnmsac_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vnmsac_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m2_tum(vm, vd, vs1, vs2, vl); } -vuint32m2_t test_vnmsac_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vnmsac_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m2_tum(vm, vd, rs1, 
vs2, vl); } -vuint32m4_t test_vnmsac_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vnmsac_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m4_tum(vm, vd, vs1, vs2, vl); } -vuint32m4_t test_vnmsac_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vnmsac_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m4_tum(vm, vd, rs1, vs2, vl); } -vuint32m8_t test_vnmsac_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vnmsac_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs1, vuint32m8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m8_tum(vm, vd, vs1, vs2, vl); } -vuint32m8_t test_vnmsac_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vnmsac_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, + vuint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m8_tum(vm, vd, rs1, vs2, vl); } -vuint64m1_t test_vnmsac_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vnmsac_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs1, vuint64m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m1_tum(vm, vd, vs1, vs2, vl); } -vuint64m1_t test_vnmsac_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vnmsac_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, + vuint64m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m1_tum(vm, vd, rs1, vs2, vl); } -vuint64m2_t test_vnmsac_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vnmsac_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs1, vuint64m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m2_tum(vm, vd, vs1, vs2, vl); } -vuint64m2_t test_vnmsac_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vnmsac_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, + vuint64m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m2_tum(vm, vd, rs1, vs2, vl); } -vuint64m4_t test_vnmsac_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vnmsac_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs1, vuint64m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m4_tum(vm, vd, vs1, vs2, vl); } -vuint64m4_t test_vnmsac_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vnmsac_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, + vuint64m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m4_tum(vm, vd, rs1, vs2, vl); } -vuint64m8_t test_vnmsac_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vnmsac_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs1, vuint64m8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m8_tum(vm, vd, vs1, vs2, vl); } -vuint64m8_t test_vnmsac_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vnmsac_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, + vuint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m8_tum(vm, vd, rs1, vs2, vl); } -vint8mf8_t test_vnmsac_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, 
vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vnmsac_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs1, vint8mf8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i8mf8_tumu(vm, vd, vs1, vs2, vl); } -vint8mf8_t test_vnmsac_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vnmsac_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8mf8_tumu(vm, vd, rs1, vs2, vl); } -vint8mf4_t test_vnmsac_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vnmsac_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs1, vint8mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i8mf4_tumu(vm, vd, vs1, vs2, vl); } -vint8mf4_t test_vnmsac_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vnmsac_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, int8_t rs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8mf4_tumu(vm, vd, rs1, vs2, vl); } -vint8mf2_t test_vnmsac_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vnmsac_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs1, vint8mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i8mf2_tumu(vm, vd, vs1, vs2, vl); } -vint8mf2_t test_vnmsac_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vnmsac_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8mf2_tumu(vm, vd, rs1, vs2, vl); } -vint8m1_t test_vnmsac_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsac_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, + vint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m1_tumu(vm, vd, vs1, vs2, vl); } -vint8m1_t test_vnmsac_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsac_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, int8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8m1_tumu(vm, vd, rs1, vs2, vl); } -vint8m2_t test_vnmsac_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsac_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, + vint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m2_tumu(vm, vd, vs1, vs2, vl); } -vint8m2_t test_vnmsac_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsac_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, int8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8m2_tumu(vm, vd, rs1, vs2, vl); } -vint8m4_t test_vnmsac_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsac_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, + vint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m4_tumu(vm, vd, vs1, vs2, vl); } -vint8m4_t test_vnmsac_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsac_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, int8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8m4_tumu(vm, vd, rs1, vs2, vl); } -vint8m8_t test_vnmsac_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vnmsac_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, + vint8m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m8_tumu(vm, vd, vs1, vs2, vl); } -vint8m8_t 
test_vnmsac_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vnmsac_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, int8_t rs1, + vint8m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8m8_tumu(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vnmsac_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsac_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i16mf4_tumu(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vnmsac_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsac_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + int16_t rs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i16mf4_tumu(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vnmsac_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsac_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i16mf2_tumu(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vnmsac_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsac_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + int16_t rs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i16mf2_tumu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vnmsac_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsac_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs1, vint16m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i16m1_tumu(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vnmsac_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsac_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m1_tumu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vnmsac_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsac_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m2_tumu(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vnmsac_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsac_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m2_tumu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vnmsac_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsac_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m4_tumu(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vnmsac_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsac_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m4_tumu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vnmsac_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vnmsac_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, + vint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m8_tumu(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vnmsac_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) { +vint16m8_t 
test_vnmsac_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int16_t rs1, + vint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m8_tumu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vnmsac_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsac_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i32mf2_tumu(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vnmsac_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsac_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + int32_t rs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_i32mf2_tumu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vnmsac_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsac_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs1, vint32m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i32m1_tumu(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vnmsac_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsac_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m1_tumu(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vnmsac_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsac_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs1, vint32m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i32m2_tumu(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vnmsac_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsac_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m2_tumu(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vnmsac_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vnmsac_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m4_tumu(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vnmsac_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vnmsac_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m4_tumu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vnmsac_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsac_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, + vint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m8_tumu(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vnmsac_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsac_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int32_t rs1, + vint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m8_tumu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vnmsac_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsac_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs1, vint64m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i64m1_tumu(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vnmsac_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsac_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int64_t rs1, + vint64m1_t vs2, size_t vl) { return 
__riscv_vnmsac_vx_i64m1_tumu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vnmsac_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsac_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs1, vint64m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i64m2_tumu(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vnmsac_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsac_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int64_t rs1, + vint64m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i64m2_tumu(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vnmsac_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsac_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs1, vint64m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i64m4_tumu(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vnmsac_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, int64_t rs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsac_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, int64_t rs1, + vint64m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i64m4_tumu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vnmsac_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsac_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, + vint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m8_tumu(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vnmsac_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsac_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int64_t rs1, + vint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i64m8_tumu(vm, vd, rs1, vs2, vl); } -vuint8mf8_t test_vnmsac_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsac_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8mf8_tumu(vm, vd, vs1, vs2, vl); } -vuint8mf8_t test_vnmsac_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsac_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf8_tumu(vm, vd, rs1, vs2, vl); } -vuint8mf4_t test_vnmsac_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsac_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8mf4_tumu(vm, vd, vs1, vs2, vl); } -vuint8mf4_t test_vnmsac_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsac_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf4_tumu(vm, vd, rs1, vs2, vl); } -vuint8mf2_t test_vnmsac_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsac_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8mf2_tumu(vm, vd, vs1, vs2, vl); } -vuint8mf2_t test_vnmsac_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsac_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf2_tumu(vm, vd, rs1, vs2, vl); } -vuint8m1_t 
test_vnmsac_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vnmsac_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m1_tumu(vm, vd, vs1, vs2, vl); } -vuint8m1_t test_vnmsac_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vnmsac_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m1_tumu(vm, vd, rs1, vs2, vl); } -vuint8m2_t test_vnmsac_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsac_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m2_tumu(vm, vd, vs1, vs2, vl); } -vuint8m2_t test_vnmsac_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsac_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m2_tumu(vm, vd, rs1, vs2, vl); } -vuint8m4_t test_vnmsac_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vnmsac_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m4_tumu(vm, vd, vs1, vs2, vl); } -vuint8m4_t test_vnmsac_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vnmsac_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m4_tumu(vm, vd, rs1, vs2, vl); } -vuint8m8_t test_vnmsac_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vnmsac_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m8_tumu(vm, vd, vs1, vs2, vl); } -vuint8m8_t test_vnmsac_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vnmsac_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m8_tumu(vm, vd, rs1, vs2, vl); } -vuint16mf4_t test_vnmsac_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vnmsac_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16mf4_tumu(vm, vd, vs1, vs2, vl); } -vuint16mf4_t test_vnmsac_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vnmsac_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + uint16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u16mf4_tumu(vm, vd, rs1, vs2, vl); } -vuint16mf2_t test_vnmsac_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vnmsac_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16mf2_tumu(vm, vd, vs1, vs2, vl); } -vuint16mf2_t test_vnmsac_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vnmsac_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + uint16_t rs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u16mf2_tumu(vm, vd, rs1, vs2, vl); } -vuint16m1_t test_vnmsac_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { 
+vuint16m1_t test_vnmsac_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m1_tumu(vm, vd, vs1, vs2, vl); } -vuint16m1_t test_vnmsac_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vnmsac_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + uint16_t rs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u16m1_tumu(vm, vd, rs1, vs2, vl); } -vuint16m2_t test_vnmsac_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vnmsac_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m2_tumu(vm, vd, vs1, vs2, vl); } -vuint16m2_t test_vnmsac_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vnmsac_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m2_tumu(vm, vd, rs1, vs2, vl); } -vuint16m4_t test_vnmsac_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vnmsac_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m4_tumu(vm, vd, vs1, vs2, vl); } -vuint16m4_t test_vnmsac_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vnmsac_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m4_tumu(vm, vd, rs1, vs2, vl); } -vuint16m8_t test_vnmsac_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vnmsac_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs1, vuint16m8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m8_tumu(vm, vd, vs1, vs2, vl); } -vuint16m8_t test_vnmsac_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vnmsac_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m8_tumu(vm, vd, rs1, vs2, vl); } -vuint32mf2_t test_vnmsac_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vnmsac_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32mf2_tumu(vm, vd, vs1, vs2, vl); } -vuint32mf2_t test_vnmsac_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vnmsac_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + uint32_t rs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u32mf2_tumu(vm, vd, rs1, vs2, vl); } -vuint32m1_t test_vnmsac_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vnmsac_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m1_tumu(vm, vd, vs1, vs2, vl); } -vuint32m1_t test_vnmsac_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vnmsac_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + uint32_t rs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u32m1_tumu(vm, vd, rs1, vs2, vl); } -vuint32m2_t test_vnmsac_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t 
test_vnmsac_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m2_tumu(vm, vd, vs1, vs2, vl); } -vuint32m2_t test_vnmsac_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vnmsac_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + uint32_t rs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u32m2_tumu(vm, vd, rs1, vs2, vl); } -vuint32m4_t test_vnmsac_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vnmsac_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m4_tumu(vm, vd, vs1, vs2, vl); } -vuint32m4_t test_vnmsac_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vnmsac_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m4_tumu(vm, vd, rs1, vs2, vl); } -vuint32m8_t test_vnmsac_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vnmsac_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs1, vuint32m8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m8_tumu(vm, vd, vs1, vs2, vl); } -vuint32m8_t test_vnmsac_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vnmsac_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, + vuint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m8_tumu(vm, vd, rs1, vs2, vl); } -vuint64m1_t test_vnmsac_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vnmsac_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs1, vuint64m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m1_tumu(vm, vd, vs1, vs2, vl); } -vuint64m1_t test_vnmsac_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vnmsac_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + uint64_t rs1, vuint64m1_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u64m1_tumu(vm, vd, rs1, vs2, vl); } -vuint64m2_t test_vnmsac_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vnmsac_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs1, vuint64m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m2_tumu(vm, vd, vs1, vs2, vl); } -vuint64m2_t test_vnmsac_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vnmsac_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + uint64_t rs1, vuint64m2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u64m2_tumu(vm, vd, rs1, vs2, vl); } -vuint64m4_t test_vnmsac_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vnmsac_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs1, vuint64m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m4_tumu(vm, vd, vs1, vs2, vl); } -vuint64m4_t test_vnmsac_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vnmsac_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + uint64_t rs1, vuint64m4_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u64m4_tumu(vm, vd, rs1, vs2, vl); } -vuint64m8_t test_vnmsac_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vnmsac_vv_u64m8_tumu(vbool8_t vm, 
vuint64m8_t vd, + vuint64m8_t vs1, vuint64m8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m8_tumu(vm, vd, vs1, vs2, vl); } -vuint64m8_t test_vnmsac_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vnmsac_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, + vuint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m8_tumu(vm, vd, rs1, vs2, vl); } -vint8mf8_t test_vnmsac_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vnmsac_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8mf8_mu(vm, vd, vs1, vs2, vl); } -vint8mf8_t test_vnmsac_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vnmsac_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8mf8_mu(vm, vd, rs1, vs2, vl); } -vint8mf4_t test_vnmsac_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vnmsac_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8mf4_mu(vm, vd, vs1, vs2, vl); } -vint8mf4_t test_vnmsac_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vnmsac_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, int8_t rs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8mf4_mu(vm, vd, rs1, vs2, vl); } -vint8mf2_t test_vnmsac_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vnmsac_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8mf2_mu(vm, vd, vs1, vs2, vl); } -vint8mf2_t test_vnmsac_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vnmsac_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8mf2_mu(vm, vd, rs1, vs2, vl); } -vint8m1_t test_vnmsac_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsac_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, + vint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m1_mu(vm, vd, vs1, vs2, vl); } -vint8m1_t test_vnmsac_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsac_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, int8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8m1_mu(vm, vd, rs1, vs2, vl); } -vint8m2_t test_vnmsac_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsac_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, + vint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m2_mu(vm, vd, vs1, vs2, vl); } -vint8m2_t test_vnmsac_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsac_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, int8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8m2_mu(vm, vd, rs1, vs2, vl); } -vint8m4_t test_vnmsac_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsac_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, + vint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m4_mu(vm, vd, vs1, vs2, vl); } -vint8m4_t test_vnmsac_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsac_vx_i8m4_mu(vbool2_t vm, 
vint8m4_t vd, int8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8m4_mu(vm, vd, rs1, vs2, vl); } -vint8m8_t test_vnmsac_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vnmsac_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, + vint8m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i8m8_mu(vm, vd, vs1, vs2, vl); } -vint8m8_t test_vnmsac_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vnmsac_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, int8_t rs1, + vint8m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i8m8_mu(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vnmsac_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsac_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i16mf4_mu(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vnmsac_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsac_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16mf4_mu(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vnmsac_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsac_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i16mf2_mu(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vnmsac_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsac_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16mf2_mu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vnmsac_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsac_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m1_mu(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vnmsac_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsac_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, int16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m1_mu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vnmsac_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsac_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m2_mu(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vnmsac_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsac_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, int16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m2_mu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vnmsac_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsac_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m4_mu(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vnmsac_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsac_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, int16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m4_mu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vnmsac_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, vint16m8_t 
vs2, size_t vl) { +vint16m8_t test_vnmsac_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, + vint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i16m8_mu(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vnmsac_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vnmsac_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, int16_t rs1, + vint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i16m8_mu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vnmsac_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsac_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_i32mf2_mu(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vnmsac_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsac_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32mf2_mu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vnmsac_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsac_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m1_mu(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vnmsac_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsac_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, int32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m1_mu(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vnmsac_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsac_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m2_mu(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vnmsac_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsac_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, int32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m2_mu(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vnmsac_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vnmsac_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m4_mu(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vnmsac_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vnmsac_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, int32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m4_mu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vnmsac_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsac_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, + vint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i32m8_mu(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vnmsac_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsac_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, int32_t rs1, + vint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i32m8_mu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vnmsac_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsac_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, + vint64m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m1_mu(vm, vd, vs1, vs2, vl); } -vint64m1_t 
test_vnmsac_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsac_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, int64_t rs1, + vint64m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_i64m1_mu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vnmsac_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsac_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, + vint64m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m2_mu(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vnmsac_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsac_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, int64_t rs1, + vint64m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_i64m2_mu(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vnmsac_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsac_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, + vint64m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m4_mu(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vnmsac_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, int64_t rs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsac_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, int64_t rs1, + vint64m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_i64m4_mu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vnmsac_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsac_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, + vint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_i64m8_mu(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vnmsac_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsac_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, int64_t rs1, + vint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_i64m8_mu(vm, vd, rs1, vs2, vl); } -vuint8mf8_t test_vnmsac_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsac_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8mf8_mu(vm, vd, vs1, vs2, vl); } -vuint8mf8_t test_vnmsac_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsac_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf8_mu(vm, vd, rs1, vs2, vl); } -vuint8mf4_t test_vnmsac_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsac_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8mf4_mu(vm, vd, vs1, vs2, vl); } -vuint8mf4_t test_vnmsac_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsac_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf4_mu(vm, vd, rs1, vs2, vl); } -vuint8mf2_t test_vnmsac_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsac_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u8mf2_mu(vm, vd, vs1, vs2, vl); } -vuint8mf2_t test_vnmsac_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsac_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, + 
vuint8mf2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8mf2_mu(vm, vd, rs1, vs2, vl); } -vuint8m1_t test_vnmsac_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vnmsac_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m1_mu(vm, vd, vs1, vs2, vl); } -vuint8m1_t test_vnmsac_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vnmsac_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m1_mu(vm, vd, rs1, vs2, vl); } -vuint8m2_t test_vnmsac_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsac_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m2_mu(vm, vd, vs1, vs2, vl); } -vuint8m2_t test_vnmsac_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsac_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m2_mu(vm, vd, rs1, vs2, vl); } -vuint8m4_t test_vnmsac_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vnmsac_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m4_mu(vm, vd, vs1, vs2, vl); } -vuint8m4_t test_vnmsac_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vnmsac_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m4_mu(vm, vd, rs1, vs2, vl); } -vuint8m8_t test_vnmsac_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vnmsac_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vnmsac_vv_u8m8_mu(vm, vd, vs1, vs2, vl); } -vuint8m8_t test_vnmsac_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vnmsac_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u8m8_mu(vm, vd, rs1, vs2, vl); } -vuint16mf4_t test_vnmsac_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vnmsac_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16mf4_mu(vm, vd, vs1, vs2, vl); } -vuint16mf4_t test_vnmsac_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vnmsac_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + uint16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u16mf4_mu(vm, vd, rs1, vs2, vl); } -vuint16mf2_t test_vnmsac_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vnmsac_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16mf2_mu(vm, vd, vs1, vs2, vl); } -vuint16mf2_t test_vnmsac_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vnmsac_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + uint16_t rs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u16mf2_mu(vm, vd, rs1, vs2, vl); } -vuint16m1_t test_vnmsac_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs1, 
vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vnmsac_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m1_mu(vm, vd, vs1, vs2, vl); } -vuint16m1_t test_vnmsac_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vnmsac_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m1_mu(vm, vd, rs1, vs2, vl); } -vuint16m2_t test_vnmsac_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vnmsac_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m2_mu(vm, vd, vs1, vs2, vl); } -vuint16m2_t test_vnmsac_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vnmsac_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m2_mu(vm, vd, rs1, vs2, vl); } -vuint16m4_t test_vnmsac_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vnmsac_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m4_mu(vm, vd, vs1, vs2, vl); } -vuint16m4_t test_vnmsac_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vnmsac_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m4_mu(vm, vd, rs1, vs2, vl); } -vuint16m8_t test_vnmsac_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vnmsac_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs1, vuint16m8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u16m8_mu(vm, vd, vs1, vs2, vl); } -vuint16m8_t test_vnmsac_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vnmsac_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, + vuint16m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u16m8_mu(vm, vd, rs1, vs2, vl); } -vuint32mf2_t test_vnmsac_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vnmsac_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32mf2_mu(vm, vd, vs1, vs2, vl); } -vuint32mf2_t test_vnmsac_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vnmsac_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + uint32_t rs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vnmsac_vx_u32mf2_mu(vm, vd, rs1, vs2, vl); } -vuint32m1_t test_vnmsac_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vnmsac_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m1_mu(vm, vd, vs1, vs2, vl); } -vuint32m1_t test_vnmsac_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vnmsac_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m1_mu(vm, vd, rs1, vs2, vl); } -vuint32m2_t test_vnmsac_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vnmsac_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + 
vuint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m2_mu(vm, vd, vs1, vs2, vl); } -vuint32m2_t test_vnmsac_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vnmsac_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m2_mu(vm, vd, rs1, vs2, vl); } -vuint32m4_t test_vnmsac_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vnmsac_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m4_mu(vm, vd, vs1, vs2, vl); } -vuint32m4_t test_vnmsac_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vnmsac_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m4_mu(vm, vd, rs1, vs2, vl); } -vuint32m8_t test_vnmsac_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vnmsac_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs1, vuint32m8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u32m8_mu(vm, vd, vs1, vs2, vl); } -vuint32m8_t test_vnmsac_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vnmsac_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, + vuint32m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u32m8_mu(vm, vd, rs1, vs2, vl); } -vuint64m1_t test_vnmsac_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vnmsac_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs1, vuint64m1_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m1_mu(vm, vd, vs1, vs2, vl); } -vuint64m1_t test_vnmsac_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vnmsac_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, + vuint64m1_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m1_mu(vm, vd, rs1, vs2, vl); } -vuint64m2_t test_vnmsac_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vnmsac_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs1, vuint64m2_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m2_mu(vm, vd, vs1, vs2, vl); } -vuint64m2_t test_vnmsac_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vnmsac_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, + vuint64m2_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m2_mu(vm, vd, rs1, vs2, vl); } -vuint64m4_t test_vnmsac_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vnmsac_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs1, vuint64m4_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m4_mu(vm, vd, vs1, vs2, vl); } -vuint64m4_t test_vnmsac_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vnmsac_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, + vuint64m4_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m4_mu(vm, vd, rs1, vs2, vl); } -vuint64m8_t test_vnmsac_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vnmsac_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs1, vuint64m8_t vs2, + size_t vl) { return __riscv_vnmsac_vv_u64m8_mu(vm, vd, vs1, vs2, vl); } -vuint64m8_t 
test_vnmsac_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vnmsac_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, + vuint64m8_t vs2, size_t vl) { return __riscv_vnmsac_vx_u64m8_mu(vm, vd, rs1, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vnmsub.c b/auto-generated/policy_funcs/llvm-api-tests/vnmsub.c index 82a661dfd..c391395b2 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vnmsub.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vnmsub.c @@ -6,1410 +6,1852 @@ #include <riscv_vector.h> -vint8mf8_t test_vnmsub_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vnmsub_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vnmsub_vv_i8mf8_tu(vd, vs1, vs2, vl); } -vint8mf8_t test_vnmsub_vx_i8mf8_tu(vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { +vint8mf8_t test_vnmsub_vx_i8mf8_tu(vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i8mf8_tu(vd, rs1, vs2, vl); } -vint8mf4_t test_vnmsub_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vnmsub_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vnmsub_vv_i8mf4_tu(vd, vs1, vs2, vl); } -vint8mf4_t test_vnmsub_vx_i8mf4_tu(vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { +vint8mf4_t test_vnmsub_vx_i8mf4_tu(vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i8mf4_tu(vd, rs1, vs2, vl); } -vint8mf2_t test_vnmsub_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vnmsub_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vnmsub_vv_i8mf2_tu(vd, vs1, vs2, vl); } -vint8mf2_t test_vnmsub_vx_i8mf2_tu(vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint8mf2_t test_vnmsub_vx_i8mf2_tu(vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i8mf2_tu(vd, rs1, vs2, vl); } -vint8m1_t test_vnmsub_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsub_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, + size_t vl) { return __riscv_vnmsub_vv_i8m1_tu(vd, vs1, vs2, vl); } -vint8m1_t test_vnmsub_vx_i8m1_tu(vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsub_vx_i8m1_tu(vint8m1_t vd, int8_t rs1, vint8m1_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i8m1_tu(vd, rs1, vs2, vl); } -vint8m2_t test_vnmsub_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsub_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, + size_t vl) { return __riscv_vnmsub_vv_i8m2_tu(vd, vs1, vs2, vl); } -vint8m2_t test_vnmsub_vx_i8m2_tu(vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsub_vx_i8m2_tu(vint8m2_t vd, int8_t rs1, vint8m2_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i8m2_tu(vd, rs1, vs2, vl); } -vint8m4_t test_vnmsub_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsub_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, + size_t vl) { return __riscv_vnmsub_vv_i8m4_tu(vd, vs1, vs2, vl); } -vint8m4_t test_vnmsub_vx_i8m4_tu(vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsub_vx_i8m4_tu(vint8m4_t vd, int8_t rs1, vint8m4_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i8m4_tu(vd, rs1, vs2, vl); } -vint8m8_t test_vnmsub_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) { +vint8m8_t
test_vnmsub_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, + size_t vl) { return __riscv_vnmsub_vv_i8m8_tu(vd, vs1, vs2, vl); } -vint8m8_t test_vnmsub_vx_i8m8_tu(vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vnmsub_vx_i8m8_tu(vint8m8_t vd, int8_t rs1, vint8m8_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i8m8_tu(vd, rs1, vs2, vl); } -vint16mf4_t test_vnmsub_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsub_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vnmsub_vv_i16mf4_tu(vd, vs1, vs2, vl); } -vint16mf4_t test_vnmsub_vx_i16mf4_tu(vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsub_vx_i16mf4_tu(vint16mf4_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vnmsub_vx_i16mf4_tu(vd, rs1, vs2, vl); } -vint16mf2_t test_vnmsub_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsub_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vnmsub_vv_i16mf2_tu(vd, vs1, vs2, vl); } -vint16mf2_t test_vnmsub_vx_i16mf2_tu(vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsub_vx_i16mf2_tu(vint16mf2_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vnmsub_vx_i16mf2_tu(vd, rs1, vs2, vl); } -vint16m1_t test_vnmsub_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsub_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vnmsub_vv_i16m1_tu(vd, vs1, vs2, vl); } -vint16m1_t test_vnmsub_vx_i16m1_tu(vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsub_vx_i16m1_tu(vint16m1_t vd, int16_t rs1, vint16m1_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i16m1_tu(vd, rs1, vs2, vl); } -vint16m2_t test_vnmsub_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsub_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vnmsub_vv_i16m2_tu(vd, vs1, vs2, vl); } -vint16m2_t test_vnmsub_vx_i16m2_tu(vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsub_vx_i16m2_tu(vint16m2_t vd, int16_t rs1, vint16m2_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i16m2_tu(vd, rs1, vs2, vl); } -vint16m4_t test_vnmsub_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsub_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vnmsub_vv_i16m4_tu(vd, vs1, vs2, vl); } -vint16m4_t test_vnmsub_vx_i16m4_tu(vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsub_vx_i16m4_tu(vint16m4_t vd, int16_t rs1, vint16m4_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i16m4_tu(vd, rs1, vs2, vl); } -vint16m8_t test_vnmsub_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vnmsub_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs1, + vint16m8_t vs2, size_t vl) { return __riscv_vnmsub_vv_i16m8_tu(vd, vs1, vs2, vl); } -vint16m8_t test_vnmsub_vx_i16m8_tu(vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vnmsub_vx_i16m8_tu(vint16m8_t vd, int16_t rs1, vint16m8_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i16m8_tu(vd, rs1, vs2, vl); } -vint32mf2_t test_vnmsub_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsub_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs1, + vint32mf2_t vs2, 
size_t vl) { return __riscv_vnmsub_vv_i32mf2_tu(vd, vs1, vs2, vl); } -vint32mf2_t test_vnmsub_vx_i32mf2_tu(vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsub_vx_i32mf2_tu(vint32mf2_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vnmsub_vx_i32mf2_tu(vd, rs1, vs2, vl); } -vint32m1_t test_vnmsub_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsub_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vnmsub_vv_i32m1_tu(vd, vs1, vs2, vl); } -vint32m1_t test_vnmsub_vx_i32m1_tu(vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsub_vx_i32m1_tu(vint32m1_t vd, int32_t rs1, vint32m1_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i32m1_tu(vd, rs1, vs2, vl); } -vint32m2_t test_vnmsub_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsub_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vnmsub_vv_i32m2_tu(vd, vs1, vs2, vl); } -vint32m2_t test_vnmsub_vx_i32m2_tu(vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsub_vx_i32m2_tu(vint32m2_t vd, int32_t rs1, vint32m2_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i32m2_tu(vd, rs1, vs2, vl); } -vint32m4_t test_vnmsub_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vnmsub_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vnmsub_vv_i32m4_tu(vd, vs1, vs2, vl); } -vint32m4_t test_vnmsub_vx_i32m4_tu(vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vnmsub_vx_i32m4_tu(vint32m4_t vd, int32_t rs1, vint32m4_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i32m4_tu(vd, rs1, vs2, vl); } -vint32m8_t test_vnmsub_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsub_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs1, + vint32m8_t vs2, size_t vl) { return __riscv_vnmsub_vv_i32m8_tu(vd, vs1, vs2, vl); } -vint32m8_t test_vnmsub_vx_i32m8_tu(vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsub_vx_i32m8_tu(vint32m8_t vd, int32_t rs1, vint32m8_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i32m8_tu(vd, rs1, vs2, vl); } -vint64m1_t test_vnmsub_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsub_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs1, + vint64m1_t vs2, size_t vl) { return __riscv_vnmsub_vv_i64m1_tu(vd, vs1, vs2, vl); } -vint64m1_t test_vnmsub_vx_i64m1_tu(vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsub_vx_i64m1_tu(vint64m1_t vd, int64_t rs1, vint64m1_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i64m1_tu(vd, rs1, vs2, vl); } -vint64m2_t test_vnmsub_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsub_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs1, + vint64m2_t vs2, size_t vl) { return __riscv_vnmsub_vv_i64m2_tu(vd, vs1, vs2, vl); } -vint64m2_t test_vnmsub_vx_i64m2_tu(vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsub_vx_i64m2_tu(vint64m2_t vd, int64_t rs1, vint64m2_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i64m2_tu(vd, rs1, vs2, vl); } -vint64m4_t test_vnmsub_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsub_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs1, + vint64m4_t vs2, size_t vl) { return __riscv_vnmsub_vv_i64m4_tu(vd, vs1, vs2, vl); } -vint64m4_t 
test_vnmsub_vx_i64m4_tu(vint64m4_t vd, int64_t rs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsub_vx_i64m4_tu(vint64m4_t vd, int64_t rs1, vint64m4_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i64m4_tu(vd, rs1, vs2, vl); } -vint64m8_t test_vnmsub_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsub_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs1, + vint64m8_t vs2, size_t vl) { return __riscv_vnmsub_vv_i64m8_tu(vd, vs1, vs2, vl); } -vint64m8_t test_vnmsub_vx_i64m8_tu(vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsub_vx_i64m8_tu(vint64m8_t vd, int64_t rs1, vint64m8_t vs2, + size_t vl) { return __riscv_vnmsub_vx_i64m8_tu(vd, rs1, vs2, vl); } -vuint8mf8_t test_vnmsub_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsub_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vnmsub_vv_u8mf8_tu(vd, vs1, vs2, vl); } -vuint8mf8_t test_vnmsub_vx_u8mf8_tu(vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsub_vx_u8mf8_tu(vuint8mf8_t vd, uint8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vnmsub_vx_u8mf8_tu(vd, rs1, vs2, vl); } -vuint8mf4_t test_vnmsub_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsub_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vnmsub_vv_u8mf4_tu(vd, vs1, vs2, vl); } -vuint8mf4_t test_vnmsub_vx_u8mf4_tu(vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsub_vx_u8mf4_tu(vuint8mf4_t vd, uint8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vnmsub_vx_u8mf4_tu(vd, rs1, vs2, vl); } -vuint8mf2_t test_vnmsub_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsub_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vnmsub_vv_u8mf2_tu(vd, vs1, vs2, vl); } -vuint8mf2_t test_vnmsub_vx_u8mf2_tu(vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsub_vx_u8mf2_tu(vuint8mf2_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vnmsub_vx_u8mf2_tu(vd, rs1, vs2, vl); } -vuint8m1_t test_vnmsub_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vnmsub_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u8m1_tu(vd, vs1, vs2, vl); } -vuint8m1_t test_vnmsub_vx_u8m1_tu(vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vnmsub_vx_u8m1_tu(vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, + size_t vl) { return __riscv_vnmsub_vx_u8m1_tu(vd, rs1, vs2, vl); } -vuint8m2_t test_vnmsub_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsub_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u8m2_tu(vd, vs1, vs2, vl); } -vuint8m2_t test_vnmsub_vx_u8m2_tu(vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsub_vx_u8m2_tu(vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, + size_t vl) { return __riscv_vnmsub_vx_u8m2_tu(vd, rs1, vs2, vl); } -vuint8m4_t test_vnmsub_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vnmsub_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u8m4_tu(vd, vs1, vs2, vl); } -vuint8m4_t test_vnmsub_vx_u8m4_tu(vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { 
+vuint8m4_t test_vnmsub_vx_u8m4_tu(vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2,
+                                  size_t vl) {
   return __riscv_vnmsub_vx_u8m4_tu(vd, rs1, vs2, vl);
 }
 
-vuint8m8_t test_vnmsub_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vnmsub_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2,
+                                  size_t vl) {
   return __riscv_vnmsub_vv_u8m8_tu(vd, vs1, vs2, vl);
 }
 
-vuint8m8_t test_vnmsub_vx_u8m8_tu(vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vnmsub_vx_u8m8_tu(vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2,
+                                  size_t vl) {
   return __riscv_vnmsub_vx_u8m8_tu(vd, rs1, vs2, vl);
 }
 
-vuint16mf4_t test_vnmsub_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vnmsub_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs1,
+                                      vuint16mf4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u16mf4_tu(vd, vs1, vs2, vl);
 }
 
-vuint16mf4_t test_vnmsub_vx_u16mf4_tu(vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vnmsub_vx_u16mf4_tu(vuint16mf4_t vd, uint16_t rs1,
+                                      vuint16mf4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16mf4_tu(vd, rs1, vs2, vl);
 }
 
-vuint16mf2_t test_vnmsub_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vnmsub_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs1,
+                                      vuint16mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u16mf2_tu(vd, vs1, vs2, vl);
 }
 
-vuint16mf2_t test_vnmsub_vx_u16mf2_tu(vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vnmsub_vx_u16mf2_tu(vuint16mf2_t vd, uint16_t rs1,
+                                      vuint16mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16mf2_tu(vd, rs1, vs2, vl);
 }
 
-vuint16m1_t test_vnmsub_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vnmsub_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs1,
+                                    vuint16m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u16m1_tu(vd, vs1, vs2, vl);
 }
 
-vuint16m1_t test_vnmsub_vx_u16m1_tu(vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vnmsub_vx_u16m1_tu(vuint16m1_t vd, uint16_t rs1,
+                                    vuint16m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16m1_tu(vd, rs1, vs2, vl);
 }
 
-vuint16m2_t test_vnmsub_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vnmsub_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs1,
+                                    vuint16m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u16m2_tu(vd, vs1, vs2, vl);
 }
 
-vuint16m2_t test_vnmsub_vx_u16m2_tu(vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vnmsub_vx_u16m2_tu(vuint16m2_t vd, uint16_t rs1,
+                                    vuint16m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16m2_tu(vd, rs1, vs2, vl);
 }
 
-vuint16m4_t test_vnmsub_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vnmsub_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs1,
+                                    vuint16m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u16m4_tu(vd, vs1, vs2, vl);
 }
 
-vuint16m4_t test_vnmsub_vx_u16m4_tu(vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vnmsub_vx_u16m4_tu(vuint16m4_t vd, uint16_t rs1,
+                                    vuint16m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16m4_tu(vd, rs1, vs2, vl);
 }
 
-vuint16m8_t test_vnmsub_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vnmsub_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs1,
+                                    vuint16m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u16m8_tu(vd, vs1, vs2, vl);
 }
 
-vuint16m8_t test_vnmsub_vx_u16m8_tu(vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vnmsub_vx_u16m8_tu(vuint16m8_t vd, uint16_t rs1,
+                                    vuint16m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16m8_tu(vd, rs1, vs2, vl);
 }
 
-vuint32mf2_t test_vnmsub_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vnmsub_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs1,
+                                      vuint32mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u32mf2_tu(vd, vs1, vs2, vl);
 }
 
-vuint32mf2_t test_vnmsub_vx_u32mf2_tu(vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vnmsub_vx_u32mf2_tu(vuint32mf2_t vd, uint32_t rs1,
+                                      vuint32mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u32mf2_tu(vd, rs1, vs2, vl);
 }
 
-vuint32m1_t test_vnmsub_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vnmsub_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs1,
+                                    vuint32m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u32m1_tu(vd, vs1, vs2, vl);
 }
 
-vuint32m1_t test_vnmsub_vx_u32m1_tu(vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vnmsub_vx_u32m1_tu(vuint32m1_t vd, uint32_t rs1,
+                                    vuint32m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u32m1_tu(vd, rs1, vs2, vl);
 }
 
-vuint32m2_t test_vnmsub_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vnmsub_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs1,
+                                    vuint32m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u32m2_tu(vd, vs1, vs2, vl);
 }
 
-vuint32m2_t test_vnmsub_vx_u32m2_tu(vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vnmsub_vx_u32m2_tu(vuint32m2_t vd, uint32_t rs1,
+                                    vuint32m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u32m2_tu(vd, rs1, vs2, vl);
 }
 
-vuint32m4_t test_vnmsub_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vnmsub_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs1,
+                                    vuint32m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u32m4_tu(vd, vs1, vs2, vl);
 }
 
-vuint32m4_t test_vnmsub_vx_u32m4_tu(vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vnmsub_vx_u32m4_tu(vuint32m4_t vd, uint32_t rs1,
+                                    vuint32m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u32m4_tu(vd, rs1, vs2, vl);
 }
 
-vuint32m8_t test_vnmsub_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vnmsub_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs1,
+                                    vuint32m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u32m8_tu(vd, vs1, vs2, vl);
 }
 
-vuint32m8_t test_vnmsub_vx_u32m8_tu(vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vnmsub_vx_u32m8_tu(vuint32m8_t vd, uint32_t rs1,
+                                    vuint32m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u32m8_tu(vd, rs1, vs2, vl);
 }
 
-vuint64m1_t test_vnmsub_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vnmsub_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs1,
+                                    vuint64m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u64m1_tu(vd, vs1, vs2, vl);
 }
 
-vuint64m1_t test_vnmsub_vx_u64m1_tu(vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vnmsub_vx_u64m1_tu(vuint64m1_t vd, uint64_t rs1,
+                                    vuint64m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u64m1_tu(vd, rs1, vs2, vl);
 }
 
-vuint64m2_t test_vnmsub_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vnmsub_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs1,
+                                    vuint64m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u64m2_tu(vd, vs1, vs2, vl);
 }
 
-vuint64m2_t test_vnmsub_vx_u64m2_tu(vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vnmsub_vx_u64m2_tu(vuint64m2_t vd, uint64_t rs1,
+                                    vuint64m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u64m2_tu(vd, rs1, vs2, vl);
 }
 
-vuint64m4_t test_vnmsub_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vnmsub_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs1,
+                                    vuint64m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u64m4_tu(vd, vs1, vs2, vl);
 }
 
-vuint64m4_t test_vnmsub_vx_u64m4_tu(vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vnmsub_vx_u64m4_tu(vuint64m4_t vd, uint64_t rs1,
+                                    vuint64m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u64m4_tu(vd, rs1, vs2, vl);
 }
 
-vuint64m8_t test_vnmsub_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vnmsub_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs1,
+                                    vuint64m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u64m8_tu(vd, vs1, vs2, vl);
 }
 
-vuint64m8_t test_vnmsub_vx_u64m8_tu(vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vnmsub_vx_u64m8_tu(vuint64m8_t vd, uint64_t rs1,
+                                    vuint64m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u64m8_tu(vd, rs1, vs2, vl);
 }
 
-vint8mf8_t test_vnmsub_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) {
+vint8mf8_t test_vnmsub_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1,
+                                    vint8mf8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8mf8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint8mf8_t test_vnmsub_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) {
+vint8mf8_t test_vnmsub_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, int8_t rs1,
+                                    vint8mf8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8mf8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint8mf4_t test_vnmsub_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) {
+vint8mf4_t test_vnmsub_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1,
+                                    vint8mf4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8mf4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint8mf4_t test_vnmsub_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) {
+vint8mf4_t test_vnmsub_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, int8_t rs1,
+                                    vint8mf4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8mf4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint8mf2_t test_vnmsub_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) {
+vint8mf2_t test_vnmsub_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1,
+                                    vint8mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8mf2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint8mf2_t test_vnmsub_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) {
+vint8mf2_t test_vnmsub_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, int8_t rs1,
+                                    vint8mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8mf2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint8m1_t test_vnmsub_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) {
+vint8m1_t test_vnmsub_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1,
+                                  vint8m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8m1_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint8m1_t test_vnmsub_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) {
+vint8m1_t test_vnmsub_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, int8_t rs1,
+                                  vint8m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8m1_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint8m2_t test_vnmsub_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) {
+vint8m2_t test_vnmsub_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1,
+                                  vint8m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint8m2_t test_vnmsub_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) {
+vint8m2_t test_vnmsub_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, int8_t rs1,
+                                  vint8m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8m2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint8m4_t test_vnmsub_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) {
+vint8m4_t test_vnmsub_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1,
+                                  vint8m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8m4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint8m4_t test_vnmsub_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) {
+vint8m4_t test_vnmsub_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, int8_t rs1,
+                                  vint8m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8m4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint8m8_t test_vnmsub_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) {
+vint8m8_t test_vnmsub_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1,
+                                  vint8m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8m8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint8m8_t test_vnmsub_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) {
+vint8m8_t test_vnmsub_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, int8_t rs1,
+                                  vint8m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8m8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint16mf4_t test_vnmsub_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) {
+vint16mf4_t test_vnmsub_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd,
+                                      vint16mf4_t vs1, vint16mf4_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_i16mf4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint16mf4_t test_vnmsub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) {
+vint16mf4_t test_vnmsub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, int16_t rs1,
+                                      vint16mf4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i16mf4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint16mf2_t test_vnmsub_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) {
+vint16mf2_t test_vnmsub_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd,
+                                      vint16mf2_t vs1, vint16mf2_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_i16mf2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint16mf2_t test_vnmsub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) {
+vint16mf2_t test_vnmsub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, int16_t rs1,
+                                      vint16mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i16mf2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint16m1_t test_vnmsub_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) {
+vint16m1_t test_vnmsub_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1,
+                                    vint16m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i16m1_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint16m1_t test_vnmsub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) {
+vint16m1_t test_vnmsub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, int16_t rs1,
+                                    vint16m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i16m1_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint16m2_t test_vnmsub_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) {
+vint16m2_t test_vnmsub_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1,
+                                    vint16m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i16m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint16m2_t test_vnmsub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) {
+vint16m2_t test_vnmsub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, int16_t rs1,
+                                    vint16m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i16m2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint16m4_t test_vnmsub_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) {
+vint16m4_t test_vnmsub_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1,
+                                    vint16m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i16m4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint16m4_t test_vnmsub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) {
+vint16m4_t test_vnmsub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, int16_t rs1,
+                                    vint16m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i16m4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint16m8_t test_vnmsub_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) {
+vint16m8_t test_vnmsub_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1,
+                                    vint16m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i16m8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint16m8_t test_vnmsub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) {
+vint16m8_t test_vnmsub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, int16_t rs1,
+                                    vint16m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i16m8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint32mf2_t test_vnmsub_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) {
+vint32mf2_t test_vnmsub_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd,
+                                      vint32mf2_t vs1, vint32mf2_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_i32mf2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint32mf2_t test_vnmsub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) {
+vint32mf2_t test_vnmsub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, int32_t rs1,
+                                      vint32mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i32mf2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint32m1_t test_vnmsub_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) {
+vint32m1_t test_vnmsub_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1,
+                                    vint32m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i32m1_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint32m1_t test_vnmsub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) {
+vint32m1_t test_vnmsub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, int32_t rs1,
+                                    vint32m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i32m1_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint32m2_t test_vnmsub_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) {
+vint32m2_t test_vnmsub_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1,
+                                    vint32m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i32m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint32m2_t test_vnmsub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) {
+vint32m2_t test_vnmsub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, int32_t rs1,
+                                    vint32m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i32m2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint32m4_t test_vnmsub_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) {
+vint32m4_t test_vnmsub_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1,
+                                    vint32m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i32m4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint32m4_t test_vnmsub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) {
+vint32m4_t test_vnmsub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, int32_t rs1,
+                                    vint32m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i32m4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint32m8_t test_vnmsub_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) {
+vint32m8_t test_vnmsub_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1,
+                                    vint32m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i32m8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint32m8_t test_vnmsub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) {
+vint32m8_t test_vnmsub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, int32_t rs1,
+                                    vint32m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i32m8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint64m1_t test_vnmsub_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) {
+vint64m1_t test_vnmsub_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1,
+                                    vint64m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i64m1_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint64m1_t test_vnmsub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) {
+vint64m1_t test_vnmsub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, int64_t rs1,
+                                    vint64m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i64m1_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint64m2_t test_vnmsub_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) {
+vint64m2_t test_vnmsub_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1,
+                                    vint64m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i64m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint64m2_t test_vnmsub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) {
+vint64m2_t test_vnmsub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, int64_t rs1,
+                                    vint64m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i64m2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint64m4_t test_vnmsub_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) {
+vint64m4_t test_vnmsub_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1,
+                                    vint64m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i64m4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint64m4_t test_vnmsub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, int64_t rs1, vint64m4_t vs2, size_t vl) {
+vint64m4_t test_vnmsub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, int64_t rs1,
+                                    vint64m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i64m4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint64m8_t test_vnmsub_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) {
+vint64m8_t test_vnmsub_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1,
+                                    vint64m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i64m8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vint64m8_t test_vnmsub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) {
+vint64m8_t test_vnmsub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, int64_t rs1,
+                                    vint64m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i64m8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint8mf8_t test_vnmsub_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vnmsub_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                     vuint8mf8_t vs1, vuint8mf8_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u8mf8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint8mf8_t test_vnmsub_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vnmsub_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1,
+                                     vuint8mf8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8mf8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint8mf4_t test_vnmsub_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vnmsub_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                     vuint8mf4_t vs1, vuint8mf4_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u8mf4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint8mf4_t test_vnmsub_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vnmsub_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1,
+                                     vuint8mf4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8mf4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint8mf2_t test_vnmsub_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vnmsub_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                     vuint8mf2_t vs1, vuint8mf2_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u8mf2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint8mf2_t test_vnmsub_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vnmsub_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1,
+                                     vuint8mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8mf2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint8m1_t test_vnmsub_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vnmsub_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1,
+                                   vuint8m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u8m1_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint8m1_t test_vnmsub_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vnmsub_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, uint8_t rs1,
+                                   vuint8m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8m1_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint8m2_t test_vnmsub_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vnmsub_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1,
+                                   vuint8m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u8m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint8m2_t test_vnmsub_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vnmsub_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, uint8_t rs1,
+                                   vuint8m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8m2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint8m4_t test_vnmsub_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vnmsub_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1,
+                                   vuint8m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u8m4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint8m4_t test_vnmsub_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vnmsub_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, uint8_t rs1,
+                                   vuint8m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8m4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint8m8_t test_vnmsub_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vnmsub_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1,
+                                   vuint8m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u8m8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint8m8_t test_vnmsub_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vnmsub_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, uint8_t rs1,
+                                   vuint8m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8m8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint16mf4_t test_vnmsub_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vnmsub_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                       vuint16mf4_t vs1, vuint16mf4_t vs2,
+                                       size_t vl) {
   return __riscv_vnmsub_vv_u16mf4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint16mf4_t test_vnmsub_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vnmsub_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                       uint16_t rs1, vuint16mf4_t vs2,
+                                       size_t vl) {
   return __riscv_vnmsub_vx_u16mf4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint16mf2_t test_vnmsub_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vnmsub_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                       vuint16mf2_t vs1, vuint16mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vnmsub_vv_u16mf2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint16mf2_t test_vnmsub_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vnmsub_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                       uint16_t rs1, vuint16mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vnmsub_vx_u16mf2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint16m1_t test_vnmsub_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vnmsub_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                     vuint16m1_t vs1, vuint16m1_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u16m1_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint16m1_t test_vnmsub_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vnmsub_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, uint16_t rs1,
+                                     vuint16m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16m1_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint16m2_t test_vnmsub_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vnmsub_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd,
+                                     vuint16m2_t vs1, vuint16m2_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u16m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint16m2_t test_vnmsub_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vnmsub_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, uint16_t rs1,
+                                     vuint16m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16m2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint16m4_t test_vnmsub_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vnmsub_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd,
+                                     vuint16m4_t vs1, vuint16m4_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u16m4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint16m4_t test_vnmsub_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vnmsub_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, uint16_t rs1,
+                                     vuint16m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16m4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint16m8_t test_vnmsub_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vnmsub_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd,
+                                     vuint16m8_t vs1, vuint16m8_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u16m8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint16m8_t test_vnmsub_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vnmsub_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, uint16_t rs1,
+                                     vuint16m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16m8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint32mf2_t test_vnmsub_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vnmsub_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                       vuint32mf2_t vs1, vuint32mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vnmsub_vv_u32mf2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint32mf2_t test_vnmsub_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vnmsub_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                       uint32_t rs1, vuint32mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vnmsub_vx_u32mf2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint32m1_t test_vnmsub_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vnmsub_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                     vuint32m1_t vs1, vuint32m1_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u32m1_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint32m1_t test_vnmsub_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vnmsub_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, uint32_t rs1,
+                                     vuint32m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u32m1_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint32m2_t test_vnmsub_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vnmsub_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                     vuint32m2_t vs1, vuint32m2_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u32m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint32m2_t test_vnmsub_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vnmsub_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, uint32_t rs1,
+                                     vuint32m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u32m2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint32m4_t test_vnmsub_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vnmsub_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd,
+                                     vuint32m4_t vs1, vuint32m4_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u32m4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint32m4_t test_vnmsub_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vnmsub_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, uint32_t rs1,
+                                     vuint32m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u32m4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint32m8_t test_vnmsub_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vnmsub_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd,
+                                     vuint32m8_t vs1, vuint32m8_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u32m8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint32m8_t test_vnmsub_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vnmsub_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, uint32_t rs1,
+                                     vuint32m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u32m8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m1_t test_vnmsub_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vnmsub_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                     vuint64m1_t vs1, vuint64m1_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u64m1_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m1_t test_vnmsub_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vnmsub_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, uint64_t rs1,
+                                     vuint64m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u64m1_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m2_t test_vnmsub_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vnmsub_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                     vuint64m2_t vs1, vuint64m2_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u64m2_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m2_t test_vnmsub_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vnmsub_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, uint64_t rs1,
+                                     vuint64m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u64m2_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m4_t test_vnmsub_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vnmsub_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                     vuint64m4_t vs1, vuint64m4_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u64m4_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m4_t test_vnmsub_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vnmsub_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, uint64_t rs1,
+                                     vuint64m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u64m4_tum(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m8_t test_vnmsub_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vnmsub_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+                                     vuint64m8_t vs1, vuint64m8_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_u64m8_tum(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m8_t test_vnmsub_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vnmsub_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, uint64_t rs1,
+                                     vuint64m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u64m8_tum(vm, vd, rs1, vs2, vl);
 }
 
-vint8mf8_t test_vnmsub_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) {
+vint8mf8_t test_vnmsub_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd,
+                                     vint8mf8_t vs1, vint8mf8_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_i8mf8_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint8mf8_t test_vnmsub_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) {
+vint8mf8_t test_vnmsub_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, int8_t rs1,
+                                     vint8mf8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8mf8_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint8mf4_t test_vnmsub_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) {
+vint8mf4_t test_vnmsub_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd,
+                                     vint8mf4_t vs1, vint8mf4_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_i8mf4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint8mf4_t test_vnmsub_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) {
+vint8mf4_t test_vnmsub_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, int8_t rs1,
+                                     vint8mf4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8mf4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint8mf2_t test_vnmsub_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) {
+vint8mf2_t test_vnmsub_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd,
+                                     vint8mf2_t vs1, vint8mf2_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_i8mf2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint8mf2_t test_vnmsub_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) {
+vint8mf2_t test_vnmsub_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, int8_t rs1,
+                                     vint8mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8mf2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint8m1_t test_vnmsub_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) {
+vint8m1_t test_vnmsub_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1,
+                                   vint8m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8m1_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint8m1_t test_vnmsub_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) {
+vint8m1_t test_vnmsub_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, int8_t rs1,
+                                   vint8m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8m1_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint8m2_t test_vnmsub_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) {
+vint8m2_t test_vnmsub_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1,
+                                   vint8m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8m2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint8m2_t test_vnmsub_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) {
+vint8m2_t test_vnmsub_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, int8_t rs1,
+                                   vint8m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8m2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint8m4_t test_vnmsub_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) {
+vint8m4_t test_vnmsub_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1,
+                                   vint8m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8m4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint8m4_t test_vnmsub_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) {
+vint8m4_t test_vnmsub_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, int8_t rs1,
+                                   vint8m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8m4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint8m8_t test_vnmsub_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) {
+vint8m8_t test_vnmsub_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1,
+                                   vint8m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8m8_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint8m8_t test_vnmsub_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) {
+vint8m8_t test_vnmsub_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, int8_t rs1,
+                                   vint8m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8m8_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint16mf4_t test_vnmsub_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) {
+vint16mf4_t test_vnmsub_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd,
+                                       vint16mf4_t vs1, vint16mf4_t vs2,
+                                       size_t vl) {
   return __riscv_vnmsub_vv_i16mf4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint16mf4_t test_vnmsub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) {
+vint16mf4_t test_vnmsub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd,
+                                       int16_t rs1, vint16mf4_t vs2,
+                                       size_t vl) {
   return __riscv_vnmsub_vx_i16mf4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint16mf2_t test_vnmsub_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) {
+vint16mf2_t test_vnmsub_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd,
+                                       vint16mf2_t vs1, vint16mf2_t vs2,
+                                       size_t vl) {
  return __riscv_vnmsub_vv_i16mf2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint16mf2_t test_vnmsub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) {
+vint16mf2_t test_vnmsub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd,
+                                       int16_t rs1, vint16mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vnmsub_vx_i16mf2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint16m1_t test_vnmsub_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) {
+vint16m1_t test_vnmsub_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd,
+                                     vint16m1_t vs1, vint16m1_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_i16m1_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint16m1_t test_vnmsub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) {
+vint16m1_t test_vnmsub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int16_t rs1,
+                                     vint16m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i16m1_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint16m2_t test_vnmsub_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) {
+vint16m2_t test_vnmsub_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1,
+                                     vint16m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i16m2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint16m2_t test_vnmsub_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) {
+vint16m2_t test_vnmsub_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int16_t rs1,
+                                     vint16m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i16m2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint16m4_t test_vnmsub_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) {
+vint16m4_t test_vnmsub_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1,
+                                     vint16m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i16m4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint16m4_t test_vnmsub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) {
+vint16m4_t test_vnmsub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int16_t rs1,
+                                     vint16m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i16m4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint16m8_t test_vnmsub_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) {
+vint16m8_t test_vnmsub_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1,
+                                     vint16m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i16m8_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint16m8_t test_vnmsub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) {
+vint16m8_t test_vnmsub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int16_t rs1,
+                                     vint16m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i16m8_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint32mf2_t test_vnmsub_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) {
+vint32mf2_t test_vnmsub_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd,
+                                       vint32mf2_t vs1, vint32mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vnmsub_vv_i32mf2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint32mf2_t test_vnmsub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) {
+vint32mf2_t test_vnmsub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd,
+                                       int32_t rs1, vint32mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vnmsub_vx_i32mf2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint32m1_t test_vnmsub_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) {
+vint32m1_t test_vnmsub_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd,
+                                     vint32m1_t vs1, vint32m1_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_i32m1_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint32m1_t test_vnmsub_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) {
+vint32m1_t test_vnmsub_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int32_t rs1,
+                                     vint32m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i32m1_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint32m2_t test_vnmsub_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) {
+vint32m2_t test_vnmsub_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd,
+                                     vint32m2_t vs1, vint32m2_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_i32m2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint32m2_t test_vnmsub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) {
+vint32m2_t test_vnmsub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int32_t rs1,
+                                     vint32m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i32m2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint32m4_t test_vnmsub_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) {
+vint32m4_t test_vnmsub_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1,
+                                     vint32m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i32m4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint32m4_t test_vnmsub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) {
+vint32m4_t test_vnmsub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int32_t rs1,
+                                     vint32m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i32m4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint32m8_t test_vnmsub_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) {
+vint32m8_t test_vnmsub_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1,
+                                     vint32m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i32m8_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint32m8_t test_vnmsub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) {
+vint32m8_t test_vnmsub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int32_t rs1,
+                                     vint32m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i32m8_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint64m1_t test_vnmsub_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) {
+vint64m1_t test_vnmsub_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd,
+                                     vint64m1_t vs1, vint64m1_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_i64m1_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint64m1_t test_vnmsub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) {
+vint64m1_t test_vnmsub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int64_t rs1,
+                                     vint64m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i64m1_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint64m2_t test_vnmsub_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) {
+vint64m2_t test_vnmsub_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd,
+                                     vint64m2_t vs1, vint64m2_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_i64m2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint64m2_t test_vnmsub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) {
+vint64m2_t test_vnmsub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int64_t rs1,
+                                     vint64m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i64m2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint64m4_t test_vnmsub_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) {
+vint64m4_t test_vnmsub_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd,
+                                     vint64m4_t vs1, vint64m4_t vs2,
+                                     size_t vl) {
   return __riscv_vnmsub_vv_i64m4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint64m4_t test_vnmsub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, int64_t rs1, vint64m4_t vs2, size_t vl) {
+vint64m4_t test_vnmsub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, int64_t rs1,
+                                     vint64m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i64m4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint64m8_t test_vnmsub_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) {
+vint64m8_t test_vnmsub_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1,
+                                     vint64m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i64m8_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vint64m8_t test_vnmsub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) {
+vint64m8_t test_vnmsub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int64_t rs1,
+                                     vint64m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i64m8_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint8mf8_t test_vnmsub_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vnmsub_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                      vuint8mf8_t vs1, vuint8mf8_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u8mf8_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint8mf8_t test_vnmsub_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) {
+vuint8mf8_t test_vnmsub_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1,
+                                      vuint8mf8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8mf8_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint8mf4_t test_vnmsub_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vnmsub_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                      vuint8mf4_t vs1, vuint8mf4_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u8mf4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint8mf4_t test_vnmsub_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) {
+vuint8mf4_t test_vnmsub_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1,
+                                      vuint8mf4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8mf4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint8mf2_t test_vnmsub_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vnmsub_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                      vuint8mf2_t vs1, vuint8mf2_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u8mf2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint8mf2_t test_vnmsub_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) {
+vuint8mf2_t test_vnmsub_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1,
+                                      vuint8mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8mf2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint8m1_t test_vnmsub_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vnmsub_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1,
+                                    vuint8m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u8m1_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint8m1_t test_vnmsub_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) {
+vuint8m1_t test_vnmsub_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1,
+                                    vuint8m1_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8m1_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint8m2_t test_vnmsub_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vnmsub_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1,
+                                    vuint8m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u8m2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint8m2_t test_vnmsub_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) {
+vuint8m2_t test_vnmsub_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1,
+                                    vuint8m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8m2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint8m4_t test_vnmsub_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vnmsub_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1,
+                                    vuint8m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u8m4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint8m4_t test_vnmsub_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) {
+vuint8m4_t test_vnmsub_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1,
+                                    vuint8m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8m4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint8m8_t test_vnmsub_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vnmsub_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1,
+                                    vuint8m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_u8m8_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint8m8_t test_vnmsub_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) {
+vuint8m8_t test_vnmsub_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, uint8_t rs1,
+                                    vuint8m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u8m8_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint16mf4_t test_vnmsub_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vnmsub_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                        vuint16mf4_t vs1, vuint16mf4_t vs2,
+                                        size_t vl) {
   return __riscv_vnmsub_vv_u16mf4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint16mf4_t test_vnmsub_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) {
+vuint16mf4_t test_vnmsub_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                        uint16_t rs1, vuint16mf4_t vs2,
+                                        size_t vl) {
   return __riscv_vnmsub_vx_u16mf4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint16mf2_t test_vnmsub_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vnmsub_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                        vuint16mf2_t vs1, vuint16mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vnmsub_vv_u16mf2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint16mf2_t test_vnmsub_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) {
+vuint16mf2_t test_vnmsub_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                        uint16_t rs1, vuint16mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vnmsub_vx_u16mf2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint16m1_t test_vnmsub_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vnmsub_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                      vuint16m1_t vs1, vuint16m1_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u16m1_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint16m1_t test_vnmsub_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) {
+vuint16m1_t test_vnmsub_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                      uint16_t rs1, vuint16m1_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vx_u16m1_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint16m2_t test_vnmsub_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vnmsub_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+                                      vuint16m2_t vs1, vuint16m2_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u16m2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint16m2_t test_vnmsub_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) {
+vuint16m2_t test_vnmsub_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1,
+                                      vuint16m2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16m2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint16m4_t test_vnmsub_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vnmsub_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+                                      vuint16m4_t vs1, vuint16m4_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u16m4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint16m4_t test_vnmsub_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) {
+vuint16m4_t test_vnmsub_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1,
+                                      vuint16m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16m4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint16m8_t test_vnmsub_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vnmsub_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+                                      vuint16m8_t vs1, vuint16m8_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u16m8_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint16m8_t test_vnmsub_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) {
+vuint16m8_t test_vnmsub_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1,
+                                      vuint16m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u16m8_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint32mf2_t test_vnmsub_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vnmsub_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                        vuint32mf2_t vs1, vuint32mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vnmsub_vv_u32mf2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint32mf2_t test_vnmsub_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) {
+vuint32mf2_t test_vnmsub_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+                                        uint32_t rs1, vuint32mf2_t vs2,
+                                        size_t vl) {
   return __riscv_vnmsub_vx_u32mf2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint32m1_t test_vnmsub_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vnmsub_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                      vuint32m1_t vs1, vuint32m1_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u32m1_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint32m1_t test_vnmsub_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) {
+vuint32m1_t test_vnmsub_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd,
+                                      uint32_t rs1, vuint32m1_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vx_u32m1_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint32m2_t test_vnmsub_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vnmsub_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                      vuint32m2_t vs1, vuint32m2_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u32m2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint32m2_t test_vnmsub_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) {
+vuint32m2_t test_vnmsub_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd,
+                                      uint32_t rs1, vuint32m2_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vx_u32m2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint32m4_t test_vnmsub_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vnmsub_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd,
+                                      vuint32m4_t vs1, vuint32m4_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u32m4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint32m4_t test_vnmsub_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) {
+vuint32m4_t test_vnmsub_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1,
+                                      vuint32m4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u32m4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint32m8_t test_vnmsub_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vnmsub_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd,
+                                      vuint32m8_t vs1, vuint32m8_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u32m8_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint32m8_t test_vnmsub_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) {
+vuint32m8_t test_vnmsub_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1,
+                                      vuint32m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u32m8_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m1_t test_vnmsub_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vnmsub_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                      vuint64m1_t vs1, vuint64m1_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u64m1_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m1_t test_vnmsub_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) {
+vuint64m1_t test_vnmsub_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                      uint64_t rs1, vuint64m1_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vx_u64m1_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m2_t test_vnmsub_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vnmsub_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                      vuint64m2_t vs1, vuint64m2_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u64m2_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m2_t test_vnmsub_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) {
+vuint64m2_t test_vnmsub_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                      uint64_t rs1, vuint64m2_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vx_u64m2_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m4_t test_vnmsub_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vnmsub_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                      vuint64m4_t vs1, vuint64m4_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u64m4_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m4_t test_vnmsub_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) {
+vuint64m4_t test_vnmsub_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                      uint64_t rs1, vuint64m4_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vx_u64m4_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vuint64m8_t test_vnmsub_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vnmsub_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                      vuint64m8_t vs1, vuint64m8_t vs2,
+                                      size_t vl) {
   return __riscv_vnmsub_vv_u64m8_tumu(vm, vd, vs1, vs2, vl);
 }
 
-vuint64m8_t test_vnmsub_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) {
+vuint64m8_t test_vnmsub_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1,
+                                      vuint64m8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_u64m8_tumu(vm, vd, rs1, vs2, vl);
 }
 
-vint8mf8_t test_vnmsub_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) {
+vint8mf8_t test_vnmsub_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs1,
+                                   vint8mf8_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8mf8_mu(vm, vd, vs1, vs2, vl);
 }
 
-vint8mf8_t test_vnmsub_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) {
+vint8mf8_t test_vnmsub_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, int8_t rs1,
+                                   vint8mf8_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8mf8_mu(vm, vd, rs1, vs2, vl);
 }
 
-vint8mf4_t test_vnmsub_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) {
+vint8mf4_t test_vnmsub_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs1,
+                                   vint8mf4_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8mf4_mu(vm, vd, vs1, vs2, vl);
 }
 
-vint8mf4_t test_vnmsub_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) {
+vint8mf4_t test_vnmsub_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, int8_t rs1,
+                                   vint8mf4_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8mf4_mu(vm, vd, rs1, vs2, vl);
 }
 
-vint8mf2_t test_vnmsub_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) {
+vint8mf2_t test_vnmsub_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs1,
+                                   vint8mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vv_i8mf2_mu(vm, vd, vs1, vs2, vl);
 }
 
-vint8mf2_t test_vnmsub_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) {
+vint8mf2_t test_vnmsub_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, int8_t rs1,
+                                   vint8mf2_t vs2, size_t vl) {
   return __riscv_vnmsub_vx_i8mf2_mu(vm, vd, rs1, vs2, vl);
 }
 
-vint8m1_t test_vnmsub_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) {
vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsub_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs1, + vint8m1_t vs2, size_t vl) { return __riscv_vnmsub_vv_i8m1_mu(vm, vd, vs1, vs2, vl); } -vint8m1_t test_vnmsub_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint8m1_t test_vnmsub_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, int8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vnmsub_vx_i8m1_mu(vm, vd, rs1, vs2, vl); } -vint8m2_t test_vnmsub_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsub_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs1, + vint8m2_t vs2, size_t vl) { return __riscv_vnmsub_vv_i8m2_mu(vm, vd, vs1, vs2, vl); } -vint8m2_t test_vnmsub_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint8m2_t test_vnmsub_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, int8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vnmsub_vx_i8m2_mu(vm, vd, rs1, vs2, vl); } -vint8m4_t test_vnmsub_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsub_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs1, + vint8m4_t vs2, size_t vl) { return __riscv_vnmsub_vv_i8m4_mu(vm, vd, vs1, vs2, vl); } -vint8m4_t test_vnmsub_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint8m4_t test_vnmsub_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, int8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vnmsub_vx_i8m4_mu(vm, vd, rs1, vs2, vl); } -vint8m8_t test_vnmsub_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vnmsub_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs1, + vint8m8_t vs2, size_t vl) { return __riscv_vnmsub_vv_i8m8_mu(vm, vd, vs1, vs2, vl); } -vint8m8_t test_vnmsub_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, int8_t rs1, vint8m8_t vs2, size_t vl) { +vint8m8_t test_vnmsub_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, int8_t rs1, + vint8m8_t vs2, size_t vl) { return __riscv_vnmsub_vx_i8m8_mu(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vnmsub_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsub_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vnmsub_vv_i16mf4_mu(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vnmsub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint16mf4_t test_vnmsub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vnmsub_vx_i16mf4_mu(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vnmsub_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsub_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vnmsub_vv_i16mf2_mu(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vnmsub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint16mf2_t test_vnmsub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vnmsub_vx_i16mf2_mu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vnmsub_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsub_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vnmsub_vv_i16m1_mu(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vnmsub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, int16_t 
rs1, vint16m1_t vs2, size_t vl) { +vint16m1_t test_vnmsub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, int16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vnmsub_vx_i16m1_mu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vnmsub_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsub_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vnmsub_vv_i16m2_mu(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vnmsub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint16m2_t test_vnmsub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, int16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vnmsub_vx_i16m2_mu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vnmsub_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsub_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vnmsub_vv_i16m4_mu(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vnmsub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint16m4_t test_vnmsub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, int16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vnmsub_vx_i16m4_mu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vnmsub_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vnmsub_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs1, + vint16m8_t vs2, size_t vl) { return __riscv_vnmsub_vv_i16m8_mu(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vnmsub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, int16_t rs1, vint16m8_t vs2, size_t vl) { +vint16m8_t test_vnmsub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, int16_t rs1, + vint16m8_t vs2, size_t vl) { return __riscv_vnmsub_vx_i16m8_mu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vnmsub_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsub_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vnmsub_vv_i32mf2_mu(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vnmsub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint32mf2_t test_vnmsub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vnmsub_vx_i32mf2_mu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vnmsub_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsub_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vnmsub_vv_i32m1_mu(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vnmsub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint32m1_t test_vnmsub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, int32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vnmsub_vx_i32m1_mu(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vnmsub_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsub_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vnmsub_vv_i32m2_mu(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vnmsub_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint32m2_t test_vnmsub_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, int32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vnmsub_vx_i32m2_mu(vm, vd, rs1, vs2, vl); } 
-vint32m4_t test_vnmsub_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vnmsub_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vnmsub_vv_i32m4_mu(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vnmsub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint32m4_t test_vnmsub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, int32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vnmsub_vx_i32m4_mu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vnmsub_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsub_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs1, + vint32m8_t vs2, size_t vl) { return __riscv_vnmsub_vv_i32m8_mu(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vnmsub_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, int32_t rs1, vint32m8_t vs2, size_t vl) { +vint32m8_t test_vnmsub_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, int32_t rs1, + vint32m8_t vs2, size_t vl) { return __riscv_vnmsub_vx_i32m8_mu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vnmsub_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsub_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs1, + vint64m1_t vs2, size_t vl) { return __riscv_vnmsub_vv_i64m1_mu(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vnmsub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, int64_t rs1, vint64m1_t vs2, size_t vl) { +vint64m1_t test_vnmsub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, int64_t rs1, + vint64m1_t vs2, size_t vl) { return __riscv_vnmsub_vx_i64m1_mu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vnmsub_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsub_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs1, + vint64m2_t vs2, size_t vl) { return __riscv_vnmsub_vv_i64m2_mu(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vnmsub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, int64_t rs1, vint64m2_t vs2, size_t vl) { +vint64m2_t test_vnmsub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, int64_t rs1, + vint64m2_t vs2, size_t vl) { return __riscv_vnmsub_vx_i64m2_mu(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vnmsub_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsub_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs1, + vint64m4_t vs2, size_t vl) { return __riscv_vnmsub_vv_i64m4_mu(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vnmsub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, int64_t rs1, vint64m4_t vs2, size_t vl) { +vint64m4_t test_vnmsub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, int64_t rs1, + vint64m4_t vs2, size_t vl) { return __riscv_vnmsub_vx_i64m4_mu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vnmsub_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsub_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs1, + vint64m8_t vs2, size_t vl) { return __riscv_vnmsub_vv_i64m8_mu(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vnmsub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, int64_t rs1, vint64m8_t vs2, size_t vl) { +vint64m8_t test_vnmsub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, int64_t rs1, + vint64m8_t vs2, size_t vl) { return __riscv_vnmsub_vx_i64m8_mu(vm, vd, rs1, vs2, vl); } -vuint8mf8_t test_vnmsub_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsub_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) 
{ return __riscv_vnmsub_vv_u8mf8_mu(vm, vd, vs1, vs2, vl); } -vuint8mf8_t test_vnmsub_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint8mf8_t test_vnmsub_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, uint8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vnmsub_vx_u8mf8_mu(vm, vd, rs1, vs2, vl); } -vuint8mf4_t test_vnmsub_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsub_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u8mf4_mu(vm, vd, vs1, vs2, vl); } -vuint8mf4_t test_vnmsub_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint8mf4_t test_vnmsub_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, uint8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vnmsub_vx_u8mf4_mu(vm, vd, rs1, vs2, vl); } -vuint8mf2_t test_vnmsub_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsub_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u8mf2_mu(vm, vd, vs1, vs2, vl); } -vuint8mf2_t test_vnmsub_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint8mf2_t test_vnmsub_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vnmsub_vx_u8mf2_mu(vm, vd, rs1, vs2, vl); } -vuint8m1_t test_vnmsub_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vnmsub_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vnmsub_vv_u8m1_mu(vm, vd, vs1, vs2, vl); } -vuint8m1_t test_vnmsub_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint8m1_t test_vnmsub_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vnmsub_vx_u8m1_mu(vm, vd, rs1, vs2, vl); } -vuint8m2_t test_vnmsub_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsub_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vnmsub_vv_u8m2_mu(vm, vd, vs1, vs2, vl); } -vuint8m2_t test_vnmsub_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint8m2_t test_vnmsub_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vnmsub_vx_u8m2_mu(vm, vd, rs1, vs2, vl); } -vuint8m4_t test_vnmsub_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vnmsub_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vnmsub_vv_u8m4_mu(vm, vd, vs1, vs2, vl); } -vuint8m4_t test_vnmsub_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint8m4_t test_vnmsub_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, uint8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vnmsub_vx_u8m4_mu(vm, vd, rs1, vs2, vl); } -vuint8m8_t test_vnmsub_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vnmsub_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vnmsub_vv_u8m8_mu(vm, vd, vs1, vs2, vl); } -vuint8m8_t test_vnmsub_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, uint8_t rs1, vuint8m8_t vs2, size_t vl) { +vuint8m8_t test_vnmsub_vx_u8m8_mu(vbool1_t vm, 
vuint8m8_t vd, uint8_t rs1, + vuint8m8_t vs2, size_t vl) { return __riscv_vnmsub_vx_u8m8_mu(vm, vd, rs1, vs2, vl); } -vuint16mf4_t test_vnmsub_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vnmsub_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u16mf4_mu(vm, vd, vs1, vs2, vl); } -vuint16mf4_t test_vnmsub_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint16mf4_t test_vnmsub_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + uint16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vnmsub_vx_u16mf4_mu(vm, vd, rs1, vs2, vl); } -vuint16mf2_t test_vnmsub_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vnmsub_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u16mf2_mu(vm, vd, vs1, vs2, vl); } -vuint16mf2_t test_vnmsub_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint16mf2_t test_vnmsub_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + uint16_t rs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vnmsub_vx_u16mf2_mu(vm, vd, rs1, vs2, vl); } -vuint16m1_t test_vnmsub_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vnmsub_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u16m1_mu(vm, vd, vs1, vs2, vl); } -vuint16m1_t test_vnmsub_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint16m1_t test_vnmsub_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, uint16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vnmsub_vx_u16m1_mu(vm, vd, rs1, vs2, vl); } -vuint16m2_t test_vnmsub_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vnmsub_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u16m2_mu(vm, vd, vs1, vs2, vl); } -vuint16m2_t test_vnmsub_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint16m2_t test_vnmsub_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vnmsub_vx_u16m2_mu(vm, vd, rs1, vs2, vl); } -vuint16m4_t test_vnmsub_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vnmsub_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u16m4_mu(vm, vd, vs1, vs2, vl); } -vuint16m4_t test_vnmsub_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint16m4_t test_vnmsub_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vnmsub_vx_u16m4_mu(vm, vd, rs1, vs2, vl); } -vuint16m8_t test_vnmsub_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vnmsub_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs1, vuint16m8_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u16m8_mu(vm, vd, vs1, vs2, vl); } -vuint16m8_t test_vnmsub_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, vuint16m8_t vs2, size_t vl) { +vuint16m8_t test_vnmsub_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, uint16_t rs1, + vuint16m8_t vs2, size_t vl) { return 
__riscv_vnmsub_vx_u16m8_mu(vm, vd, rs1, vs2, vl); } -vuint32mf2_t test_vnmsub_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vnmsub_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u32mf2_mu(vm, vd, vs1, vs2, vl); } -vuint32mf2_t test_vnmsub_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint32mf2_t test_vnmsub_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + uint32_t rs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vnmsub_vx_u32mf2_mu(vm, vd, rs1, vs2, vl); } -vuint32m1_t test_vnmsub_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vnmsub_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u32m1_mu(vm, vd, vs1, vs2, vl); } -vuint32m1_t test_vnmsub_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint32m1_t test_vnmsub_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, uint32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vnmsub_vx_u32m1_mu(vm, vd, rs1, vs2, vl); } -vuint32m2_t test_vnmsub_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vnmsub_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u32m2_mu(vm, vd, vs1, vs2, vl); } -vuint32m2_t test_vnmsub_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint32m2_t test_vnmsub_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, uint32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vnmsub_vx_u32m2_mu(vm, vd, rs1, vs2, vl); } -vuint32m4_t test_vnmsub_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vnmsub_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u32m4_mu(vm, vd, vs1, vs2, vl); } -vuint32m4_t test_vnmsub_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint32m4_t test_vnmsub_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vnmsub_vx_u32m4_mu(vm, vd, rs1, vs2, vl); } -vuint32m8_t test_vnmsub_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vnmsub_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs1, vuint32m8_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u32m8_mu(vm, vd, vs1, vs2, vl); } -vuint32m8_t test_vnmsub_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, vuint32m8_t vs2, size_t vl) { +vuint32m8_t test_vnmsub_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, uint32_t rs1, + vuint32m8_t vs2, size_t vl) { return __riscv_vnmsub_vx_u32m8_mu(vm, vd, rs1, vs2, vl); } -vuint64m1_t test_vnmsub_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vnmsub_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs1, vuint64m1_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u64m1_mu(vm, vd, vs1, vs2, vl); } -vuint64m1_t test_vnmsub_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, vuint64m1_t vs2, size_t vl) { +vuint64m1_t test_vnmsub_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, uint64_t rs1, + vuint64m1_t vs2, size_t vl) { return __riscv_vnmsub_vx_u64m1_mu(vm, vd, rs1, vs2, vl); } -vuint64m2_t 
test_vnmsub_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vnmsub_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs1, vuint64m2_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u64m2_mu(vm, vd, vs1, vs2, vl); } -vuint64m2_t test_vnmsub_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, vuint64m2_t vs2, size_t vl) { +vuint64m2_t test_vnmsub_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, uint64_t rs1, + vuint64m2_t vs2, size_t vl) { return __riscv_vnmsub_vx_u64m2_mu(vm, vd, rs1, vs2, vl); } -vuint64m4_t test_vnmsub_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vnmsub_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs1, vuint64m4_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u64m4_mu(vm, vd, vs1, vs2, vl); } -vuint64m4_t test_vnmsub_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, vuint64m4_t vs2, size_t vl) { +vuint64m4_t test_vnmsub_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, uint64_t rs1, + vuint64m4_t vs2, size_t vl) { return __riscv_vnmsub_vx_u64m4_mu(vm, vd, rs1, vs2, vl); } -vuint64m8_t test_vnmsub_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vnmsub_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs1, vuint64m8_t vs2, + size_t vl) { return __riscv_vnmsub_vv_u64m8_mu(vm, vd, vs1, vs2, vl); } -vuint64m8_t test_vnmsub_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, vuint64m8_t vs2, size_t vl) { +vuint64m8_t test_vnmsub_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, uint64_t rs1, + vuint64m8_t vs2, size_t vl) { return __riscv_vnmsub_vx_u64m8_mu(vm, vd, rs1, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vnot.c b/auto-generated/policy_funcs/llvm-api-tests/vnot.c index a8bdd7e0a..15576f509 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vnot.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vnot.c @@ -121,11 +121,13 @@ vuint8m8_t test_vnot_v_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs, size_t vl) { return __riscv_vnot_v_u8m8_tu(vd, vs, vl); } -vuint16mf4_t test_vnot_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs, size_t vl) { +vuint16mf4_t test_vnot_v_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs, + size_t vl) { return __riscv_vnot_v_u16mf4_tu(vd, vs, vl); } -vuint16mf2_t test_vnot_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs, size_t vl) { +vuint16mf2_t test_vnot_v_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs, + size_t vl) { return __riscv_vnot_v_u16mf2_tu(vd, vs, vl); } @@ -145,7 +147,8 @@ vuint16m8_t test_vnot_v_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs, size_t vl) { return __riscv_vnot_v_u16m8_tu(vd, vs, vl); } -vuint32mf2_t test_vnot_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs, size_t vl) { +vuint32mf2_t test_vnot_v_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs, + size_t vl) { return __riscv_vnot_v_u32mf2_tu(vd, vs, vl); } @@ -181,530 +184,662 @@ vuint64m8_t test_vnot_v_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs, size_t vl) { return __riscv_vnot_v_u64m8_tu(vd, vs, vl); } -vint8mf8_t test_vnot_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs, size_t vl) { +vint8mf8_t test_vnot_v_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs, + size_t vl) { return __riscv_vnot_v_i8mf8_tum(vm, vd, vs, vl); } -vint8mf4_t test_vnot_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs, size_t vl) { +vint8mf4_t test_vnot_v_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs, + size_t vl) { return __riscv_vnot_v_i8mf4_tum(vm, vd, vs, vl); } -vint8mf2_t 
test_vnot_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs, size_t vl) { +vint8mf2_t test_vnot_v_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs, + size_t vl) { return __riscv_vnot_v_i8mf2_tum(vm, vd, vs, vl); } -vint8m1_t test_vnot_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs, size_t vl) { +vint8m1_t test_vnot_v_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs, + size_t vl) { return __riscv_vnot_v_i8m1_tum(vm, vd, vs, vl); } -vint8m2_t test_vnot_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs, size_t vl) { +vint8m2_t test_vnot_v_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs, + size_t vl) { return __riscv_vnot_v_i8m2_tum(vm, vd, vs, vl); } -vint8m4_t test_vnot_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs, size_t vl) { +vint8m4_t test_vnot_v_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs, + size_t vl) { return __riscv_vnot_v_i8m4_tum(vm, vd, vs, vl); } -vint8m8_t test_vnot_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs, size_t vl) { +vint8m8_t test_vnot_v_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs, + size_t vl) { return __riscv_vnot_v_i8m8_tum(vm, vd, vs, vl); } -vint16mf4_t test_vnot_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs, size_t vl) { +vint16mf4_t test_vnot_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs, + size_t vl) { return __riscv_vnot_v_i16mf4_tum(vm, vd, vs, vl); } -vint16mf2_t test_vnot_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs, size_t vl) { +vint16mf2_t test_vnot_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs, + size_t vl) { return __riscv_vnot_v_i16mf2_tum(vm, vd, vs, vl); } -vint16m1_t test_vnot_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs, size_t vl) { +vint16m1_t test_vnot_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs, + size_t vl) { return __riscv_vnot_v_i16m1_tum(vm, vd, vs, vl); } -vint16m2_t test_vnot_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs, size_t vl) { +vint16m2_t test_vnot_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs, + size_t vl) { return __riscv_vnot_v_i16m2_tum(vm, vd, vs, vl); } -vint16m4_t test_vnot_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs, size_t vl) { +vint16m4_t test_vnot_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs, + size_t vl) { return __riscv_vnot_v_i16m4_tum(vm, vd, vs, vl); } -vint16m8_t test_vnot_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, size_t vl) { +vint16m8_t test_vnot_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, + size_t vl) { return __riscv_vnot_v_i16m8_tum(vm, vd, vs, vl); } -vint32mf2_t test_vnot_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs, size_t vl) { +vint32mf2_t test_vnot_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs, + size_t vl) { return __riscv_vnot_v_i32mf2_tum(vm, vd, vs, vl); } -vint32m1_t test_vnot_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, size_t vl) { +vint32m1_t test_vnot_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, + size_t vl) { return __riscv_vnot_v_i32m1_tum(vm, vd, vs, vl); } -vint32m2_t test_vnot_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, size_t vl) { +vint32m2_t test_vnot_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, + size_t vl) { return __riscv_vnot_v_i32m2_tum(vm, vd, vs, vl); } -vint32m4_t test_vnot_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, size_t vl) { +vint32m4_t test_vnot_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, + size_t vl) { return __riscv_vnot_v_i32m4_tum(vm, vd, vs, vl); } -vint32m8_t test_vnot_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, 
vint32m8_t vs, size_t vl) { +vint32m8_t test_vnot_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs, + size_t vl) { return __riscv_vnot_v_i32m8_tum(vm, vd, vs, vl); } -vint64m1_t test_vnot_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs, size_t vl) { +vint64m1_t test_vnot_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs, + size_t vl) { return __riscv_vnot_v_i64m1_tum(vm, vd, vs, vl); } -vint64m2_t test_vnot_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, size_t vl) { +vint64m2_t test_vnot_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, + size_t vl) { return __riscv_vnot_v_i64m2_tum(vm, vd, vs, vl); } -vint64m4_t test_vnot_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, size_t vl) { +vint64m4_t test_vnot_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, + size_t vl) { return __riscv_vnot_v_i64m4_tum(vm, vd, vs, vl); } -vint64m8_t test_vnot_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, size_t vl) { +vint64m8_t test_vnot_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, + size_t vl) { return __riscv_vnot_v_i64m8_tum(vm, vd, vs, vl); } -vuint8mf8_t test_vnot_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs, size_t vl) { +vuint8mf8_t test_vnot_v_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs, + size_t vl) { return __riscv_vnot_v_u8mf8_tum(vm, vd, vs, vl); } -vuint8mf4_t test_vnot_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs, size_t vl) { +vuint8mf4_t test_vnot_v_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs, + size_t vl) { return __riscv_vnot_v_u8mf4_tum(vm, vd, vs, vl); } -vuint8mf2_t test_vnot_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs, size_t vl) { +vuint8mf2_t test_vnot_v_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs, + size_t vl) { return __riscv_vnot_v_u8mf2_tum(vm, vd, vs, vl); } -vuint8m1_t test_vnot_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs, size_t vl) { +vuint8m1_t test_vnot_v_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs, + size_t vl) { return __riscv_vnot_v_u8m1_tum(vm, vd, vs, vl); } -vuint8m2_t test_vnot_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs, size_t vl) { +vuint8m2_t test_vnot_v_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs, + size_t vl) { return __riscv_vnot_v_u8m2_tum(vm, vd, vs, vl); } -vuint8m4_t test_vnot_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs, size_t vl) { +vuint8m4_t test_vnot_v_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs, + size_t vl) { return __riscv_vnot_v_u8m4_tum(vm, vd, vs, vl); } -vuint8m8_t test_vnot_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs, size_t vl) { +vuint8m8_t test_vnot_v_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs, + size_t vl) { return __riscv_vnot_v_u8m8_tum(vm, vd, vs, vl); } -vuint16mf4_t test_vnot_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs, size_t vl) { +vuint16mf4_t test_vnot_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs, size_t vl) { return __riscv_vnot_v_u16mf4_tum(vm, vd, vs, vl); } -vuint16mf2_t test_vnot_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs, size_t vl) { +vuint16mf2_t test_vnot_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs, size_t vl) { return __riscv_vnot_v_u16mf2_tum(vm, vd, vs, vl); } -vuint16m1_t test_vnot_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs, size_t vl) { +vuint16m1_t test_vnot_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs, + size_t vl) { return __riscv_vnot_v_u16m1_tum(vm, vd, vs, vl); } -vuint16m2_t test_vnot_v_u16m2_tum(vbool8_t vm, vuint16m2_t 
vd, vuint16m2_t vs, size_t vl) { +vuint16m2_t test_vnot_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs, + size_t vl) { return __riscv_vnot_v_u16m2_tum(vm, vd, vs, vl); } -vuint16m4_t test_vnot_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs, size_t vl) { +vuint16m4_t test_vnot_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs, + size_t vl) { return __riscv_vnot_v_u16m4_tum(vm, vd, vs, vl); } -vuint16m8_t test_vnot_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs, size_t vl) { +vuint16m8_t test_vnot_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs, + size_t vl) { return __riscv_vnot_v_u16m8_tum(vm, vd, vs, vl); } -vuint32mf2_t test_vnot_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs, size_t vl) { +vuint32mf2_t test_vnot_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs, size_t vl) { return __riscv_vnot_v_u32mf2_tum(vm, vd, vs, vl); } -vuint32m1_t test_vnot_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs, size_t vl) { +vuint32m1_t test_vnot_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs, + size_t vl) { return __riscv_vnot_v_u32m1_tum(vm, vd, vs, vl); } -vuint32m2_t test_vnot_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs, size_t vl) { +vuint32m2_t test_vnot_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs, + size_t vl) { return __riscv_vnot_v_u32m2_tum(vm, vd, vs, vl); } -vuint32m4_t test_vnot_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs, size_t vl) { +vuint32m4_t test_vnot_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs, + size_t vl) { return __riscv_vnot_v_u32m4_tum(vm, vd, vs, vl); } -vuint32m8_t test_vnot_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs, size_t vl) { +vuint32m8_t test_vnot_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs, + size_t vl) { return __riscv_vnot_v_u32m8_tum(vm, vd, vs, vl); } -vuint64m1_t test_vnot_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs, size_t vl) { +vuint64m1_t test_vnot_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs, + size_t vl) { return __riscv_vnot_v_u64m1_tum(vm, vd, vs, vl); } -vuint64m2_t test_vnot_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs, size_t vl) { +vuint64m2_t test_vnot_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs, + size_t vl) { return __riscv_vnot_v_u64m2_tum(vm, vd, vs, vl); } -vuint64m4_t test_vnot_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs, size_t vl) { +vuint64m4_t test_vnot_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs, + size_t vl) { return __riscv_vnot_v_u64m4_tum(vm, vd, vs, vl); } -vuint64m8_t test_vnot_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs, size_t vl) { +vuint64m8_t test_vnot_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs, + size_t vl) { return __riscv_vnot_v_u64m8_tum(vm, vd, vs, vl); } -vint8mf8_t test_vnot_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs, size_t vl) { +vint8mf8_t test_vnot_v_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs, + size_t vl) { return __riscv_vnot_v_i8mf8_tumu(vm, vd, vs, vl); } -vint8mf4_t test_vnot_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs, size_t vl) { +vint8mf4_t test_vnot_v_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs, + size_t vl) { return __riscv_vnot_v_i8mf4_tumu(vm, vd, vs, vl); } -vint8mf2_t test_vnot_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs, size_t vl) { +vint8mf2_t test_vnot_v_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs, + size_t vl) { return __riscv_vnot_v_i8mf2_tumu(vm, vd, vs, vl); } -vint8m1_t 
test_vnot_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs, size_t vl) { +vint8m1_t test_vnot_v_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs, + size_t vl) { return __riscv_vnot_v_i8m1_tumu(vm, vd, vs, vl); } -vint8m2_t test_vnot_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs, size_t vl) { +vint8m2_t test_vnot_v_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs, + size_t vl) { return __riscv_vnot_v_i8m2_tumu(vm, vd, vs, vl); } -vint8m4_t test_vnot_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs, size_t vl) { +vint8m4_t test_vnot_v_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs, + size_t vl) { return __riscv_vnot_v_i8m4_tumu(vm, vd, vs, vl); } -vint8m8_t test_vnot_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs, size_t vl) { +vint8m8_t test_vnot_v_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs, + size_t vl) { return __riscv_vnot_v_i8m8_tumu(vm, vd, vs, vl); } -vint16mf4_t test_vnot_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs, size_t vl) { +vint16mf4_t test_vnot_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs, size_t vl) { return __riscv_vnot_v_i16mf4_tumu(vm, vd, vs, vl); } -vint16mf2_t test_vnot_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs, size_t vl) { +vint16mf2_t test_vnot_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs, size_t vl) { return __riscv_vnot_v_i16mf2_tumu(vm, vd, vs, vl); } -vint16m1_t test_vnot_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs, size_t vl) { +vint16m1_t test_vnot_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs, + size_t vl) { return __riscv_vnot_v_i16m1_tumu(vm, vd, vs, vl); } -vint16m2_t test_vnot_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs, size_t vl) { +vint16m2_t test_vnot_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs, + size_t vl) { return __riscv_vnot_v_i16m2_tumu(vm, vd, vs, vl); } -vint16m4_t test_vnot_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs, size_t vl) { +vint16m4_t test_vnot_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs, + size_t vl) { return __riscv_vnot_v_i16m4_tumu(vm, vd, vs, vl); } -vint16m8_t test_vnot_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, size_t vl) { +vint16m8_t test_vnot_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, + size_t vl) { return __riscv_vnot_v_i16m8_tumu(vm, vd, vs, vl); } -vint32mf2_t test_vnot_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs, size_t vl) { +vint32mf2_t test_vnot_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs, size_t vl) { return __riscv_vnot_v_i32mf2_tumu(vm, vd, vs, vl); } -vint32m1_t test_vnot_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, size_t vl) { +vint32m1_t test_vnot_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, + size_t vl) { return __riscv_vnot_v_i32m1_tumu(vm, vd, vs, vl); } -vint32m2_t test_vnot_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, size_t vl) { +vint32m2_t test_vnot_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, + size_t vl) { return __riscv_vnot_v_i32m2_tumu(vm, vd, vs, vl); } -vint32m4_t test_vnot_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, size_t vl) { +vint32m4_t test_vnot_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, + size_t vl) { return __riscv_vnot_v_i32m4_tumu(vm, vd, vs, vl); } -vint32m8_t test_vnot_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs, size_t vl) { +vint32m8_t test_vnot_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs, + size_t vl) { return __riscv_vnot_v_i32m8_tumu(vm, vd, vs, vl); } -vint64m1_t 
test_vnot_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs, size_t vl) { +vint64m1_t test_vnot_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs, + size_t vl) { return __riscv_vnot_v_i64m1_tumu(vm, vd, vs, vl); } -vint64m2_t test_vnot_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, size_t vl) { +vint64m2_t test_vnot_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, + size_t vl) { return __riscv_vnot_v_i64m2_tumu(vm, vd, vs, vl); } -vint64m4_t test_vnot_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, size_t vl) { +vint64m4_t test_vnot_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, + size_t vl) { return __riscv_vnot_v_i64m4_tumu(vm, vd, vs, vl); } -vint64m8_t test_vnot_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, size_t vl) { +vint64m8_t test_vnot_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, + size_t vl) { return __riscv_vnot_v_i64m8_tumu(vm, vd, vs, vl); } -vuint8mf8_t test_vnot_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs, size_t vl) { +vuint8mf8_t test_vnot_v_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs, + size_t vl) { return __riscv_vnot_v_u8mf8_tumu(vm, vd, vs, vl); } -vuint8mf4_t test_vnot_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs, size_t vl) { +vuint8mf4_t test_vnot_v_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs, + size_t vl) { return __riscv_vnot_v_u8mf4_tumu(vm, vd, vs, vl); } -vuint8mf2_t test_vnot_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs, size_t vl) { +vuint8mf2_t test_vnot_v_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs, + size_t vl) { return __riscv_vnot_v_u8mf2_tumu(vm, vd, vs, vl); } -vuint8m1_t test_vnot_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs, size_t vl) { +vuint8m1_t test_vnot_v_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs, + size_t vl) { return __riscv_vnot_v_u8m1_tumu(vm, vd, vs, vl); } -vuint8m2_t test_vnot_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs, size_t vl) { +vuint8m2_t test_vnot_v_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs, + size_t vl) { return __riscv_vnot_v_u8m2_tumu(vm, vd, vs, vl); } -vuint8m4_t test_vnot_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs, size_t vl) { +vuint8m4_t test_vnot_v_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs, + size_t vl) { return __riscv_vnot_v_u8m4_tumu(vm, vd, vs, vl); } -vuint8m8_t test_vnot_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs, size_t vl) { +vuint8m8_t test_vnot_v_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs, + size_t vl) { return __riscv_vnot_v_u8m8_tumu(vm, vd, vs, vl); } -vuint16mf4_t test_vnot_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs, size_t vl) { +vuint16mf4_t test_vnot_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs, size_t vl) { return __riscv_vnot_v_u16mf4_tumu(vm, vd, vs, vl); } -vuint16mf2_t test_vnot_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs, size_t vl) { +vuint16mf2_t test_vnot_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs, size_t vl) { return __riscv_vnot_v_u16mf2_tumu(vm, vd, vs, vl); } -vuint16m1_t test_vnot_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs, size_t vl) { +vuint16m1_t test_vnot_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs, + size_t vl) { return __riscv_vnot_v_u16m1_tumu(vm, vd, vs, vl); } -vuint16m2_t test_vnot_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs, size_t vl) { +vuint16m2_t test_vnot_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs, + size_t vl) { return 
__riscv_vnot_v_u16m2_tumu(vm, vd, vs, vl); } -vuint16m4_t test_vnot_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs, size_t vl) { +vuint16m4_t test_vnot_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs, + size_t vl) { return __riscv_vnot_v_u16m4_tumu(vm, vd, vs, vl); } -vuint16m8_t test_vnot_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs, size_t vl) { +vuint16m8_t test_vnot_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs, + size_t vl) { return __riscv_vnot_v_u16m8_tumu(vm, vd, vs, vl); } -vuint32mf2_t test_vnot_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs, size_t vl) { +vuint32mf2_t test_vnot_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs, size_t vl) { return __riscv_vnot_v_u32mf2_tumu(vm, vd, vs, vl); } -vuint32m1_t test_vnot_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs, size_t vl) { +vuint32m1_t test_vnot_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs, + size_t vl) { return __riscv_vnot_v_u32m1_tumu(vm, vd, vs, vl); } -vuint32m2_t test_vnot_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs, size_t vl) { +vuint32m2_t test_vnot_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs, + size_t vl) { return __riscv_vnot_v_u32m2_tumu(vm, vd, vs, vl); } -vuint32m4_t test_vnot_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs, size_t vl) { +vuint32m4_t test_vnot_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs, + size_t vl) { return __riscv_vnot_v_u32m4_tumu(vm, vd, vs, vl); } -vuint32m8_t test_vnot_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs, size_t vl) { +vuint32m8_t test_vnot_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs, + size_t vl) { return __riscv_vnot_v_u32m8_tumu(vm, vd, vs, vl); } -vuint64m1_t test_vnot_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs, size_t vl) { +vuint64m1_t test_vnot_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs, + size_t vl) { return __riscv_vnot_v_u64m1_tumu(vm, vd, vs, vl); } -vuint64m2_t test_vnot_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs, size_t vl) { +vuint64m2_t test_vnot_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs, + size_t vl) { return __riscv_vnot_v_u64m2_tumu(vm, vd, vs, vl); } -vuint64m4_t test_vnot_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs, size_t vl) { +vuint64m4_t test_vnot_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs, + size_t vl) { return __riscv_vnot_v_u64m4_tumu(vm, vd, vs, vl); } -vuint64m8_t test_vnot_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs, size_t vl) { +vuint64m8_t test_vnot_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs, + size_t vl) { return __riscv_vnot_v_u64m8_tumu(vm, vd, vs, vl); } -vint8mf8_t test_vnot_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs, size_t vl) { +vint8mf8_t test_vnot_v_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs, + size_t vl) { return __riscv_vnot_v_i8mf8_mu(vm, vd, vs, vl); } -vint8mf4_t test_vnot_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs, size_t vl) { +vint8mf4_t test_vnot_v_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs, + size_t vl) { return __riscv_vnot_v_i8mf4_mu(vm, vd, vs, vl); } -vint8mf2_t test_vnot_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs, size_t vl) { +vint8mf2_t test_vnot_v_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs, + size_t vl) { return __riscv_vnot_v_i8mf2_mu(vm, vd, vs, vl); } -vint8m1_t test_vnot_v_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs, size_t vl) { +vint8m1_t test_vnot_v_i8m1_mu(vbool8_t vm, 
vint8m1_t vd, vint8m1_t vs, + size_t vl) { return __riscv_vnot_v_i8m1_mu(vm, vd, vs, vl); } -vint8m2_t test_vnot_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs, size_t vl) { +vint8m2_t test_vnot_v_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs, + size_t vl) { return __riscv_vnot_v_i8m2_mu(vm, vd, vs, vl); } -vint8m4_t test_vnot_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs, size_t vl) { +vint8m4_t test_vnot_v_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs, + size_t vl) { return __riscv_vnot_v_i8m4_mu(vm, vd, vs, vl); } -vint8m8_t test_vnot_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs, size_t vl) { +vint8m8_t test_vnot_v_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs, + size_t vl) { return __riscv_vnot_v_i8m8_mu(vm, vd, vs, vl); } -vint16mf4_t test_vnot_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs, size_t vl) { +vint16mf4_t test_vnot_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs, + size_t vl) { return __riscv_vnot_v_i16mf4_mu(vm, vd, vs, vl); } -vint16mf2_t test_vnot_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs, size_t vl) { +vint16mf2_t test_vnot_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs, + size_t vl) { return __riscv_vnot_v_i16mf2_mu(vm, vd, vs, vl); } -vint16m1_t test_vnot_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs, size_t vl) { +vint16m1_t test_vnot_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs, + size_t vl) { return __riscv_vnot_v_i16m1_mu(vm, vd, vs, vl); } -vint16m2_t test_vnot_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs, size_t vl) { +vint16m2_t test_vnot_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs, + size_t vl) { return __riscv_vnot_v_i16m2_mu(vm, vd, vs, vl); } -vint16m4_t test_vnot_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs, size_t vl) { +vint16m4_t test_vnot_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs, + size_t vl) { return __riscv_vnot_v_i16m4_mu(vm, vd, vs, vl); } -vint16m8_t test_vnot_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, size_t vl) { +vint16m8_t test_vnot_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs, + size_t vl) { return __riscv_vnot_v_i16m8_mu(vm, vd, vs, vl); } -vint32mf2_t test_vnot_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs, size_t vl) { +vint32mf2_t test_vnot_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs, + size_t vl) { return __riscv_vnot_v_i32mf2_mu(vm, vd, vs, vl); } -vint32m1_t test_vnot_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, size_t vl) { +vint32m1_t test_vnot_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs, + size_t vl) { return __riscv_vnot_v_i32m1_mu(vm, vd, vs, vl); } -vint32m2_t test_vnot_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, size_t vl) { +vint32m2_t test_vnot_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs, + size_t vl) { return __riscv_vnot_v_i32m2_mu(vm, vd, vs, vl); } -vint32m4_t test_vnot_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, size_t vl) { +vint32m4_t test_vnot_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs, + size_t vl) { return __riscv_vnot_v_i32m4_mu(vm, vd, vs, vl); } -vint32m8_t test_vnot_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs, size_t vl) { +vint32m8_t test_vnot_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs, + size_t vl) { return __riscv_vnot_v_i32m8_mu(vm, vd, vs, vl); } -vint64m1_t test_vnot_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs, size_t vl) { +vint64m1_t test_vnot_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs, + size_t vl) { return __riscv_vnot_v_i64m1_mu(vm, vd, vs, vl); 
} -vint64m2_t test_vnot_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, size_t vl) { +vint64m2_t test_vnot_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs, + size_t vl) { return __riscv_vnot_v_i64m2_mu(vm, vd, vs, vl); } -vint64m4_t test_vnot_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, size_t vl) { +vint64m4_t test_vnot_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs, + size_t vl) { return __riscv_vnot_v_i64m4_mu(vm, vd, vs, vl); } -vint64m8_t test_vnot_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, size_t vl) { +vint64m8_t test_vnot_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs, + size_t vl) { return __riscv_vnot_v_i64m8_mu(vm, vd, vs, vl); } -vuint8mf8_t test_vnot_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs, size_t vl) { +vuint8mf8_t test_vnot_v_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs, + size_t vl) { return __riscv_vnot_v_u8mf8_mu(vm, vd, vs, vl); } -vuint8mf4_t test_vnot_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs, size_t vl) { +vuint8mf4_t test_vnot_v_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs, + size_t vl) { return __riscv_vnot_v_u8mf4_mu(vm, vd, vs, vl); } -vuint8mf2_t test_vnot_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs, size_t vl) { +vuint8mf2_t test_vnot_v_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs, + size_t vl) { return __riscv_vnot_v_u8mf2_mu(vm, vd, vs, vl); } -vuint8m1_t test_vnot_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs, size_t vl) { +vuint8m1_t test_vnot_v_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs, + size_t vl) { return __riscv_vnot_v_u8m1_mu(vm, vd, vs, vl); } -vuint8m2_t test_vnot_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs, size_t vl) { +vuint8m2_t test_vnot_v_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs, + size_t vl) { return __riscv_vnot_v_u8m2_mu(vm, vd, vs, vl); } -vuint8m4_t test_vnot_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs, size_t vl) { +vuint8m4_t test_vnot_v_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs, + size_t vl) { return __riscv_vnot_v_u8m4_mu(vm, vd, vs, vl); } -vuint8m8_t test_vnot_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs, size_t vl) { +vuint8m8_t test_vnot_v_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs, + size_t vl) { return __riscv_vnot_v_u8m8_mu(vm, vd, vs, vl); } -vuint16mf4_t test_vnot_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs, size_t vl) { +vuint16mf4_t test_vnot_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs, size_t vl) { return __riscv_vnot_v_u16mf4_mu(vm, vd, vs, vl); } -vuint16mf2_t test_vnot_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs, size_t vl) { +vuint16mf2_t test_vnot_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs, size_t vl) { return __riscv_vnot_v_u16mf2_mu(vm, vd, vs, vl); } -vuint16m1_t test_vnot_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs, size_t vl) { +vuint16m1_t test_vnot_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs, + size_t vl) { return __riscv_vnot_v_u16m1_mu(vm, vd, vs, vl); } -vuint16m2_t test_vnot_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs, size_t vl) { +vuint16m2_t test_vnot_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs, + size_t vl) { return __riscv_vnot_v_u16m2_mu(vm, vd, vs, vl); } -vuint16m4_t test_vnot_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs, size_t vl) { +vuint16m4_t test_vnot_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs, + size_t vl) { return __riscv_vnot_v_u16m4_mu(vm, vd, vs, vl); } -vuint16m8_t 
test_vnot_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs, size_t vl) { +vuint16m8_t test_vnot_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs, + size_t vl) { return __riscv_vnot_v_u16m8_mu(vm, vd, vs, vl); } -vuint32mf2_t test_vnot_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs, size_t vl) { +vuint32mf2_t test_vnot_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs, size_t vl) { return __riscv_vnot_v_u32mf2_mu(vm, vd, vs, vl); } -vuint32m1_t test_vnot_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs, size_t vl) { +vuint32m1_t test_vnot_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs, + size_t vl) { return __riscv_vnot_v_u32m1_mu(vm, vd, vs, vl); } -vuint32m2_t test_vnot_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs, size_t vl) { +vuint32m2_t test_vnot_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs, + size_t vl) { return __riscv_vnot_v_u32m2_mu(vm, vd, vs, vl); } -vuint32m4_t test_vnot_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs, size_t vl) { +vuint32m4_t test_vnot_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs, + size_t vl) { return __riscv_vnot_v_u32m4_mu(vm, vd, vs, vl); } -vuint32m8_t test_vnot_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs, size_t vl) { +vuint32m8_t test_vnot_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs, + size_t vl) { return __riscv_vnot_v_u32m8_mu(vm, vd, vs, vl); } -vuint64m1_t test_vnot_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs, size_t vl) { +vuint64m1_t test_vnot_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs, + size_t vl) { return __riscv_vnot_v_u64m1_mu(vm, vd, vs, vl); } -vuint64m2_t test_vnot_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs, size_t vl) { +vuint64m2_t test_vnot_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs, + size_t vl) { return __riscv_vnot_v_u64m2_mu(vm, vd, vs, vl); } -vuint64m4_t test_vnot_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs, size_t vl) { +vuint64m4_t test_vnot_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs, + size_t vl) { return __riscv_vnot_v_u64m4_mu(vm, vd, vs, vl); } -vuint64m8_t test_vnot_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs, size_t vl) { +vuint64m8_t test_vnot_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs, + size_t vl) { return __riscv_vnot_v_u64m8_mu(vm, vd, vs, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vnsra.c b/auto-generated/policy_funcs/llvm-api-tests/vnsra.c index 044d8eef0..49629eff0 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vnsra.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vnsra.c @@ -5,482 +5,613 @@ #include -vint8mf8_t test_vnsra_wv_i8mf8_tu(vint8mf8_t vd, vint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vnsra_wv_i8mf8_tu(vint8mf8_t vd, vint16mf4_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vnsra_wv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vnsra_wx_i8mf8_tu(vint8mf8_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vnsra_wx_i8mf8_tu(vint8mf8_t vd, vint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vnsra_wv_i8mf4_tu(vint8mf4_t vd, vint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vnsra_wv_i8mf4_tu(vint8mf4_t vd, vint16mf2_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vnsra_wv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vnsra_wx_i8mf4_tu(vint8mf4_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vnsra_wx_i8mf4_tu(vint8mf4_t vd, 
vint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vnsra_wv_i8mf2_tu(vint8mf2_t vd, vint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vnsra_wv_i8mf2_tu(vint8mf2_t vd, vint16m1_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vnsra_wv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vnsra_wx_i8mf2_tu(vint8mf2_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vnsra_wx_i8mf2_tu(vint8mf2_t vd, vint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vnsra_wv_i8m1_tu(vint8m1_t vd, vint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vnsra_wv_i8m1_tu(vint8m1_t vd, vint16m2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vnsra_wv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vnsra_wx_i8m1_tu(vint8m1_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vnsra_wx_i8m1_tu(vint8m1_t vd, vint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vnsra_wv_i8m2_tu(vint8m2_t vd, vint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vnsra_wv_i8m2_tu(vint8m2_t vd, vint16m4_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vnsra_wv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vnsra_wx_i8m2_tu(vint8m2_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vnsra_wx_i8m2_tu(vint8m2_t vd, vint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vnsra_wv_i8m4_tu(vint8m4_t vd, vint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vnsra_wv_i8m4_tu(vint8m4_t vd, vint16m8_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vnsra_wv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vnsra_wx_i8m4_tu(vint8m4_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vnsra_wx_i8m4_tu(vint8m4_t vd, vint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i8m4_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vnsra_wv_i16mf4_tu(vint16mf4_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vnsra_wv_i16mf4_tu(vint16mf4_t vd, vint32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vnsra_wv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vnsra_wx_i16mf4_tu(vint16mf4_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vnsra_wx_i16mf4_tu(vint16mf4_t vd, vint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vnsra_wv_i16mf2_tu(vint16mf2_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vnsra_wv_i16mf2_tu(vint16mf2_t vd, vint32m1_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vnsra_wv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vnsra_wx_i16mf2_tu(vint16mf2_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vnsra_wx_i16mf2_tu(vint16mf2_t vd, vint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vnsra_wv_i16m1_tu(vint16m1_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vnsra_wv_i16m1_tu(vint16m1_t vd, vint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vnsra_wx_i16m1_tu(vint16m1_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vnsra_wx_i16m1_tu(vint16m1_t vd, vint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vnsra_wv_i16m2_tu(vint16m2_t vd, vint32m4_t vs2, 
vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vnsra_wv_i16m2_tu(vint16m2_t vd, vint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vnsra_wx_i16m2_tu(vint16m2_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vnsra_wx_i16m2_tu(vint16m2_t vd, vint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vnsra_wv_i16m4_tu(vint16m4_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vnsra_wv_i16m4_tu(vint16m4_t vd, vint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vnsra_wx_i16m4_tu(vint16m4_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vnsra_wx_i16m4_tu(vint16m4_t vd, vint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i16m4_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vnsra_wv_i32mf2_tu(vint32mf2_t vd, vint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vnsra_wv_i32mf2_tu(vint32mf2_t vd, vint64m1_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vnsra_wv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vnsra_wx_i32mf2_tu(vint32mf2_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vnsra_wx_i32mf2_tu(vint32mf2_t vd, vint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vnsra_wv_i32m1_tu(vint32m1_t vd, vint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vnsra_wv_i32m1_tu(vint32m1_t vd, vint64m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vnsra_wx_i32m1_tu(vint32m1_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vnsra_wx_i32m1_tu(vint32m1_t vd, vint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vnsra_wv_i32m2_tu(vint32m2_t vd, vint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vnsra_wv_i32m2_tu(vint32m2_t vd, vint64m4_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vnsra_wx_i32m2_tu(vint32m2_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vnsra_wx_i32m2_tu(vint32m2_t vd, vint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vnsra_wv_i32m4_tu(vint32m4_t vd, vint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vnsra_wv_i32m4_tu(vint32m4_t vd, vint64m8_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vnsra_wx_i32m4_tu(vint32m4_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vnsra_wx_i32m4_tu(vint32m4_t vd, vint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsra_wx_i32m4_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vnsra_wv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vnsra_wv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vnsra_wv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vnsra_wx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vnsra_wx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vnsra_wv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t 
test_vnsra_wv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vnsra_wv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vnsra_wx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vnsra_wx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vnsra_wv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vnsra_wv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vnsra_wv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vnsra_wx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vnsra_wx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vnsra_wv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vnsra_wv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vnsra_wv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vnsra_wx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vnsra_wx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vnsra_wv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vnsra_wv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vnsra_wv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vnsra_wx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vnsra_wx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vnsra_wv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vnsra_wv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vnsra_wv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vnsra_wx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vnsra_wx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vnsra_wv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vnsra_wv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vnsra_wv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vnsra_wx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vnsra_wx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vnsra_wv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vnsra_wv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vnsra_wv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vnsra_wx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint32m1_t 
vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vnsra_wx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vnsra_wv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vnsra_wv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vnsra_wx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vnsra_wx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vnsra_wv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vnsra_wv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vnsra_wx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vnsra_wx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vnsra_wv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vnsra_wv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vnsra_wx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vnsra_wx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vnsra_wv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vnsra_wv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vnsra_wv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vnsra_wx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vnsra_wx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vnsra_wv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vnsra_wv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vnsra_wx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vnsra_wx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vnsra_wv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vnsra_wv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vnsra_wx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vnsra_wx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32m2_tum(vm, vd, vs2, rs1, vl); } 
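/* Editorial sketch, not part of the upstream diff: the _tum tests in this
 * hunk exercise the masked, tail-undisturbed policy variants. A rough
 * per-element model of the call just above,
 * __riscv_vnsra_wx_i32m2_tum(vm, vd, vs2, rs1, vl) -- 64-bit source,
 * 32-bit result, so the shift amount is taken modulo 64 per the V spec's
 * narrowing-shift rules -- is:
 *
 *   for (size_t i = 0; i < vl; i++)
 *     if (vm[i])                                   // active element
 *       vd[i] = (int32_t)(vs2[i] >> (rs1 & 63));   // narrowing arithmetic shift
 *     // inactive elements are mask-agnostic under _tum
 *   // elements vl..VLMAX-1 keep their previous vd values (tail undisturbed)
 */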
-vint32m4_t test_vnsra_wv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vnsra_wv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vnsra_wx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vnsra_wx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vnsra_wv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vnsra_wv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vnsra_wv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vnsra_wx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vnsra_wx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vnsra_wv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vnsra_wv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vnsra_wv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vnsra_wx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vnsra_wx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vnsra_wv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vnsra_wv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vnsra_wv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vnsra_wx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vnsra_wx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vnsra_wv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vnsra_wv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vnsra_wv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vnsra_wx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vnsra_wx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vnsra_wv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vnsra_wv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vnsra_wv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vnsra_wx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vnsra_wx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vnsra_wv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vnsra_wv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { 
return __riscv_vnsra_wv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vnsra_wx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vnsra_wx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vnsra_wv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vnsra_wv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vnsra_wv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vnsra_wx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vnsra_wx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vnsra_wv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vnsra_wv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vnsra_wv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vnsra_wx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vnsra_wx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vnsra_wv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vnsra_wv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vnsra_wx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vnsra_wx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vnsra_wv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vnsra_wv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vnsra_wx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vnsra_wx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vnsra_wv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vnsra_wv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vnsra_wx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vnsra_wx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vnsra_wv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vnsra_wv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vnsra_wv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vnsra_wx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, 
size_t rs1, size_t vl) { +vint32mf2_t test_vnsra_wx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vnsra_wv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vnsra_wv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vnsra_wx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vnsra_wx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vnsra_wv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vnsra_wv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vnsra_wx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vnsra_wx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vnsra_wv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vnsra_wv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vnsra_wx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vnsra_wx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vnsra_wv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vnsra_wv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vnsra_wv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vnsra_wx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vnsra_wx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vnsra_wv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vnsra_wv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vnsra_wv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vnsra_wx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vnsra_wx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vnsra_wv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vnsra_wv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vnsra_wv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vnsra_wx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vnsra_wx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t 
test_vnsra_wv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vnsra_wv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vnsra_wv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vnsra_wx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vnsra_wx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vnsra_wv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vnsra_wv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vnsra_wv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vnsra_wx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vnsra_wx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vnsra_wv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vnsra_wv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vnsra_wv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vnsra_wx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vnsra_wx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vnsra_wv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vnsra_wv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vnsra_wv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vnsra_wx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vnsra_wx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vnsra_wv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vnsra_wv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vnsra_wv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vnsra_wx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vnsra_wx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vnsra_wv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vnsra_wv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vnsra_wx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vnsra_wx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vnsra_wv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vnsra_wv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m2_mu(vm, vd, vs2, vs1, vl); } 
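/* Editorial sketch, not part of the upstream diff: the _mu variants here
 * invert the policy relative to _tum: masked-off elements keep their old
 * values from vd (mask undisturbed) while the tail is agnostic. Roughly,
 * for __riscv_vnsra_wv_i16m2_mu(vm, vd, vs2, vs1, vl) -- 32-bit source,
 * 16-bit result, shift amounts taken modulo 32:
 *
 *   for (size_t i = 0; i < vl; i++)
 *     vd[i] = vm[i] ? (int16_t)(vs2[i] >> (vs1[i] & 31))  // active
 *                   : vd[i];                              // inactive: undisturbed
 *   // tail elements vl..VLMAX-1 are agnostic (implementation may overwrite)
 */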
-vint16m2_t test_vnsra_wx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vnsra_wx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vnsra_wv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vnsra_wv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vnsra_wv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vnsra_wx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vnsra_wx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vnsra_wv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vnsra_wv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vnsra_wv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vnsra_wx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vnsra_wx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vnsra_wv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vnsra_wv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vnsra_wx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vnsra_wx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vnsra_wv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vnsra_wv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vnsra_wx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vnsra_wx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vnsra_wv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vnsra_wv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vnsra_wv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vnsra_wx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vnsra_wx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsra_wx_i32m4_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vnsrl.c b/auto-generated/policy_funcs/llvm-api-tests/vnsrl.c index a1ab49a37..09652f43a 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vnsrl.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vnsrl.c @@ -5,482 +5,636 @@ #include <riscv_vector.h> -vuint8mf8_t test_vnsrl_wv_u8mf8_tu(vuint8mf8_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vnsrl_wv_u8mf8_tu(vuint8mf8_t vd, vuint16mf4_t vs2, + vuint8mf8_t vs1, size_t vl) { return
__riscv_vnsrl_wv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vnsrl_wx_u8mf8_tu(vuint8mf8_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vnsrl_wx_u8mf8_tu(vuint8mf8_t vd, vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vnsrl_wv_u8mf4_tu(vuint8mf4_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vnsrl_wv_u8mf4_tu(vuint8mf4_t vd, vuint16mf2_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vnsrl_wv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vnsrl_wx_u8mf4_tu(vuint8mf4_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vnsrl_wx_u8mf4_tu(vuint8mf4_t vd, vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vnsrl_wv_u8mf2_tu(vuint8mf2_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vnsrl_wv_u8mf2_tu(vuint8mf2_t vd, vuint16m1_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vnsrl_wv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vnsrl_wx_u8mf2_tu(vuint8mf2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vnsrl_wx_u8mf2_tu(vuint8mf2_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vnsrl_wv_u8m1_tu(vuint8m1_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vnsrl_wv_u8m1_tu(vuint8m1_t vd, vuint16m2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vnsrl_wx_u8m1_tu(vuint8m1_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vnsrl_wx_u8m1_tu(vuint8m1_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vnsrl_wv_u8m2_tu(vuint8m2_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vnsrl_wv_u8m2_tu(vuint8m2_t vd, vuint16m4_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vnsrl_wx_u8m2_tu(vuint8m2_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vnsrl_wx_u8m2_tu(vuint8m2_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vnsrl_wv_u8m4_tu(vuint8m4_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vnsrl_wv_u8m4_tu(vuint8m4_t vd, vuint16m8_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vnsrl_wx_u8m4_tu(vuint8m4_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vnsrl_wx_u8m4_tu(vuint8m4_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u8m4_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vnsrl_wv_u16mf4_tu(vuint16mf4_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vnsrl_wv_u16mf4_tu(vuint16mf4_t vd, vuint32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vnsrl_wv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vnsrl_wx_u16mf4_tu(vuint16mf4_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vnsrl_wx_u16mf4_tu(vuint16mf4_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vnsrl_wv_u16mf2_tu(vuint16mf2_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vnsrl_wv_u16mf2_tu(vuint16mf2_t vd, vuint32m1_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vnsrl_wv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t 
test_vnsrl_wx_u16mf2_tu(vuint16mf2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vnsrl_wx_u16mf2_tu(vuint16mf2_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vnsrl_wv_u16m1_tu(vuint16m1_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vnsrl_wv_u16m1_tu(vuint16m1_t vd, vuint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vnsrl_wv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vnsrl_wx_u16m1_tu(vuint16m1_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vnsrl_wx_u16m1_tu(vuint16m1_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vnsrl_wv_u16m2_tu(vuint16m2_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vnsrl_wv_u16m2_tu(vuint16m2_t vd, vuint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vnsrl_wv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vnsrl_wx_u16m2_tu(vuint16m2_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vnsrl_wx_u16m2_tu(vuint16m2_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vnsrl_wv_u16m4_tu(vuint16m4_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vnsrl_wv_u16m4_tu(vuint16m4_t vd, vuint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vnsrl_wv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vnsrl_wx_u16m4_tu(vuint16m4_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vnsrl_wx_u16m4_tu(vuint16m4_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u16m4_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vnsrl_wv_u32mf2_tu(vuint32mf2_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vnsrl_wv_u32mf2_tu(vuint32mf2_t vd, vuint64m1_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vnsrl_wv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vnsrl_wx_u32mf2_tu(vuint32mf2_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vnsrl_wx_u32mf2_tu(vuint32mf2_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vnsrl_wv_u32m1_tu(vuint32m1_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vnsrl_wv_u32m1_tu(vuint32m1_t vd, vuint64m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vnsrl_wv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vnsrl_wx_u32m1_tu(vuint32m1_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vnsrl_wx_u32m1_tu(vuint32m1_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vnsrl_wv_u32m2_tu(vuint32m2_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vnsrl_wv_u32m2_tu(vuint32m2_t vd, vuint64m4_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vnsrl_wv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vnsrl_wx_u32m2_tu(vuint32m2_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vnsrl_wx_u32m2_tu(vuint32m2_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vnsrl_wv_u32m4_tu(vuint32m4_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vnsrl_wv_u32m4_tu(vuint32m4_t vd, vuint64m8_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vnsrl_wv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vnsrl_wx_u32m4_tu(vuint32m4_t vd, 
vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vnsrl_wx_u32m4_tu(vuint32m4_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u32m4_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vnsrl_wv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vnsrl_wv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vnsrl_wx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vnsrl_wx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vnsrl_wv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vnsrl_wv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vnsrl_wx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vnsrl_wx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vnsrl_wv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vnsrl_wv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vnsrl_wx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vnsrl_wx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vnsrl_wv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vnsrl_wv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vnsrl_wv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vnsrl_wx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vnsrl_wx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vnsrl_wv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vnsrl_wv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vnsrl_wv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vnsrl_wx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vnsrl_wx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vnsrl_wv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vnsrl_wv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vnsrl_wv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vnsrl_wx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vnsrl_wx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8m4_tum(vm, vd, vs2, rs1, vl); } 
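/* Editorial note, not part of the upstream diff: vnsrl is the logical
 * counterpart of the vnsra tests above -- the shift fills with zeros
 * instead of copying the sign bit, so these tests use the unsigned
 * vuint* types throughout. The operand order, policy suffixes
 * (_tu/_tum/_tumu/_mu), and masking behavior mirror the vnsra tests;
 * e.g. an active element computes roughly
 * vd[i] = (uint8_t)(vs2[i] >> (rs1 & 15)) for a 16-bit source and
 * 8-bit result.
 */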
-vuint16mf4_t test_vnsrl_wv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vnsrl_wv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vnsrl_wx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vnsrl_wx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vnsrl_wv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vnsrl_wv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vnsrl_wx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vnsrl_wx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vnsrl_wv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vnsrl_wv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vnsrl_wx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vnsrl_wx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vnsrl_wv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vnsrl_wv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vnsrl_wx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vnsrl_wx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vnsrl_wv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vnsrl_wv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vnsrl_wx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vnsrl_wx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vnsrl_wv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vnsrl_wv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vnsrl_wx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vnsrl_wx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vnsrl_wv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, 
vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vnsrl_wv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint64m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vnsrl_wx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vnsrl_wx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vnsrl_wv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vnsrl_wv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint64m4_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vnsrl_wx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vnsrl_wx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vnsrl_wv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vnsrl_wv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint64m8_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vnsrl_wx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vnsrl_wx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vnsrl_wv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vnsrl_wv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vnsrl_wx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vnsrl_wx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vnsrl_wv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vnsrl_wv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vnsrl_wx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vnsrl_wx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vnsrl_wv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vnsrl_wv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vnsrl_wx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vnsrl_wx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vnsrl_wv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vnsrl_wv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, + 
vuint8m1_t vs1, size_t vl) { return __riscv_vnsrl_wv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vnsrl_wx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vnsrl_wx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vnsrl_wv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vnsrl_wv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vnsrl_wv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vnsrl_wx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vnsrl_wx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vnsrl_wv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vnsrl_wv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vnsrl_wv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vnsrl_wx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vnsrl_wx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vnsrl_wv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vnsrl_wv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vnsrl_wx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vnsrl_wx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vnsrl_wx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vnsrl_wv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vnsrl_wv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vnsrl_wx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vnsrl_wx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vnsrl_wv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vnsrl_wv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vnsrl_wx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vnsrl_wx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vnsrl_wv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vnsrl_wv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t 
test_vnsrl_wx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vnsrl_wx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vnsrl_wv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vnsrl_wv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vnsrl_wx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vnsrl_wx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vnsrl_wv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vnsrl_wv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vnsrl_wx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vnsrl_wx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vnsrl_wv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vnsrl_wv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint64m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vnsrl_wx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vnsrl_wx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vnsrl_wv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vnsrl_wv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint64m4_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vnsrl_wx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vnsrl_wx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vnsrl_wv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vnsrl_wv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint64m8_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vnsrl_wx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vnsrl_wx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vnsrl_wx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vnsrl_wv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vnsrl_wv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vnsrl_wv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vnsrl_wx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { 
+vuint8mf8_t test_vnsrl_wx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd,
+                                   vuint16mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u8mf8_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint8mf4_t test_vnsrl_wv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vnsrl_wv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint16mf2_t vs2, vuint8mf4_t vs1,
+                                   size_t vl) {
   return __riscv_vnsrl_wv_u8mf4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8mf4_t test_vnsrl_wx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint8mf4_t test_vnsrl_wx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd,
+                                   vuint16mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u8mf4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint8mf2_t test_vnsrl_wv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vnsrl_wv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint16m1_t vs2, vuint8mf2_t vs1,
+                                   size_t vl) {
   return __riscv_vnsrl_wv_u8mf2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8mf2_t test_vnsrl_wx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint8mf2_t test_vnsrl_wx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd,
+                                   vuint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u8mf2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m1_t test_vnsrl_wv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vnsrl_wv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2,
+                                 vuint8m1_t vs1, size_t vl) {
   return __riscv_vnsrl_wv_u8m1_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vnsrl_wx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint8m1_t test_vnsrl_wx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint16m2_t vs2,
+                                 size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u8m1_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m2_t test_vnsrl_wv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint8m2_t test_vnsrl_wv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2,
+                                 vuint8m2_t vs1, size_t vl) {
   return __riscv_vnsrl_wv_u8m2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m2_t test_vnsrl_wx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint8m2_t test_vnsrl_wx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint16m4_t vs2,
+                                 size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u8m2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint8m4_t test_vnsrl_wv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint8m4_t test_vnsrl_wv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2,
+                                 vuint8m4_t vs1, size_t vl) {
   return __riscv_vnsrl_wv_u8m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m4_t test_vnsrl_wx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) {
+vuint8m4_t test_vnsrl_wx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint16m8_t vs2,
+                                 size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u8m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16mf4_t test_vnsrl_wv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint16mf4_t test_vnsrl_wv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint32mf2_t vs2, vuint16mf4_t vs1,
+                                     size_t vl) {
   return __riscv_vnsrl_wv_u16mf4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16mf4_t test_vnsrl_wx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vnsrl_wx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+                                     vuint32mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u16mf4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16mf2_t test_vnsrl_wv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint16mf2_t test_vnsrl_wv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint32m1_t vs2, vuint16mf2_t vs1,
+                                     size_t vl) {
   return __riscv_vnsrl_wv_u16mf2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16mf2_t test_vnsrl_wx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vnsrl_wx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+                                     vuint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u16mf2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m1_t test_vnsrl_wv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vnsrl_wv_u16m1_mu(vbool16_t vm, vuint16m1_t vd,
+                                   vuint32m2_t vs2, vuint16m1_t vs1,
+                                   size_t vl) {
   return __riscv_vnsrl_wv_u16m1_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vnsrl_wx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vnsrl_wx_u16m1_mu(vbool16_t vm, vuint16m1_t vd,
+                                   vuint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u16m1_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m2_t test_vnsrl_wv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint16m2_t test_vnsrl_wv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2,
+                                   vuint16m2_t vs1, size_t vl) {
   return __riscv_vnsrl_wv_u16m2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m2_t test_vnsrl_wx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vnsrl_wx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint32m4_t vs2,
+                                   size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u16m2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint16m4_t test_vnsrl_wv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint16m4_t test_vnsrl_wv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2,
+                                   vuint16m4_t vs1, size_t vl) {
   return __riscv_vnsrl_wv_u16m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m4_t test_vnsrl_wx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vnsrl_wx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint32m8_t vs2,
+                                   size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u16m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint32mf2_t test_vnsrl_wv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vnsrl_wv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint64m1_t vs2, vuint32mf2_t vs1,
+                                     size_t vl) {
   return __riscv_vnsrl_wv_u32mf2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint32mf2_t test_vnsrl_wx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vnsrl_wx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                     vuint64m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u32mf2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m1_t test_vnsrl_wv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vnsrl_wv_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+                                   vuint64m2_t vs2, vuint32m1_t vs1,
+                                   size_t vl) {
   return __riscv_vnsrl_wv_u32m1_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vnsrl_wx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint32m1_t test_vnsrl_wx_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+                                   vuint64m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u32m1_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m2_t test_vnsrl_wv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vnsrl_wv_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+                                   vuint64m4_t vs2, vuint32m2_t vs1,
+                                   size_t vl) {
   return __riscv_vnsrl_wv_u32m2_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m2_t test_vnsrl_wx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vnsrl_wx_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+                                   vuint64m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u32m2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint32m4_t test_vnsrl_wv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vnsrl_wv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2,
+                                   vuint32m4_t vs1, size_t vl) {
   return __riscv_vnsrl_wv_u32m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m4_t test_vnsrl_wx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vnsrl_wx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint64m8_t vs2,
+                                   size_t rs1, size_t vl) {
   return __riscv_vnsrl_wx_u32m4_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vor.c b/auto-generated/policy_funcs/llvm-api-tests/vor.c
index 2be7f3bfe..fc6a30a00 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vor.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vor.c
@@ -5,1410 +5,1789 @@
 
 #include <riscv_vector.h>
 
-vint8mf8_t test_vor_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vor_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1,
+                                size_t vl) {
   return __riscv_vor_vv_i8mf8_tu(vd, vs2, vs1, vl);
 }
 
-vint8mf8_t test_vor_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint8mf8_t test_vor_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1,
+                                size_t vl) {
   return __riscv_vor_vx_i8mf8_tu(vd, vs2, rs1, vl);
 }
 
-vint8mf4_t test_vor_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vor_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1,
+                                size_t vl) {
   return __riscv_vor_vv_i8mf4_tu(vd, vs2, vs1, vl);
 }
 
-vint8mf4_t test_vor_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) {
+vint8mf4_t test_vor_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1,
+                                size_t vl) {
   return __riscv_vor_vx_i8mf4_tu(vd, vs2, rs1, vl);
 }
 
-vint8mf2_t test_vor_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vor_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1,
+                                size_t vl) {
   return __riscv_vor_vv_i8mf2_tu(vd, vs2, vs1, vl);
 }
 
-vint8mf2_t test_vor_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) {
+vint8mf2_t test_vor_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1,
+                                size_t vl) {
   return __riscv_vor_vx_i8mf2_tu(vd, vs2, rs1, vl);
 }
 
-vint8m1_t test_vor_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vor_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1,
+                              size_t vl) {
   return __riscv_vor_vv_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vor_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) {
+vint8m1_t test_vor_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1,
+                              size_t vl) {
   return __riscv_vor_vx_i8m1_tu(vd, vs2, rs1, vl);
 }
 
-vint8m2_t test_vor_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) {
+vint8m2_t test_vor_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1,
+                              size_t vl) {
   return __riscv_vor_vv_i8m2_tu(vd, vs2, vs1, vl);
 }
 
-vint8m2_t test_vor_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) {
+vint8m2_t test_vor_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1,
+                              size_t vl) {
   return __riscv_vor_vx_i8m2_tu(vd, vs2,
rs1, vl); } -vint8m4_t test_vor_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vor_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vor_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vor_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vor_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vor_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vor_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vor_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vor_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vor_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vor_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vor_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vor_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vor_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vor_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vor_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vor_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vor_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vor_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vor_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vor_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vor_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vor_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vor_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vor_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vor_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vor_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vor_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vor_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vor_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vor_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vor_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vor_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vor_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vor_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vor_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vor_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vor_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vor_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vor_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vor_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vor_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vor_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vor_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vor_vv_i16m8_tu(vd, vs2, 
vs1, vl); } -vint16m8_t test_vor_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vor_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vor_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vor_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vor_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vor_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vor_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vor_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vor_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vor_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vor_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vor_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vor_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vor_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vor_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vor_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vor_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vor_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vor_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vor_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vor_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vor_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vor_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vor_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vor_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vor_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vor_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vor_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vor_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vor_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vor_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vor_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vor_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vor_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vor_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vor_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vor_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vor_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vor_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vor_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vor_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vor_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vor_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vor_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return 
__riscv_vor_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vor_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vor_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vor_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vor_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vor_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vor_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vor_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vor_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vor_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vor_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vor_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vor_vx_i64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vor_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vor_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vor_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vor_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vor_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vor_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vor_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vor_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vor_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vor_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vor_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vor_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vor_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vor_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vor_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vor_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vor_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vor_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vor_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vor_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vor_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vor_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vor_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vor_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vor_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vor_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vor_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vor_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vor_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vor_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vor_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vor_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, 
vuint8m4_t vs1, + size_t vl) { return __riscv_vor_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vor_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vor_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vor_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vor_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vor_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vor_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vor_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vor_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vor_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vor_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vor_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vor_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vor_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vor_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vor_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vor_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vor_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vor_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vor_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vor_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vor_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vor_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vor_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vor_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vor_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vor_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vor_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vor_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vor_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vor_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vor_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vor_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vor_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vor_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vor_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vor_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vor_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vor_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vor_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vor_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t 
test_vor_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vor_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vor_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vor_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vor_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vor_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vor_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vor_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vor_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vor_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vor_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vor_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vor_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vor_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vor_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vor_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vor_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vor_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vor_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vor_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vor_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vor_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vor_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vor_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vor_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vor_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vor_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vor_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vor_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vor_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vor_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vor_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vor_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vor_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vor_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vor_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vor_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vor_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vor_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vor_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vor_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vor_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t 
test_vor_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vor_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vor_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vor_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vor_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vor_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vor_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vor_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vor_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vor_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vor_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vor_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vor_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vor_vx_u64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vor_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vor_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vor_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vor_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vor_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vor_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vor_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vor_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vor_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vor_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vor_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vor_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vor_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vor_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vor_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vor_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vor_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vor_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vor_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vor_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vor_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vor_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vor_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t 
test_vor_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vor_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vor_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vor_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vor_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vor_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vor_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vor_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vor_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vor_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vor_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vor_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vor_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vor_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vor_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vor_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vor_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vor_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vor_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vor_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vor_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vor_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vor_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vor_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vor_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vor_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vor_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vor_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vor_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vor_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vor_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vor_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vor_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vor_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vor_vv_i16m4_tum(vbool4_t vm, vint16m4_t 
vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vor_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vor_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vor_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vor_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vor_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vor_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vor_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vor_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vor_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vor_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vor_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vor_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vor_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vor_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vor_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vor_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vor_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vor_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vor_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vor_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vor_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vor_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vor_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vor_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vor_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vor_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vor_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vor_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vor_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vor_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vor_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vor_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vor_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vor_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, 
vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vor_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vor_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vor_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vor_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vor_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vor_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vor_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vor_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vor_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vor_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vor_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vor_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vor_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vor_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vor_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vor_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vor_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vor_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vor_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vor_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vor_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vor_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vor_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vor_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vor_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vor_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vor_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vor_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vor_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vor_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vor_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vor_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t 
vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vor_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vor_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vor_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vor_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vor_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vor_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vor_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vor_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vor_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vor_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vor_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vor_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vor_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vor_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vor_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vor_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vor_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vor_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vor_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vor_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vor_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vor_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vor_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vor_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vor_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vor_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vor_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vor_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vor_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vor_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vor_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vor_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t 
test_vor_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vor_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vor_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vor_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vor_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vor_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vor_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vor_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vor_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vor_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vor_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vor_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vor_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vor_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vor_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vor_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vor_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vor_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vor_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vor_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vor_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vor_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vor_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vor_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vor_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vor_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vor_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vor_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vor_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vor_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vor_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vor_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t 
vs2, + uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vor_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vor_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vor_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vor_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vor_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vor_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vor_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vor_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vor_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vor_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vor_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vor_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vor_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vor_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vor_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vor_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vor_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vor_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vor_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vor_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vor_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vor_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vor_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vor_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vor_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vor_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vor_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vor_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vor_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vor_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vor_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vor_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vor_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vor_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { 
+vuint64m8_t test_vor_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vor_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vor_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vor_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vor_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vor_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vor_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vor_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vor_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vor_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vor_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vor_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vor_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vor_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vor_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vor_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vor_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vor_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vor_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vor_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vor_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vor_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vor_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vor_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vor_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vor_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vor_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vor_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vor_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vor_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vor_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vor_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vor_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vor_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vor_vx_i8m4_tumu(vbool2_t vm, 
vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vor_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vor_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vor_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vor_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vor_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vor_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vor_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vor_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vor_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vor_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vor_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vor_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vor_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vor_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vor_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vor_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vor_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vor_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vor_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vor_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vor_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vor_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vor_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vor_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vor_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vor_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vor_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vor_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vor_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vor_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vor_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vor_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vor_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t 
test_vor_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vor_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vor_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vor_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vor_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vor_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vor_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vor_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vor_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vor_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vor_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vor_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vor_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vor_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vor_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vor_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vor_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vor_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vor_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vor_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vor_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vor_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vor_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vor_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vor_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vor_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vor_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vor_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vor_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vor_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vor_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vor_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vor_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vor_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, 
int64_t rs1, size_t vl) { +vint64m1_t test_vor_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vor_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vor_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vor_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vor_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vor_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vor_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vor_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vor_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vor_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vor_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vor_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vor_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vor_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vor_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vor_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vor_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vor_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vor_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vor_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vor_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vor_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vor_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vor_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vor_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vor_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vor_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vor_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vor_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vor_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vor_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t 
test_vor_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vor_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vor_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vor_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vor_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vor_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vor_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vor_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vor_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vor_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vor_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vor_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vor_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vor_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vor_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vor_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vor_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vor_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vor_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vor_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vor_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vor_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vor_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vor_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vor_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vor_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vor_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vor_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vor_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vor_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vor_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vor_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vor_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vor_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t 
vl) { return __riscv_vor_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vor_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vor_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vor_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vor_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vor_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vor_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vor_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vor_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vor_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vor_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vor_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vor_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vor_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vor_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vor_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vor_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vor_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vor_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vor_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vor_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vor_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vor_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vor_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vor_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vor_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vor_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vor_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vor_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vor_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vor_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vor_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vor_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t 
vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vor_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vor_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vor_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vor_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vor_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vor_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vor_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vor_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vor_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vor_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vor_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vor_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vor_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vor_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vor_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vor_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vor_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vor_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vor_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vor_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vor_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vor_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vor_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vor_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vor_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vor_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vor_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vor_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vor_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vor_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vor_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vor_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vor_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vor_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return 
__riscv_vor_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vor_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vor_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vor_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vor_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vor_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vor_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vor_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vor_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vor_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vor_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vor_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vor_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vor_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vor_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vor_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vor_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vor_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vor_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vor_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vor_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vor_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vor_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vor_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vor_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vor_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vor_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vor_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vor_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vor_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vor_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vor_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vor_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vor_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vor_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { 
+vint8m8_t test_vor_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vor_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vor_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vor_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vor_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vor_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vor_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vor_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vor_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vor_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vor_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vor_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vor_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vor_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vor_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vor_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vor_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vor_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vor_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vor_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vor_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vor_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vor_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vor_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vor_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vor_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vor_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vor_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vor_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vor_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vor_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vor_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vor_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vor_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t 
test_vor_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vor_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vor_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vor_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vor_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vor_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vor_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vor_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vor_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vor_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vor_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vor_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vor_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vor_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vor_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vor_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vor_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vor_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vor_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vor_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vor_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vor_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vor_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vor_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vor_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vor_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vor_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vor_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vor_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vor_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vor_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vor_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vor_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vor_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vor_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, 
vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vor_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vor_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vor_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vor_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vor_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vor_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vor_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vor_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vor_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vor_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vor_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vor_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vor_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vor_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vor_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vor_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vor_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vor_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vor_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vor_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vor_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vor_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vor_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vor_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vor_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vor_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vor_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vor_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vor_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vor_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vor_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vor_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vor_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, 
size_t vl) { return __riscv_vor_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vor_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vor_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vor_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vor_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vor_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vor_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vor_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vor_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vor_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vor_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vor_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vor_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vor_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vor_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vor_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vor_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vor_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vor_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vor_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vor_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vor_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vor_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vor_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vor_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vor_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vor_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vor_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vor_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vor_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vor_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vor_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vor_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vor_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t 
vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vor_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vor_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vor_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vor_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vor_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vor_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vor_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vor_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vor_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vor_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vor_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vor_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vor_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vor_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vor_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vor_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vor_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vor_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vor_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vor_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vor_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vor_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vor_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vor_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vor_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vor_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vor_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vor_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vor_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vor_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vor_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vor_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t 
test_vor_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vor_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vor_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vor_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vor_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vor_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vor_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vor_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vor_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vor_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vor_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vor_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vor_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vor_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vor_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vor_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vor_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vor_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vor_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vor_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vor_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vor_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vor_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vor_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vor_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vor_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vor_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vor_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vor_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vredand.c b/auto-generated/policy_funcs/llvm-api-tests/vredand.c index 7c097f4d8..a1a92086c 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vredand.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vredand.c @@ -5,354 +5,486 @@ #include -vint8m1_t test_vredand_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_i8mf8_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_i8mf4_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, 
size_t vl) { +vint8m1_t test_vredand_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_i8mf2_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_i8m1_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_i8m2_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_i8m4_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_i8m8_i8m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vredand_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_i16mf4_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vredand_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_i16mf2_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vredand_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_i16m1_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vredand_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_i16m2_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vredand_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_i16m4_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vredand_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_i16m8_i16m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vredand_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredand_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vredand_vs_i32mf2_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vredand_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredand_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vredand_vs_i32m1_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vredand_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredand_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2, + 
vint32m1_t vs1, size_t vl) { return __riscv_vredand_vs_i32m2_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vredand_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredand_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vredand_vs_i32m4_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vredand_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredand_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vredand_vs_i32m8_i32m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vredand_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredand_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vredand_vs_i64m1_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vredand_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredand_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vredand_vs_i64m2_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vredand_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredand_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vredand_vs_i64m4_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vredand_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredand_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vredand_vs_i64m8_i64m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_u8mf8_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_u8mf4_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_u8mf2_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_u8m1_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_u8m2_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vredand_vs_u8m4_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2, + vuint8m1_t vs1, size_t vl) { return 
__riscv_vredand_vs_u8m8_u8m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_u16mf4_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_u16mf2_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_u16m1_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_u16m2_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_u16m4_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vredand_vs_u16m8_u16m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vredand_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredand_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vredand_vs_u32mf2_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vredand_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredand_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vredand_vs_u32m1_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vredand_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredand_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vredand_vs_u32m2_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vredand_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredand_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vredand_vs_u32m4_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vredand_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredand_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vredand_vs_u32m8_u32m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vredand_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vredand_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vredand_vs_u64m1_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vredand_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t 
test_vredand_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vredand_vs_u64m2_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vredand_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vredand_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vredand_vs_u64m4_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vredand_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vredand_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vredand_vs_u64m8_u64m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd, + vint8mf8_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i8mf8_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd, + vint8mf4_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i8mf4_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd, + vint8mf2_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i8mf2_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd, + vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i8m1_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd, + vint8m2_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i8m2_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd, + vint8m4_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i8m4_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vredand_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vredand_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd, + vint8m8_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i8m8_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vredand_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd, + vint16mf4_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i16mf4_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vredand_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd, + vint16mf2_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i16mf2_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t 
test_vredand_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i16m1_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vredand_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd, + vint16m2_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i16m2_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vredand_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd, + vint16m4_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i16m4_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vredand_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vredand_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd, + vint16m8_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i16m8_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vredand_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredand_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd, + vint32mf2_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i32mf2_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vredand_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredand_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i32m1_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vredand_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredand_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd, + vint32m2_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i32m2_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vredand_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredand_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd, + vint32m4_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i32m4_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vredand_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredand_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd, + vint32m8_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i32m8_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vredand_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredand_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i64m1_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vredand_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredand_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd, + vint64m2_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i64m2_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vredand_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredand_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd, + vint64m4_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i64m4_i64m1_tum(vm, vd, vs2, vs1, vl); } 
-vint64m1_t test_vredand_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredand_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd, + vint64m8_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vredand_vs_i64m8_i64m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd, + vuint8mf8_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u8mf8_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd, + vuint8mf4_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u8mf4_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd, + vuint8mf2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u8mf2_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u8m1_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd, + vuint8m2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u8m2_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd, vuint8m4_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd, + vuint8m4_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u8m4_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredand_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredand_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd, + vuint8m8_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u8m8_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd, + vuint16mf4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u16mf4_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16mf2_u16m1_tum(vbool32_t vm, vuint16m1_t vd, vuint16mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16mf2_u16m1_tum(vbool32_t vm, vuint16m1_t vd, + vuint16mf2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u16mf2_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u16m1_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16m2_u16m1_tum(vbool8_t vm, vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16m2_u16m1_tum(vbool8_t vm, 
vuint16m1_t vd, + vuint16m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u16m2_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd, + vuint16m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u16m4_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredand_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredand_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd, + vuint16m8_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u16m8_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vredand_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredand_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd, + vuint32mf2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u32mf2_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vredand_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredand_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u32m1_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vredand_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredand_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd, + vuint32m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u32m2_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vredand_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredand_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd, + vuint32m4_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u32m4_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vredand_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredand_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd, + vuint32m8_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u32m8_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vredand_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vredand_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u64m1_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vredand_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vredand_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd, + vuint64m2_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u64m2_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vredand_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vredand_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd, + vuint64m4_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vredand_vs_u64m4_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vredand_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vredand_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd, + vuint64m8_t vs2, vuint64m1_t vs1, + size_t vl) { return 
   return __riscv_vredand_vs_u64m8_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vredmax.c b/auto-generated/policy_funcs/llvm-api-tests/vredmax.c
index 987c671a3..45d1dfebe 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vredmax.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vredmax.c
@@ -5,178 +5,244 @@
 #include <riscv_vector.h>
 
-vint8m1_t test_vredmax_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i8mf8_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i8mf4_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i8mf2_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i8m1_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i8m2_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i8m4_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i8m8_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i16mf4_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i16mf2_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i16m1_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i16m2_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i16m4_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i16m8_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmax_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmax_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i32mf2_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmax_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmax_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i32m1_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmax_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmax_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i32m2_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmax_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmax_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i32m4_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmax_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmax_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i32m8_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmax_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmax_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i64m1_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmax_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmax_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i64m2_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmax_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmax_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i64m4_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmax_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmax_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredmax_vs_i64m8_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd,
+    vint8mf8_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i8mf8_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd,
+    vint8mf4_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i8mf4_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd,
+    vint8mf2_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i8mf2_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd,
+    vint8m1_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i8m1_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd,
+    vint8m2_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i8m2_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd,
+    vint8m4_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i8m4_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmax_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmax_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd,
+    vint8m8_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i8m8_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd,
+    vint16mf4_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i16mf4_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd,
+    vint16mf2_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i16mf2_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd,
+    vint16m1_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i16m1_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd,
+    vint16m2_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i16m2_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd,
+    vint16m4_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i16m4_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmax_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmax_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd,
+    vint16m8_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i16m8_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmax_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmax_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd,
+    vint32mf2_t vs2, vint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i32mf2_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmax_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmax_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd,
+    vint32m1_t vs2, vint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i32m1_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmax_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmax_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd,
+    vint32m2_t vs2, vint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i32m2_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmax_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmax_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd,
+    vint32m4_t vs2, vint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i32m4_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmax_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmax_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd,
+    vint32m8_t vs2, vint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i32m8_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmax_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmax_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd,
+    vint64m1_t vs2, vint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i64m1_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmax_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmax_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd,
+    vint64m2_t vs2, vint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i64m2_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmax_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmax_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd,
+    vint64m4_t vs2, vint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i64m4_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmax_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmax_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd,
+    vint64m8_t vs2, vint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmax_vs_i64m8_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vredmaxu.c b/auto-generated/policy_funcs/llvm-api-tests/vredmaxu.c
index 649f42479..27a0040ad 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vredmaxu.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vredmaxu.c
@@ -5,178 +5,244 @@
 #include <riscv_vector.h>
 
-vuint8m1_t test_vredmaxu_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u8mf8_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u8mf4_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u8mf2_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u8m1_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u8m2_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u8m4_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u8m8_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u16mf4_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u16mf2_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u16m1_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u16m2_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u16m4_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u16m8_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredmaxu_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredmaxu_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2,
+    vuint32m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u32mf2_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredmaxu_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredmaxu_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+    vuint32m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u32m1_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredmaxu_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredmaxu_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2,
+    vuint32m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u32m2_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredmaxu_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredmaxu_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2,
+    vuint32m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u32m4_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredmaxu_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredmaxu_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2,
+    vuint32m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u32m8_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredmaxu_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredmaxu_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+    vuint64m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u64m1_u64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredmaxu_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredmaxu_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2,
+    vuint64m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u64m2_u64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredmaxu_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredmaxu_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2,
+    vuint64m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u64m4_u64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredmaxu_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredmaxu_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2,
+    vuint64m1_t vs1, size_t vl) {
   return __riscv_vredmaxu_vs_u64m8_u64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd,
+    vuint8mf8_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u8mf8_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd,
+    vuint8mf4_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u8mf4_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd,
+    vuint8mf2_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u8mf2_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd,
+    vuint8m1_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u8m1_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd,
+    vuint8m2_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u8m2_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd, vuint8m4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd,
+    vuint8m4_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u8m4_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredmaxu_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredmaxu_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd,
+    vuint8m8_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u8m8_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd,
+    vuint16mf4_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u16mf4_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16mf2_u16m1_tum(vbool32_t vm, vuint16m1_t vd, vuint16mf2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16mf2_u16m1_tum(vbool32_t vm, vuint16m1_t vd,
+    vuint16mf2_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u16mf2_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+    vuint16m1_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u16m1_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16m2_u16m1_tum(vbool8_t vm, vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16m2_u16m1_tum(vbool8_t vm, vuint16m1_t vd,
+    vuint16m2_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u16m2_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd,
+    vuint16m4_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u16m4_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredmaxu_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredmaxu_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd,
+    vuint16m8_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u16m8_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredmaxu_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredmaxu_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd,
+    vuint32mf2_t vs2, vuint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u32mf2_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredmaxu_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredmaxu_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+    vuint32m1_t vs2, vuint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u32m1_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredmaxu_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredmaxu_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd,
+    vuint32m2_t vs2, vuint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u32m2_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredmaxu_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredmaxu_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd,
+    vuint32m4_t vs2, vuint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u32m4_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredmaxu_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredmaxu_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd,
+    vuint32m8_t vs2, vuint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u32m8_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredmaxu_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredmaxu_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+    vuint64m1_t vs2, vuint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u64m1_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredmaxu_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredmaxu_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd,
+    vuint64m2_t vs2, vuint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u64m2_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredmaxu_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredmaxu_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd,
+    vuint64m4_t vs2, vuint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u64m4_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredmaxu_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredmaxu_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd,
+    vuint64m8_t vs2, vuint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmaxu_vs_u64m8_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vredmin.c b/auto-generated/policy_funcs/llvm-api-tests/vredmin.c
index 2888440b3..4b920121b 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vredmin.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vredmin.c
@@ -5,178 +5,244 @@
 #include <riscv_vector.h>
 
-vint8m1_t test_vredmin_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i8mf8_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i8mf4_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i8mf2_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i8m1_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i8m2_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i8m4_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i8m8_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i16mf4_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i16mf2_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i16m1_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i16m2_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i16m4_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i16m8_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmin_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmin_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i32mf2_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmin_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmin_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i32m1_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmin_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmin_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i32m2_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmin_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmin_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i32m4_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmin_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmin_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i32m8_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmin_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmin_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i64m1_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmin_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmin_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i64m2_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmin_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmin_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i64m4_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmin_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmin_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredmin_vs_i64m8_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd,
+    vint8mf8_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i8mf8_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd,
+    vint8mf4_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i8mf4_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd,
+    vint8mf2_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i8mf2_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd,
+    vint8m1_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i8m1_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd,
+    vint8m2_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i8m2_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd,
+    vint8m4_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i8m4_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredmin_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredmin_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd,
+    vint8m8_t vs2, vint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i8m8_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd,
+    vint16mf4_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i16mf4_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd,
+    vint16mf2_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i16mf2_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd,
+    vint16m1_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i16m1_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd,
+    vint16m2_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i16m2_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd,
+    vint16m4_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i16m4_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredmin_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredmin_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd,
+    vint16m8_t vs2, vint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i16m8_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmin_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmin_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd,
+    vint32mf2_t vs2, vint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i32mf2_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmin_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmin_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd,
+    vint32m1_t vs2, vint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i32m1_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmin_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmin_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd,
+    vint32m2_t vs2, vint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i32m2_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmin_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmin_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd,
+    vint32m4_t vs2, vint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i32m4_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredmin_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredmin_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd,
+    vint32m8_t vs2, vint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i32m8_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmin_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmin_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd,
+    vint64m1_t vs2, vint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i64m1_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmin_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmin_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd,
+    vint64m2_t vs2, vint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i64m2_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmin_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmin_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd,
+    vint64m4_t vs2, vint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i64m4_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredmin_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredmin_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd,
+    vint64m8_t vs2, vint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredmin_vs_i64m8_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vredminu.c b/auto-generated/policy_funcs/llvm-api-tests/vredminu.c
index 5bfb77f93..96ade0849 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vredminu.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vredminu.c
@@ -5,178 +5,244 @@
 #include <riscv_vector.h>
 
-vuint8m1_t test_vredminu_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u8mf8_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u8mf4_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u8mf2_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u8m1_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u8m2_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u8m4_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u8m8_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u16mf4_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u16mf2_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u16m1_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u16m2_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u16m4_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2,
+    vuint16m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u16m8_u16m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredminu_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredminu_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2,
+    vuint32m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u32mf2_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredminu_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredminu_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+    vuint32m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u32m1_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredminu_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredminu_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2,
+    vuint32m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u32m2_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredminu_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredminu_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2,
+    vuint32m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u32m4_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredminu_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredminu_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2,
+    vuint32m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u32m8_u32m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredminu_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredminu_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+    vuint64m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u64m1_u64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredminu_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredminu_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2,
+    vuint64m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u64m2_u64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredminu_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredminu_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2,
+    vuint64m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u64m4_u64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredminu_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredminu_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2,
+    vuint64m1_t vs1, size_t vl) {
   return __riscv_vredminu_vs_u64m8_u64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd,
+    vuint8mf8_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u8mf8_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd,
+    vuint8mf4_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u8mf4_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd,
+    vuint8mf2_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u8mf2_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd,
+    vuint8m1_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u8m1_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd,
+    vuint8m2_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u8m2_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd, vuint8m4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd,
+    vuint8m4_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u8m4_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredminu_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredminu_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd,
+    vuint8m8_t vs2, vuint8m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u8m8_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd,
+    vuint16mf4_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u16mf4_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16mf2_u16m1_tum(vbool32_t vm, vuint16m1_t vd, vuint16mf2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16mf2_u16m1_tum(vbool32_t vm, vuint16m1_t vd,
+    vuint16mf2_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u16mf2_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+    vuint16m1_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u16m1_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16m2_u16m1_tum(vbool8_t vm, vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16m2_u16m1_tum(vbool8_t vm, vuint16m1_t vd,
+    vuint16m2_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u16m2_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd,
+    vuint16m4_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u16m4_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint16m1_t test_vredminu_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredminu_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd,
+    vuint16m8_t vs2, vuint16m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u16m8_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredminu_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredminu_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd,
+    vuint32mf2_t vs2, vuint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u32mf2_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredminu_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredminu_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+    vuint32m1_t vs2, vuint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u32m1_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredminu_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredminu_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd,
+    vuint32m2_t vs2, vuint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u32m2_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredminu_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredminu_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd,
+    vuint32m4_t vs2, vuint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u32m4_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint32m1_t test_vredminu_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredminu_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd,
+    vuint32m8_t vs2, vuint32m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u32m8_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredminu_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredminu_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+    vuint64m1_t vs2, vuint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u64m1_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredminu_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredminu_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd,
+    vuint64m2_t vs2, vuint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u64m2_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredminu_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredminu_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd,
+    vuint64m4_t vs2, vuint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u64m4_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m1_t test_vredminu_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredminu_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd,
+    vuint64m8_t vs2, vuint64m1_t vs1,
+    size_t vl) {
   return __riscv_vredminu_vs_u64m8_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vredor.c b/auto-generated/policy_funcs/llvm-api-tests/vredor.c
index 4db6ceec7..7c6dfa7d9 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vredor.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vredor.c
@@ -5,354 +5,482 @@
 #include <riscv_vector.h>
 
-vint8m1_t test_vredor_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i8mf8_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredor_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i8mf4_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredor_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i8mf2_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredor_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i8m1_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredor_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i8m2_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredor_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i8m4_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint8m1_t test_vredor_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2,
+    vint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i8m8_i8m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredor_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i16mf4_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredor_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i16mf2_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredor_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i16m1_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredor_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i16m2_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredor_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i16m4_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vredor_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2,
+    vint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i16m8_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredor_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredor_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i32mf2_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredor_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredor_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i32m1_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredor_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredor_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i32m2_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredor_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredor_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i32m4_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint32m1_t test_vredor_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredor_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2,
+    vint32m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i32m8_i32m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredor_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredor_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i64m1_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredor_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredor_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i64m2_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredor_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredor_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i64m4_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vint64m1_t test_vredor_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredor_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2,
+    vint64m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i64m8_i64m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredor_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u8mf8_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredor_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u8mf4_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredor_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u8mf2_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredor_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u8m1_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredor_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u8m2_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredor_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2,
+    vuint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u8m4_u8m1_tu(vd, vs2, vs1, vl);
 }
 
-vuint8m1_t test_vredor_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2,
   return __riscv_vredor_vs_u8m8_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2,
+                                           vuint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u16mf4_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2,
+                                           vuint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u16mf2_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+                                          vuint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u16m1_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2,
+                                          vuint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u16m2_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2,
+                                          vuint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u16m4_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2,
+                                          vuint16m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u16m8_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredor_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredor_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2,
+                                           vuint32m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u32mf2_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredor_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredor_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                          vuint32m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u32m1_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredor_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredor_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2,
+                                          vuint32m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u32m2_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredor_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredor_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2,
+                                          vuint32m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u32m4_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredor_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredor_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2,
+                                          vuint32m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u32m8_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredor_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredor_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                          vuint64m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u64m1_u64m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredor_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredor_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2,
+                                          vuint64m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u64m2_u64m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredor_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredor_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2,
+                                          vuint64m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u64m4_u64m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredor_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredor_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2,
+                                          vuint64m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_u64m8_u64m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredor_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd,
+                                        vint8mf8_t vs2, vint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredor_vs_i8mf8_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredor_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd,
+                                        vint8mf4_t vs2, vint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredor_vs_i8mf4_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredor_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd,
+                                        vint8mf2_t vs2, vint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredor_vs_i8mf2_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredor_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i8m1_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredor_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd, vint8m2_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i8m2_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredor_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd, vint8m4_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i8m4_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredor_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredor_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd, vint8m8_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredor_vs_i8m8_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredor_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd,
+                                           vint16mf4_t vs2, vint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_i16mf4_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredor_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd,
+                                           vint16mf2_t vs2, vint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_i16mf2_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredor_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd,
+                                          vint16m1_t vs2, vint16m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i16m1_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredor_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd,
+                                          vint16m2_t vs2, vint16m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i16m2_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredor_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd,
+                                          vint16m4_t vs2, vint16m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i16m4_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredor_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredor_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd,
+                                          vint16m8_t vs2, vint16m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i16m8_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredor_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredor_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd,
+                                           vint32mf2_t vs2, vint32m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_i32mf2_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredor_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredor_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd,
+                                          vint32m1_t vs2, vint32m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i32m1_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredor_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredor_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd,
+                                          vint32m2_t vs2, vint32m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i32m2_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredor_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredor_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd,
+                                          vint32m4_t vs2, vint32m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i32m4_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredor_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredor_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd,
+                                          vint32m8_t vs2, vint32m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i32m8_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredor_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredor_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd,
+                                          vint64m1_t vs2, vint64m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i64m1_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredor_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredor_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd,
+                                          vint64m2_t vs2, vint64m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i64m2_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredor_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredor_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd,
+                                          vint64m4_t vs2, vint64m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i64m4_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredor_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredor_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd,
+                                          vint64m8_t vs2, vint64m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredor_vs_i64m8_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredor_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd,
+                                         vuint8mf8_t vs2, vuint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredor_vs_u8mf8_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredor_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd,
+                                         vuint8mf4_t vs2, vuint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredor_vs_u8mf4_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredor_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd,
+                                         vuint8mf2_t vs2, vuint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredor_vs_u8mf2_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredor_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd,
+                                        vuint8m1_t vs2, vuint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredor_vs_u8m1_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredor_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd,
+                                        vuint8m2_t vs2, vuint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredor_vs_u8m2_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredor_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd, vuint8m4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd,
+                                        vuint8m4_t vs2, vuint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredor_vs_u8m4_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredor_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredor_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd,
+                                        vuint8m8_t vs2, vuint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredor_vs_u8m8_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd,
+                                            vuint16mf4_t vs2, vuint16m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredor_vs_u16mf4_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16mf2_u16m1_tum(vbool32_t vm, vuint16m1_t vd, vuint16mf2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16mf2_u16m1_tum(vbool32_t vm, vuint16m1_t vd,
+                                            vuint16mf2_t vs2, vuint16m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredor_vs_u16mf2_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                           vuint16m1_t vs2, vuint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u16m1_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16m2_u16m1_tum(vbool8_t vm, vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16m2_u16m1_tum(vbool8_t vm, vuint16m1_t vd,
+                                           vuint16m2_t vs2, vuint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u16m2_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd,
+                                           vuint16m4_t vs2, vuint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u16m4_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredor_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredor_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd,
+                                           vuint16m8_t vs2, vuint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u16m8_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredor_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredor_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd,
+                                            vuint32mf2_t vs2, vuint32m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredor_vs_u32mf2_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredor_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredor_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                           vuint32m1_t vs2, vuint32m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u32m1_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredor_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredor_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd,
+                                           vuint32m2_t vs2, vuint32m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u32m2_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredor_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredor_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd,
+                                           vuint32m4_t vs2, vuint32m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u32m4_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredor_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredor_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd,
+                                           vuint32m8_t vs2, vuint32m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u32m8_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredor_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredor_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                           vuint64m1_t vs2, vuint64m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u64m1_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredor_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredor_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd,
+                                           vuint64m2_t vs2, vuint64m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u64m2_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredor_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredor_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd,
+                                           vuint64m4_t vs2, vuint64m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u64m4_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredor_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredor_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd,
+                                           vuint64m8_t vs2, vuint64m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredor_vs_u64m8_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vredsum.c b/auto-generated/policy_funcs/llvm-api-tests/vredsum.c
index 19cf17ccd..d341f8e56 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vredsum.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vredsum.c
@@ -5,354 +5,486 @@
 #include <riscv_vector.h>
-vint8m1_t test_vredsum_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2,
+                                        vint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i8mf8_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2,
+                                        vint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i8mf4_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2,
+                                        vint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i8mf2_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i8m1_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i8m2_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i8m4_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i8m8_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2,
+                                           vint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i16mf4_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2,
+                                           vint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i16mf2_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2,
+                                          vint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i16m1_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2,
+                                          vint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i16m2_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2,
+                                          vint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i16m4_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2,
+                                          vint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i16m8_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredsum_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredsum_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2,
+                                           vint32m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i32mf2_i32m1_tu(vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredsum_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredsum_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2,
+                                          vint32m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i32m1_i32m1_tu(vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredsum_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredsum_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2,
+                                          vint32m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i32m2_i32m1_tu(vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredsum_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredsum_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2,
+                                          vint32m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i32m4_i32m1_tu(vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredsum_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredsum_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2,
+                                          vint32m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i32m8_i32m1_tu(vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredsum_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredsum_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2,
+                                          vint64m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i64m1_i64m1_tu(vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredsum_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredsum_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2,
+                                          vint64m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i64m2_i64m1_tu(vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredsum_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredsum_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2,
+                                          vint64m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i64m4_i64m1_tu(vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredsum_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredsum_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2,
+                                          vint64m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_i64m8_i64m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2,
+                                         vuint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u8mf8_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2,
+                                         vuint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u8mf4_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2,
+                                         vuint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u8mf2_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2,
+                                        vuint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u8m1_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2,
+                                        vuint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u8m2_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2,
+                                        vuint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u8m4_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2,
+                                        vuint8m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u8m8_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredsum_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2,
+                                            vuint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u16mf4_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredsum_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2,
+                                            vuint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u16mf2_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredsum_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+                                           vuint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u16m1_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredsum_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2,
+                                           vuint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u16m2_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredsum_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2,
+                                           vuint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u16m4_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredsum_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2,
+                                           vuint16m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u16m8_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredsum_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredsum_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2,
+                                            vuint32m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u32mf2_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredsum_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredsum_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                           vuint32m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u32m1_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredsum_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredsum_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2,
+                                           vuint32m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u32m2_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredsum_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredsum_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2,
+                                           vuint32m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u32m4_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredsum_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredsum_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2,
+                                           vuint32m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u32m8_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredsum_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredsum_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                           vuint64m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u64m1_u64m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredsum_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredsum_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2,
+                                           vuint64m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u64m2_u64m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredsum_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredsum_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2,
+                                           vuint64m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u64m4_u64m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredsum_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredsum_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2,
+                                           vuint64m1_t vs1, size_t vl) {
   return __riscv_vredsum_vs_u64m8_u64m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd,
+                                         vint8mf8_t vs2, vint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredsum_vs_i8mf8_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd,
+                                         vint8mf4_t vs2, vint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredsum_vs_i8mf4_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd,
+                                         vint8mf2_t vs2, vint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredsum_vs_i8mf2_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd,
+                                        vint8m1_t vs2, vint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredsum_vs_i8m1_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd,
+                                        vint8m2_t vs2, vint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredsum_vs_i8m2_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd,
+                                        vint8m4_t vs2, vint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredsum_vs_i8m4_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredsum_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredsum_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd,
+                                        vint8m8_t vs2, vint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredsum_vs_i8m8_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd,
+                                            vint16mf4_t vs2, vint16m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_i16mf4_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd,
+                                            vint16mf2_t vs2, vint16m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_i16mf2_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd,
+                                           vint16m1_t vs2, vint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i16m1_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd,
+                                           vint16m2_t vs2, vint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i16m2_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd,
+                                           vint16m4_t vs2, vint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i16m4_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredsum_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredsum_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd,
+                                           vint16m8_t vs2, vint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i16m8_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredsum_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredsum_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd,
+                                            vint32mf2_t vs2, vint32m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_i32mf2_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredsum_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredsum_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd,
+                                           vint32m1_t vs2, vint32m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i32m1_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredsum_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredsum_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd,
+                                           vint32m2_t vs2, vint32m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i32m2_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredsum_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredsum_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd,
+                                           vint32m4_t vs2, vint32m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i32m4_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredsum_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredsum_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd,
+                                           vint32m8_t vs2, vint32m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i32m8_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredsum_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredsum_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd,
+                                           vint64m1_t vs2, vint64m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i64m1_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredsum_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredsum_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd,
+                                           vint64m2_t vs2, vint64m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i64m2_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredsum_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredsum_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd,
+                                           vint64m4_t vs2, vint64m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i64m4_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredsum_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredsum_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd,
+                                           vint64m8_t vs2, vint64m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredsum_vs_i64m8_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd,
+                                          vuint8mf8_t vs2, vuint8m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredsum_vs_u8mf8_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd,
+                                          vuint8mf4_t vs2, vuint8m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredsum_vs_u8mf4_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd,
+                                          vuint8mf2_t vs2, vuint8m1_t vs1,
+                                          size_t vl) {
   return __riscv_vredsum_vs_u8mf2_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd,
+                                         vuint8m1_t vs2, vuint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredsum_vs_u8m1_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd,
+                                         vuint8m2_t vs2, vuint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredsum_vs_u8m2_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd, vuint8m4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd,
+                                         vuint8m4_t vs2, vuint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredsum_vs_u8m4_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredsum_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredsum_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd,
+                                         vuint8m8_t vs2, vuint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredsum_vs_u8m8_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredsum_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd,
+                                             vuint16mf4_t vs2, vuint16m1_t vs1,
+                                             size_t vl) {
   return __riscv_vredsum_vs_u16mf4_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16mf2_u16m1_tum(vbool32_t vm,
+vuint16m1_t test_vredsum_vs_u16mf2_u16m1_tum(vbool32_t vm, vuint16m1_t vd,
+                                             vuint16mf2_t vs2, vuint16m1_t vs1,
+                                             size_t vl) {
   return __riscv_vredsum_vs_u16mf2_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredsum_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                            vuint16m1_t vs2, vuint16m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u16m1_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16m2_u16m1_tum(vbool8_t vm, vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredsum_vs_u16m2_u16m1_tum(vbool8_t vm, vuint16m1_t vd,
+                                            vuint16m2_t vs2, vuint16m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u16m2_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredsum_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd,
+                                            vuint16m4_t vs2, vuint16m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u16m4_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredsum_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredsum_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd,
+                                            vuint16m8_t vs2, vuint16m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u16m8_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredsum_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredsum_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd,
+                                             vuint32mf2_t vs2, vuint32m1_t vs1,
+                                             size_t vl) {
   return __riscv_vredsum_vs_u32mf2_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredsum_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredsum_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                            vuint32m1_t vs2, vuint32m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u32m1_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredsum_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredsum_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd,
+                                            vuint32m2_t vs2, vuint32m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u32m2_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredsum_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredsum_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd,
+                                            vuint32m4_t vs2, vuint32m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u32m4_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredsum_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredsum_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd,
+                                            vuint32m8_t vs2, vuint32m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u32m8_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredsum_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredsum_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                            vuint64m1_t vs2, vuint64m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u64m1_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredsum_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredsum_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd,
+                                            vuint64m2_t vs2, vuint64m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u64m2_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredsum_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredsum_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd,
+                                            vuint64m4_t vs2, vuint64m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u64m4_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredsum_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredsum_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd,
+                                            vuint64m8_t vs2, vuint64m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredsum_vs_u64m8_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vredxor.c b/auto-generated/policy_funcs/llvm-api-tests/vredxor.c
index 157b8dd62..6082d9d51 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vredxor.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vredxor.c
@@ -5,354 +5,486 @@
 #include <riscv_vector.h>
-vint8m1_t test_vredxor_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8mf8_i8m1_tu(vint8m1_t vd, vint8mf8_t vs2,
+                                        vint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i8mf8_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8mf4_i8m1_tu(vint8m1_t vd, vint8mf4_t vs2,
+                                        vint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i8mf4_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8mf2_i8m1_tu(vint8m1_t vd, vint8mf2_t vs2,
+                                        vint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i8mf2_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8m1_i8m1_tu(vint8m1_t vd, vint8m1_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i8m1_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8m2_i8m1_tu(vint8m1_t vd, vint8m2_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i8m2_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8m4_i8m1_tu(vint8m1_t vd, vint8m4_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i8m4_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8m8_i8m1_tu(vint8m1_t vd, vint8m8_t vs2,
+                                       vint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i8m8_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16mf4_i16m1_tu(vint16m1_t vd, vint16mf4_t vs2,
+                                           vint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i16mf4_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16mf2_i16m1_tu(vint16m1_t vd, vint16mf2_t vs2,
+                                           vint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i16mf2_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16m1_i16m1_tu(vint16m1_t vd, vint16m1_t vs2,
+                                          vint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i16m1_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16m2_i16m1_tu(vint16m1_t vd, vint16m2_t vs2,
+                                          vint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i16m2_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16m4_i16m1_tu(vint16m1_t vd, vint16m4_t vs2,
+                                          vint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i16m4_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16m8_i16m1_tu(vint16m1_t vd, vint16m8_t vs2,
+                                          vint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i16m8_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredxor_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredxor_vs_i32mf2_i32m1_tu(vint32m1_t vd, vint32mf2_t vs2,
+                                           vint32m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i32mf2_i32m1_tu(vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredxor_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredxor_vs_i32m1_i32m1_tu(vint32m1_t vd, vint32m1_t vs2,
+                                          vint32m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i32m1_i32m1_tu(vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredxor_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredxor_vs_i32m2_i32m1_tu(vint32m1_t vd, vint32m2_t vs2,
+                                          vint32m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i32m2_i32m1_tu(vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredxor_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredxor_vs_i32m4_i32m1_tu(vint32m1_t vd, vint32m4_t vs2,
+                                          vint32m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i32m4_i32m1_tu(vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredxor_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredxor_vs_i32m8_i32m1_tu(vint32m1_t vd, vint32m8_t vs2,
+                                          vint32m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i32m8_i32m1_tu(vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredxor_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredxor_vs_i64m1_i64m1_tu(vint64m1_t vd, vint64m1_t vs2,
+                                          vint64m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i64m1_i64m1_tu(vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredxor_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredxor_vs_i64m2_i64m1_tu(vint64m1_t vd, vint64m2_t vs2,
+                                          vint64m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i64m2_i64m1_tu(vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredxor_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredxor_vs_i64m4_i64m1_tu(vint64m1_t vd, vint64m4_t vs2,
+                                          vint64m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i64m4_i64m1_tu(vd, vs2, vs1, vl);
 }
-vint64m1_t test_vredxor_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vredxor_vs_i64m8_i64m1_tu(vint64m1_t vd, vint64m8_t vs2,
+                                          vint64m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_i64m8_i64m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredxor_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredxor_vs_u8mf8_u8m1_tu(vuint8m1_t vd, vuint8mf8_t vs2,
+                                         vuint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u8mf8_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredxor_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredxor_vs_u8mf4_u8m1_tu(vuint8m1_t vd, vuint8mf4_t vs2,
+                                         vuint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u8mf4_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredxor_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredxor_vs_u8mf2_u8m1_tu(vuint8m1_t vd, vuint8mf2_t vs2,
+                                         vuint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u8mf2_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredxor_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredxor_vs_u8m1_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2,
+                                        vuint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u8m1_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredxor_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredxor_vs_u8m2_u8m1_tu(vuint8m1_t vd, vuint8m2_t vs2,
+                                        vuint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u8m2_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredxor_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredxor_vs_u8m4_u8m1_tu(vuint8m1_t vd, vuint8m4_t vs2,
+                                        vuint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u8m4_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vredxor_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vredxor_vs_u8m8_u8m1_tu(vuint8m1_t vd, vuint8m8_t vs2,
+                                        vuint8m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u8m8_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredxor_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredxor_vs_u16mf4_u16m1_tu(vuint16m1_t vd, vuint16mf4_t vs2,
+                                            vuint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u16mf4_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredxor_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredxor_vs_u16mf2_u16m1_tu(vuint16m1_t vd, vuint16mf2_t vs2,
+                                            vuint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u16mf2_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredxor_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredxor_vs_u16m1_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+                                           vuint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u16m1_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredxor_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredxor_vs_u16m2_u16m1_tu(vuint16m1_t vd, vuint16m2_t vs2,
+                                           vuint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u16m2_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredxor_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredxor_vs_u16m4_u16m1_tu(vuint16m1_t vd, vuint16m4_t vs2,
+                                           vuint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u16m4_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vredxor_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vredxor_vs_u16m8_u16m1_tu(vuint16m1_t vd, vuint16m8_t vs2,
+                                           vuint16m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u16m8_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredxor_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredxor_vs_u32mf2_u32m1_tu(vuint32m1_t vd, vuint32mf2_t vs2,
+                                            vuint32m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u32mf2_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredxor_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredxor_vs_u32m1_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                           vuint32m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u32m1_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredxor_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredxor_vs_u32m2_u32m1_tu(vuint32m1_t vd, vuint32m2_t vs2,
+                                           vuint32m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u32m2_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredxor_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredxor_vs_u32m4_u32m1_tu(vuint32m1_t vd, vuint32m4_t vs2,
+                                           vuint32m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u32m4_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vredxor_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vredxor_vs_u32m8_u32m1_tu(vuint32m1_t vd, vuint32m8_t vs2,
+                                           vuint32m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u32m8_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredxor_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredxor_vs_u64m1_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                           vuint64m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u64m1_u64m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredxor_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredxor_vs_u64m2_u64m1_tu(vuint64m1_t vd, vuint64m2_t vs2,
+                                           vuint64m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u64m2_u64m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredxor_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredxor_vs_u64m4_u64m1_tu(vuint64m1_t vd, vuint64m4_t vs2,
+                                           vuint64m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u64m4_u64m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vredxor_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vredxor_vs_u64m8_u64m1_tu(vuint64m1_t vd, vuint64m8_t vs2,
+                                           vuint64m1_t vs1, size_t vl) {
   return __riscv_vredxor_vs_u64m8_u64m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd, vint8mf8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8mf8_i8m1_tum(vbool64_t vm, vint8m1_t vd,
+                                         vint8mf8_t vs2, vint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredxor_vs_i8mf8_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd, vint8mf4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8mf4_i8m1_tum(vbool32_t vm, vint8m1_t vd,
+                                         vint8mf4_t vs2, vint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredxor_vs_i8mf4_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd, vint8mf2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8mf2_i8m1_tum(vbool16_t vm, vint8m1_t vd,
+                                         vint8mf2_t vs2, vint8m1_t vs1,
+                                         size_t vl) {
   return __riscv_vredxor_vs_i8mf2_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8m1_i8m1_tum(vbool8_t vm, vint8m1_t vd,
+                                        vint8m1_t vs2, vint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredxor_vs_i8m1_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd, vint8m2_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8m2_i8m1_tum(vbool4_t vm, vint8m1_t vd,
+                                        vint8m2_t vs2, vint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredxor_vs_i8m2_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd, vint8m4_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8m4_i8m1_tum(vbool2_t vm, vint8m1_t vd,
+                                        vint8m4_t vs2, vint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredxor_vs_i8m4_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vredxor_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd, vint8m8_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vredxor_vs_i8m8_i8m1_tum(vbool1_t vm, vint8m1_t vd,
+                                        vint8m8_t vs2, vint8m1_t vs1,
+                                        size_t vl) {
   return __riscv_vredxor_vs_i8m8_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd, vint16mf4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16mf4_i16m1_tum(vbool64_t vm, vint16m1_t vd,
+                                            vint16mf4_t vs2, vint16m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredxor_vs_i16mf4_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd, vint16mf2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16mf2_i16m1_tum(vbool32_t vm, vint16m1_t vd,
+                                            vint16mf2_t vs2, vint16m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredxor_vs_i16mf2_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16m1_i16m1_tum(vbool16_t vm, vint16m1_t vd,
+                                           vint16m1_t vs2, vint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredxor_vs_i16m1_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd, vint16m2_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16m2_i16m1_tum(vbool8_t vm, vint16m1_t vd,
+                                           vint16m2_t vs2, vint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredxor_vs_i16m2_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd, vint16m4_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16m4_i16m1_tum(vbool4_t vm, vint16m1_t vd,
+                                           vint16m4_t vs2, vint16m1_t vs1,
+                                           size_t vl) {
  return __riscv_vredxor_vs_i16m4_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vredxor_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd, vint16m8_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vredxor_vs_i16m8_i16m1_tum(vbool2_t vm, vint16m1_t vd,
+                                           vint16m8_t vs2, vint16m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredxor_vs_i16m8_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredxor_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd, vint32mf2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredxor_vs_i32mf2_i32m1_tum(vbool64_t vm, vint32m1_t vd,
+                                            vint32mf2_t vs2, vint32m1_t vs1,
+                                            size_t vl) {
   return __riscv_vredxor_vs_i32mf2_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredxor_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredxor_vs_i32m1_i32m1_tum(vbool32_t vm, vint32m1_t vd,
+                                           vint32m1_t vs2, vint32m1_t vs1,
+                                           size_t vl) {
   return __riscv_vredxor_vs_i32m1_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vredxor_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd, vint32m2_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vredxor_vs_i32m2_i32m1_tum(vbool16_t vm, vint32m1_t vd,
vint32m2_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_i32m2_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vredxor_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd, vint32m4_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredxor_vs_i32m4_i32m1_tum(vbool8_t vm, vint32m1_t vd, + vint32m4_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_i32m4_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vredxor_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd, vint32m8_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vredxor_vs_i32m8_i32m1_tum(vbool4_t vm, vint32m1_t vd, + vint32m8_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_i32m8_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vredxor_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredxor_vs_i64m1_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_i64m1_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vredxor_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd, vint64m2_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredxor_vs_i64m2_i64m1_tum(vbool32_t vm, vint64m1_t vd, + vint64m2_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_i64m2_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vredxor_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd, vint64m4_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredxor_vs_i64m4_i64m1_tum(vbool16_t vm, vint64m1_t vd, + vint64m4_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_i64m4_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vredxor_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd, vint64m8_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vredxor_vs_i64m8_i64m1_tum(vbool8_t vm, vint64m1_t vd, + vint64m8_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_i64m8_i64m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredxor_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd, vuint8mf8_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredxor_vs_u8mf8_u8m1_tum(vbool64_t vm, vuint8m1_t vd, + vuint8mf8_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u8mf8_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredxor_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd, vuint8mf4_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredxor_vs_u8mf4_u8m1_tum(vbool32_t vm, vuint8m1_t vd, + vuint8mf4_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u8mf4_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredxor_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd, vuint8mf2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredxor_vs_u8mf2_u8m1_tum(vbool16_t vm, vuint8m1_t vd, + vuint8mf2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u8mf2_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredxor_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredxor_vs_u8m1_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u8m1_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredxor_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd, vuint8m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredxor_vs_u8m2_u8m1_tum(vbool4_t vm, vuint8m1_t vd, + vuint8m2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u8m2_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredxor_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd, vuint8m4_t vs2, 
vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredxor_vs_u8m4_u8m1_tum(vbool2_t vm, vuint8m1_t vd, + vuint8m4_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u8m4_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vredxor_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd, vuint8m8_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vredxor_vs_u8m8_u8m1_tum(vbool1_t vm, vuint8m1_t vd, + vuint8m8_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u8m8_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredxor_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd, vuint16mf4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredxor_vs_u16mf4_u16m1_tum(vbool64_t vm, vuint16m1_t vd, + vuint16mf4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u16mf4_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredxor_vs_u16mf2_u16m1_tum(vbool32_t vm, vuint16m1_t vd, vuint16mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredxor_vs_u16mf2_u16m1_tum(vbool32_t vm, vuint16m1_t vd, + vuint16mf2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u16mf2_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredxor_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredxor_vs_u16m1_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u16m1_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredxor_vs_u16m2_u16m1_tum(vbool8_t vm, vuint16m1_t vd, vuint16m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredxor_vs_u16m2_u16m1_tum(vbool8_t vm, vuint16m1_t vd, + vuint16m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u16m2_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredxor_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd, vuint16m4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredxor_vs_u16m4_u16m1_tum(vbool4_t vm, vuint16m1_t vd, + vuint16m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u16m4_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vredxor_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd, vuint16m8_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vredxor_vs_u16m8_u16m1_tum(vbool2_t vm, vuint16m1_t vd, + vuint16m8_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u16m8_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vredxor_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd, vuint32mf2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredxor_vs_u32mf2_u32m1_tum(vbool64_t vm, vuint32m1_t vd, + vuint32mf2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u32mf2_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vredxor_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredxor_vs_u32m1_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u32m1_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vredxor_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd, vuint32m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredxor_vs_u32m2_u32m1_tum(vbool16_t vm, vuint32m1_t vd, + vuint32m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u32m2_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vredxor_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd, vuint32m4_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredxor_vs_u32m4_u32m1_tum(vbool8_t vm, vuint32m1_t vd, + 
vuint32m4_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u32m4_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vredxor_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd, vuint32m8_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vredxor_vs_u32m8_u32m1_tum(vbool4_t vm, vuint32m1_t vd, + vuint32m8_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u32m8_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vredxor_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vredxor_vs_u64m1_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u64m1_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vredxor_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd, vuint64m2_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vredxor_vs_u64m2_u64m1_tum(vbool32_t vm, vuint64m1_t vd, + vuint64m2_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u64m2_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vredxor_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd, vuint64m4_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vredxor_vs_u64m4_u64m1_tum(vbool16_t vm, vuint64m1_t vd, + vuint64m4_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u64m4_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vredxor_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd, vuint64m8_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vredxor_vs_u64m8_u64m1_tum(vbool8_t vm, vuint64m1_t vd, + vuint64m8_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vredxor_vs_u64m8_u64m1_tum(vm, vd, vs2, vs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vrem.c b/auto-generated/policy_funcs/llvm-api-tests/vrem.c index 1f72bfa87..5ac98ab28 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vrem.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vrem.c @@ -5,706 +5,891 @@ #include <riscv_vector.h> -vint8mf8_t test_vrem_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vrem_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vrem_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vrem_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vrem_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrem_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vrem_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vrem_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vrem_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vrem_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vrem_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrem_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vrem_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vrem_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vrem_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vrem_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vrem_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrem_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vrem_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vrem_vv_i8m1_tu(vint8m1_t vd, 
vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vrem_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vrem_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vrem_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrem_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vrem_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vrem_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vrem_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vrem_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vrem_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrem_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vrem_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vrem_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vrem_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vrem_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vrem_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrem_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vrem_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vrem_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vrem_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vrem_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vrem_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrem_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vrem_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vrem_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vrem_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vrem_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vrem_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vrem_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vrem_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vrem_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vrem_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vrem_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vrem_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vrem_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vrem_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vrem_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vrem_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vrem_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vrem_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vrem_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vrem_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vrem_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vrem_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vrem_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t 
test_vrem_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vrem_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vrem_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vrem_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vrem_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vrem_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vrem_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vrem_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vrem_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vrem_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vrem_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vrem_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vrem_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vrem_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vrem_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vrem_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vrem_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vrem_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vrem_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vrem_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vrem_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vrem_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vrem_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vrem_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vrem_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vrem_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vrem_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vrem_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vrem_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vrem_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vrem_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vrem_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vrem_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vrem_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vrem_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vrem_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vrem_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vrem_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vrem_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vrem_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vrem_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vrem_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vrem_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vrem_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t 
test_vrem_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vrem_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vrem_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vrem_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vrem_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vrem_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vrem_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vrem_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vrem_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vrem_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vrem_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vrem_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vrem_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vrem_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vrem_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vrem_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vrem_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vrem_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vrem_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vrem_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vrem_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vrem_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vrem_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vrem_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vrem_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vrem_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vrem_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrem_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vrem_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vrem_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vrem_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vrem_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrem_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vrem_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vrem_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vrem_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vrem_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrem_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vrem_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } 
-vint8m1_t test_vrem_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vrem_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vrem_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrem_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vrem_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vrem_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vrem_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vrem_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vrem_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vrem_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vrem_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vrem_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vrem_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrem_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vrem_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vrem_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vrem_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vrem_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vrem_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vrem_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vrem_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vrem_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vrem_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrem_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vrem_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vrem_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vrem_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vrem_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vrem_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vrem_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vrem_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vrem_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vrem_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t 
test_vrem_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vrem_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vrem_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vrem_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vrem_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrem_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vrem_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vrem_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vrem_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vrem_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrem_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vrem_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vrem_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vrem_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vrem_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrem_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vrem_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vrem_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vrem_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vrem_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrem_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vrem_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vrem_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vrem_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vrem_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrem_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vrem_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vrem_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vrem_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vrem_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrem_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vrem_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m2_tum(vm, vd, 
vs2, rs1, vl); } -vint32m4_t test_vrem_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vrem_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vrem_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrem_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vrem_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vrem_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vrem_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vrem_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vrem_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vrem_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vrem_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vrem_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vrem_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrem_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vrem_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vrem_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vrem_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vrem_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrem_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vrem_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vrem_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vrem_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vrem_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrem_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vrem_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vrem_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vrem_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vrem_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vrem_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vrem_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vrem_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vrem_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return 
__riscv_vrem_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrem_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vrem_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vrem_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vrem_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vrem_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrem_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vrem_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vrem_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vrem_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vrem_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrem_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vrem_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vrem_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vrem_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vrem_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrem_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vrem_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vrem_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vrem_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vrem_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vrem_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vrem_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vrem_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vrem_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vrem_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrem_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vrem_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vrem_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vrem_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vrem_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vrem_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vrem_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } 
-vint16mf4_t test_vrem_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vrem_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vrem_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrem_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vrem_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vrem_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vrem_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vrem_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vrem_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vrem_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vrem_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vrem_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vrem_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vrem_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vrem_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vrem_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vrem_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vrem_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrem_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vrem_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vrem_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vrem_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vrem_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrem_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vrem_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vrem_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vrem_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vrem_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrem_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vrem_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vrem_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vrem_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + 
vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vrem_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrem_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vrem_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vrem_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vrem_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vrem_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrem_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vrem_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vrem_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vrem_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vrem_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrem_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vrem_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vrem_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vrem_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vrem_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrem_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vrem_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vrem_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vrem_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vrem_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vrem_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vrem_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vrem_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vrem_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vrem_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrem_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vrem_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vrem_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vrem_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vrem_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrem_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { 
+vint64m2_t test_vrem_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vrem_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vrem_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vrem_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrem_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vrem_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vrem_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vrem_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vrem_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vrem_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vrem_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vrem_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vrem_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vrem_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrem_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vrem_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vrem_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vrem_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vrem_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrem_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vrem_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vrem_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vrem_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vrem_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrem_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vrem_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vrem_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vrem_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vrem_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrem_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vrem_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vrem_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t 
test_vrem_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vrem_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vrem_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vrem_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vrem_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vrem_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vrem_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrem_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vrem_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vrem_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vrem_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vrem_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vrem_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vrem_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrem_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vrem_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vrem_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vrem_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrem_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vrem_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vrem_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vrem_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vrem_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vrem_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vrem_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vrem_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vrem_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vrem_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vrem_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vrem_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vrem_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vrem_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vrem_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrem_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vrem_vx_i16m2_mu(vbool8_t vm, vint16m2_t 
vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vrem_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vrem_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vrem_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrem_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vrem_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vrem_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vrem_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vrem_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrem_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vrem_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrem_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vrem_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vrem_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vrem_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrem_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vrem_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vrem_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vrem_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vrem_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrem_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vrem_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vrem_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vrem_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vrem_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrem_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vrem_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vrem_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vrem_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vrem_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrem_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vrem_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vrem_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vrem_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, 
vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vrem_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vrem_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vrem_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrem_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vrem_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vrem_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vrem_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrem_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vrem_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vrem_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vrem_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vrem_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrem_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vrem_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vrem_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vrem_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vrem_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrem_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vrem_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vrem_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vrem_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vrem_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vrem_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vrem_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrem_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vremu.c b/auto-generated/policy_funcs/llvm-api-tests/vremu.c index ea8f3f8d6..fe92cd7e3 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vremu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vremu.c @@ -5,706 +5,939 @@ #include <riscv_vector.h> -vuint8mf8_t test_vremu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vremu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vremu_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vremu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vremu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vremu_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vremu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vremu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return 
__riscv_vremu_vv_u8mf4_tu(vd, vs2, vs1, vl);
 }

-vuint8mf4_t test_vremu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf4_t test_vremu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1,
+                                   size_t vl) {
   return __riscv_vremu_vx_u8mf4_tu(vd, vs2, rs1, vl);
 }

-vuint8mf2_t test_vremu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vremu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+                                   vuint8mf2_t vs1, size_t vl) {
   return __riscv_vremu_vv_u8mf2_tu(vd, vs2, vs1, vl);
 }

-vuint8mf2_t test_vremu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf2_t test_vremu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1,
+                                   size_t vl) {
   return __riscv_vremu_vx_u8mf2_tu(vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vremu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vremu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1,
+                                 size_t vl) {
   return __riscv_vremu_vv_u8m1_tu(vd, vs2, vs1, vl);
 }

-vuint8m1_t test_vremu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+vuint8m1_t test_vremu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1,
+                                 size_t vl) {
   return __riscv_vremu_vx_u8m1_tu(vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vremu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint8m2_t test_vremu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1,
+                                 size_t vl) {
   return __riscv_vremu_vv_u8m2_tu(vd, vs2, vs1, vl);
 }

-vuint8m2_t test_vremu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+vuint8m2_t test_vremu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1,
+                                 size_t vl) {
   return __riscv_vremu_vx_u8m2_tu(vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vremu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint8m4_t test_vremu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1,
+                                 size_t vl) {
   return __riscv_vremu_vv_u8m4_tu(vd, vs2, vs1, vl);
 }

-vuint8m4_t test_vremu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+vuint8m4_t test_vremu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1,
+                                 size_t vl) {
   return __riscv_vremu_vx_u8m4_tu(vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vremu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vuint8m8_t test_vremu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1,
+                                 size_t vl) {
   return __riscv_vremu_vv_u8m8_tu(vd, vs2, vs1, vl);
 }

-vuint8m8_t test_vremu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+vuint8m8_t test_vremu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1,
+                                 size_t vl) {
   return __riscv_vremu_vx_u8m8_tu(vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vremu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint16mf4_t test_vremu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                     vuint16mf4_t vs1, size_t vl) {
   return __riscv_vremu_vv_u16mf4_tu(vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vremu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf4_t test_vremu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+                                     uint16_t rs1, size_t vl) {
   return __riscv_vremu_vx_u16mf4_tu(vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vremu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint16mf2_t test_vremu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                     vuint16mf2_t vs1, size_t vl) {
   return __riscv_vremu_vv_u16mf2_tu(vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vremu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf2_t test_vremu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+                                     uint16_t rs1, size_t vl) {
   return __riscv_vremu_vx_u16mf2_tu(vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vremu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vremu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+                                   vuint16m1_t vs1, size_t vl) {
   return __riscv_vremu_vv_u16m1_tu(vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vremu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+vuint16m1_t test_vremu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+                                   uint16_t rs1, size_t vl) {
   return __riscv_vremu_vx_u16m1_tu(vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vremu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint16m2_t test_vremu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2,
+                                   vuint16m2_t vs1, size_t vl) {
   return __riscv_vremu_vv_u16m2_tu(vd, vs2, vs1, vl);
 }

-vuint16m2_t test_vremu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+vuint16m2_t test_vremu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2,
+                                   uint16_t rs1, size_t vl) {
   return __riscv_vremu_vx_u16m2_tu(vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vremu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint16m4_t test_vremu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2,
+                                   vuint16m4_t vs1, size_t vl) {
   return __riscv_vremu_vv_u16m4_tu(vd, vs2, vs1, vl);
 }

-vuint16m4_t test_vremu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+vuint16m4_t test_vremu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2,
+                                   uint16_t rs1, size_t vl) {
   return __riscv_vremu_vx_u16m4_tu(vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vremu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+vuint16m8_t test_vremu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2,
+                                   vuint16m8_t vs1, size_t vl) {
   return __riscv_vremu_vv_u16m8_tu(vd, vs2, vs1, vl);
 }

-vuint16m8_t test_vremu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+vuint16m8_t test_vremu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2,
+                                   uint16_t rs1, size_t vl) {
   return __riscv_vremu_vx_u16m8_tu(vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vremu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vremu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                     vuint32mf2_t vs1, size_t vl) {
   return __riscv_vremu_vv_u32mf2_tu(vd, vs2, vs1, vl);
 }

-vuint32mf2_t test_vremu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+vuint32mf2_t test_vremu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+                                     uint32_t rs1, size_t vl) {
   return __riscv_vremu_vx_u32mf2_tu(vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vremu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vremu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                   vuint32m1_t vs1, size_t vl) {
   return __riscv_vremu_vv_u32m1_tu(vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vremu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+vuint32m1_t test_vremu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+                                   uint32_t rs1, size_t vl) {
   return __riscv_vremu_vx_u32m1_tu(vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vremu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vremu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                   vuint32m2_t vs1, size_t vl) {
   return __riscv_vremu_vv_u32m2_tu(vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vremu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+vuint32m2_t test_vremu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+                                   uint32_t rs1, size_t vl) {
   return __riscv_vremu_vx_u32m2_tu(vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vremu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vremu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                   vuint32m4_t vs1, size_t vl) {
   return __riscv_vremu_vv_u32m4_tu(vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vremu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+vuint32m4_t test_vremu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+                                   uint32_t rs1, size_t vl) {
   return __riscv_vremu_vx_u32m4_tu(vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vremu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vremu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                   vuint32m8_t vs1, size_t vl) {
   return __riscv_vremu_vv_u32m8_tu(vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vremu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+vuint32m8_t test_vremu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+                                   uint32_t rs1, size_t vl) {
   return __riscv_vremu_vx_u32m8_tu(vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vremu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vremu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                   vuint64m1_t vs1, size_t vl) {
   return __riscv_vremu_vv_u64m1_tu(vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vremu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vremu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+                                   uint64_t rs1, size_t vl) {
   return __riscv_vremu_vx_u64m1_tu(vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vremu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vremu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                   vuint64m2_t vs1, size_t vl) {
   return __riscv_vremu_vv_u64m2_tu(vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vremu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vremu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+                                   uint64_t rs1, size_t vl) {
   return __riscv_vremu_vx_u64m2_tu(vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vremu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vremu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                   vuint64m4_t vs1, size_t vl) {
   return __riscv_vremu_vv_u64m4_tu(vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vremu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vremu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+                                   uint64_t rs1, size_t vl) {
   return __riscv_vremu_vx_u64m4_tu(vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vremu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vremu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                   vuint64m8_t vs1, size_t vl) {
   return __riscv_vremu_vv_u64m8_tu(vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vremu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vremu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+                                   uint64_t rs1, size_t vl) {
   return __riscv_vremu_vx_u64m8_tu(vd, vs2, rs1, vl);
 }

-vuint8mf8_t test_vremu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vremu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                    vuint8mf8_t vs2, vuint8mf8_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u8mf8_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8mf8_t test_vremu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf8_t test_vremu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+                                    vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8mf8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8mf4_t test_vremu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vremu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                    vuint8mf4_t vs2, vuint8mf4_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u8mf4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8mf4_t test_vremu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf4_t test_vremu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+                                    vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8mf4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8mf2_t test_vremu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vremu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                    vuint8mf2_t vs2, vuint8mf2_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u8mf2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8mf2_t test_vremu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf2_t test_vremu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+                                    vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vremu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vremu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  vuint8m1_t vs1, size_t vl) {
   return __riscv_vremu_vv_u8m1_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8m1_t test_vremu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+vuint8m1_t test_vremu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                  uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vremu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint8m2_t test_vremu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  vuint8m2_t vs1, size_t vl) {
   return __riscv_vremu_vv_u8m2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8m2_t test_vremu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+vuint8m2_t test_vremu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                  uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vremu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint8m4_t test_vremu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  vuint8m4_t vs1, size_t vl) {
   return __riscv_vremu_vv_u8m4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8m4_t test_vremu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+vuint8m4_t test_vremu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                  uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vremu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vuint8m8_t test_vremu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  vuint8m8_t vs1, size_t vl) {
   return __riscv_vremu_vv_u8m8_tum(vm, vd, vs2, vs1, vl);
 }

-vuint8m8_t test_vremu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+vuint8m8_t test_vremu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                  uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8m8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vremu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint16mf4_t test_vremu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                      size_t vl) {
   return __riscv_vremu_vv_u16mf4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vremu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf4_t test_vremu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+                                      vuint16mf4_t vs2, uint16_t rs1,
+                                      size_t vl) {
   return __riscv_vremu_vx_u16mf4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vremu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint16mf2_t test_vremu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                      size_t vl) {
   return __riscv_vremu_vv_u16mf2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vremu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf2_t test_vremu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+                                      vuint16mf2_t vs2, uint16_t rs1,
+                                      size_t vl) {
   return __riscv_vremu_vx_u16mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vremu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vremu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, vuint16m1_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u16m1_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vremu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+vuint16m1_t test_vremu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+                                    vuint16m1_t vs2, uint16_t rs1, size_t vl) {
   return __riscv_vremu_vx_u16m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vremu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint16m2_t test_vremu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, vuint16m2_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u16m2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m2_t test_vremu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+vuint16m2_t test_vremu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd,
+                                    vuint16m2_t vs2, uint16_t rs1, size_t vl) {
   return __riscv_vremu_vx_u16m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vremu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint16m4_t test_vremu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, vuint16m4_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u16m4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m4_t test_vremu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+vuint16m4_t test_vremu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd,
+                                    vuint16m4_t vs2, uint16_t rs1, size_t vl) {
   return __riscv_vremu_vx_u16m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vremu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+vuint16m8_t test_vremu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, vuint16m8_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u16m8_tum(vm, vd, vs2, vs1, vl);
 }

-vuint16m8_t test_vremu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+vuint16m8_t test_vremu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd,
+                                    vuint16m8_t vs2, uint16_t rs1, size_t vl) {
   return __riscv_vremu_vx_u16m8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vremu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vremu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, vuint32mf2_t vs1,
+                                      size_t vl) {
   return __riscv_vremu_vv_u32mf2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32mf2_t test_vremu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+vuint32mf2_t test_vremu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+                                      vuint32mf2_t vs2, uint32_t rs1,
+                                      size_t vl) {
   return __riscv_vremu_vx_u32mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vremu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vremu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, vuint32m1_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u32m1_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m1_t test_vremu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+vuint32m1_t test_vremu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+                                    vuint32m1_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vremu_vx_u32m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vremu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vremu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                    vuint32m2_t vs2, vuint32m2_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u32m2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m2_t test_vremu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+vuint32m2_t test_vremu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+                                    vuint32m2_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vremu_vx_u32m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vremu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vremu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd,
+                                    vuint32m4_t vs2, vuint32m4_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u32m4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m4_t test_vremu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+vuint32m4_t test_vremu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd,
+                                    vuint32m4_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vremu_vx_u32m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vremu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vremu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, vuint32m8_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u32m8_tum(vm, vd, vs2, vs1, vl);
 }

-vuint32m8_t test_vremu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+vuint32m8_t test_vremu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd,
+                                    vuint32m8_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vremu_vx_u32m8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vremu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vremu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, vuint64m1_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u64m1_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vremu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vremu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+                                    vuint64m1_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vremu_vx_u64m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vremu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vremu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, vuint64m2_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u64m2_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vremu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vremu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                    vuint64m2_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vremu_vx_u64m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vremu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vremu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, vuint64m4_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u64m4_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vremu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vremu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vremu_vx_u64m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vremu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vremu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, vuint64m8_t vs1,
+                                    size_t vl) {
   return __riscv_vremu_vv_u64m8_tum(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vremu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vremu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vremu_vx_u64m8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8mf8_t test_vremu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vremu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                     vuint8mf8_t vs2, vuint8mf8_t vs1,
+                                     size_t vl) {
   return __riscv_vremu_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf8_t test_vremu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf8_t test_vremu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+                                     vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf4_t test_vremu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vremu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                     vuint8mf4_t vs2, vuint8mf4_t vs1,
+                                     size_t vl) {
   return __riscv_vremu_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf4_t test_vremu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf4_t test_vremu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+                                     vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf2_t test_vremu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vremu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                     vuint8mf2_t vs2, vuint8mf2_t vs1,
+                                     size_t vl) {
   return __riscv_vremu_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8mf2_t test_vremu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf2_t test_vremu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+                                     vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vremu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vremu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                   vuint8m1_t vs1, size_t vl) {
   return __riscv_vremu_vv_u8m1_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8m1_t test_vremu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+vuint8m1_t test_vremu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+                                   uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vremu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint8m2_t test_vremu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                   vuint8m2_t vs1, size_t vl) {
   return __riscv_vremu_vv_u8m2_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8m2_t test_vremu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+vuint8m2_t test_vremu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+                                   uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vremu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint8m4_t test_vremu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                   vuint8m4_t vs1, size_t vl) {
   return __riscv_vremu_vv_u8m4_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8m4_t test_vremu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+vuint8m4_t test_vremu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+                                   uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vremu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vuint8m8_t test_vremu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                   vuint8m8_t vs1, size_t vl) {
   return __riscv_vremu_vv_u8m8_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint8m8_t test_vremu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+vuint8m8_t test_vremu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+                                   uint8_t rs1, size_t vl) {
   return __riscv_vremu_vx_u8m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vremu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint16mf4_t test_vremu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                       vuint16mf4_t vs2, vuint16mf4_t vs1,
+                                       size_t vl) {
   return __riscv_vremu_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf4_t test_vremu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf4_t test_vremu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+                                       vuint16mf4_t vs2, uint16_t rs1,
+                                       size_t vl) {
   return __riscv_vremu_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vremu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint16mf2_t test_vremu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                       vuint16mf2_t vs2, vuint16mf2_t vs1,
+                                       size_t vl) {
   return __riscv_vremu_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16mf2_t test_vremu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf2_t test_vremu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+                                       vuint16mf2_t vs2, uint16_t rs1,
+                                       size_t vl) {
   return __riscv_vremu_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vremu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vremu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+                                     vuint16m1_t vs2, vuint16m1_t vs1,
+                                     size_t vl) {
   return __riscv_vremu_vv_u16m1_tumu(vm, vd, vs2, vs1, vl);
 }

-vuint16m1_t test_vremu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+vuint16m1_t test_vremu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vremu_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vremu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vremu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vremu_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vremu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vremu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vremu_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vremu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vremu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vremu_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vremu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vremu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vremu_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vremu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vremu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vremu_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vremu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vremu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vremu_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vremu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vremu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vremu_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vremu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vremu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vremu_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vremu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vremu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vremu_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vremu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vremu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vremu_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vremu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vremu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vremu_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vremu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vremu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t 
vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vremu_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vremu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vremu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vremu_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vremu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vremu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vremu_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vremu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vremu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vremu_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vremu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vremu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vremu_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vremu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vremu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vremu_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vremu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vremu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vremu_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vremu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vremu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vremu_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vremu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vremu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vremu_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vremu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vremu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vremu_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vremu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vremu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vremu_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vremu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vremu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vremu_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vremu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vremu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return 
__riscv_vremu_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vremu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vremu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vremu_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vremu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vremu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vremu_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vremu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vremu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vremu_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vremu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vremu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vremu_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vremu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vremu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vremu_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vremu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vremu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vremu_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vremu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vremu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vremu_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vremu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vremu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vremu_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vremu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vremu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vremu_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vremu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vremu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vremu_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vremu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vremu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vremu_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vremu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vremu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vremu_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vremu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vremu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t 
vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vremu_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vremu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vremu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vremu_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vremu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vremu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vremu_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vremu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vremu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vremu_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vremu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vremu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vremu_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vremu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vremu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vremu_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vremu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vremu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vremu_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vremu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vremu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vremu_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vremu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vremu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vremu_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vremu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vremu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vremu_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vremu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vremu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vremu_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vremu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vremu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vremu_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vremu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vremu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vremu_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t 
test_vremu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vremu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vremu_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vremu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vremu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vremu_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vremu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vremu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vremu_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vremu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vremu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vremu_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vremu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vremu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vremu_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vremu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vremu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vremu_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vremu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vremu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vremu_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vremu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vremu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vremu_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vremu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vremu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vremu_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vremu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vremu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vremu_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vremu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vremu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vremu_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vremu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vremu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vremu_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vremu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t 
test_vremu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vremu_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vremu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vremu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vremu_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vremu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vremu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vremu_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vremu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vremu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vremu_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vremu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vremu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vremu_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vremu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vremu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vremu_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vremu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vremu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vremu_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vrgather.c b/auto-generated/policy_funcs/llvm-api-tests/vrgather.c index f43b185b7..4d719d97e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vrgather.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vrgather.c @@ -6,1890 +6,2588 @@ #include -vfloat16mf4_t test_vrgather_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vrgather_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgather_vv_f16mf4_tu(vd, vs2, vs1, vl); } -vfloat16mf4_t test_vrgather_vx_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vs1, size_t vl) { +vfloat16mf4_t test_vrgather_vx_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f16mf4_tu(vd, vs2, vs1, vl); } -vfloat16mf2_t test_vrgather_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vrgather_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgather_vv_f16mf2_tu(vd, vs2, vs1, vl); } -vfloat16mf2_t test_vrgather_vx_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vs1, size_t vl) { +vfloat16mf2_t test_vrgather_vx_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f16mf2_tu(vd, vs2, vs1, vl); } -vfloat16m1_t test_vrgather_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat16m1_t test_vrgather_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgather_vv_f16m1_tu(vd, vs2, vs1, vl); } -vfloat16m1_t 
test_vrgather_vx_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, size_t vs1, size_t vl) { +vfloat16m1_t test_vrgather_vx_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f16m1_tu(vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgather_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat16m2_t test_vrgather_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgather_vv_f16m2_tu(vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgather_vx_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, size_t vs1, size_t vl) { +vfloat16m2_t test_vrgather_vx_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f16m2_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgather_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat16m4_t test_vrgather_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgather_vv_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgather_vx_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, size_t vs1, size_t vl) { +vfloat16m4_t test_vrgather_vx_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgather_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vfloat16m8_t test_vrgather_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrgather_vv_f16m8_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgather_vx_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, size_t vs1, size_t vl) { +vfloat16m8_t test_vrgather_vx_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f16m8_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgather_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vrgather_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vrgather_vv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgather_vx_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vs1, size_t vl) { +vfloat32mf2_t test_vrgather_vx_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgather_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vfloat32m1_t test_vrgather_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vrgather_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgather_vx_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, size_t vs1, size_t vl) { +vfloat32m1_t test_vrgather_vx_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgather_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vfloat32m2_t test_vrgather_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vrgather_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgather_vx_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, size_t vs1, size_t vl) { +vfloat32m2_t test_vrgather_vx_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgather_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vfloat32m4_t 
test_vrgather_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vrgather_vv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgather_vx_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, size_t vs1, size_t vl) { +vfloat32m4_t test_vrgather_vx_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgather_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vfloat32m8_t test_vrgather_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vrgather_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgather_vx_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, size_t vs1, size_t vl) { +vfloat32m8_t test_vrgather_vx_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f32m8_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgather_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vfloat64m1_t test_vrgather_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vrgather_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgather_vx_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, size_t vs1, size_t vl) { +vfloat64m1_t test_vrgather_vx_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vrgather_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vfloat64m2_t test_vrgather_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vrgather_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vrgather_vx_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, size_t vs1, size_t vl) { +vfloat64m2_t test_vrgather_vx_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgather_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vfloat64m4_t test_vrgather_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vrgather_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgather_vx_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, size_t vs1, size_t vl) { +vfloat64m4_t test_vrgather_vx_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgather_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vfloat64m8_t test_vrgather_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vrgather_vv_f64m8_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgather_vx_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, size_t vs1, size_t vl) { +vfloat64m8_t test_vrgather_vx_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_f64m8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vrgather_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vrgather_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vrgather_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vrgather_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, size_t vs1, size_t vl) { +vint8mf8_t test_vrgather_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_i8mf8_tu(vd, vs2, vs1, vl); } 
-vint8mf4_t test_vrgather_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vrgather_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2,
+                                     vuint8mf4_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i8mf4_tu(vd, vs2, vs1, vl);
 }

-vint8mf4_t test_vrgather_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, size_t vs1, size_t vl) {
+vint8mf4_t test_vrgather_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i8mf4_tu(vd, vs2, vs1, vl);
 }

-vint8mf2_t test_vrgather_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vrgather_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2,
+                                     vuint8mf2_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i8mf2_tu(vd, vs2, vs1, vl);
 }

-vint8mf2_t test_vrgather_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, size_t vs1, size_t vl) {
+vint8mf2_t test_vrgather_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i8mf2_tu(vd, vs2, vs1, vl);
 }

-vint8m1_t test_vrgather_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vint8m1_t test_vrgather_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1,
+                                   size_t vl) {
   return __riscv_vrgather_vv_i8m1_tu(vd, vs2, vs1, vl);
 }

-vint8m1_t test_vrgather_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t vs1, size_t vl) {
+vint8m1_t test_vrgather_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t vs1,
+                                   size_t vl) {
   return __riscv_vrgather_vx_i8m1_tu(vd, vs2, vs1, vl);
 }

-vint8m2_t test_vrgather_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vint8m2_t test_vrgather_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1,
+                                   size_t vl) {
   return __riscv_vrgather_vv_i8m2_tu(vd, vs2, vs1, vl);
 }

-vint8m2_t test_vrgather_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t vs1, size_t vl) {
+vint8m2_t test_vrgather_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t vs1,
+                                   size_t vl) {
   return __riscv_vrgather_vx_i8m2_tu(vd, vs2, vs1, vl);
 }

-vint8m4_t test_vrgather_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vint8m4_t test_vrgather_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1,
+                                   size_t vl) {
   return __riscv_vrgather_vv_i8m4_tu(vd, vs2, vs1, vl);
 }

-vint8m4_t test_vrgather_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t vs1, size_t vl) {
+vint8m4_t test_vrgather_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t vs1,
+                                   size_t vl) {
   return __riscv_vrgather_vx_i8m4_tu(vd, vs2, vs1, vl);
 }

-vint8m8_t test_vrgather_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vint8m8_t test_vrgather_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1,
+                                   size_t vl) {
   return __riscv_vrgather_vv_i8m8_tu(vd, vs2, vs1, vl);
 }

-vint8m8_t test_vrgather_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t vs1, size_t vl) {
+vint8m8_t test_vrgather_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t vs1,
+                                   size_t vl) {
   return __riscv_vrgather_vx_i8m8_tu(vd, vs2, vs1, vl);
 }

-vint16mf4_t test_vrgather_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vint16mf4_t test_vrgather_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2,
+                                       vuint16mf4_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i16mf4_tu(vd, vs2, vs1, vl);
 }

-vint16mf4_t test_vrgather_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, size_t vs1, size_t vl) {
+vint16mf4_t test_vrgather_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2,
+                                       size_t vs1, size_t vl) {
   return __riscv_vrgather_vx_i16mf4_tu(vd, vs2, vs1, vl);
 }

-vint16mf2_t test_vrgather_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vint16mf2_t test_vrgather_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2,
+                                       vuint16mf2_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i16mf2_tu(vd, vs2, vs1, vl);
 }

-vint16mf2_t test_vrgather_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, size_t vs1, size_t vl) {
+vint16mf2_t test_vrgather_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2,
+                                       size_t vs1, size_t vl) {
   return __riscv_vrgather_vx_i16mf2_tu(vd, vs2, vs1, vl);
 }

-vint16m1_t test_vrgather_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vint16m1_t test_vrgather_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2,
+                                     vuint16m1_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i16m1_tu(vd, vs2, vs1, vl);
 }

-vint16m1_t test_vrgather_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, size_t vs1, size_t vl) {
+vint16m1_t test_vrgather_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i16m1_tu(vd, vs2, vs1, vl);
 }

-vint16m2_t test_vrgather_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vint16m2_t test_vrgather_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2,
+                                     vuint16m2_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i16m2_tu(vd, vs2, vs1, vl);
 }

-vint16m2_t test_vrgather_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, size_t vs1, size_t vl) {
+vint16m2_t test_vrgather_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i16m2_tu(vd, vs2, vs1, vl);
 }

-vint16m4_t test_vrgather_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vint16m4_t test_vrgather_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2,
+                                     vuint16m4_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i16m4_tu(vd, vs2, vs1, vl);
 }

-vint16m4_t test_vrgather_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, size_t vs1, size_t vl) {
+vint16m4_t test_vrgather_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i16m4_tu(vd, vs2, vs1, vl);
 }

-vint16m8_t test_vrgather_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+vint16m8_t test_vrgather_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2,
+                                     vuint16m8_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i16m8_tu(vd, vs2, vs1, vl);
 }

-vint16m8_t test_vrgather_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, size_t vs1, size_t vl) {
+vint16m8_t test_vrgather_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i16m8_tu(vd, vs2, vs1, vl);
 }

-vint32mf2_t test_vrgather_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vint32mf2_t test_vrgather_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2,
+                                       vuint32mf2_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i32mf2_tu(vd, vs2, vs1, vl);
 }

-vint32mf2_t test_vrgather_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, size_t vs1, size_t vl) {
+vint32mf2_t test_vrgather_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2,
+                                       size_t vs1, size_t vl) {
   return __riscv_vrgather_vx_i32mf2_tu(vd, vs2, vs1, vl);
 }

-vint32m1_t test_vrgather_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vint32m1_t test_vrgather_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2,
+                                     vuint32m1_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i32m1_tu(vd, vs2, vs1, vl);
 }

-vint32m1_t test_vrgather_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, size_t vs1, size_t vl) {
+vint32m1_t test_vrgather_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i32m1_tu(vd, vs2, vs1, vl);
 }

-vint32m2_t test_vrgather_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vint32m2_t test_vrgather_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2,
+                                     vuint32m2_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i32m2_tu(vd, vs2, vs1, vl);
 }

-vint32m2_t test_vrgather_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, size_t vs1, size_t vl) {
+vint32m2_t test_vrgather_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i32m2_tu(vd, vs2, vs1, vl);
 }

-vint32m4_t test_vrgather_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vint32m4_t test_vrgather_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2,
+                                     vuint32m4_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i32m4_tu(vd, vs2, vs1, vl);
 }

-vint32m4_t test_vrgather_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, size_t vs1, size_t vl) {
+vint32m4_t test_vrgather_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i32m4_tu(vd, vs2, vs1, vl);
 }

-vint32m8_t test_vrgather_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vint32m8_t test_vrgather_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2,
+                                     vuint32m8_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i32m8_tu(vd, vs2, vs1, vl);
 }

-vint32m8_t test_vrgather_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, size_t vs1, size_t vl) {
+vint32m8_t test_vrgather_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i32m8_tu(vd, vs2, vs1, vl);
 }

-vint64m1_t test_vrgather_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vint64m1_t test_vrgather_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2,
+                                     vuint64m1_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i64m1_tu(vd, vs2, vs1, vl);
 }

-vint64m1_t test_vrgather_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, size_t vs1, size_t vl) {
+vint64m1_t test_vrgather_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i64m1_tu(vd, vs2, vs1, vl);
 }

-vint64m2_t test_vrgather_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vint64m2_t test_vrgather_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2,
+                                     vuint64m2_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i64m2_tu(vd, vs2, vs1, vl);
 }

-vint64m2_t test_vrgather_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, size_t vs1, size_t vl) {
+vint64m2_t test_vrgather_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i64m2_tu(vd, vs2, vs1, vl);
 }

-vint64m4_t test_vrgather_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vint64m4_t test_vrgather_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2,
+                                     vuint64m4_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i64m4_tu(vd, vs2, vs1, vl);
 }

-vint64m4_t test_vrgather_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, size_t vs1, size_t vl) {
+vint64m4_t test_vrgather_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i64m4_tu(vd, vs2, vs1, vl);
 }

-vint64m8_t test_vrgather_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vint64m8_t test_vrgather_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2,
+                                     vuint64m8_t vs1, size_t vl) {
   return __riscv_vrgather_vv_i64m8_tu(vd, vs2, vs1, vl);
 }

-vint64m8_t test_vrgather_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, size_t vs1, size_t vl) {
+vint64m8_t test_vrgather_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, size_t vs1,
+                                     size_t vl) {
   return __riscv_vrgather_vx_i64m8_tu(vd, vs2, vs1, vl);
 }

-vuint8mf8_t test_vrgather_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t 
vl) { +vuint8mf8_t test_vrgather_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vrgather_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vrgather_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t vs1, size_t vl) { +vuint8mf8_t test_vrgather_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgather_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrgather_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vrgather_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgather_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t vs1, size_t vl) { +vuint8mf4_t test_vrgather_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgather_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrgather_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vrgather_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgather_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t vs1, size_t vl) { +vuint8mf2_t test_vrgather_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vrgather_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrgather_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrgather_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vrgather_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vs1, size_t vl) { +vuint8m1_t test_vrgather_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vrgather_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrgather_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vrgather_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vrgather_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vs1, size_t vl) { +vuint8m2_t test_vrgather_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vrgather_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrgather_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrgather_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vrgather_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vs1, size_t vl) { +vuint8m4_t test_vrgather_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vrgather_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrgather_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrgather_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vrgather_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vs1, size_t vl) { +vuint8m8_t test_vrgather_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u8m8_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vrgather_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { 
+vuint16mf4_t test_vrgather_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgather_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vrgather_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t vs1, size_t vl) { +vuint16mf4_t test_vrgather_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgather_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrgather_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgather_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgather_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t vs1, size_t vl) { +vuint16mf2_t test_vrgather_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vrgather_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrgather_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgather_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vrgather_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t vs1, size_t vl) { +vuint16m1_t test_vrgather_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vrgather_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrgather_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgather_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vrgather_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t vs1, size_t vl) { +vuint16m2_t test_vrgather_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vrgather_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrgather_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgather_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vrgather_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t vs1, size_t vl) { +vuint16m4_t test_vrgather_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vrgather_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrgather_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrgather_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vrgather_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t vs1, size_t vl) { +vuint16m8_t test_vrgather_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m8_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vrgather_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrgather_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vrgather_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vrgather_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t vs1, size_t vl) { +vuint32mf2_t test_vrgather_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32mf2_tu(vd, vs2, 
vs1, vl); } -vuint32m1_t test_vrgather_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrgather_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vrgather_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vrgather_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t vs1, size_t vl) { +vuint32m1_t test_vrgather_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vrgather_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrgather_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vrgather_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vrgather_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t vs1, size_t vl) { +vuint32m2_t test_vrgather_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vrgather_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrgather_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vrgather_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vrgather_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t vs1, size_t vl) { +vuint32m4_t test_vrgather_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vrgather_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrgather_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vrgather_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vrgather_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t vs1, size_t vl) { +vuint32m8_t test_vrgather_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m8_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vrgather_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrgather_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vrgather_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vrgather_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t vs1, size_t vl) { +vuint64m1_t test_vrgather_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vrgather_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrgather_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vrgather_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vrgather_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t vs1, size_t vl) { +vuint64m2_t test_vrgather_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vrgather_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrgather_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vrgather_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vrgather_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t vs1, size_t vl) { +vuint64m4_t test_vrgather_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t vs1, 
size_t vl) { return __riscv_vrgather_vx_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vrgather_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vrgather_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vrgather_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vrgather_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t vs1, size_t vl) { +vuint64m8_t test_vrgather_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u64m8_tu(vd, vs2, vs1, vl); } -vfloat16mf4_t test_vrgather_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vrgather_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16mf4_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vrgather_vx_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vs1, size_t vl) { +vfloat16mf4_t test_vrgather_vx_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16mf4_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vrgather_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vrgather_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vrgather_vx_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vs1, size_t vl) { +vfloat16mf2_t test_vrgather_vx_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vrgather_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat16m1_t test_vrgather_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vrgather_vx_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vs1, size_t vl) { +vfloat16m1_t test_vrgather_vx_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgather_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat16m2_t test_vrgather_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgather_vx_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vs1, size_t vl) { +vfloat16m2_t test_vrgather_vx_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgather_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat16m4_t test_vrgather_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgather_vx_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vs1, size_t vl) { +vfloat16m4_t test_vrgather_vx_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vs1, + 
size_t vl) { return __riscv_vrgather_vx_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgather_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vfloat16m8_t test_vrgather_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgather_vx_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vs1, size_t vl) { +vfloat16m8_t test_vrgather_vx_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgather_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vrgather_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgather_vx_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vs1, size_t vl) { +vfloat32mf2_t test_vrgather_vx_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgather_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vfloat32m1_t test_vrgather_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgather_vx_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vs1, size_t vl) { +vfloat32m1_t test_vrgather_vx_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgather_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vfloat32m2_t test_vrgather_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgather_vx_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vs1, size_t vl) { +vfloat32m2_t test_vrgather_vx_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgather_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vfloat32m4_t test_vrgather_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgather_vx_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vs1, size_t vl) { +vfloat32m4_t test_vrgather_vx_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgather_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vfloat32m8_t test_vrgather_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgather_vx_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vs1, size_t vl) { +vfloat32m8_t 
test_vrgather_vx_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgather_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vfloat64m1_t test_vrgather_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgather_vx_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vs1, size_t vl) { +vfloat64m1_t test_vrgather_vx_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vrgather_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vfloat64m2_t test_vrgather_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vrgather_vx_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vs1, size_t vl) { +vfloat64m2_t test_vrgather_vx_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgather_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vfloat64m4_t test_vrgather_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgather_vx_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vs1, size_t vl) { +vfloat64m4_t test_vrgather_vx_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgather_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vfloat64m8_t test_vrgather_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgather_vx_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vs1, size_t vl) { +vfloat64m8_t test_vrgather_vx_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrgather_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vrgather_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrgather_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrgather_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t vs1, size_t vl) { +vint8mf8_t test_vrgather_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrgather_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vrgather_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrgather_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t vs1, size_t vl) { 
+vint8mf4_t test_vrgather_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrgather_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vrgather_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrgather_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t vs1, size_t vl) { +vint8mf2_t test_vrgather_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrgather_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vrgather_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrgather_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t vs1, size_t vl) { +vint8m1_t test_vrgather_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vrgather_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vrgather_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vrgather_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t vs1, size_t vl) { +vint8m2_t test_vrgather_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrgather_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vrgather_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrgather_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t vs1, size_t vl) { +vint8m4_t test_vrgather_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vrgather_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vrgather_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vrgather_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t vs1, size_t vl) { +vint8m8_t test_vrgather_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrgather_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vrgather_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrgather_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t vs1, size_t vl) { +vint16mf4_t test_vrgather_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_i16mf4_tum(vm, vd, vs2, vs1, vl); } 
-vint16mf2_t test_vrgather_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vrgather_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vrgather_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t vs1, size_t vl) { +vint16mf2_t test_vrgather_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vrgather_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vrgather_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vrgather_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t vs1, size_t vl) { +vint16m1_t test_vrgather_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrgather_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vrgather_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrgather_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t vs1, size_t vl) { +vint16m2_t test_vrgather_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrgather_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vrgather_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrgather_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t vs1, size_t vl) { +vint16m4_t test_vrgather_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrgather_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vrgather_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrgather_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t vs1, size_t vl) { +vint16m8_t test_vrgather_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrgather_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vrgather_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrgather_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t vs1, size_t vl) { +vint32mf2_t test_vrgather_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrgather_vv_i32m1_tum(vbool32_t vm, vint32m1_t 
vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vrgather_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrgather_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t vs1, size_t vl) { +vint32m1_t test_vrgather_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrgather_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vrgather_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrgather_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t vs1, size_t vl) { +vint32m2_t test_vrgather_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrgather_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vrgather_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrgather_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t vs1, size_t vl) { +vint32m4_t test_vrgather_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vrgather_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vrgather_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vrgather_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t vs1, size_t vl) { +vint32m8_t test_vrgather_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrgather_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vrgather_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrgather_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t vs1, size_t vl) { +vint64m1_t test_vrgather_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrgather_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vrgather_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrgather_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t vs1, size_t vl) { +vint64m2_t test_vrgather_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrgather_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vrgather_vv_i64m4_tum(vbool16_t vm, 
vint64m4_t vd, + vint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrgather_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t vs1, size_t vl) { +vint64m4_t test_vrgather_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vrgather_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vrgather_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vrgather_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t vs1, size_t vl) { +vint64m8_t test_vrgather_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrgather_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrgather_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrgather_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vs1, size_t vl) { +vuint8mf8_t test_vrgather_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgather_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrgather_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgather_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vs1, size_t vl) { +vuint8mf4_t test_vrgather_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgather_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrgather_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgather_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vs1, size_t vl) { +vuint8mf2_t test_vrgather_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrgather_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrgather_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrgather_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrgather_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vs1, size_t vl) { +vuint8m1_t test_vrgather_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrgather_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrgather_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return 
__riscv_vrgather_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrgather_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vs1, size_t vl) { +vuint8m2_t test_vrgather_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrgather_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrgather_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrgather_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrgather_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vs1, size_t vl) { +vuint8m4_t test_vrgather_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrgather_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrgather_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrgather_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrgather_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vs1, size_t vl) { +vuint8m8_t test_vrgather_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrgather_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrgather_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrgather_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vs1, size_t vl) { +vuint16mf4_t test_vrgather_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgather_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrgather_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgather_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vs1, size_t vl) { +vuint16mf2_t test_vrgather_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrgather_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrgather_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrgather_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vs1, size_t vl) { +vuint16m1_t test_vrgather_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrgather_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrgather_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m2_tum(vm, vd, vs2, vs1, 
vl); } -vuint16m2_t test_vrgather_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vs1, size_t vl) { +vuint16m2_t test_vrgather_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrgather_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrgather_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrgather_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vs1, size_t vl) { +vuint16m4_t test_vrgather_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrgather_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrgather_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrgather_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vs1, size_t vl) { +vuint16m8_t test_vrgather_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrgather_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrgather_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrgather_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vs1, size_t vl) { +vuint32mf2_t test_vrgather_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrgather_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrgather_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrgather_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vs1, size_t vl) { +vuint32m1_t test_vrgather_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrgather_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrgather_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrgather_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vs1, size_t vl) { +vuint32m2_t test_vrgather_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrgather_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrgather_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } 
-vuint32m4_t test_vrgather_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vs1, size_t vl) { +vuint32m4_t test_vrgather_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrgather_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrgather_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrgather_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vs1, size_t vl) { +vuint32m8_t test_vrgather_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrgather_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrgather_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrgather_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vs1, size_t vl) { +vuint64m1_t test_vrgather_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrgather_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrgather_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrgather_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vs1, size_t vl) { +vuint64m2_t test_vrgather_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrgather_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrgather_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrgather_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vs1, size_t vl) { +vuint64m4_t test_vrgather_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrgather_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vrgather_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrgather_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vs1, size_t vl) { +vuint64m8_t test_vrgather_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vrgather_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vrgather_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t 
test_vrgather_vx_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vs1, size_t vl) { +vfloat16mf4_t test_vrgather_vx_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vrgather_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vrgather_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vrgather_vx_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vs1, size_t vl) { +vfloat16mf2_t test_vrgather_vx_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vrgather_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat16m1_t test_vrgather_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vrgather_vx_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vs1, size_t vl) { +vfloat16m1_t test_vrgather_vx_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgather_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat16m2_t test_vrgather_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgather_vx_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vs1, size_t vl) { +vfloat16m2_t test_vrgather_vx_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgather_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat16m4_t test_vrgather_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgather_vx_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vs1, size_t vl) { +vfloat16m4_t test_vrgather_vx_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgather_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vfloat16m8_t test_vrgather_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgather_vx_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vs1, size_t vl) { +vfloat16m8_t test_vrgather_vx_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgather_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vrgather_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + 
vfloat32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgather_vx_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vs1, size_t vl) { +vfloat32mf2_t test_vrgather_vx_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgather_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vfloat32m1_t test_vrgather_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgather_vx_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vs1, size_t vl) { +vfloat32m1_t test_vrgather_vx_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgather_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vfloat32m2_t test_vrgather_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgather_vx_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vs1, size_t vl) { +vfloat32m2_t test_vrgather_vx_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgather_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vfloat32m4_t test_vrgather_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgather_vx_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vs1, size_t vl) { +vfloat32m4_t test_vrgather_vx_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgather_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vfloat32m8_t test_vrgather_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgather_vx_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vs1, size_t vl) { +vfloat32m8_t test_vrgather_vx_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgather_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vfloat64m1_t test_vrgather_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgather_vx_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vs1, size_t vl) { +vfloat64m1_t test_vrgather_vx_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vrgather_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, 
vuint64m2_t vs1, size_t vl) { +vfloat64m2_t test_vrgather_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vrgather_vx_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vs1, size_t vl) { +vfloat64m2_t test_vrgather_vx_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgather_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vfloat64m4_t test_vrgather_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgather_vx_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vs1, size_t vl) { +vfloat64m4_t test_vrgather_vx_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgather_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vfloat64m8_t test_vrgather_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgather_vx_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vs1, size_t vl) { +vfloat64m8_t test_vrgather_vx_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrgather_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vrgather_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrgather_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrgather_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t vs1, size_t vl) { +vint8mf8_t test_vrgather_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrgather_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vrgather_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrgather_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t vs1, size_t vl) { +vint8mf4_t test_vrgather_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrgather_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vrgather_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrgather_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t vs1, size_t vl) { +vint8mf2_t test_vrgather_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrgather_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, 
vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vrgather_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrgather_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t vs1, size_t vl) { +vint8m1_t test_vrgather_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vrgather_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vrgather_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vrgather_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t vs1, size_t vl) { +vint8m2_t test_vrgather_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrgather_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vrgather_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrgather_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t vs1, size_t vl) { +vint8m4_t test_vrgather_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vrgather_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vrgather_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vrgather_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t vs1, size_t vl) { +vint8m8_t test_vrgather_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrgather_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vrgather_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrgather_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t vs1, size_t vl) { +vint16mf4_t test_vrgather_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vrgather_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vrgather_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vrgather_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t vs1, size_t vl) { +vint16mf2_t test_vrgather_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vrgather_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vrgather_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, 
vuint16m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vrgather_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t vs1, size_t vl) { +vint16m1_t test_vrgather_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrgather_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vrgather_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrgather_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t vs1, size_t vl) { +vint16m2_t test_vrgather_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrgather_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vrgather_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrgather_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t vs1, size_t vl) { +vint16m4_t test_vrgather_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrgather_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vrgather_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrgather_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t vs1, size_t vl) { +vint16m8_t test_vrgather_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrgather_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vrgather_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrgather_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t vs1, size_t vl) { +vint32mf2_t test_vrgather_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrgather_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vrgather_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrgather_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t vs1, size_t vl) { +vint32m1_t test_vrgather_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrgather_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vrgather_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return 
__riscv_vrgather_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrgather_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t vs1, size_t vl) { +vint32m2_t test_vrgather_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrgather_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vrgather_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrgather_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t vs1, size_t vl) { +vint32m4_t test_vrgather_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vrgather_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vrgather_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vrgather_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t vs1, size_t vl) { +vint32m8_t test_vrgather_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrgather_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vrgather_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrgather_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t vs1, size_t vl) { +vint64m1_t test_vrgather_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrgather_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vrgather_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrgather_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t vs1, size_t vl) { +vint64m2_t test_vrgather_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrgather_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vrgather_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrgather_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t vs1, size_t vl) { +vint64m4_t test_vrgather_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vrgather_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vrgather_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t 
test_vrgather_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t vs1, size_t vl) { +vint64m8_t test_vrgather_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrgather_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrgather_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrgather_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vs1, size_t vl) { +vuint8mf8_t test_vrgather_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgather_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vrgather_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgather_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vs1, size_t vl) { +vuint8mf4_t test_vrgather_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgather_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrgather_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgather_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vs1, size_t vl) { +vuint8mf2_t test_vrgather_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrgather_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrgather_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrgather_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vs1, size_t vl) { +vuint8m1_t test_vrgather_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrgather_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrgather_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrgather_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vs1, size_t vl) { +vuint8m2_t test_vrgather_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrgather_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrgather_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrgather_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t 
vd, vuint8m4_t vs2, size_t vs1, size_t vl) { +vuint8m4_t test_vrgather_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrgather_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrgather_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrgather_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vs1, size_t vl) { +vuint8m8_t test_vrgather_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrgather_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrgather_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrgather_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vs1, size_t vl) { +vuint16mf4_t test_vrgather_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgather_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrgather_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgather_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vs1, size_t vl) { +vuint16mf2_t test_vrgather_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrgather_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrgather_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrgather_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vs1, size_t vl) { +vuint16m1_t test_vrgather_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrgather_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrgather_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrgather_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vs1, size_t vl) { +vuint16m2_t test_vrgather_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrgather_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrgather_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t 
test_vrgather_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vs1, size_t vl) { +vuint16m4_t test_vrgather_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrgather_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrgather_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrgather_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vs1, size_t vl) { +vuint16m8_t test_vrgather_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrgather_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrgather_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrgather_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vs1, size_t vl) { +vuint32mf2_t test_vrgather_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrgather_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrgather_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrgather_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vs1, size_t vl) { +vuint32m1_t test_vrgather_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrgather_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrgather_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrgather_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vs1, size_t vl) { +vuint32m2_t test_vrgather_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrgather_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrgather_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrgather_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vs1, size_t vl) { +vuint32m4_t test_vrgather_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrgather_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrgather_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m8_tumu(vm, vd, 
vs2, vs1, vl); } -vuint32m8_t test_vrgather_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vs1, size_t vl) { +vuint32m8_t test_vrgather_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrgather_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrgather_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrgather_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vs1, size_t vl) { +vuint64m1_t test_vrgather_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrgather_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrgather_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrgather_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t vs1, size_t vl) { +vuint64m2_t test_vrgather_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrgather_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrgather_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrgather_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vs1, size_t vl) { +vuint64m4_t test_vrgather_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrgather_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vrgather_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrgather_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vs1, size_t vl) { +vuint64m8_t test_vrgather_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vrgather_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vrgather_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vrgather_vx_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t vs1, size_t vl) { +vfloat16mf4_t test_vrgather_vx_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vrgather_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vrgather_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) 
{ return __riscv_vrgather_vv_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vrgather_vx_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t vs1, size_t vl) { +vfloat16mf2_t test_vrgather_vx_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vrgather_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat16m1_t test_vrgather_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vrgather_vx_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t vs1, size_t vl) { +vfloat16m1_t test_vrgather_vx_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgather_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat16m2_t test_vrgather_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgather_vx_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t vs1, size_t vl) { +vfloat16m2_t test_vrgather_vx_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgather_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat16m4_t test_vrgather_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgather_vx_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t vs1, size_t vl) { +vfloat16m4_t test_vrgather_vx_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgather_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vfloat16m8_t test_vrgather_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgather_vx_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t vs1, size_t vl) { +vfloat16m8_t test_vrgather_vx_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgather_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vfloat32mf2_t test_vrgather_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgather_vx_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t vs1, size_t vl) { +vfloat32mf2_t test_vrgather_vx_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgather_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vfloat32m1_t test_vrgather_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + 
vfloat32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgather_vx_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t vs1, size_t vl) { +vfloat32m1_t test_vrgather_vx_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgather_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vfloat32m2_t test_vrgather_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgather_vx_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t vs1, size_t vl) { +vfloat32m2_t test_vrgather_vx_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgather_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vfloat32m4_t test_vrgather_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgather_vx_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t vs1, size_t vl) { +vfloat32m4_t test_vrgather_vx_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgather_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vfloat32m8_t test_vrgather_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgather_vx_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t vs1, size_t vl) { +vfloat32m8_t test_vrgather_vx_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgather_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vfloat64m1_t test_vrgather_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgather_vx_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t vs1, size_t vl) { +vfloat64m1_t test_vrgather_vx_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vrgather_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vfloat64m2_t test_vrgather_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vrgather_vx_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t vs1, size_t vl) { +vfloat64m2_t test_vrgather_vx_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgather_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vfloat64m4_t test_vrgather_vv_f64m4_mu(vbool16_t vm, 
vfloat64m4_t vd, + vfloat64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgather_vx_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t vs1, size_t vl) { +vfloat64m4_t test_vrgather_vx_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgather_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vfloat64m8_t test_vrgather_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_f64m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgather_vx_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t vs1, size_t vl) { +vfloat64m8_t test_vrgather_vx_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_f64m8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrgather_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vrgather_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrgather_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrgather_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t vs1, size_t vl) { +vint8mf8_t test_vrgather_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrgather_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vrgather_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrgather_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t vs1, size_t vl) { +vint8mf4_t test_vrgather_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrgather_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vrgather_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrgather_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t vs1, size_t vl) { +vint8mf2_t test_vrgather_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrgather_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vrgather_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrgather_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t vs1, size_t vl) { +vint8m1_t test_vrgather_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vrgather_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vrgather_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } 
-vint8m2_t test_vrgather_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t vs1, size_t vl) { +vint8m2_t test_vrgather_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrgather_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vrgather_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrgather_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t vs1, size_t vl) { +vint8m4_t test_vrgather_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vrgather_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vrgather_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrgather_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vrgather_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t vs1, size_t vl) { +vint8m8_t test_vrgather_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrgather_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vrgather_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrgather_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t vs1, size_t vl) { +vint16mf4_t test_vrgather_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vrgather_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vrgather_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vrgather_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t vs1, size_t vl) { +vint16mf2_t test_vrgather_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vrgather_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vrgather_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vrgather_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t vs1, size_t vl) { +vint16m1_t test_vrgather_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrgather_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vrgather_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgather_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrgather_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t vs1, size_t vl) { +vint16m2_t test_vrgather_vx_i16m2_mu(vbool8_t vm, 
vint16m2_t vd, vint16m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrgather_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vrgather_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgather_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrgather_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t vs1, size_t vl) { +vint16m4_t test_vrgather_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrgather_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vrgather_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrgather_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrgather_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t vs1, size_t vl) { +vint16m8_t test_vrgather_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrgather_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vrgather_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrgather_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t vs1, size_t vl) { +vint32mf2_t test_vrgather_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrgather_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vrgather_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrgather_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t vs1, size_t vl) { +vint32m1_t test_vrgather_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrgather_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vrgather_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrgather_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t vs1, size_t vl) { +vint32m2_t test_vrgather_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrgather_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vrgather_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vrgather_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrgather_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t vs1, size_t vl) { +vint32m4_t test_vrgather_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t 
test_vrgather_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vrgather_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vrgather_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vrgather_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t vs1, size_t vl) { +vint32m8_t test_vrgather_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrgather_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vrgather_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrgather_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t vs1, size_t vl) { +vint64m1_t test_vrgather_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrgather_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vrgather_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrgather_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t vs1, size_t vl) { +vint64m2_t test_vrgather_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrgather_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vrgather_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrgather_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t vs1, size_t vl) { +vint64m4_t test_vrgather_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vrgather_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vrgather_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vrgather_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vrgather_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t vs1, size_t vl) { +vint64m8_t test_vrgather_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_i64m8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrgather_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vrgather_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrgather_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t vs1, size_t vl) { +vuint8mf8_t test_vrgather_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgather_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t 
test_vrgather_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgather_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t vs1, size_t vl) { +vuint8mf4_t test_vrgather_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgather_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vrgather_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgather_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t vs1, size_t vl) { +vuint8mf2_t test_vrgather_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrgather_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vrgather_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vrgather_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrgather_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t vs1, size_t vl) { +vuint8m1_t test_vrgather_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrgather_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vrgather_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vrgather_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrgather_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t vs1, size_t vl) { +vuint8m2_t test_vrgather_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrgather_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vrgather_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vrgather_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrgather_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t vs1, size_t vl) { +vuint8m4_t test_vrgather_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrgather_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vrgather_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vrgather_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vrgather_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t vs1, size_t vl) { +vuint8m8_t test_vrgather_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t vs1, size_t vl) { return __riscv_vrgather_vx_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrgather_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrgather_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16mf4_mu(vm, vd, vs2, 
vs1, vl); } -vuint16mf4_t test_vrgather_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t vs1, size_t vl) { +vuint16mf4_t test_vrgather_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgather_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrgather_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgather_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t vs1, size_t vl) { +vuint16mf2_t test_vrgather_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrgather_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrgather_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrgather_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t vs1, size_t vl) { +vuint16m1_t test_vrgather_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrgather_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrgather_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrgather_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t vs1, size_t vl) { +vuint16m2_t test_vrgather_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrgather_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrgather_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrgather_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t vs1, size_t vl) { +vuint16m4_t test_vrgather_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrgather_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrgather_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrgather_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t vs1, size_t vl) { +vuint16m8_t test_vrgather_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrgather_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vrgather_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t 
test_vrgather_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t vs1, size_t vl) { +vuint32mf2_t test_vrgather_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t vs1, + size_t vl) { return __riscv_vrgather_vx_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrgather_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vrgather_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrgather_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t vs1, size_t vl) { +vuint32m1_t test_vrgather_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrgather_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vrgather_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrgather_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t vs1, size_t vl) { +vuint32m2_t test_vrgather_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrgather_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vrgather_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrgather_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t vs1, size_t vl) { +vuint32m4_t test_vrgather_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrgather_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vrgather_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrgather_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t vs1, size_t vl) { +vuint32m8_t test_vrgather_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrgather_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vrgather_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrgather_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t vs1, size_t vl) { +vuint64m1_t test_vrgather_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrgather_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vrgather_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrgather_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, 
vuint64m2_t vs2, size_t vs1, size_t vl) { +vuint64m2_t test_vrgather_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrgather_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vrgather_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrgather_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t vs1, size_t vl) { +vuint64m4_t test_vrgather_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrgather_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vrgather_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vrgather_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrgather_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t vs1, size_t vl) { +vuint64m8_t test_vrgather_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t vs1, size_t vl) { return __riscv_vrgather_vx_u64m8_mu(vm, vd, vs2, vs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vrgatherei16.c b/auto-generated/policy_funcs/llvm-api-tests/vrgatherei16.c index dd182f15f..769b627ba 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vrgatherei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vrgatherei16.c @@ -6,914 +6,1313 @@ #include <riscv_vector.h> -vfloat16mf4_t test_vrgatherei16_vv_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vrgatherei16_vv_f16mf4_tu(vfloat16mf4_t vd, + vfloat16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16mf4_tu(vd, vs2, vs1, vl); } -vfloat16mf2_t test_vrgatherei16_vv_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vrgatherei16_vv_f16mf2_tu(vfloat16mf2_t vd, + vfloat16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16mf2_tu(vd, vs2, vs1, vl); } -vfloat16m1_t test_vrgatherei16_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat16m1_t test_vrgatherei16_vv_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16m1_tu(vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgatherei16_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat16m2_t test_vrgatherei16_vv_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16m2_tu(vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgatherei16_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat16m4_t test_vrgatherei16_vv_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16m4_tu(vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgatherei16_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vfloat16m8_t test_vrgatherei16_vv_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16m8_tu(vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgatherei16_vv_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat32mf2_t
test_vrgatherei16_vv_f32mf2_tu(vfloat32mf2_t vd, + vfloat32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f32mf2_tu(vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgatherei16_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vrgatherei16_vv_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f32m1_tu(vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgatherei16_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat32m2_t test_vrgatherei16_vv_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f32m2_tu(vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgatherei16_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat32m4_t test_vrgatherei16_vv_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f32m4_tu(vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgatherei16_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat32m8_t test_vrgatherei16_vv_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f32m8_tu(vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgatherei16_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat64m1_t test_vrgatherei16_vv_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f64m1_tu(vd, vs2, vs1, vl); } -vfloat64m2_t test_vrgatherei16_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat64m2_t test_vrgatherei16_vv_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f64m2_tu(vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgatherei16_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat64m4_t test_vrgatherei16_vv_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f64m4_tu(vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgatherei16_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat64m8_t test_vrgatherei16_vv_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f64m8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vrgatherei16_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vuint16mf4_t vs1, size_t vl) { +vint8mf8_t test_vrgatherei16_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vrgatherei16_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vuint16mf2_t vs1, size_t vl) { +vint8mf4_t test_vrgatherei16_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vrgatherei16_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vint8mf2_t test_vrgatherei16_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vrgatherei16_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vuint16m2_t vs1, size_t vl) { +vint8m1_t test_vrgatherei16_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vrgatherei16_vv_i8m2_tu(vint8m2_t vd, vint8m2_t 
vs2, vuint16m4_t vs1, size_t vl) { +vint8m2_t test_vrgatherei16_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vrgatherei16_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vuint16m8_t vs1, size_t vl) { +vint8m4_t test_vrgatherei16_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vrgatherei16_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vrgatherei16_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vrgatherei16_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vrgatherei16_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vrgatherei16_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vrgatherei16_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vrgatherei16_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vrgatherei16_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vrgatherei16_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vrgatherei16_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vrgatherei16_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vrgatherei16_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vrgatherei16_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vrgatherei16_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vrgatherei16_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint32m1_t test_vrgatherei16_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vrgatherei16_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint32m2_t test_vrgatherei16_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vrgatherei16_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vint32m4_t test_vrgatherei16_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vrgatherei16_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint32m8_t test_vrgatherei16_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vrgatherei16_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vuint16mf4_t vs1, size_t vl) { 
+vint64m1_t test_vrgatherei16_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vrgatherei16_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint64m2_t test_vrgatherei16_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vrgatherei16_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vint64m4_t test_vrgatherei16_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vrgatherei16_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vint64m8_t test_vrgatherei16_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i64m8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vrgatherei16_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint8mf8_t test_vrgatherei16_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgatherei16_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint8mf4_t test_vrgatherei16_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgatherei16_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint8mf2_t test_vrgatherei16_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vrgatherei16_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint16m2_t vs1, size_t vl) { +vuint8m1_t test_vrgatherei16_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vrgatherei16_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint16m4_t vs1, size_t vl) { +vuint8m2_t test_vrgatherei16_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vrgatherei16_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint16m8_t vs1, size_t vl) { +vuint8m4_t test_vrgatherei16_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vrgatherei16_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrgatherei16_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgatherei16_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrgatherei16_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vrgatherei16_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrgatherei16_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vrgatherei16_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) 
{ +vuint16m2_t test_vrgatherei16_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vrgatherei16_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrgatherei16_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vrgatherei16_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrgatherei16_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vrgatherei16_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vrgatherei16_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vrgatherei16_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vrgatherei16_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vrgatherei16_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vrgatherei16_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vrgatherei16_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vrgatherei16_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vrgatherei16_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vrgatherei16_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vrgatherei16_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint64m1_t test_vrgatherei16_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vrgatherei16_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint64m2_t test_vrgatherei16_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vrgatherei16_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint64m4_t test_vrgatherei16_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vrgatherei16_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vuint64m8_t test_vrgatherei16_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u64m8_tu(vd, vs2, vs1, vl); } -vfloat16mf4_t test_vrgatherei16_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vrgatherei16_vv_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16mf4_tum(vm, vd, vs2, vs1, vl); } 
-vfloat16mf2_t test_vrgatherei16_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vrgatherei16_vv_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vrgatherei16_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat16m1_t test_vrgatherei16_vv_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m1_tum(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgatherei16_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat16m2_t test_vrgatherei16_vv_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m2_tum(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgatherei16_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat16m4_t test_vrgatherei16_vv_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m4_tum(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgatherei16_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vfloat16m8_t test_vrgatherei16_vv_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m8_tum(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgatherei16_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vrgatherei16_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f32mf2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgatherei16_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vrgatherei16_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m1_tum(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgatherei16_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat32m2_t test_vrgatherei16_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m2_tum(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgatherei16_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat32m4_t test_vrgatherei16_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m4_tum(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgatherei16_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat32m8_t test_vrgatherei16_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m8_tum(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgatherei16_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat64m1_t test_vrgatherei16_vv_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f64m1_tum(vm, vd, vs2, vs1, vl); } -vfloat64m2_t 
test_vrgatherei16_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat64m2_t test_vrgatherei16_vv_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f64m2_tum(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgatherei16_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat64m4_t test_vrgatherei16_vv_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f64m4_tum(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgatherei16_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat64m8_t test_vrgatherei16_vv_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f64m8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrgatherei16_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint16mf4_t vs1, size_t vl) { +vint8mf8_t test_vrgatherei16_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrgatherei16_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint16mf2_t vs1, size_t vl) { +vint8mf4_t test_vrgatherei16_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrgatherei16_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vint8mf2_t test_vrgatherei16_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrgatherei16_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint16m2_t vs1, size_t vl) { +vint8m1_t test_vrgatherei16_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, + vint8m1_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vrgatherei16_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint16m4_t vs1, size_t vl) { +vint8m2_t test_vrgatherei16_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, + vint8m2_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrgatherei16_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint16m8_t vs1, size_t vl) { +vint8m4_t test_vrgatherei16_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, + vint8m4_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrgatherei16_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vrgatherei16_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vrgatherei16_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vrgatherei16_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vrgatherei16_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vrgatherei16_vv_i16m1_tum(vbool16_t 
vm, vint16m1_t vd, + vint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrgatherei16_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vrgatherei16_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrgatherei16_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vrgatherei16_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrgatherei16_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vrgatherei16_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrgatherei16_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vrgatherei16_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrgatherei16_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint32m1_t test_vrgatherei16_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrgatherei16_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint32m2_t test_vrgatherei16_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrgatherei16_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vint32m4_t test_vrgatherei16_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vrgatherei16_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint32m8_t test_vrgatherei16_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrgatherei16_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint16mf4_t vs1, size_t vl) { +vint64m1_t test_vrgatherei16_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrgatherei16_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint64m2_t test_vrgatherei16_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrgatherei16_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vint64m4_t test_vrgatherei16_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t 
test_vrgatherei16_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vint64m8_t test_vrgatherei16_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrgatherei16_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint8mf8_t test_vrgatherei16_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgatherei16_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint8mf4_t test_vrgatherei16_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgatherei16_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint8mf2_t test_vrgatherei16_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrgatherei16_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint16m2_t vs1, size_t vl) { +vuint8m1_t test_vrgatherei16_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrgatherei16_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint16m4_t vs1, size_t vl) { +vuint8m2_t test_vrgatherei16_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrgatherei16_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint16m8_t vs1, size_t vl) { +vuint8m4_t test_vrgatherei16_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrgatherei16_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrgatherei16_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgatherei16_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrgatherei16_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrgatherei16_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrgatherei16_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrgatherei16_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrgatherei16_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrgatherei16_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t 
test_vrgatherei16_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrgatherei16_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrgatherei16_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrgatherei16_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vrgatherei16_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrgatherei16_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vrgatherei16_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrgatherei16_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vrgatherei16_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrgatherei16_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vrgatherei16_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrgatherei16_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vrgatherei16_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrgatherei16_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint64m1_t test_vrgatherei16_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrgatherei16_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint64m2_t test_vrgatherei16_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrgatherei16_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint64m4_t test_vrgatherei16_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrgatherei16_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vuint64m8_t test_vrgatherei16_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vfloat16mf4_t test_vrgatherei16_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vrgatherei16_vv_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, + vuint16mf4_t 
vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16mf4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vrgatherei16_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vrgatherei16_vv_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vrgatherei16_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat16m1_t test_vrgatherei16_vv_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgatherei16_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat16m2_t test_vrgatherei16_vv_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgatherei16_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat16m4_t test_vrgatherei16_vv_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgatherei16_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vfloat16m8_t test_vrgatherei16_vv_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgatherei16_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vrgatherei16_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f32mf2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgatherei16_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vrgatherei16_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgatherei16_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat32m2_t test_vrgatherei16_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgatherei16_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat32m4_t test_vrgatherei16_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgatherei16_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat32m8_t test_vrgatherei16_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m8_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgatherei16_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat64m1_t test_vrgatherei16_vv_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vuint16mf4_t vs1, + 
size_t vl) { return __riscv_vrgatherei16_vv_f64m1_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vrgatherei16_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat64m2_t test_vrgatherei16_vv_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f64m2_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgatherei16_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat64m4_t test_vrgatherei16_vv_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f64m4_tumu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgatherei16_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat64m8_t test_vrgatherei16_vv_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f64m8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrgatherei16_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint16mf4_t vs1, size_t vl) { +vint8mf8_t test_vrgatherei16_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrgatherei16_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint16mf2_t vs1, size_t vl) { +vint8mf4_t test_vrgatherei16_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrgatherei16_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vint8mf2_t test_vrgatherei16_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrgatherei16_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint16m2_t vs1, size_t vl) { +vint8m1_t test_vrgatherei16_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, + vint8m1_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vrgatherei16_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint16m4_t vs1, size_t vl) { +vint8m2_t test_vrgatherei16_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, + vint8m2_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrgatherei16_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint16m8_t vs1, size_t vl) { +vint8m4_t test_vrgatherei16_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, + vint8m4_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrgatherei16_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vrgatherei16_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vrgatherei16_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vrgatherei16_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t 
test_vrgatherei16_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vrgatherei16_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrgatherei16_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vrgatherei16_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrgatherei16_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vrgatherei16_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrgatherei16_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vrgatherei16_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrgatherei16_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vrgatherei16_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrgatherei16_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint32m1_t test_vrgatherei16_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrgatherei16_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint32m2_t test_vrgatherei16_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrgatherei16_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vint32m4_t test_vrgatherei16_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vrgatherei16_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint32m8_t test_vrgatherei16_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrgatherei16_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint16mf4_t vs1, size_t vl) { +vint64m1_t test_vrgatherei16_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrgatherei16_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint64m2_t test_vrgatherei16_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrgatherei16_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vint64m4_t 
test_vrgatherei16_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vrgatherei16_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vint64m8_t test_vrgatherei16_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrgatherei16_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint8mf8_t test_vrgatherei16_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgatherei16_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint8mf4_t test_vrgatherei16_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgatherei16_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint8mf2_t test_vrgatherei16_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrgatherei16_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint16m2_t vs1, size_t vl) { +vuint8m1_t test_vrgatherei16_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrgatherei16_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint16m4_t vs1, size_t vl) { +vuint8m2_t test_vrgatherei16_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrgatherei16_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint16m8_t vs1, size_t vl) { +vuint8m4_t test_vrgatherei16_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrgatherei16_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrgatherei16_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgatherei16_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrgatherei16_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrgatherei16_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrgatherei16_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrgatherei16_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrgatherei16_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t 
vl) { return __riscv_vrgatherei16_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vrgatherei16_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrgatherei16_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrgatherei16_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrgatherei16_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrgatherei16_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vrgatherei16_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrgatherei16_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vrgatherei16_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrgatherei16_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vrgatherei16_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrgatherei16_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vrgatherei16_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrgatherei16_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vrgatherei16_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrgatherei16_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint64m1_t test_vrgatherei16_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrgatherei16_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint64m2_t test_vrgatherei16_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrgatherei16_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint64m4_t test_vrgatherei16_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrgatherei16_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vuint64m8_t test_vrgatherei16_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } 
-vfloat16mf4_t test_vrgatherei16_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat16mf4_t test_vrgatherei16_vv_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16mf4_mu(vm, vd, vs2, vs1, vl); } -vfloat16mf2_t test_vrgatherei16_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat16mf2_t test_vrgatherei16_vv_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f16mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m1_t test_vrgatherei16_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat16m1_t test_vrgatherei16_vv_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m1_mu(vm, vd, vs2, vs1, vl); } -vfloat16m2_t test_vrgatherei16_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat16m2_t test_vrgatherei16_vv_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m2_mu(vm, vd, vs2, vs1, vl); } -vfloat16m4_t test_vrgatherei16_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat16m4_t test_vrgatherei16_vv_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m4_mu(vm, vd, vs2, vs1, vl); } -vfloat16m8_t test_vrgatherei16_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vfloat16m8_t test_vrgatherei16_vv_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f16m8_mu(vm, vd, vs2, vs1, vl); } -vfloat32mf2_t test_vrgatherei16_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat32mf2_t test_vrgatherei16_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_f32mf2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m1_t test_vrgatherei16_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat32m1_t test_vrgatherei16_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m1_mu(vm, vd, vs2, vs1, vl); } -vfloat32m2_t test_vrgatherei16_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat32m2_t test_vrgatherei16_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m2_mu(vm, vd, vs2, vs1, vl); } -vfloat32m4_t test_vrgatherei16_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat32m4_t test_vrgatherei16_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m4_mu(vm, vd, vs2, vs1, vl); } -vfloat32m8_t test_vrgatherei16_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vfloat32m8_t test_vrgatherei16_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f32m8_mu(vm, vd, vs2, vs1, vl); } -vfloat64m1_t test_vrgatherei16_vv_f64m1_mu(vbool64_t vm, 
vfloat64m1_t vd, vfloat64m1_t vs2, vuint16mf4_t vs1, size_t vl) { +vfloat64m1_t test_vrgatherei16_vv_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f64m1_mu(vm, vd, vs2, vs1, vl); } -vfloat64m2_t test_vrgatherei16_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vfloat64m2_t test_vrgatherei16_vv_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f64m2_mu(vm, vd, vs2, vs1, vl); } -vfloat64m4_t test_vrgatherei16_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vfloat64m4_t test_vrgatherei16_vv_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f64m4_mu(vm, vd, vs2, vs1, vl); } -vfloat64m8_t test_vrgatherei16_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vfloat64m8_t test_vrgatherei16_vv_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_f64m8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vrgatherei16_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint16mf4_t vs1, size_t vl) { +vint8mf8_t test_vrgatherei16_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vrgatherei16_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint16mf2_t vs1, size_t vl) { +vint8mf4_t test_vrgatherei16_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vrgatherei16_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vint8mf2_t test_vrgatherei16_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vrgatherei16_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint16m2_t vs1, size_t vl) { +vint8m1_t test_vrgatherei16_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vrgatherei16_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint16m4_t vs1, size_t vl) { +vint8m2_t test_vrgatherei16_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vrgatherei16_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint16m8_t vs1, size_t vl) { +vint8m4_t test_vrgatherei16_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vrgatherei16_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vrgatherei16_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vrgatherei16_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vrgatherei16_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vrgatherei16_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { 
return __riscv_vrgatherei16_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vrgatherei16_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vrgatherei16_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vrgatherei16_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vrgatherei16_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vrgatherei16_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vrgatherei16_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vrgatherei16_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vrgatherei16_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vrgatherei16_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vrgatherei16_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vrgatherei16_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vint32m1_t test_vrgatherei16_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vrgatherei16_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vint32m2_t test_vrgatherei16_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vrgatherei16_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vint32m4_t test_vrgatherei16_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vrgatherei16_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vint32m8_t test_vrgatherei16_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vrgatherei16_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint16mf4_t vs1, size_t vl) { +vint64m1_t test_vrgatherei16_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vrgatherei16_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint64m2_t test_vrgatherei16_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vrgatherei16_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vint64m4_t 
test_vrgatherei16_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vrgatherei16_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vint64m8_t test_vrgatherei16_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vrgatherei16_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint8mf8_t test_vrgatherei16_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vrgatherei16_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint8mf4_t test_vrgatherei16_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vrgatherei16_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint8mf2_t test_vrgatherei16_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vrgatherei16_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint16m2_t vs1, size_t vl) { +vuint8m1_t test_vrgatherei16_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vrgatherei16_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint16m4_t vs1, size_t vl) { +vuint8m2_t test_vrgatherei16_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vrgatherei16_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint16m8_t vs1, size_t vl) { +vuint8m4_t test_vrgatherei16_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vrgatherei16_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vrgatherei16_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vrgatherei16_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vrgatherei16_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vrgatherei16_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vrgatherei16_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vrgatherei16_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vrgatherei16_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16m2_mu(vm, vd, vs2, vs1, vl); 
} -vuint16m4_t test_vrgatherei16_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vrgatherei16_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vrgatherei16_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vrgatherei16_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vrgatherei16_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vrgatherei16_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vrgatherei16_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vrgatherei16_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vrgatherei16_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vrgatherei16_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vrgatherei16_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vrgatherei16_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vrgatherei16_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vrgatherei16_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vrgatherei16_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint64m1_t test_vrgatherei16_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vrgatherei16_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint64m2_t test_vrgatherei16_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vrgatherei16_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint64m4_t test_vrgatherei16_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vrgatherei16_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint16m2_t vs1, size_t vl) { +vuint64m8_t test_vrgatherei16_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vrgatherei16_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vrsub.c b/auto-generated/policy_funcs/llvm-api-tests/vrsub.c index fb5476516..1fc5ea4e5 
100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vrsub.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vrsub.c @@ -5,706 +5,891 @@ #include -vint8mf8_t test_vrsub_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vrsub_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrsub_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vrsub_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vrsub_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrsub_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vrsub_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vrsub_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrsub_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vrsub_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vrsub_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrsub_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vrsub_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vrsub_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrsub_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vrsub_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vrsub_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrsub_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vrsub_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vrsub_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vrsub_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vrsub_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vrsub_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vrsub_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vrsub_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vrsub_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vrsub_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vrsub_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vrsub_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vrsub_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vrsub_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vrsub_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vrsub_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vrsub_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vrsub_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vrsub_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vrsub_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vrsub_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vrsub_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vrsub_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, 
int32_t rs1, size_t vl) { +vint32m1_t test_vrsub_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vrsub_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vrsub_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vrsub_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vrsub_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vrsub_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vrsub_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vrsub_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vrsub_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vrsub_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vrsub_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vrsub_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vrsub_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vrsub_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vrsub_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vrsub_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vrsub_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vrsub_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vrsub_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vrsub_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vrsub_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vrsub_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vrsub_vx_i64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vrsub_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vrsub_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vrsub_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vrsub_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vrsub_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vrsub_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vrsub_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vrsub_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vrsub_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vrsub_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vrsub_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vrsub_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vrsub_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vrsub_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vrsub_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vrsub_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vrsub_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vrsub_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vrsub_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vrsub_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vrsub_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t 
test_vrsub_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vrsub_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vrsub_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vrsub_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vrsub_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vrsub_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vrsub_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vrsub_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vrsub_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vrsub_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vrsub_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vrsub_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vrsub_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vrsub_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vrsub_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vrsub_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vrsub_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vrsub_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vrsub_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vrsub_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vrsub_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vrsub_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vrsub_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vrsub_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vrsub_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vrsub_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vrsub_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vrsub_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vrsub_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, 
uint64_t rs1, size_t vl) { +vuint64m8_t test_vrsub_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vrsub_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vrsub_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vrsub_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vrsub_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vrsub_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vrsub_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vrsub_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vrsub_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vrsub_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vrsub_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vrsub_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vrsub_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vrsub_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vrsub_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vrsub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vrsub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vrsub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vrsub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vrsub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vrsub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vrsub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vrsub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vrsub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vrsub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vrsub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t 
test_vrsub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vrsub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vrsub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vrsub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vrsub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vrsub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vrsub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vrsub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vrsub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vrsub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vrsub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vrsub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vrsub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vrsub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vrsub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vrsub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vrsub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vrsub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vrsub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vrsub_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vrsub_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrsub_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vrsub_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrsub_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vrsub_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrsub_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, 
uint8_t rs1, size_t vl) { +vuint8m1_t test_vrsub_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrsub_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vrsub_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrsub_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vrsub_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrsub_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vrsub_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrsub_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vrsub_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vrsub_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrsub_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vrsub_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vrsub_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrsub_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vrsub_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrsub_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vrsub_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrsub_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vrsub_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrsub_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vrsub_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrsub_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vrsub_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vrsub_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrsub_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vrsub_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrsub_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vrsub_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m2_tum(vm, 
vd, vs2, rs1, vl); } -vuint32m4_t test_vrsub_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vrsub_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrsub_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vrsub_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrsub_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vrsub_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vrsub_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vrsub_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vrsub_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vrsub_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vrsub_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vrsub_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vrsub_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vrsub_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vrsub_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vrsub_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vrsub_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vrsub_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vrsub_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vrsub_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vrsub_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vrsub_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vrsub_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vrsub_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vrsub_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vrsub_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return 
__riscv_vrsub_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vrsub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vrsub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vrsub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vrsub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vrsub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vrsub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vrsub_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vrsub_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vrsub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vrsub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vrsub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vrsub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vrsub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vrsub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vrsub_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vrsub_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vrsub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vrsub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vrsub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vrsub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vrsub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vrsub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vrsub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vrsub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vrsub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t 
test_vrsub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vrsub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vrsub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vrsub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vrsub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vrsub_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vrsub_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrsub_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vrsub_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrsub_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vrsub_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrsub_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vrsub_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrsub_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vrsub_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrsub_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vrsub_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrsub_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vrsub_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrsub_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vrsub_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vrsub_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrsub_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vrsub_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vrsub_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrsub_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vrsub_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t 
test_vrsub_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vrsub_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrsub_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vrsub_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrsub_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vrsub_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrsub_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vrsub_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vrsub_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrsub_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vrsub_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrsub_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vrsub_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrsub_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vrsub_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrsub_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vrsub_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrsub_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vrsub_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vrsub_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vrsub_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vrsub_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vrsub_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vrsub_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vrsub_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vrsub_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t 
test_vrsub_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vrsub_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vrsub_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vrsub_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vrsub_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vrsub_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vrsub_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vrsub_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vrsub_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vrsub_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vrsub_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vrsub_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vrsub_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vrsub_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vrsub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vrsub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vrsub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vrsub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vrsub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vrsub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vrsub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vrsub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vrsub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vrsub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vrsub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vrsub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vrsub_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vrsub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vrsub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + 
vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vrsub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vrsub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vrsub_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vrsub_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vrsub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vrsub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vrsub_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vrsub_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vrsub_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vrsub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vrsub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vrsub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vrsub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vrsub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vrsub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vrsub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vrsub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vrsub_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vrsub_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vrsub_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vrsub_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vrsub_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vrsub_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vrsub_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vrsub_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vrsub_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vrsub_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vrsub_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, 
+ uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vrsub_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vrsub_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vrsub_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vrsub_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vrsub_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vrsub_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vrsub_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vrsub_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vrsub_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vrsub_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vrsub_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vrsub_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vrsub_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vrsub_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vrsub_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vrsub_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vrsub_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vrsub_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vrsub_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vrsub_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vrsub_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vrsub_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vrsub_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vrsub_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vrsub_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vrsub_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vrsub_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vrsub_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vrsub_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vrsub_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { 
+vuint32m8_t test_vrsub_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vrsub_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vrsub_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vrsub_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vrsub_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vrsub_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vrsub_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vrsub_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vrsub_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vrsub_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vrsub_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsadd.c b/auto-generated/policy_funcs/llvm-api-tests/vsadd.c index 70424f5a2..2633927b0 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vsadd.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vsadd.c @@ -5,706 +5,891 @@ #include -vint8mf8_t test_vsadd_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsadd_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vsadd_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vsadd_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vsadd_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsadd_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vsadd_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsadd_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vsadd_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vsadd_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vsadd_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsadd_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vsadd_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsadd_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vsadd_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vsadd_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vsadd_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsadd_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vsadd_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vsadd_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vsadd_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vsadd_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vsadd_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsadd_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vsadd_vv_i8m2_tu(vint8m2_t vd, vint8m2_t 
vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vsadd_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vsadd_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vsadd_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vsadd_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsadd_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vsadd_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vsadd_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vsadd_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vsadd_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vsadd_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsadd_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vsadd_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vsadd_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vsadd_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vsadd_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vsadd_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsadd_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vsadd_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsadd_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vsadd_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vsadd_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vsadd_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vsadd_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsadd_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vsadd_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vsadd_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vsadd_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vsadd_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vsadd_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vsadd_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vsadd_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vsadd_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vsadd_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vsadd_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vsadd_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vsadd_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vsadd_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vsadd_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vsadd_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vsadd_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vsadd_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vsadd_vv_i16m4_tu(vd, vs2, 
vs1, vl); } -vint16m4_t test_vsadd_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vsadd_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vsadd_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vsadd_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vsadd_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vsadd_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vsadd_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vsadd_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vsadd_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vsadd_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsadd_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vsadd_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vsadd_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vsadd_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vsadd_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vsadd_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vsadd_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vsadd_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vsadd_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vsadd_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vsadd_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vsadd_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vsadd_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vsadd_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vsadd_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vsadd_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vsadd_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vsadd_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vsadd_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vsadd_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vsadd_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vsadd_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vsadd_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vsadd_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vsadd_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vsadd_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vsadd_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vsadd_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vsadd_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vsadd_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vsadd_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vsadd_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t 
test_vsadd_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vsadd_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vsadd_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vsadd_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vsadd_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vsadd_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vsadd_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vsadd_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vsadd_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vsadd_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vsadd_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vsadd_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vsadd_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vsadd_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vsadd_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vsadd_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vsadd_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vsadd_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vsadd_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vsadd_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vsadd_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsadd_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vsadd_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsadd_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vsadd_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsadd_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsadd_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vsadd_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsadd_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vsadd_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsadd_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsadd_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vsadd_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsadd_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vsadd_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsadd_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vsadd_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t 
test_vsadd_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vsadd_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsadd_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vsadd_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vsadd_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vsadd_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vsadd_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vsadd_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vsadd_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vsadd_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsadd_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vsadd_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsadd_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vsadd_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsadd_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsadd_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vsadd_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsadd_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vsadd_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsadd_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsadd_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vsadd_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsadd_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vsadd_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsadd_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vsadd_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsadd_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vsadd_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } 
-vint16m2_t test_vsadd_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vsadd_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsadd_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vsadd_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsadd_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vsadd_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vsadd_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vsadd_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vsadd_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vsadd_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsadd_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vsadd_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsadd_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsadd_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vsadd_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vsadd_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vsadd_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsadd_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vsadd_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vsadd_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vsadd_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsadd_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vsadd_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsadd_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vsadd_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsadd_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vsadd_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, 
size_t vl) { return __riscv_vsadd_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsadd_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vsadd_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vsadd_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vsadd_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsadd_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vsadd_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vsadd_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vsadd_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vsadd_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vsadd_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsadd_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vsadd_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsadd_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vsadd_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsadd_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vsadd_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsadd_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vsadd_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsadd_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vsadd_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsadd_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vsadd_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vsadd_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsadd_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vsadd_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsadd_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vsadd_vx_i8mf8_tumu(vbool64_t 
vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsadd_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsadd_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vsadd_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsadd_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vsadd_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsadd_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsadd_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vsadd_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsadd_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vsadd_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsadd_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vsadd_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vsadd_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vsadd_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsadd_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vsadd_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vsadd_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vsadd_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vsadd_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vsadd_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vsadd_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vsadd_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsadd_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vsadd_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsadd_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vsadd_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsadd_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsadd_vv_i16mf4_tumu(vbool64_t vm, 
vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vsadd_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsadd_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vsadd_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsadd_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsadd_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vsadd_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsadd_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vsadd_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsadd_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vsadd_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsadd_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vsadd_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vsadd_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vsadd_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsadd_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vsadd_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsadd_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vsadd_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vsadd_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vsadd_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vsadd_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vsadd_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsadd_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vsadd_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsadd_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsadd_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vsadd_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t 
test_vsadd_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vsadd_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsadd_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vsadd_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vsadd_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vsadd_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsadd_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vsadd_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsadd_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vsadd_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsadd_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vsadd_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vsadd_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsadd_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vsadd_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vsadd_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vsadd_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsadd_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vsadd_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vsadd_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vsadd_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vsadd_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vsadd_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsadd_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vsadd_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsadd_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vsadd_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + 
int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsadd_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vsadd_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsadd_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vsadd_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsadd_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vsadd_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsadd_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vsadd_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vsadd_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsadd_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vsadd_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsadd_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vsadd_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsadd_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsadd_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vsadd_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsadd_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vsadd_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsadd_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsadd_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vsadd_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsadd_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vsadd_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsadd_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vsadd_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vsadd_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vsadd_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsadd_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vsadd_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + 
vint8m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vsadd_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vsadd_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vsadd_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vsadd_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vsadd_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vsadd_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsadd_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vsadd_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsadd_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vsadd_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsadd_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsadd_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsadd_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vsadd_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsadd_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vsadd_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsadd_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsadd_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vsadd_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsadd_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vsadd_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsadd_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vsadd_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsadd_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vsadd_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vsadd_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vsadd_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsadd_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vsadd_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + 
int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsadd_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vsadd_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vsadd_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vsadd_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vsadd_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vsadd_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsadd_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vsadd_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsadd_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsadd_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsadd_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vsadd_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vsadd_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vsadd_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsadd_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vsadd_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vsadd_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vsadd_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsadd_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vsadd_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsadd_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vsadd_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsadd_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vsadd_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vsadd_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsadd_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vsadd_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vsadd_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vsadd_vv_i32m8_mu(vbool4_t vm, 
vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsadd_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vsadd_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsadd_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vsadd_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vsadd_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vsadd_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vsadd_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsadd_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vsadd_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsadd_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vsadd_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsadd_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vsadd_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsadd_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vsadd_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsadd_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vsadd_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vsadd_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsadd_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vsadd_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsadd_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsaddu.c b/auto-generated/policy_funcs/llvm-api-tests/vsaddu.c index 8caa28d1e..15da1dc08 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vsaddu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vsaddu.c @@ -5,706 +5,957 @@ #include <riscv_vector.h> -vuint8mf8_t test_vsaddu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsaddu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vsaddu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vsaddu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vsaddu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsaddu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t
vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vsaddu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vsaddu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vsaddu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsaddu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vsaddu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vsaddu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vsaddu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsaddu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vsaddu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vsaddu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vsaddu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsaddu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vsaddu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vsaddu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vsaddu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsaddu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vsaddu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vsaddu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vsaddu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsaddu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vsaddu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vsaddu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vsaddu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsaddu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vsaddu_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vsaddu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vsaddu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vsaddu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsaddu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return 
__riscv_vsaddu_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vsaddu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vsaddu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vsaddu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsaddu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsaddu_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vsaddu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vsaddu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vsaddu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsaddu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsaddu_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vsaddu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vsaddu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vsaddu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsaddu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsaddu_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vsaddu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vsaddu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vsaddu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsaddu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsaddu_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vsaddu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vsaddu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vsaddu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsaddu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsaddu_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vsaddu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vsaddu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vsaddu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsaddu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsaddu_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsaddu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vsaddu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vsaddu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsaddu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + 
vuint32m2_t vs1, size_t vl) { return __riscv_vsaddu_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsaddu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vsaddu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vsaddu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsaddu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsaddu_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsaddu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vsaddu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vsaddu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsaddu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsaddu_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsaddu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vsaddu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vsaddu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsaddu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsaddu_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsaddu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vsaddu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vsaddu_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vsaddu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsaddu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsaddu_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsaddu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vsaddu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vsaddu_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vsaddu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsaddu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsaddu_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsaddu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vsaddu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vsaddu_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vsaddu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsaddu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsaddu_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsaddu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vsaddu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vsaddu_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vsaddu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsaddu_vv_u8mf8_tum(vbool64_t vm, 
vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsaddu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vsaddu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsaddu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsaddu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsaddu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vsaddu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsaddu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsaddu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vsaddu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vsaddu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsaddu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsaddu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsaddu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vsaddu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsaddu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsaddu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsaddu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vsaddu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsaddu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsaddu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vsaddu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vsaddu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsaddu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsaddu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsaddu_vx_u8m8_tum(vbool1_t vm, 
vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vsaddu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsaddu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsaddu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vsaddu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vsaddu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vsaddu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsaddu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vsaddu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vsaddu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vsaddu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsaddu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsaddu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vsaddu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsaddu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsaddu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsaddu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vsaddu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsaddu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsaddu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vsaddu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vsaddu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsaddu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsaddu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsaddu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, 
size_t vl) { +vuint16m8_t test_vsaddu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsaddu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsaddu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vsaddu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vsaddu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsaddu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsaddu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsaddu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vsaddu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vsaddu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsaddu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsaddu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vsaddu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsaddu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsaddu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsaddu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vsaddu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsaddu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsaddu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsaddu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vsaddu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vsaddu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsaddu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vsaddu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vsaddu_vx_u64m1_tum(vbool64_t 
vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vsaddu_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vsaddu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsaddu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vsaddu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vsaddu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vsaddu_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vsaddu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsaddu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vsaddu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vsaddu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vsaddu_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vsaddu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsaddu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vsaddu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vsaddu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vsaddu_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vsaddu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsaddu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsaddu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vsaddu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsaddu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsaddu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsaddu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vsaddu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsaddu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsaddu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vsaddu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vsaddu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { 
return __riscv_vsaddu_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsaddu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsaddu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsaddu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vsaddu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsaddu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsaddu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsaddu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vsaddu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsaddu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsaddu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vsaddu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vsaddu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsaddu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsaddu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsaddu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vsaddu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsaddu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsaddu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vsaddu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vsaddu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vsaddu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsaddu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vsaddu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vsaddu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t 
test_vsaddu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsaddu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsaddu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vsaddu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsaddu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsaddu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsaddu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vsaddu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsaddu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsaddu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vsaddu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vsaddu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsaddu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsaddu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsaddu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vsaddu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsaddu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsaddu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vsaddu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vsaddu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsaddu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsaddu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsaddu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vsaddu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t 
test_vsaddu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsaddu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsaddu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vsaddu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsaddu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsaddu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsaddu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vsaddu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsaddu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsaddu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsaddu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vsaddu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vsaddu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsaddu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vsaddu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vsaddu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vsaddu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsaddu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vsaddu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vsaddu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vsaddu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsaddu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vsaddu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vsaddu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vsaddu_vv_u64m8_tumu(vbool8_t 
vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsaddu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vsaddu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vsaddu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vsaddu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsaddu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsaddu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vsaddu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsaddu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsaddu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsaddu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vsaddu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsaddu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsaddu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vsaddu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vsaddu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsaddu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsaddu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsaddu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vsaddu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsaddu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsaddu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsaddu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vsaddu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsaddu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsaddu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, 
size_t vl) { return __riscv_vsaddu_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vsaddu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vsaddu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsaddu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsaddu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsaddu_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsaddu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vsaddu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsaddu_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsaddu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsaddu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vsaddu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vsaddu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vsaddu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsaddu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vsaddu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vsaddu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vsaddu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsaddu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsaddu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vsaddu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsaddu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsaddu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsaddu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vsaddu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsaddu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsaddu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t 
test_vsaddu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vsaddu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsaddu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsaddu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsaddu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vsaddu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsaddu_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsaddu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsaddu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vsaddu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vsaddu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsaddu_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsaddu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsaddu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsaddu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vsaddu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vsaddu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsaddu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsaddu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vsaddu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsaddu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsaddu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsaddu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vsaddu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsaddu_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsaddu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsaddu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vsaddu_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsaddu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { 
+vuint32m8_t test_vsaddu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+ vuint32m8_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vsaddu_vx_u32m8_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vsaddu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vsaddu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+ vuint64m1_t vs2, vuint64m1_t vs1,
+ size_t vl) {
   return __riscv_vsaddu_vv_u64m1_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m1_t test_vsaddu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vsaddu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+ vuint64m1_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vsaddu_vx_u64m1_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vsaddu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vsaddu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+ vuint64m2_t vs2, vuint64m2_t vs1,
+ size_t vl) {
   return __riscv_vsaddu_vv_u64m2_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m2_t test_vsaddu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vsaddu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+ vuint64m2_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vsaddu_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vsaddu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vsaddu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+ vuint64m4_t vs2, vuint64m4_t vs1,
+ size_t vl) {
   return __riscv_vsaddu_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m4_t test_vsaddu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vsaddu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+ vuint64m4_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vsaddu_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vsaddu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vsaddu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+ vuint64m8_t vs2, vuint64m8_t vs1,
+ size_t vl) {
   return __riscv_vsaddu_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }

-vuint64m8_t test_vsaddu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vsaddu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+ vuint64m8_t vs2, uint64_t rs1, size_t vl) {
   return __riscv_vsaddu_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsbc.c b/auto-generated/policy_funcs/llvm-api-tests/vsbc.c
index 18048689e..a38608763 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vsbc.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vsbc.c
@@ -5,354 +5,445 @@
 #include <riscv_vector.h>

-vint8mf8_t test_vsbc_vvm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, vbool64_t v0, size_t vl) {
+vint8mf8_t test_vsbc_vvm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1,
+ vbool64_t v0, size_t vl) {
   return __riscv_vsbc_vvm_i8mf8_tu(vd, vs2, vs1, v0, vl);
 }

-vint8mf8_t test_vsbc_vxm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, vbool64_t v0, size_t vl) {
+vint8mf8_t test_vsbc_vxm_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1,
+ vbool64_t v0, size_t vl) {
   return __riscv_vsbc_vxm_i8mf8_tu(vd, vs2, rs1, v0, vl);
 }

-vint8mf4_t test_vsbc_vvm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, vbool32_t v0, size_t vl) {
+vint8mf4_t test_vsbc_vvm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1,
+ vbool32_t v0, size_t vl) {
   return
__riscv_vsbc_vvm_i8mf4_tu(vd, vs2, vs1, v0, vl); } -vint8mf4_t test_vsbc_vxm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, vbool32_t v0, size_t vl) { +vint8mf4_t test_vsbc_vxm_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + vbool32_t v0, size_t vl) { return __riscv_vsbc_vxm_i8mf4_tu(vd, vs2, rs1, v0, vl); } -vint8mf2_t test_vsbc_vvm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, vbool16_t v0, size_t vl) { +vint8mf2_t test_vsbc_vvm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + vbool16_t v0, size_t vl) { return __riscv_vsbc_vvm_i8mf2_tu(vd, vs2, vs1, v0, vl); } -vint8mf2_t test_vsbc_vxm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, vbool16_t v0, size_t vl) { +vint8mf2_t test_vsbc_vxm_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vsbc_vxm_i8mf2_tu(vd, vs2, rs1, v0, vl); } -vint8m1_t test_vsbc_vvm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, vbool8_t v0, size_t vl) { +vint8m1_t test_vsbc_vvm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + vbool8_t v0, size_t vl) { return __riscv_vsbc_vvm_i8m1_tu(vd, vs2, vs1, v0, vl); } -vint8m1_t test_vsbc_vxm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, vbool8_t v0, size_t vl) { +vint8m1_t test_vsbc_vxm_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vsbc_vxm_i8m1_tu(vd, vs2, rs1, v0, vl); } -vint8m2_t test_vsbc_vvm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, vbool4_t v0, size_t vl) { +vint8m2_t test_vsbc_vvm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + vbool4_t v0, size_t vl) { return __riscv_vsbc_vvm_i8m2_tu(vd, vs2, vs1, v0, vl); } -vint8m2_t test_vsbc_vxm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, vbool4_t v0, size_t vl) { +vint8m2_t test_vsbc_vxm_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + vbool4_t v0, size_t vl) { return __riscv_vsbc_vxm_i8m2_tu(vd, vs2, rs1, v0, vl); } -vint8m4_t test_vsbc_vvm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, vbool2_t v0, size_t vl) { +vint8m4_t test_vsbc_vvm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + vbool2_t v0, size_t vl) { return __riscv_vsbc_vvm_i8m4_tu(vd, vs2, vs1, v0, vl); } -vint8m4_t test_vsbc_vxm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, vbool2_t v0, size_t vl) { +vint8m4_t test_vsbc_vxm_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + vbool2_t v0, size_t vl) { return __riscv_vsbc_vxm_i8m4_tu(vd, vs2, rs1, v0, vl); } -vint8m8_t test_vsbc_vvm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, vbool1_t v0, size_t vl) { +vint8m8_t test_vsbc_vvm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + vbool1_t v0, size_t vl) { return __riscv_vsbc_vvm_i8m8_tu(vd, vs2, vs1, v0, vl); } -vint8m8_t test_vsbc_vxm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, vbool1_t v0, size_t vl) { +vint8m8_t test_vsbc_vxm_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + vbool1_t v0, size_t vl) { return __riscv_vsbc_vxm_i8m8_tu(vd, vs2, rs1, v0, vl); } -vint16mf4_t test_vsbc_vvm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, vbool64_t v0, size_t vl) { +vint16mf4_t test_vsbc_vvm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, vbool64_t v0, size_t vl) { return __riscv_vsbc_vvm_i16mf4_tu(vd, vs2, vs1, v0, vl); } -vint16mf4_t test_vsbc_vxm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, vbool64_t v0, size_t vl) { +vint16mf4_t test_vsbc_vxm_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, vbool64_t v0, size_t vl) { return __riscv_vsbc_vxm_i16mf4_tu(vd, vs2, rs1, v0, vl); } -vint16mf2_t 
test_vsbc_vvm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, vbool32_t v0, size_t vl) { +vint16mf2_t test_vsbc_vvm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, vbool32_t v0, size_t vl) { return __riscv_vsbc_vvm_i16mf2_tu(vd, vs2, vs1, v0, vl); } -vint16mf2_t test_vsbc_vxm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, vbool32_t v0, size_t vl) { +vint16mf2_t test_vsbc_vxm_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, vbool32_t v0, size_t vl) { return __riscv_vsbc_vxm_i16mf2_tu(vd, vs2, rs1, v0, vl); } -vint16m1_t test_vsbc_vvm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, vbool16_t v0, size_t vl) { +vint16m1_t test_vsbc_vvm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + vbool16_t v0, size_t vl) { return __riscv_vsbc_vvm_i16m1_tu(vd, vs2, vs1, v0, vl); } -vint16m1_t test_vsbc_vxm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, vbool16_t v0, size_t vl) { +vint16m1_t test_vsbc_vxm_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vsbc_vxm_i16m1_tu(vd, vs2, rs1, v0, vl); } -vint16m2_t test_vsbc_vvm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, vbool8_t v0, size_t vl) { +vint16m2_t test_vsbc_vvm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + vbool8_t v0, size_t vl) { return __riscv_vsbc_vvm_i16m2_tu(vd, vs2, vs1, v0, vl); } -vint16m2_t test_vsbc_vxm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, vbool8_t v0, size_t vl) { +vint16m2_t test_vsbc_vxm_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vsbc_vxm_i16m2_tu(vd, vs2, rs1, v0, vl); } -vint16m4_t test_vsbc_vvm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, vbool4_t v0, size_t vl) { +vint16m4_t test_vsbc_vvm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + vbool4_t v0, size_t vl) { return __riscv_vsbc_vvm_i16m4_tu(vd, vs2, vs1, v0, vl); } -vint16m4_t test_vsbc_vxm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, vbool4_t v0, size_t vl) { +vint16m4_t test_vsbc_vxm_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + vbool4_t v0, size_t vl) { return __riscv_vsbc_vxm_i16m4_tu(vd, vs2, rs1, v0, vl); } -vint16m8_t test_vsbc_vvm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, vbool2_t v0, size_t vl) { +vint16m8_t test_vsbc_vvm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + vbool2_t v0, size_t vl) { return __riscv_vsbc_vvm_i16m8_tu(vd, vs2, vs1, v0, vl); } -vint16m8_t test_vsbc_vxm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, vbool2_t v0, size_t vl) { +vint16m8_t test_vsbc_vxm_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + vbool2_t v0, size_t vl) { return __riscv_vsbc_vxm_i16m8_tu(vd, vs2, rs1, v0, vl); } -vint32mf2_t test_vsbc_vvm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, vbool64_t v0, size_t vl) { +vint32mf2_t test_vsbc_vvm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, vbool64_t v0, size_t vl) { return __riscv_vsbc_vvm_i32mf2_tu(vd, vs2, vs1, v0, vl); } -vint32mf2_t test_vsbc_vxm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, vbool64_t v0, size_t vl) { +vint32mf2_t test_vsbc_vxm_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, vbool64_t v0, size_t vl) { return __riscv_vsbc_vxm_i32mf2_tu(vd, vs2, rs1, v0, vl); } -vint32m1_t test_vsbc_vvm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, vbool32_t v0, size_t vl) { +vint32m1_t test_vsbc_vvm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + vbool32_t v0, size_t vl) { return 
__riscv_vsbc_vvm_i32m1_tu(vd, vs2, vs1, v0, vl); } -vint32m1_t test_vsbc_vxm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, vbool32_t v0, size_t vl) { +vint32m1_t test_vsbc_vxm_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + vbool32_t v0, size_t vl) { return __riscv_vsbc_vxm_i32m1_tu(vd, vs2, rs1, v0, vl); } -vint32m2_t test_vsbc_vvm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, vbool16_t v0, size_t vl) { +vint32m2_t test_vsbc_vvm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + vbool16_t v0, size_t vl) { return __riscv_vsbc_vvm_i32m2_tu(vd, vs2, vs1, v0, vl); } -vint32m2_t test_vsbc_vxm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, vbool16_t v0, size_t vl) { +vint32m2_t test_vsbc_vxm_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vsbc_vxm_i32m2_tu(vd, vs2, rs1, v0, vl); } -vint32m4_t test_vsbc_vvm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, vbool8_t v0, size_t vl) { +vint32m4_t test_vsbc_vvm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + vbool8_t v0, size_t vl) { return __riscv_vsbc_vvm_i32m4_tu(vd, vs2, vs1, v0, vl); } -vint32m4_t test_vsbc_vxm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, vbool8_t v0, size_t vl) { +vint32m4_t test_vsbc_vxm_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vsbc_vxm_i32m4_tu(vd, vs2, rs1, v0, vl); } -vint32m8_t test_vsbc_vvm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, vbool4_t v0, size_t vl) { +vint32m8_t test_vsbc_vvm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + vbool4_t v0, size_t vl) { return __riscv_vsbc_vvm_i32m8_tu(vd, vs2, vs1, v0, vl); } -vint32m8_t test_vsbc_vxm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, vbool4_t v0, size_t vl) { +vint32m8_t test_vsbc_vxm_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + vbool4_t v0, size_t vl) { return __riscv_vsbc_vxm_i32m8_tu(vd, vs2, rs1, v0, vl); } -vint64m1_t test_vsbc_vvm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, vbool64_t v0, size_t vl) { +vint64m1_t test_vsbc_vvm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + vbool64_t v0, size_t vl) { return __riscv_vsbc_vvm_i64m1_tu(vd, vs2, vs1, v0, vl); } -vint64m1_t test_vsbc_vxm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, vbool64_t v0, size_t vl) { +vint64m1_t test_vsbc_vxm_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + vbool64_t v0, size_t vl) { return __riscv_vsbc_vxm_i64m1_tu(vd, vs2, rs1, v0, vl); } -vint64m2_t test_vsbc_vvm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, vbool32_t v0, size_t vl) { +vint64m2_t test_vsbc_vvm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + vbool32_t v0, size_t vl) { return __riscv_vsbc_vvm_i64m2_tu(vd, vs2, vs1, v0, vl); } -vint64m2_t test_vsbc_vxm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, vbool32_t v0, size_t vl) { +vint64m2_t test_vsbc_vxm_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + vbool32_t v0, size_t vl) { return __riscv_vsbc_vxm_i64m2_tu(vd, vs2, rs1, v0, vl); } -vint64m4_t test_vsbc_vvm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, vbool16_t v0, size_t vl) { +vint64m4_t test_vsbc_vvm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + vbool16_t v0, size_t vl) { return __riscv_vsbc_vvm_i64m4_tu(vd, vs2, vs1, v0, vl); } -vint64m4_t test_vsbc_vxm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, vbool16_t v0, size_t vl) { +vint64m4_t test_vsbc_vxm_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + vbool16_t v0, size_t vl) 
{ return __riscv_vsbc_vxm_i64m4_tu(vd, vs2, rs1, v0, vl); } -vint64m8_t test_vsbc_vvm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, vbool8_t v0, size_t vl) { +vint64m8_t test_vsbc_vvm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + vbool8_t v0, size_t vl) { return __riscv_vsbc_vvm_i64m8_tu(vd, vs2, vs1, v0, vl); } -vint64m8_t test_vsbc_vxm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, vbool8_t v0, size_t vl) { +vint64m8_t test_vsbc_vxm_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vsbc_vxm_i64m8_tu(vd, vs2, rs1, v0, vl); } -vuint8mf8_t test_vsbc_vvm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, vbool64_t v0, size_t vl) { +vuint8mf8_t test_vsbc_vvm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, vbool64_t v0, size_t vl) { return __riscv_vsbc_vvm_u8mf8_tu(vd, vs2, vs1, v0, vl); } -vuint8mf8_t test_vsbc_vxm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, vbool64_t v0, size_t vl) { +vuint8mf8_t test_vsbc_vxm_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + vbool64_t v0, size_t vl) { return __riscv_vsbc_vxm_u8mf8_tu(vd, vs2, rs1, v0, vl); } -vuint8mf4_t test_vsbc_vvm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, vbool32_t v0, size_t vl) { +vuint8mf4_t test_vsbc_vvm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, vbool32_t v0, size_t vl) { return __riscv_vsbc_vvm_u8mf4_tu(vd, vs2, vs1, v0, vl); } -vuint8mf4_t test_vsbc_vxm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, vbool32_t v0, size_t vl) { +vuint8mf4_t test_vsbc_vxm_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + vbool32_t v0, size_t vl) { return __riscv_vsbc_vxm_u8mf4_tu(vd, vs2, rs1, v0, vl); } -vuint8mf2_t test_vsbc_vvm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, vbool16_t v0, size_t vl) { +vuint8mf2_t test_vsbc_vvm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, vbool16_t v0, size_t vl) { return __riscv_vsbc_vvm_u8mf2_tu(vd, vs2, vs1, v0, vl); } -vuint8mf2_t test_vsbc_vxm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, vbool16_t v0, size_t vl) { +vuint8mf2_t test_vsbc_vxm_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + vbool16_t v0, size_t vl) { return __riscv_vsbc_vxm_u8mf2_tu(vd, vs2, rs1, v0, vl); } -vuint8m1_t test_vsbc_vvm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, vbool8_t v0, size_t vl) { +vuint8m1_t test_vsbc_vvm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + vbool8_t v0, size_t vl) { return __riscv_vsbc_vvm_u8m1_tu(vd, vs2, vs1, v0, vl); } -vuint8m1_t test_vsbc_vxm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, vbool8_t v0, size_t vl) { +vuint8m1_t test_vsbc_vxm_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + vbool8_t v0, size_t vl) { return __riscv_vsbc_vxm_u8m1_tu(vd, vs2, rs1, v0, vl); } -vuint8m2_t test_vsbc_vvm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, vbool4_t v0, size_t vl) { +vuint8m2_t test_vsbc_vvm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + vbool4_t v0, size_t vl) { return __riscv_vsbc_vvm_u8m2_tu(vd, vs2, vs1, v0, vl); } -vuint8m2_t test_vsbc_vxm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, vbool4_t v0, size_t vl) { +vuint8m2_t test_vsbc_vxm_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + vbool4_t v0, size_t vl) { return __riscv_vsbc_vxm_u8m2_tu(vd, vs2, rs1, v0, vl); } -vuint8m4_t test_vsbc_vvm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, vbool2_t v0, size_t vl) { +vuint8m4_t test_vsbc_vvm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, 
vuint8m4_t vs1, + vbool2_t v0, size_t vl) { return __riscv_vsbc_vvm_u8m4_tu(vd, vs2, vs1, v0, vl); } -vuint8m4_t test_vsbc_vxm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, vbool2_t v0, size_t vl) { +vuint8m4_t test_vsbc_vxm_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + vbool2_t v0, size_t vl) { return __riscv_vsbc_vxm_u8m4_tu(vd, vs2, rs1, v0, vl); } -vuint8m8_t test_vsbc_vvm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, vbool1_t v0, size_t vl) { +vuint8m8_t test_vsbc_vvm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + vbool1_t v0, size_t vl) { return __riscv_vsbc_vvm_u8m8_tu(vd, vs2, vs1, v0, vl); } -vuint8m8_t test_vsbc_vxm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, vbool1_t v0, size_t vl) { +vuint8m8_t test_vsbc_vxm_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + vbool1_t v0, size_t vl) { return __riscv_vsbc_vxm_u8m8_tu(vd, vs2, rs1, v0, vl); } -vuint16mf4_t test_vsbc_vvm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, vbool64_t v0, size_t vl) { +vuint16mf4_t test_vsbc_vvm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, vbool64_t v0, + size_t vl) { return __riscv_vsbc_vvm_u16mf4_tu(vd, vs2, vs1, v0, vl); } -vuint16mf4_t test_vsbc_vxm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, vbool64_t v0, size_t vl) { +vuint16mf4_t test_vsbc_vxm_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, vbool64_t v0, size_t vl) { return __riscv_vsbc_vxm_u16mf4_tu(vd, vs2, rs1, v0, vl); } -vuint16mf2_t test_vsbc_vvm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, vbool32_t v0, size_t vl) { +vuint16mf2_t test_vsbc_vvm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, vbool32_t v0, + size_t vl) { return __riscv_vsbc_vvm_u16mf2_tu(vd, vs2, vs1, v0, vl); } -vuint16mf2_t test_vsbc_vxm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, vbool32_t v0, size_t vl) { +vuint16mf2_t test_vsbc_vxm_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, vbool32_t v0, size_t vl) { return __riscv_vsbc_vxm_u16mf2_tu(vd, vs2, rs1, v0, vl); } -vuint16m1_t test_vsbc_vvm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, vbool16_t v0, size_t vl) { +vuint16m1_t test_vsbc_vvm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, vbool16_t v0, size_t vl) { return __riscv_vsbc_vvm_u16m1_tu(vd, vs2, vs1, v0, vl); } -vuint16m1_t test_vsbc_vxm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, vbool16_t v0, size_t vl) { +vuint16m1_t test_vsbc_vxm_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, vbool16_t v0, size_t vl) { return __riscv_vsbc_vxm_u16m1_tu(vd, vs2, rs1, v0, vl); } -vuint16m2_t test_vsbc_vvm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, vbool8_t v0, size_t vl) { +vuint16m2_t test_vsbc_vvm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, vbool8_t v0, size_t vl) { return __riscv_vsbc_vvm_u16m2_tu(vd, vs2, vs1, v0, vl); } -vuint16m2_t test_vsbc_vxm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, vbool8_t v0, size_t vl) { +vuint16m2_t test_vsbc_vxm_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, vbool8_t v0, size_t vl) { return __riscv_vsbc_vxm_u16m2_tu(vd, vs2, rs1, v0, vl); } -vuint16m4_t test_vsbc_vvm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, vbool4_t v0, size_t vl) { +vuint16m4_t test_vsbc_vvm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, vbool4_t v0, size_t vl) { return __riscv_vsbc_vvm_u16m4_tu(vd, vs2, vs1, v0, vl); } -vuint16m4_t test_vsbc_vxm_u16m4_tu(vuint16m4_t vd, 
vuint16m4_t vs2, uint16_t rs1, vbool4_t v0, size_t vl) { +vuint16m4_t test_vsbc_vxm_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, vbool4_t v0, size_t vl) { return __riscv_vsbc_vxm_u16m4_tu(vd, vs2, rs1, v0, vl); } -vuint16m8_t test_vsbc_vvm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, vbool2_t v0, size_t vl) { +vuint16m8_t test_vsbc_vvm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, vbool2_t v0, size_t vl) { return __riscv_vsbc_vvm_u16m8_tu(vd, vs2, vs1, v0, vl); } -vuint16m8_t test_vsbc_vxm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, vbool2_t v0, size_t vl) { +vuint16m8_t test_vsbc_vxm_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, vbool2_t v0, size_t vl) { return __riscv_vsbc_vxm_u16m8_tu(vd, vs2, rs1, v0, vl); } -vuint32mf2_t test_vsbc_vvm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, vbool64_t v0, size_t vl) { +vuint32mf2_t test_vsbc_vvm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, vbool64_t v0, + size_t vl) { return __riscv_vsbc_vvm_u32mf2_tu(vd, vs2, vs1, v0, vl); } -vuint32mf2_t test_vsbc_vxm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, vbool64_t v0, size_t vl) { +vuint32mf2_t test_vsbc_vxm_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, vbool64_t v0, size_t vl) { return __riscv_vsbc_vxm_u32mf2_tu(vd, vs2, rs1, v0, vl); } -vuint32m1_t test_vsbc_vvm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, vbool32_t v0, size_t vl) { +vuint32m1_t test_vsbc_vvm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, vbool32_t v0, size_t vl) { return __riscv_vsbc_vvm_u32m1_tu(vd, vs2, vs1, v0, vl); } -vuint32m1_t test_vsbc_vxm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, vbool32_t v0, size_t vl) { +vuint32m1_t test_vsbc_vxm_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, vbool32_t v0, size_t vl) { return __riscv_vsbc_vxm_u32m1_tu(vd, vs2, rs1, v0, vl); } -vuint32m2_t test_vsbc_vvm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, vbool16_t v0, size_t vl) { +vuint32m2_t test_vsbc_vvm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, vbool16_t v0, size_t vl) { return __riscv_vsbc_vvm_u32m2_tu(vd, vs2, vs1, v0, vl); } -vuint32m2_t test_vsbc_vxm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, vbool16_t v0, size_t vl) { +vuint32m2_t test_vsbc_vxm_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, vbool16_t v0, size_t vl) { return __riscv_vsbc_vxm_u32m2_tu(vd, vs2, rs1, v0, vl); } -vuint32m4_t test_vsbc_vvm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, vbool8_t v0, size_t vl) { +vuint32m4_t test_vsbc_vvm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, vbool8_t v0, size_t vl) { return __riscv_vsbc_vvm_u32m4_tu(vd, vs2, vs1, v0, vl); } -vuint32m4_t test_vsbc_vxm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, vbool8_t v0, size_t vl) { +vuint32m4_t test_vsbc_vxm_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, vbool8_t v0, size_t vl) { return __riscv_vsbc_vxm_u32m4_tu(vd, vs2, rs1, v0, vl); } -vuint32m8_t test_vsbc_vvm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, vbool4_t v0, size_t vl) { +vuint32m8_t test_vsbc_vvm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, vbool4_t v0, size_t vl) { return __riscv_vsbc_vvm_u32m8_tu(vd, vs2, vs1, v0, vl); } -vuint32m8_t test_vsbc_vxm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, vbool4_t v0, size_t vl) { +vuint32m8_t test_vsbc_vxm_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, 
vbool4_t v0, size_t vl) {
   return __riscv_vsbc_vxm_u32m8_tu(vd, vs2, rs1, v0, vl);
 }

-vuint64m1_t test_vsbc_vvm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, vbool64_t v0, size_t vl) {
+vuint64m1_t test_vsbc_vvm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+ vuint64m1_t vs1, vbool64_t v0, size_t vl) {
   return __riscv_vsbc_vvm_u64m1_tu(vd, vs2, vs1, v0, vl);
 }

-vuint64m1_t test_vsbc_vxm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, vbool64_t v0, size_t vl) {
+vuint64m1_t test_vsbc_vxm_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+ uint64_t rs1, vbool64_t v0, size_t vl) {
   return __riscv_vsbc_vxm_u64m1_tu(vd, vs2, rs1, v0, vl);
 }

-vuint64m2_t test_vsbc_vvm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, vbool32_t v0, size_t vl) {
+vuint64m2_t test_vsbc_vvm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+ vuint64m2_t vs1, vbool32_t v0, size_t vl) {
   return __riscv_vsbc_vvm_u64m2_tu(vd, vs2, vs1, v0, vl);
 }

-vuint64m2_t test_vsbc_vxm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, vbool32_t v0, size_t vl) {
+vuint64m2_t test_vsbc_vxm_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+ uint64_t rs1, vbool32_t v0, size_t vl) {
   return __riscv_vsbc_vxm_u64m2_tu(vd, vs2, rs1, v0, vl);
 }

-vuint64m4_t test_vsbc_vvm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, vbool16_t v0, size_t vl) {
+vuint64m4_t test_vsbc_vvm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+ vuint64m4_t vs1, vbool16_t v0, size_t vl) {
   return __riscv_vsbc_vvm_u64m4_tu(vd, vs2, vs1, v0, vl);
 }

-vuint64m4_t test_vsbc_vxm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, vbool16_t v0, size_t vl) {
+vuint64m4_t test_vsbc_vxm_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+ uint64_t rs1, vbool16_t v0, size_t vl) {
   return __riscv_vsbc_vxm_u64m4_tu(vd, vs2, rs1, v0, vl);
 }

-vuint64m8_t test_vsbc_vvm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, vbool8_t v0, size_t vl) {
+vuint64m8_t test_vsbc_vvm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+ vuint64m8_t vs1, vbool8_t v0, size_t vl) {
   return __riscv_vsbc_vvm_u64m8_tu(vd, vs2, vs1, v0, vl);
 }

-vuint64m8_t test_vsbc_vxm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, vbool8_t v0, size_t vl) {
+vuint64m8_t test_vsbc_vxm_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+ uint64_t rs1, vbool8_t v0, size_t vl) {
   return __riscv_vsbc_vxm_u64m8_tu(vd, vs2, rs1, v0, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsetvl.c b/auto-generated/policy_funcs/llvm-api-tests/vsetvl.c
index 0e7b7dda7..994e9aa38 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vsetvl.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vsetvl.c
@@ -5,90 +5,46 @@
 #include <riscv_vector.h>

-size_t test_vsetvl_e8mf8(size_t avl) {
-  return __riscv_vsetvl_e8mf8(avl);
-}
+size_t test_vsetvl_e8mf8(size_t avl) { return __riscv_vsetvl_e8mf8(avl); }

-size_t test_vsetvl_e8mf4(size_t avl) {
-  return __riscv_vsetvl_e8mf4(avl);
-}
+size_t test_vsetvl_e8mf4(size_t avl) { return __riscv_vsetvl_e8mf4(avl); }

-size_t test_vsetvl_e8mf2(size_t avl) {
-  return __riscv_vsetvl_e8mf2(avl);
-}
+size_t test_vsetvl_e8mf2(size_t avl) { return __riscv_vsetvl_e8mf2(avl); }

-size_t test_vsetvl_e8m1(size_t avl) {
-  return __riscv_vsetvl_e8m1(avl);
-}
+size_t test_vsetvl_e8m1(size_t avl) { return __riscv_vsetvl_e8m1(avl); }

-size_t test_vsetvl_e8m2(size_t avl) {
-  return __riscv_vsetvl_e8m2(avl);
-}
+size_t test_vsetvl_e8m2(size_t avl) { return __riscv_vsetvl_e8m2(avl); }

-size_t test_vsetvl_e8m4(size_t avl) {
-  return __riscv_vsetvl_e8m4(avl);
-}
+size_t test_vsetvl_e8m4(size_t avl) { return __riscv_vsetvl_e8m4(avl); }

-size_t test_vsetvl_e8m8(size_t avl) {
-  return __riscv_vsetvl_e8m8(avl);
-}
+size_t test_vsetvl_e8m8(size_t avl) { return __riscv_vsetvl_e8m8(avl); }

-size_t test_vsetvl_e16mf4(size_t avl) {
-  return __riscv_vsetvl_e16mf4(avl);
-}
+size_t test_vsetvl_e16mf4(size_t avl) { return __riscv_vsetvl_e16mf4(avl); }

-size_t test_vsetvl_e16mf2(size_t avl) {
-  return __riscv_vsetvl_e16mf2(avl);
-}
+size_t test_vsetvl_e16mf2(size_t avl) { return __riscv_vsetvl_e16mf2(avl); }

-size_t test_vsetvl_e16m1(size_t avl) {
-  return __riscv_vsetvl_e16m1(avl);
-}
+size_t test_vsetvl_e16m1(size_t avl) { return __riscv_vsetvl_e16m1(avl); }

-size_t test_vsetvl_e16m2(size_t avl) {
-  return __riscv_vsetvl_e16m2(avl);
-}
+size_t test_vsetvl_e16m2(size_t avl) { return __riscv_vsetvl_e16m2(avl); }

-size_t test_vsetvl_e16m4(size_t avl) {
-  return __riscv_vsetvl_e16m4(avl);
-}
+size_t test_vsetvl_e16m4(size_t avl) { return __riscv_vsetvl_e16m4(avl); }

-size_t test_vsetvl_e16m8(size_t avl) {
-  return __riscv_vsetvl_e16m8(avl);
-}
+size_t test_vsetvl_e16m8(size_t avl) { return __riscv_vsetvl_e16m8(avl); }

-size_t test_vsetvl_e32mf2(size_t avl) {
-  return __riscv_vsetvl_e32mf2(avl);
-}
+size_t test_vsetvl_e32mf2(size_t avl) { return __riscv_vsetvl_e32mf2(avl); }

-size_t test_vsetvl_e32m1(size_t avl) {
-  return __riscv_vsetvl_e32m1(avl);
-}
+size_t test_vsetvl_e32m1(size_t avl) { return __riscv_vsetvl_e32m1(avl); }

-size_t test_vsetvl_e32m2(size_t avl) {
-  return __riscv_vsetvl_e32m2(avl);
-}
+size_t test_vsetvl_e32m2(size_t avl) { return __riscv_vsetvl_e32m2(avl); }

-size_t test_vsetvl_e32m4(size_t avl) {
-  return __riscv_vsetvl_e32m4(avl);
-}
+size_t test_vsetvl_e32m4(size_t avl) { return __riscv_vsetvl_e32m4(avl); }

-size_t test_vsetvl_e32m8(size_t avl) {
-  return __riscv_vsetvl_e32m8(avl);
-}
+size_t test_vsetvl_e32m8(size_t avl) { return __riscv_vsetvl_e32m8(avl); }

-size_t test_vsetvl_e64m1(size_t avl) {
-  return __riscv_vsetvl_e64m1(avl);
-}
+size_t test_vsetvl_e64m1(size_t avl) { return __riscv_vsetvl_e64m1(avl); }

-size_t test_vsetvl_e64m2(size_t avl) {
-  return __riscv_vsetvl_e64m2(avl);
-}
+size_t test_vsetvl_e64m2(size_t avl) { return __riscv_vsetvl_e64m2(avl); }

-size_t test_vsetvl_e64m4(size_t avl) {
-  return __riscv_vsetvl_e64m4(avl);
-}
+size_t test_vsetvl_e64m4(size_t avl) { return __riscv_vsetvl_e64m4(avl); }

-size_t test_vsetvl_e64m8(size_t avl) {
-  return __riscv_vsetvl_e64m8(avl);
-}
+size_t test_vsetvl_e64m8(size_t avl) { return __riscv_vsetvl_e64m8(avl); }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsetvlmax.c b/auto-generated/policy_funcs/llvm-api-tests/vsetvlmax.c
index 77eb6af38..b5d48608b 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vsetvlmax.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vsetvlmax.c
@@ -5,90 +5,46 @@
 #include <riscv_vector.h>

-size_t test_vsetvlmax_e8mf8() {
-  return __riscv_vsetvlmax_e8mf8();
-}
+size_t test_vsetvlmax_e8mf8() { return __riscv_vsetvlmax_e8mf8(); }

-size_t test_vsetvlmax_e8mf4() {
-  return __riscv_vsetvlmax_e8mf4();
-}
+size_t test_vsetvlmax_e8mf4() { return __riscv_vsetvlmax_e8mf4(); }

-size_t test_vsetvlmax_e8mf2() {
-  return __riscv_vsetvlmax_e8mf2();
-}
+size_t test_vsetvlmax_e8mf2() { return __riscv_vsetvlmax_e8mf2(); }

-size_t test_vsetvlmax_e8m1() {
-  return __riscv_vsetvlmax_e8m1();
-}
+size_t test_vsetvlmax_e8m1() { return __riscv_vsetvlmax_e8m1(); }

-size_t test_vsetvlmax_e8m2() {
-  return __riscv_vsetvlmax_e8m2();
-}
+size_t test_vsetvlmax_e8m2() { return __riscv_vsetvlmax_e8m2(); }

-size_t test_vsetvlmax_e8m4() {
-  return __riscv_vsetvlmax_e8m4();
-}
+size_t test_vsetvlmax_e8m4() { return __riscv_vsetvlmax_e8m4(); }

-size_t test_vsetvlmax_e8m8() {
-  return __riscv_vsetvlmax_e8m8();
-}
+size_t test_vsetvlmax_e8m8() { return __riscv_vsetvlmax_e8m8(); }

-size_t test_vsetvlmax_e16mf4() {
-  return __riscv_vsetvlmax_e16mf4();
-}
+size_t test_vsetvlmax_e16mf4() { return __riscv_vsetvlmax_e16mf4(); }

-size_t test_vsetvlmax_e16mf2() {
-  return __riscv_vsetvlmax_e16mf2();
-}
+size_t test_vsetvlmax_e16mf2() { return __riscv_vsetvlmax_e16mf2(); }

-size_t test_vsetvlmax_e16m1() {
-  return __riscv_vsetvlmax_e16m1();
-}
+size_t test_vsetvlmax_e16m1() { return __riscv_vsetvlmax_e16m1(); }

-size_t test_vsetvlmax_e16m2() {
-  return __riscv_vsetvlmax_e16m2();
-}
+size_t test_vsetvlmax_e16m2() { return __riscv_vsetvlmax_e16m2(); }

-size_t test_vsetvlmax_e16m4() {
-  return __riscv_vsetvlmax_e16m4();
-}
+size_t test_vsetvlmax_e16m4() { return __riscv_vsetvlmax_e16m4(); }

-size_t test_vsetvlmax_e16m8() {
-  return __riscv_vsetvlmax_e16m8();
-}
+size_t test_vsetvlmax_e16m8() { return __riscv_vsetvlmax_e16m8(); }

-size_t test_vsetvlmax_e32mf2() {
-  return __riscv_vsetvlmax_e32mf2();
-}
+size_t test_vsetvlmax_e32mf2() { return __riscv_vsetvlmax_e32mf2(); }

-size_t test_vsetvlmax_e32m1() {
-  return __riscv_vsetvlmax_e32m1();
-}
+size_t test_vsetvlmax_e32m1() { return __riscv_vsetvlmax_e32m1(); }

-size_t test_vsetvlmax_e32m2() {
-  return __riscv_vsetvlmax_e32m2();
-}
+size_t test_vsetvlmax_e32m2() { return __riscv_vsetvlmax_e32m2(); }

-size_t test_vsetvlmax_e32m4() {
-  return __riscv_vsetvlmax_e32m4();
-}
+size_t test_vsetvlmax_e32m4() { return __riscv_vsetvlmax_e32m4(); }

-size_t test_vsetvlmax_e32m8() {
-  return __riscv_vsetvlmax_e32m8();
-}
+size_t test_vsetvlmax_e32m8() { return __riscv_vsetvlmax_e32m8(); }

-size_t test_vsetvlmax_e64m1() {
-  return __riscv_vsetvlmax_e64m1();
-}
+size_t test_vsetvlmax_e64m1() { return __riscv_vsetvlmax_e64m1(); }

-size_t test_vsetvlmax_e64m2() {
-  return __riscv_vsetvlmax_e64m2();
-}
+size_t test_vsetvlmax_e64m2() { return __riscv_vsetvlmax_e64m2(); }

-size_t test_vsetvlmax_e64m4() {
-  return __riscv_vsetvlmax_e64m4();
-}
+size_t test_vsetvlmax_e64m4() { return __riscv_vsetvlmax_e64m4(); }

-size_t test_vsetvlmax_e64m8() {
-  return __riscv_vsetvlmax_e64m8();
-}
+size_t test_vsetvlmax_e64m8() { return __riscv_vsetvlmax_e64m8(); }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsext_vf2.c b/auto-generated/policy_funcs/llvm-api-tests/vsext_vf2.c
index 9d5fa2acb..2877bd461 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vsext_vf2.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vsext_vf2.c
@@ -5,11 +5,13 @@
 #include <riscv_vector.h>

-vint16mf4_t test_vsext_vf2_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, size_t vl) {
+vint16mf4_t test_vsext_vf2_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i16mf4_tu(vd, vs2, vl);
 }

-vint16mf2_t test_vsext_vf2_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, size_t vl) {
+vint16mf2_t test_vsext_vf2_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i16mf2_tu(vd, vs2, vl);
 }

@@ -29,7 +31,8 @@ vint16m8_t test_vsext_vf2_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, size_t vl) {
   return __riscv_vsext_vf2_i16m8_tu(vd, vs2, vl);
 }

-vint32mf2_t test_vsext_vf2_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, size_t vl) {
+vint32mf2_t test_vsext_vf2_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i32mf2_tu(vd, vs2, vl);
 }

@@ -65,182 +68,227 @@ vint64m8_t test_vsext_vf2_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, size_t vl) {
   return __riscv_vsext_vf2_i64m8_tu(vd, vs2, vl);
 }

-vint16mf4_t test_vsext_vf2_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, size_t vl) {
+vint16mf4_t test_vsext_vf2_i16mf4_tum(vbool64_t vm, vint16mf4_t vd,
+ vint8mf8_t vs2, size_t vl) {
   return __riscv_vsext_vf2_i16mf4_tum(vm, vd, vs2, vl);
 }

-vint16mf2_t test_vsext_vf2_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, size_t vl) {
+vint16mf2_t test_vsext_vf2_i16mf2_tum(vbool32_t vm, vint16mf2_t vd,
+ vint8mf4_t vs2, size_t vl) {
   return __riscv_vsext_vf2_i16mf2_tum(vm, vd, vs2, vl);
 }

-vint16m1_t test_vsext_vf2_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, size_t vl) {
+vint16m1_t test_vsext_vf2_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i16m1_tum(vm, vd, vs2, vl);
 }

-vint16m2_t test_vsext_vf2_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, size_t vl) {
+vint16m2_t test_vsext_vf2_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i16m2_tum(vm, vd, vs2, vl);
 }

-vint16m4_t test_vsext_vf2_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, size_t vl) {
+vint16m4_t test_vsext_vf2_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i16m4_tum(vm, vd, vs2, vl);
 }

-vint16m8_t test_vsext_vf2_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, size_t vl) {
+vint16m8_t test_vsext_vf2_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i16m8_tum(vm, vd, vs2, vl);
 }

-vint32mf2_t test_vsext_vf2_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, size_t vl) {
+vint32mf2_t test_vsext_vf2_i32mf2_tum(vbool64_t vm, vint32mf2_t vd,
+ vint16mf4_t vs2, size_t vl) {
   return __riscv_vsext_vf2_i32mf2_tum(vm, vd, vs2, vl);
 }

-vint32m1_t test_vsext_vf2_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, size_t vl) {
+vint32m1_t test_vsext_vf2_i32m1_tum(vbool32_t vm, vint32m1_t vd,
+ vint16mf2_t vs2, size_t vl) {
   return __riscv_vsext_vf2_i32m1_tum(vm, vd, vs2, vl);
 }

-vint32m2_t test_vsext_vf2_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, size_t vl) {
+vint32m2_t test_vsext_vf2_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i32m2_tum(vm, vd, vs2, vl);
 }

-vint32m4_t test_vsext_vf2_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, size_t vl) {
+vint32m4_t test_vsext_vf2_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i32m4_tum(vm, vd, vs2, vl);
 }

-vint32m8_t test_vsext_vf2_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, size_t vl) {
+vint32m8_t test_vsext_vf2_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i32m8_tum(vm, vd, vs2, vl);
 }

-vint64m1_t test_vsext_vf2_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, size_t vl) {
+vint64m1_t test_vsext_vf2_i64m1_tum(vbool64_t vm, vint64m1_t vd,
+ vint32mf2_t vs2, size_t vl) {
   return __riscv_vsext_vf2_i64m1_tum(vm, vd, vs2, vl);
 }

-vint64m2_t test_vsext_vf2_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, size_t vl) {
+vint64m2_t test_vsext_vf2_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i64m2_tum(vm, vd, vs2, vl);
 }

-vint64m4_t test_vsext_vf2_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, size_t vl) {
+vint64m4_t test_vsext_vf2_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2,
+ size_t vl) {
   return __riscv_vsext_vf2_i64m4_tum(vm, vd, vs2, vl);
 }
-vint64m8_t test_vsext_vf2_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vsext_vf2_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + size_t vl) { return __riscv_vsext_vf2_i64m8_tum(vm, vd, vs2, vl); } -vint16mf4_t test_vsext_vf2_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vsext_vf2_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, size_t vl) { return __riscv_vsext_vf2_i16mf4_tumu(vm, vd, vs2, vl); } -vint16mf2_t test_vsext_vf2_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vsext_vf2_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, size_t vl) { return __riscv_vsext_vf2_i16mf2_tumu(vm, vd, vs2, vl); } -vint16m1_t test_vsext_vf2_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vsext_vf2_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs2, size_t vl) { return __riscv_vsext_vf2_i16m1_tumu(vm, vd, vs2, vl); } -vint16m2_t test_vsext_vf2_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vsext_vf2_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + size_t vl) { return __riscv_vsext_vf2_i16m2_tumu(vm, vd, vs2, vl); } -vint16m4_t test_vsext_vf2_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vsext_vf2_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + size_t vl) { return __riscv_vsext_vf2_i16m4_tumu(vm, vd, vs2, vl); } -vint16m8_t test_vsext_vf2_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vsext_vf2_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + size_t vl) { return __riscv_vsext_vf2_i16m8_tumu(vm, vd, vs2, vl); } -vint32mf2_t test_vsext_vf2_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vsext_vf2_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vsext_vf2_i32mf2_tumu(vm, vd, vs2, vl); } -vint32m1_t test_vsext_vf2_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vsext_vf2_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vsext_vf2_i32m1_tumu(vm, vd, vs2, vl); } -vint32m2_t test_vsext_vf2_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vsext_vf2_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vsext_vf2_i32m2_tumu(vm, vd, vs2, vl); } -vint32m4_t test_vsext_vf2_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vsext_vf2_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + size_t vl) { return __riscv_vsext_vf2_i32m4_tumu(vm, vd, vs2, vl); } -vint32m8_t test_vsext_vf2_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vsext_vf2_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + size_t vl) { return __riscv_vsext_vf2_i32m8_tumu(vm, vd, vs2, vl); } -vint64m1_t test_vsext_vf2_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vsext_vf2_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vsext_vf2_i64m1_tumu(vm, vd, vs2, vl); } -vint64m2_t test_vsext_vf2_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vsext_vf2_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vsext_vf2_i64m2_tumu(vm, vd, vs2, vl); } -vint64m4_t test_vsext_vf2_i64m4_tumu(vbool16_t 
vm, vint64m4_t vd, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vsext_vf2_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vsext_vf2_i64m4_tumu(vm, vd, vs2, vl); } -vint64m8_t test_vsext_vf2_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vsext_vf2_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + size_t vl) { return __riscv_vsext_vf2_i64m8_tumu(vm, vd, vs2, vl); } -vint16mf4_t test_vsext_vf2_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vsext_vf2_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, size_t vl) { return __riscv_vsext_vf2_i16mf4_mu(vm, vd, vs2, vl); } -vint16mf2_t test_vsext_vf2_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vsext_vf2_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, size_t vl) { return __riscv_vsext_vf2_i16mf2_mu(vm, vd, vs2, vl); } -vint16m1_t test_vsext_vf2_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vsext_vf2_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + size_t vl) { return __riscv_vsext_vf2_i16m1_mu(vm, vd, vs2, vl); } -vint16m2_t test_vsext_vf2_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vsext_vf2_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + size_t vl) { return __riscv_vsext_vf2_i16m2_mu(vm, vd, vs2, vl); } -vint16m4_t test_vsext_vf2_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vsext_vf2_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + size_t vl) { return __riscv_vsext_vf2_i16m4_mu(vm, vd, vs2, vl); } -vint16m8_t test_vsext_vf2_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vsext_vf2_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + size_t vl) { return __riscv_vsext_vf2_i16m8_mu(vm, vd, vs2, vl); } -vint32mf2_t test_vsext_vf2_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vsext_vf2_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vsext_vf2_i32mf2_mu(vm, vd, vs2, vl); } -vint32m1_t test_vsext_vf2_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vsext_vf2_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + size_t vl) { return __riscv_vsext_vf2_i32m1_mu(vm, vd, vs2, vl); } -vint32m2_t test_vsext_vf2_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vsext_vf2_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + size_t vl) { return __riscv_vsext_vf2_i32m2_mu(vm, vd, vs2, vl); } -vint32m4_t test_vsext_vf2_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vsext_vf2_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + size_t vl) { return __riscv_vsext_vf2_i32m4_mu(vm, vd, vs2, vl); } -vint32m8_t test_vsext_vf2_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vsext_vf2_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + size_t vl) { return __riscv_vsext_vf2_i32m8_mu(vm, vd, vs2, vl); } -vint64m1_t test_vsext_vf2_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vsext_vf2_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + size_t vl) { return __riscv_vsext_vf2_i64m1_mu(vm, vd, vs2, vl); } -vint64m2_t test_vsext_vf2_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vsext_vf2_i64m2_mu(vbool32_t vm, vint64m2_t vd, 
vint32m1_t vs2, + size_t vl) { return __riscv_vsext_vf2_i64m2_mu(vm, vd, vs2, vl); } -vint64m4_t test_vsext_vf2_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vsext_vf2_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + size_t vl) { return __riscv_vsext_vf2_i64m4_mu(vm, vd, vs2, vl); } -vint64m8_t test_vsext_vf2_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vsext_vf2_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + size_t vl) { return __riscv_vsext_vf2_i64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsext_vf4.c b/auto-generated/policy_funcs/llvm-api-tests/vsext_vf4.c index 129c5d1b7..0162db44a 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vsext_vf4.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vsext_vf4.c @@ -5,7 +5,8 @@ #include -vint32mf2_t test_vsext_vf4_i32mf2_tu(vint32mf2_t vd, vint8mf8_t vs2, size_t vl) { +vint32mf2_t test_vsext_vf4_i32mf2_tu(vint32mf2_t vd, vint8mf8_t vs2, + size_t vl) { return __riscv_vsext_vf4_i32mf2_tu(vd, vs2, vl); } @@ -41,110 +42,137 @@ vint64m8_t test_vsext_vf4_i64m8_tu(vint64m8_t vd, vint16m2_t vs2, size_t vl) { return __riscv_vsext_vf4_i64m8_tu(vd, vs2, vl); } -vint32mf2_t test_vsext_vf4_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint8mf8_t vs2, size_t vl) { +vint32mf2_t test_vsext_vf4_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint8mf8_t vs2, size_t vl) { return __riscv_vsext_vf4_i32mf2_tum(vm, vd, vs2, vl); } -vint32m1_t test_vsext_vf4_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint8mf4_t vs2, size_t vl) { +vint32m1_t test_vsext_vf4_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint8mf4_t vs2, + size_t vl) { return __riscv_vsext_vf4_i32m1_tum(vm, vd, vs2, vl); } -vint32m2_t test_vsext_vf4_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint8mf2_t vs2, size_t vl) { +vint32m2_t test_vsext_vf4_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint8mf2_t vs2, + size_t vl) { return __riscv_vsext_vf4_i32m2_tum(vm, vd, vs2, vl); } -vint32m4_t test_vsext_vf4_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint8m1_t vs2, size_t vl) { +vint32m4_t test_vsext_vf4_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint8m1_t vs2, + size_t vl) { return __riscv_vsext_vf4_i32m4_tum(vm, vd, vs2, vl); } -vint32m8_t test_vsext_vf4_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint8m2_t vs2, size_t vl) { +vint32m8_t test_vsext_vf4_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint8m2_t vs2, + size_t vl) { return __riscv_vsext_vf4_i32m8_tum(vm, vd, vs2, vl); } -vint64m1_t test_vsext_vf4_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint16mf4_t vs2, size_t vl) { +vint64m1_t test_vsext_vf4_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vsext_vf4_i64m1_tum(vm, vd, vs2, vl); } -vint64m2_t test_vsext_vf4_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint16mf2_t vs2, size_t vl) { +vint64m2_t test_vsext_vf4_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vsext_vf4_i64m2_tum(vm, vd, vs2, vl); } -vint64m4_t test_vsext_vf4_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint16m1_t vs2, size_t vl) { +vint64m4_t test_vsext_vf4_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint16m1_t vs2, + size_t vl) { return __riscv_vsext_vf4_i64m4_tum(vm, vd, vs2, vl); } -vint64m8_t test_vsext_vf4_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint16m2_t vs2, size_t vl) { +vint64m8_t test_vsext_vf4_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint16m2_t vs2, + size_t vl) { return __riscv_vsext_vf4_i64m8_tum(vm, vd, vs2, vl); } -vint32mf2_t test_vsext_vf4_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint8mf8_t 
vs2, size_t vl) { +vint32mf2_t test_vsext_vf4_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint8mf8_t vs2, size_t vl) { return __riscv_vsext_vf4_i32mf2_tumu(vm, vd, vs2, vl); } -vint32m1_t test_vsext_vf4_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint8mf4_t vs2, size_t vl) { +vint32m1_t test_vsext_vf4_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint8mf4_t vs2, size_t vl) { return __riscv_vsext_vf4_i32m1_tumu(vm, vd, vs2, vl); } -vint32m2_t test_vsext_vf4_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint8mf2_t vs2, size_t vl) { +vint32m2_t test_vsext_vf4_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint8mf2_t vs2, size_t vl) { return __riscv_vsext_vf4_i32m2_tumu(vm, vd, vs2, vl); } -vint32m4_t test_vsext_vf4_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint8m1_t vs2, size_t vl) { +vint32m4_t test_vsext_vf4_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint8m1_t vs2, + size_t vl) { return __riscv_vsext_vf4_i32m4_tumu(vm, vd, vs2, vl); } -vint32m8_t test_vsext_vf4_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint8m2_t vs2, size_t vl) { +vint32m8_t test_vsext_vf4_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint8m2_t vs2, + size_t vl) { return __riscv_vsext_vf4_i32m8_tumu(vm, vd, vs2, vl); } -vint64m1_t test_vsext_vf4_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint16mf4_t vs2, size_t vl) { +vint64m1_t test_vsext_vf4_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vsext_vf4_i64m1_tumu(vm, vd, vs2, vl); } -vint64m2_t test_vsext_vf4_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint16mf2_t vs2, size_t vl) { +vint64m2_t test_vsext_vf4_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vsext_vf4_i64m2_tumu(vm, vd, vs2, vl); } -vint64m4_t test_vsext_vf4_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint16m1_t vs2, size_t vl) { +vint64m4_t test_vsext_vf4_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vsext_vf4_i64m4_tumu(vm, vd, vs2, vl); } -vint64m8_t test_vsext_vf4_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint16m2_t vs2, size_t vl) { +vint64m8_t test_vsext_vf4_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint16m2_t vs2, + size_t vl) { return __riscv_vsext_vf4_i64m8_tumu(vm, vd, vs2, vl); } -vint32mf2_t test_vsext_vf4_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint8mf8_t vs2, size_t vl) { +vint32mf2_t test_vsext_vf4_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint8mf8_t vs2, size_t vl) { return __riscv_vsext_vf4_i32mf2_mu(vm, vd, vs2, vl); } -vint32m1_t test_vsext_vf4_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint8mf4_t vs2, size_t vl) { +vint32m1_t test_vsext_vf4_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint8mf4_t vs2, + size_t vl) { return __riscv_vsext_vf4_i32m1_mu(vm, vd, vs2, vl); } -vint32m2_t test_vsext_vf4_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint8mf2_t vs2, size_t vl) { +vint32m2_t test_vsext_vf4_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint8mf2_t vs2, + size_t vl) { return __riscv_vsext_vf4_i32m2_mu(vm, vd, vs2, vl); } -vint32m4_t test_vsext_vf4_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint8m1_t vs2, size_t vl) { +vint32m4_t test_vsext_vf4_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint8m1_t vs2, + size_t vl) { return __riscv_vsext_vf4_i32m4_mu(vm, vd, vs2, vl); } -vint32m8_t test_vsext_vf4_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint8m2_t vs2, size_t vl) { +vint32m8_t test_vsext_vf4_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint8m2_t vs2, + size_t vl) { return __riscv_vsext_vf4_i32m8_mu(vm, vd, vs2, vl); } -vint64m1_t test_vsext_vf4_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint16mf4_t vs2, size_t vl) { +vint64m1_t test_vsext_vf4_i64m1_mu(vbool64_t vm, vint64m1_t vd, 
vint16mf4_t vs2, + size_t vl) { return __riscv_vsext_vf4_i64m1_mu(vm, vd, vs2, vl); } -vint64m2_t test_vsext_vf4_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint16mf2_t vs2, size_t vl) { +vint64m2_t test_vsext_vf4_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint16mf2_t vs2, + size_t vl) { return __riscv_vsext_vf4_i64m2_mu(vm, vd, vs2, vl); } -vint64m4_t test_vsext_vf4_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint16m1_t vs2, size_t vl) { +vint64m4_t test_vsext_vf4_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint16m1_t vs2, + size_t vl) { return __riscv_vsext_vf4_i64m4_mu(vm, vd, vs2, vl); } -vint64m8_t test_vsext_vf4_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint16m2_t vs2, size_t vl) { +vint64m8_t test_vsext_vf4_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint16m2_t vs2, + size_t vl) { return __riscv_vsext_vf4_i64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsext_vf8.c b/auto-generated/policy_funcs/llvm-api-tests/vsext_vf8.c index b4e0f6159..b82c7a0fb 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vsext_vf8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vsext_vf8.c @@ -21,50 +21,62 @@ vint64m8_t test_vsext_vf8_i64m8_tu(vint64m8_t vd, vint8m1_t vs2, size_t vl) { return __riscv_vsext_vf8_i64m8_tu(vd, vs2, vl); } -vint64m1_t test_vsext_vf8_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint8mf8_t vs2, size_t vl) { +vint64m1_t test_vsext_vf8_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint8mf8_t vs2, + size_t vl) { return __riscv_vsext_vf8_i64m1_tum(vm, vd, vs2, vl); } -vint64m2_t test_vsext_vf8_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint8mf4_t vs2, size_t vl) { +vint64m2_t test_vsext_vf8_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint8mf4_t vs2, + size_t vl) { return __riscv_vsext_vf8_i64m2_tum(vm, vd, vs2, vl); } -vint64m4_t test_vsext_vf8_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint8mf2_t vs2, size_t vl) { +vint64m4_t test_vsext_vf8_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint8mf2_t vs2, + size_t vl) { return __riscv_vsext_vf8_i64m4_tum(vm, vd, vs2, vl); } -vint64m8_t test_vsext_vf8_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint8m1_t vs2, size_t vl) { +vint64m8_t test_vsext_vf8_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint8m1_t vs2, + size_t vl) { return __riscv_vsext_vf8_i64m8_tum(vm, vd, vs2, vl); } -vint64m1_t test_vsext_vf8_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint8mf8_t vs2, size_t vl) { +vint64m1_t test_vsext_vf8_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint8mf8_t vs2, size_t vl) { return __riscv_vsext_vf8_i64m1_tumu(vm, vd, vs2, vl); } -vint64m2_t test_vsext_vf8_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint8mf4_t vs2, size_t vl) { +vint64m2_t test_vsext_vf8_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint8mf4_t vs2, size_t vl) { return __riscv_vsext_vf8_i64m2_tumu(vm, vd, vs2, vl); } -vint64m4_t test_vsext_vf8_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint8mf2_t vs2, size_t vl) { +vint64m4_t test_vsext_vf8_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint8mf2_t vs2, size_t vl) { return __riscv_vsext_vf8_i64m4_tumu(vm, vd, vs2, vl); } -vint64m8_t test_vsext_vf8_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint8m1_t vs2, size_t vl) { +vint64m8_t test_vsext_vf8_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint8m1_t vs2, + size_t vl) { return __riscv_vsext_vf8_i64m8_tumu(vm, vd, vs2, vl); } -vint64m1_t test_vsext_vf8_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint8mf8_t vs2, size_t vl) { +vint64m1_t test_vsext_vf8_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint8mf8_t vs2, + size_t vl) { return __riscv_vsext_vf8_i64m1_mu(vm, vd, vs2, vl); } -vint64m2_t test_vsext_vf8_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint8mf4_t vs2, 
size_t vl) { +vint64m2_t test_vsext_vf8_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint8mf4_t vs2, + size_t vl) { return __riscv_vsext_vf8_i64m2_mu(vm, vd, vs2, vl); } -vint64m4_t test_vsext_vf8_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint8mf2_t vs2, size_t vl) { +vint64m4_t test_vsext_vf8_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint8mf2_t vs2, + size_t vl) { return __riscv_vsext_vf8_i64m4_mu(vm, vd, vs2, vl); } -vint64m8_t test_vsext_vf8_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint8m1_t vs2, size_t vl) { +vint64m8_t test_vsext_vf8_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint8m1_t vs2, + size_t vl) { return __riscv_vsext_vf8_i64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vslide1down.c b/auto-generated/policy_funcs/llvm-api-tests/vslide1down.c index 4cf7f288c..eb7accebf 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vslide1down.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vslide1down.c @@ -5,706 +5,995 @@ #include -vint8mf8_t test_vslide1down_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vslide1down_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vslide1down_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vslide1down_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vslide1down_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vslide1down_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vslide1down_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vslide1down_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vslide1down_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vslide1down_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vslide1down_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vslide1down_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vslide1down_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vslide1down_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vslide1down_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vslide1down_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1down_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vslide1down_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vslide1down_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1down_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vslide1down_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vslide1down_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1down_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t 
test_vslide1down_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vslide1down_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1down_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vslide1down_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vslide1down_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1down_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vslide1down_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vslide1down_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1down_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vslide1down_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vslide1down_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vslide1down_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vslide1down_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vslide1down_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vslide1down_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vslide1down_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vslide1down_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vslide1down_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vslide1down_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vslide1down_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vslide1down_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vslide1down_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vslide1down_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vslide1down_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vslide1down_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vslide1down_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vslide1down_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vslide1down_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vslide1down_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vslide1down_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vslide1down_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vslide1down_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vslide1down_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vslide1down_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vslide1down_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vslide1down_vx_i64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vslide1down_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vslide1down_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1down_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vslide1down_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vslide1down_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t 
vl) { return __riscv_vslide1down_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vslide1down_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vslide1down_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1down_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vslide1down_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vslide1down_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1down_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vslide1down_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vslide1down_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1down_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vslide1down_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vslide1down_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1down_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vslide1down_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vslide1down_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1down_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vslide1down_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vslide1down_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1down_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vslide1down_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vslide1down_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1down_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vslide1down_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vslide1down_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1down_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vslide1down_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vslide1down_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1down_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vslide1down_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vslide1down_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1down_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vslide1down_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vslide1down_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1down_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vslide1down_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vslide1down_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vslide1down_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vslide1down_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vslide1down_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vslide1down_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t 
test_vslide1down_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vslide1down_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vslide1down_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vslide1down_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vslide1down_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vslide1down_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vslide1down_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vslide1down_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vslide1down_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vslide1down_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vslide1down_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vslide1down_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vslide1down_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vslide1down_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vslide1down_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vslide1down_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vslide1down_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vslide1down_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vslide1down_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vslide1down_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vslide1down_vx_u64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vslide1down_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vslide1down_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vslide1down_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vslide1down_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vslide1down_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vslide1down_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vslide1down_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vslide1down_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vslide1down_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vslide1down_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vslide1down_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vslide1down_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m4_tum(vm, vd, 
vs2, rs1, vl); } -vint8m8_t test_vslide1down_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vslide1down_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vslide1down_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vslide1down_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vslide1down_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vslide1down_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vslide1down_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vslide1down_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vslide1down_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vslide1down_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vslide1down_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vslide1down_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vslide1down_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vslide1down_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vslide1down_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vslide1down_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vslide1down_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vslide1down_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vslide1down_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vslide1down_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vslide1down_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vslide1down_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vslide1down_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vslide1down_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m8_tum(vm, vd, vs2, rs1, 
vl); } -vint64m1_t test_vslide1down_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vslide1down_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vslide1down_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vslide1down_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vslide1down_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vslide1down_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vslide1down_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vslide1down_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vslide1down_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vslide1down_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vslide1down_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vslide1down_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vslide1down_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vslide1down_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vslide1down_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vslide1down_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vslide1down_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vslide1down_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vslide1down_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vslide1down_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vslide1down_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vslide1down_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vslide1down_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vslide1down_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16mf4_tum(vm, vd, vs2, rs1, 
vl); } -vuint16mf2_t test_vslide1down_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vslide1down_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vslide1down_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vslide1down_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vslide1down_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vslide1down_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vslide1down_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vslide1down_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vslide1down_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vslide1down_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vslide1down_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vslide1down_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vslide1down_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vslide1down_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vslide1down_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vslide1down_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vslide1down_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vslide1down_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vslide1down_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vslide1down_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vslide1down_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vslide1down_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vslide1down_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vslide1down_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t 
vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vslide1down_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vslide1down_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vslide1down_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vslide1down_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vslide1down_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vslide1down_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vslide1down_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vslide1down_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vslide1down_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vslide1down_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vslide1down_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vslide1down_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, + vint8m1_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vslide1down_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vslide1down_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, + vint8m2_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vslide1down_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vslide1down_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, + vint8m4_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vslide1down_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vslide1down_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, + vint8m8_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vslide1down_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vslide1down_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vslide1down_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vslide1down_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vslide1down_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vslide1down_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, int16_t rs1, 
+ size_t vl) { return __riscv_vslide1down_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vslide1down_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vslide1down_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vslide1down_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vslide1down_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vslide1down_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vslide1down_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vslide1down_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vslide1down_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vslide1down_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vslide1down_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vslide1down_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vslide1down_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vslide1down_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vslide1down_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vslide1down_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vslide1down_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vslide1down_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vslide1down_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vslide1down_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vslide1down_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vslide1down_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vslide1down_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vslide1down_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vslide1down_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint64m8_t 
vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vslide1down_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vslide1down_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vslide1down_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vslide1down_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vslide1down_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vslide1down_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vslide1down_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vslide1down_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vslide1down_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vslide1down_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vslide1down_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vslide1down_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vslide1down_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vslide1down_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vslide1down_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vslide1down_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vslide1down_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vslide1down_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vslide1down_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vslide1down_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vslide1down_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vslide1down_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vslide1down_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t 
test_vslide1down_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vslide1down_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vslide1down_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vslide1down_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vslide1down_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vslide1down_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vslide1down_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vslide1down_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vslide1down_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vslide1down_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vslide1down_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vslide1down_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vslide1down_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vslide1down_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vslide1down_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vslide1down_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vslide1down_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vslide1down_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vslide1down_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vslide1down_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vslide1down_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vslide1down_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vslide1down_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t 
test_vslide1down_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vslide1down_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vslide1down_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vslide1down_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vslide1down_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vslide1down_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vslide1down_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vslide1down_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vslide1down_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vslide1down_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vslide1down_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vslide1down_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1down_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vslide1down_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vslide1down_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vslide1down_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vslide1down_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vslide1down_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vslide1down_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vslide1down_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vslide1down_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vslide1down_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vslide1down_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vslide1down_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vslide1down_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vslide1down_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { 
+vint32mf2_t test_vslide1down_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vslide1down_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vslide1down_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vslide1down_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vslide1down_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vslide1down_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vslide1down_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vslide1down_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vslide1down_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vslide1down_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vslide1down_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vslide1down_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vslide1down_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vslide1down_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vslide1down_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vslide1down_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vslide1down_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vslide1down_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vslide1down_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vslide1down_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vslide1down_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vslide1down_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vslide1down_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vslide1down_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t 
test_vslide1down_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1down_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vslide1down_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vslide1down_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1down_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vslide1down_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vslide1down_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1down_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vslide1down_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vslide1down_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1down_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vslide1down_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vslide1down_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vslide1down_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vslide1down_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vslide1down_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vslide1down_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vslide1down_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vslide1down_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vslide1down_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vslide1down_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vslide1down_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vslide1down_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vslide1down_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vslide1down_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vslide1down_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vslide1down_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vslide1down_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { 
+vuint32m2_t test_vslide1down_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vslide1down_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vslide1down_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vslide1down_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vslide1down_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vslide1down_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vslide1down_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vslide1down_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vslide1down_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vslide1down_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vslide1down_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vslide1down_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vslide1down_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1down_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vslide1up.c b/auto-generated/policy_funcs/llvm-api-tests/vslide1up.c index 3535fe3a1..3bd7e4de9 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vslide1up.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vslide1up.c @@ -5,706 +5,957 @@ #include <riscv_vector.h> -vint8mf8_t test_vslide1up_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vslide1up_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vslide1up_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vslide1up_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vslide1up_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vslide1up_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vslide1up_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vslide1up_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vslide1up_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vslide1up_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t
test_vslide1up_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vslide1up_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vslide1up_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vslide1up_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vslide1up_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vslide1up_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vslide1up_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vslide1up_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vslide1up_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vslide1up_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vslide1up_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vslide1up_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vslide1up_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vslide1up_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vslide1up_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vslide1up_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vslide1up_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vslide1up_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vslide1up_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vslide1up_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vslide1up_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vslide1up_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vslide1up_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vslide1up_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vslide1up_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vslide1up_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vslide1up_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vslide1up_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t 
test_vslide1up_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vslide1up_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vslide1up_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vslide1up_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vslide1up_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vslide1up_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vslide1up_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vslide1up_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vslide1up_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vslide1up_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vslide1up_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vslide1up_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vslide1up_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vslide1up_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vslide1up_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vslide1up_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vslide1up_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vslide1up_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vslide1up_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vslide1up_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vslide1up_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vslide1up_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1up_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vslide1up_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vslide1up_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1up_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vslide1up_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vslide1up_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1up_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vslide1up_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vslide1up_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1up_vx_u16m2_tu(vd, 
vs2, rs1, vl); } -vuint16m4_t test_vslide1up_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vslide1up_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1up_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vslide1up_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vslide1up_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vslide1up_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vslide1up_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vslide1up_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vslide1up_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vslide1up_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vslide1up_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vslide1up_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vslide1up_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vslide1up_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vslide1up_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vslide1up_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vslide1up_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vslide1up_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vslide1up_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vslide1up_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vslide1up_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vslide1up_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vslide1up_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vslide1up_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vslide1up_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vslide1up_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vslide1up_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vslide1up_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vslide1up_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vslide1up_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vslide1up_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vslide1up_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vslide1up_vx_u64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vslide1up_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vslide1up_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vslide1up_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vslide1up_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vslide1up_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, 
size_t vl) { +vint8mf2_t test_vslide1up_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vslide1up_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vslide1up_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vslide1up_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vslide1up_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vslide1up_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vslide1up_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vslide1up_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vslide1up_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vslide1up_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vslide1up_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vslide1up_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vslide1up_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vslide1up_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vslide1up_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vslide1up_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vslide1up_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vslide1up_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vslide1up_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vslide1up_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vslide1up_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vslide1up_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vslide1up_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vslide1up_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vslide1up_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, int32_t rs1, size_t vl) { return 
__riscv_vslide1up_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vslide1up_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vslide1up_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vslide1up_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vslide1up_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vslide1up_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vslide1up_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vslide1up_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vslide1up_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vslide1up_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vslide1up_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vslide1up_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vslide1up_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vslide1up_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vslide1up_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vslide1up_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vslide1up_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vslide1up_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vslide1up_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vslide1up_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vslide1up_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vslide1up_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vslide1up_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vslide1up_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vslide1up_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t 
test_vslide1up_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vslide1up_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vslide1up_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vslide1up_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vslide1up_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vslide1up_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vslide1up_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vslide1up_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vslide1up_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vslide1up_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vslide1up_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vslide1up_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vslide1up_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vslide1up_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vslide1up_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vslide1up_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vslide1up_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vslide1up_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vslide1up_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vslide1up_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vslide1up_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vslide1up_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vslide1up_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vslide1up_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t 
test_vslide1up_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vslide1up_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vslide1up_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vslide1up_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vslide1up_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vslide1up_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vslide1up_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vslide1up_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vslide1up_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vslide1up_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vslide1up_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vslide1up_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vslide1up_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vslide1up_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vslide1up_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vslide1up_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vslide1up_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vslide1up_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vslide1up_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vslide1up_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vslide1up_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vslide1up_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vslide1up_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vslide1up_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vslide1up_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) 
{ +vint16mf4_t test_vslide1up_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vslide1up_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vslide1up_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vslide1up_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vslide1up_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vslide1up_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vslide1up_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vslide1up_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vslide1up_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vslide1up_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vslide1up_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vslide1up_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vslide1up_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vslide1up_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vslide1up_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vslide1up_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vslide1up_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vslide1up_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vslide1up_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vslide1up_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vslide1up_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vslide1up_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vslide1up_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vslide1up_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t 
test_vslide1up_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vslide1up_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vslide1up_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vslide1up_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vslide1up_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vslide1up_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vslide1up_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vslide1up_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vslide1up_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vslide1up_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vslide1up_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vslide1up_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vslide1up_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vslide1up_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vslide1up_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vslide1up_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vslide1up_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vslide1up_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vslide1up_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vslide1up_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vslide1up_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vslide1up_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vslide1up_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vslide1up_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t 
test_vslide1up_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vslide1up_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vslide1up_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vslide1up_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vslide1up_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vslide1up_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vslide1up_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vslide1up_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vslide1up_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vslide1up_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vslide1up_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vslide1up_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vslide1up_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vslide1up_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vslide1up_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vslide1up_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vslide1up_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vslide1up_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vslide1up_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vslide1up_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vslide1up_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vslide1up_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vslide1up_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vslide1up_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t 
vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vslide1up_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vslide1up_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vslide1up_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vslide1up_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vslide1up_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vslide1up_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vslide1up_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vslide1up_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vslide1up_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vslide1up_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vslide1up_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vslide1up_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vslide1up_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vslide1up_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vslide1up_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vslide1up_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vslide1up_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vslide1up_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vslide1up_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vslide1up_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vslide1up_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vslide1up_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vslide1up_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vslide1up_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vslide1up_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vslide1up_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, int16_t rs1, size_t vl) { return 
__riscv_vslide1up_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vslide1up_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vslide1up_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, int16_t rs1, size_t vl) { return __riscv_vslide1up_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vslide1up_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vslide1up_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vslide1up_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vslide1up_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vslide1up_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vslide1up_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vslide1up_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vslide1up_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vslide1up_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vslide1up_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, int32_t rs1, size_t vl) { return __riscv_vslide1up_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vslide1up_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vslide1up_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vslide1up_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vslide1up_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vslide1up_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vslide1up_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vslide1up_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vslide1up_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, int64_t rs1, size_t vl) { return __riscv_vslide1up_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vslide1up_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vslide1up_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vslide1up_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vslide1up_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vslide1up_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, 
vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vslide1up_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vslide1up_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vslide1up_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vslide1up_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vslide1up_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vslide1up_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vslide1up_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vslide1up_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vslide1up_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vslide1up_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vslide1up_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vslide1up_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vslide1up_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vslide1up_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vslide1up_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vslide1up_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vslide1up_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vslide1up_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vslide1up_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vslide1up_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vslide1up_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vslide1up_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vslide1up_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vslide1up_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vslide1up_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t 
test_vslide1up_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vslide1up_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vslide1up_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vslide1up_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vslide1up_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vslide1up_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vslide1up_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vslide1up_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vslide1up_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vslide1up_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vslide1up_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vslide1up_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vslide1up_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vslide1up_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vslide1up_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vslide1up_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vslidedown.c b/auto-generated/policy_funcs/llvm-api-tests/vslidedown.c index 34f836cb6..223ab188e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vslidedown.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vslidedown.c @@ -6,946 +6,1305 @@ #include <riscv_vector.h> -vfloat16mf4_t test_vslidedown_vx_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t rs1, size_t vl) { +vfloat16mf4_t test_vslidedown_vx_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f16mf4_tu(vd, vs2, rs1, vl); } -vfloat16mf2_t test_vslidedown_vx_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t rs1, size_t vl) { +vfloat16mf2_t test_vslidedown_vx_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f16mf2_tu(vd, vs2, rs1, vl); } -vfloat16m1_t test_vslidedown_vx_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, size_t rs1, size_t vl) { +vfloat16m1_t test_vslidedown_vx_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f16m1_tu(vd, vs2, rs1, vl); } -vfloat16m2_t test_vslidedown_vx_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, size_t rs1, size_t vl) { +vfloat16m2_t test_vslidedown_vx_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, + size_t rs1, size_t vl) {
return __riscv_vslidedown_vx_f16m2_tu(vd, vs2, rs1, vl); } -vfloat16m4_t test_vslidedown_vx_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, size_t rs1, size_t vl) { +vfloat16m4_t test_vslidedown_vx_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f16m4_tu(vd, vs2, rs1, vl); } -vfloat16m8_t test_vslidedown_vx_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, size_t rs1, size_t vl) { +vfloat16m8_t test_vslidedown_vx_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f16m8_tu(vd, vs2, rs1, vl); } -vfloat32mf2_t test_vslidedown_vx_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t rs1, size_t vl) { +vfloat32mf2_t test_vslidedown_vx_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f32mf2_tu(vd, vs2, rs1, vl); } -vfloat32m1_t test_vslidedown_vx_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, size_t rs1, size_t vl) { +vfloat32m1_t test_vslidedown_vx_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f32m1_tu(vd, vs2, rs1, vl); } -vfloat32m2_t test_vslidedown_vx_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, size_t rs1, size_t vl) { +vfloat32m2_t test_vslidedown_vx_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f32m2_tu(vd, vs2, rs1, vl); } -vfloat32m4_t test_vslidedown_vx_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, size_t rs1, size_t vl) { +vfloat32m4_t test_vslidedown_vx_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f32m4_tu(vd, vs2, rs1, vl); } -vfloat32m8_t test_vslidedown_vx_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, size_t rs1, size_t vl) { +vfloat32m8_t test_vslidedown_vx_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f32m8_tu(vd, vs2, rs1, vl); } -vfloat64m1_t test_vslidedown_vx_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, size_t rs1, size_t vl) { +vfloat64m1_t test_vslidedown_vx_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f64m1_tu(vd, vs2, rs1, vl); } -vfloat64m2_t test_vslidedown_vx_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, size_t rs1, size_t vl) { +vfloat64m2_t test_vslidedown_vx_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f64m2_tu(vd, vs2, rs1, vl); } -vfloat64m4_t test_vslidedown_vx_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, size_t rs1, size_t vl) { +vfloat64m4_t test_vslidedown_vx_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f64m4_tu(vd, vs2, rs1, vl); } -vfloat64m8_t test_vslidedown_vx_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, size_t rs1, size_t vl) { +vfloat64m8_t test_vslidedown_vx_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_f64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vslidedown_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vslidedown_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vslidedown_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vslidedown_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vslidedown_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t 
vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vslidedown_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vslidedown_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vslidedown_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vslidedown_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vslidedown_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vslidedown_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vslidedown_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vslidedown_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vslidedown_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vslidedown_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vslidedown_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vslidedown_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vslidedown_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vslidedown_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vslidedown_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vslidedown_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vslidedown_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vslidedown_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vslidedown_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vslidedown_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vslidedown_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vslidedown_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vslidedown_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vslidedown_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vslidedown_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vslidedown_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vslidedown_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vslidedown_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, size_t rs1, 
size_t vl) { +vint32m4_t test_vslidedown_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vslidedown_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vslidedown_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vslidedown_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vslidedown_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vslidedown_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vslidedown_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vslidedown_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vslidedown_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vslidedown_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vslidedown_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vslidedown_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vslidedown_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vslidedown_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vslidedown_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vslidedown_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vslidedown_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vslidedown_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vslidedown_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vslidedown_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vslidedown_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vslidedown_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vslidedown_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vslidedown_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vslidedown_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vslidedown_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vslidedown_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vslidedown_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t 
vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vslidedown_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vslidedown_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vslidedown_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vslidedown_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vslidedown_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vslidedown_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vslidedown_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vslidedown_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vslidedown_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vslidedown_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vslidedown_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vslidedown_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vslidedown_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vslidedown_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vslidedown_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vslidedown_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vslidedown_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vslidedown_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vslidedown_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vslidedown_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vslidedown_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vslidedown_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vslidedown_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vslidedown_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vslidedown_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vslidedown_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vslidedown_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u64m8_tu(vd, 
vs2, rs1, vl); } -vfloat16mf4_t test_vslidedown_vx_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t rs1, size_t vl) { +vfloat16mf4_t test_vslidedown_vx_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16mf4_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vslidedown_vx_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t rs1, size_t vl) { +vfloat16mf2_t test_vslidedown_vx_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vslidedown_vx_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t rs1, size_t vl) { +vfloat16m1_t test_vslidedown_vx_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m1_tum(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vslidedown_vx_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t rs1, size_t vl) { +vfloat16m2_t test_vslidedown_vx_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m2_tum(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vslidedown_vx_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t rs1, size_t vl) { +vfloat16m4_t test_vslidedown_vx_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m4_tum(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vslidedown_vx_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t rs1, size_t vl) { +vfloat16m8_t test_vslidedown_vx_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m8_tum(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vslidedown_vx_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t rs1, size_t vl) { +vfloat32mf2_t test_vslidedown_vx_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32mf2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vslidedown_vx_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t rs1, size_t vl) { +vfloat32m1_t test_vslidedown_vx_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m1_tum(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vslidedown_vx_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t rs1, size_t vl) { +vfloat32m2_t test_vslidedown_vx_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m2_tum(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vslidedown_vx_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t rs1, size_t vl) { +vfloat32m4_t test_vslidedown_vx_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m4_tum(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vslidedown_vx_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t rs1, size_t vl) { +vfloat32m8_t test_vslidedown_vx_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m8_tum(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vslidedown_vx_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t rs1, size_t vl) { +vfloat64m1_t test_vslidedown_vx_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, + 
vfloat64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f64m1_tum(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vslidedown_vx_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t rs1, size_t vl) { +vfloat64m2_t test_vslidedown_vx_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f64m2_tum(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vslidedown_vx_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t rs1, size_t vl) { +vfloat64m4_t test_vslidedown_vx_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f64m4_tum(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vslidedown_vx_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t rs1, size_t vl) { +vfloat64m8_t test_vslidedown_vx_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vslidedown_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vslidedown_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vslidedown_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vslidedown_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vslidedown_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vslidedown_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vslidedown_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vslidedown_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vslidedown_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vslidedown_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vslidedown_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vslidedown_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vslidedown_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vslidedown_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vslidedown_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vslidedown_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vslidedown_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vslidedown_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, + size_t vl) { return 
__riscv_vslidedown_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vslidedown_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vslidedown_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vslidedown_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vslidedown_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vslidedown_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vslidedown_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vslidedown_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vslidedown_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vslidedown_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vslidedown_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vslidedown_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vslidedown_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vslidedown_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vslidedown_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vslidedown_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vslidedown_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vslidedown_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vslidedown_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vslidedown_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vslidedown_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vslidedown_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vslidedown_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vslidedown_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vslidedown_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t 
test_vslidedown_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vslidedown_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vslidedown_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vslidedown_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vslidedown_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vslidedown_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vslidedown_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vslidedown_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vslidedown_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vslidedown_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vslidedown_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vslidedown_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vslidedown_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vslidedown_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vslidedown_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vslidedown_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vslidedown_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vslidedown_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vslidedown_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vslidedown_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vslidedown_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vslidedown_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vslidedown_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vslidedown_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vslidedown_vx_u16m4_tum(vbool4_t vm, 
vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vslidedown_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vslidedown_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vslidedown_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vslidedown_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vslidedown_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vslidedown_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vslidedown_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vslidedown_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vslidedown_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vslidedown_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vslidedown_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vslidedown_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vslidedown_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vslidedown_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vslidedown_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vslidedown_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vslidedown_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vslidedown_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vslidedown_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vslidedown_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vslidedown_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vslidedown_vx_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t rs1, size_t vl) { +vfloat16mf4_t test_vslidedown_vx_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16mf4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t 
test_vslidedown_vx_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t rs1, size_t vl) { +vfloat16mf2_t test_vslidedown_vx_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vslidedown_vx_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t rs1, size_t vl) { +vfloat16m1_t test_vslidedown_vx_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vslidedown_vx_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t rs1, size_t vl) { +vfloat16m2_t test_vslidedown_vx_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vslidedown_vx_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t rs1, size_t vl) { +vfloat16m4_t test_vslidedown_vx_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vslidedown_vx_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t rs1, size_t vl) { +vfloat16m8_t test_vslidedown_vx_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vslidedown_vx_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t rs1, size_t vl) { +vfloat32mf2_t test_vslidedown_vx_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32mf2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vslidedown_vx_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t rs1, size_t vl) { +vfloat32m1_t test_vslidedown_vx_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vslidedown_vx_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t rs1, size_t vl) { +vfloat32m2_t test_vslidedown_vx_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vslidedown_vx_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t rs1, size_t vl) { +vfloat32m4_t test_vslidedown_vx_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vslidedown_vx_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t rs1, size_t vl) { +vfloat32m8_t test_vslidedown_vx_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vslidedown_vx_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t rs1, size_t vl) { +vfloat64m1_t test_vslidedown_vx_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f64m1_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vslidedown_vx_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t rs1, size_t vl) { +vfloat64m2_t test_vslidedown_vx_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, + 
vfloat64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f64m2_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vslidedown_vx_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t rs1, size_t vl) { +vfloat64m4_t test_vslidedown_vx_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f64m4_tumu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vslidedown_vx_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t rs1, size_t vl) { +vfloat64m8_t test_vslidedown_vx_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vslidedown_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vslidedown_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vslidedown_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vslidedown_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vslidedown_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vslidedown_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vslidedown_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vslidedown_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vslidedown_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vslidedown_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vslidedown_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vslidedown_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vslidedown_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vslidedown_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslidedown_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vslidedown_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vslidedown_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vslidedown_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vslidedown_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vslidedown_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vslidedown_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, size_t rs1, + size_t vl) { 
return __riscv_vslidedown_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vslidedown_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vslidedown_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vslidedown_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vslidedown_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vslidedown_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vslidedown_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vslidedown_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vslidedown_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vslidedown_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vslidedown_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vslidedown_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vslidedown_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vslidedown_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vslidedown_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vslidedown_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vslidedown_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vslidedown_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vslidedown_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vslidedown_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vslidedown_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vslidedown_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vslidedown_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vslidedown_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vslidedown_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint64m8_t vs2, size_t rs1, + size_t vl) { return 
__riscv_vslidedown_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vslidedown_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vslidedown_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vslidedown_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vslidedown_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vslidedown_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vslidedown_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vslidedown_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vslidedown_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, + vuint8m1_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vslidedown_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vslidedown_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, + vuint8m2_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vslidedown_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vslidedown_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, + vuint8m4_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vslidedown_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vslidedown_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, + vuint8m8_t vs2, size_t rs1, size_t vl) { return __riscv_vslidedown_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vslidedown_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vslidedown_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vslidedown_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vslidedown_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vslidedown_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vslidedown_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vslidedown_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vslidedown_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vslidedown_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vslidedown_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, + size_t vl) { return 
__riscv_vslidedown_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vslidedown_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vslidedown_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vslidedown_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vslidedown_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vslidedown_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vslidedown_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vslidedown_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vslidedown_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vslidedown_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vslidedown_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vslidedown_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vslidedown_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vslidedown_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vslidedown_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vslidedown_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vslidedown_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vslidedown_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vslidedown_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vslidedown_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vslidedown_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vslidedown_vx_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t rs1, size_t vl) { +vfloat16mf4_t test_vslidedown_vx_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vslidedown_vx_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t rs1, size_t vl) { +vfloat16mf2_t test_vslidedown_vx_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + 
vfloat16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vslidedown_vx_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t rs1, size_t vl) { +vfloat16m1_t test_vslidedown_vx_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vslidedown_vx_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t rs1, size_t vl) { +vfloat16m2_t test_vslidedown_vx_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vslidedown_vx_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t rs1, size_t vl) { +vfloat16m4_t test_vslidedown_vx_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + vfloat16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vslidedown_vx_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t rs1, size_t vl) { +vfloat16m8_t test_vslidedown_vx_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vslidedown_vx_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t rs1, size_t vl) { +vfloat32mf2_t test_vslidedown_vx_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vslidedown_vx_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t rs1, size_t vl) { +vfloat32m1_t test_vslidedown_vx_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vslidedown_vx_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t rs1, size_t vl) { +vfloat32m2_t test_vslidedown_vx_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vslidedown_vx_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t rs1, size_t vl) { +vfloat32m4_t test_vslidedown_vx_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vslidedown_vx_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t rs1, size_t vl) { +vfloat32m8_t test_vslidedown_vx_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vslidedown_vx_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t rs1, size_t vl) { +vfloat64m1_t test_vslidedown_vx_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vslidedown_vx_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t rs1, size_t vl) { +vfloat64m2_t test_vslidedown_vx_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslidedown_vx_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vslidedown_vx_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t rs1, size_t vl) { +vfloat64m4_t 
+    vfloat64m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_f64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m8_t test_vslidedown_vx_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t rs1, size_t vl) {
+vfloat64m8_t test_vslidedown_vx_f64m8_mu(vbool8_t vm, vfloat64m8_t vd,
+    vfloat64m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_f64m8_mu(vm, vd, vs2, rs1, vl);
 }

-vint8mf8_t test_vslidedown_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) {
+vint8mf8_t test_vslidedown_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd,
+    vint8mf8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i8mf8_mu(vm, vd, vs2, rs1, vl);
 }

-vint8mf4_t test_vslidedown_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) {
+vint8mf4_t test_vslidedown_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd,
+    vint8mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i8mf4_mu(vm, vd, vs2, rs1, vl);
 }

-vint8mf2_t test_vslidedown_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) {
+vint8mf2_t test_vslidedown_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd,
+    vint8mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i8mf2_mu(vm, vd, vs2, rs1, vl);
 }

-vint8m1_t test_vslidedown_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) {
+vint8m1_t test_vslidedown_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i8m1_mu(vm, vd, vs2, rs1, vl);
 }

-vint8m2_t test_vslidedown_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) {
+vint8m2_t test_vslidedown_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i8m2_mu(vm, vd, vs2, rs1, vl);
 }

-vint8m4_t test_vslidedown_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) {
+vint8m4_t test_vslidedown_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i8m4_mu(vm, vd, vs2, rs1, vl);
 }

-vint8m8_t test_vslidedown_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) {
+vint8m8_t test_vslidedown_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i8m8_mu(vm, vd, vs2, rs1, vl);
 }

-vint16mf4_t test_vslidedown_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) {
+vint16mf4_t test_vslidedown_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd,
+    vint16mf4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_i16mf4_mu(vm, vd, vs2, rs1, vl);
 }

-vint16mf2_t test_vslidedown_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) {
+vint16mf2_t test_vslidedown_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd,
+    vint16mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_i16mf2_mu(vm, vd, vs2, rs1, vl);
 }

-vint16m1_t test_vslidedown_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) {
+vint16m1_t test_vslidedown_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd,
+    vint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i16m1_mu(vm, vd, vs2, rs1, vl);
 }

-vint16m2_t test_vslidedown_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) {
+vint16m2_t test_vslidedown_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd,
+    vint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i16m2_mu(vm, vd, vs2, rs1, vl);
 }

-vint16m4_t test_vslidedown_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) {
+vint16m4_t test_vslidedown_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd,
+    vint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i16m4_mu(vm, vd, vs2, rs1, vl);
 }

-vint16m8_t test_vslidedown_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) {
+vint16m8_t test_vslidedown_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd,
+    vint16m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i16m8_mu(vm, vd, vs2, rs1, vl);
 }

-vint32mf2_t test_vslidedown_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) {
+vint32mf2_t test_vslidedown_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd,
+    vint32mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_i32mf2_mu(vm, vd, vs2, rs1, vl);
 }

-vint32m1_t test_vslidedown_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) {
+vint32m1_t test_vslidedown_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd,
+    vint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i32m1_mu(vm, vd, vs2, rs1, vl);
 }

-vint32m2_t test_vslidedown_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) {
+vint32m2_t test_vslidedown_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd,
+    vint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i32m2_mu(vm, vd, vs2, rs1, vl);
 }

-vint32m4_t test_vslidedown_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) {
+vint32m4_t test_vslidedown_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd,
+    vint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i32m4_mu(vm, vd, vs2, rs1, vl);
 }

-vint32m8_t test_vslidedown_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) {
+vint32m8_t test_vslidedown_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd,
+    vint32m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i32m8_mu(vm, vd, vs2, rs1, vl);
 }

-vint64m1_t test_vslidedown_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) {
+vint64m1_t test_vslidedown_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd,
+    vint64m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i64m1_mu(vm, vd, vs2, rs1, vl);
 }

-vint64m2_t test_vslidedown_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) {
+vint64m2_t test_vslidedown_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd,
+    vint64m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vint64m4_t test_vslidedown_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) {
+vint64m4_t test_vslidedown_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd,
+    vint64m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vint64m8_t test_vslidedown_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) {
+vint64m8_t test_vslidedown_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd,
+    vint64m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_i64m8_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf8_t test_vslidedown_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+vuint8mf8_t test_vslidedown_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd,
+    vuint8mf8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u8mf8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vslidedown_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+vuint8mf4_t test_vslidedown_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd,
+    vuint8mf4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u8mf4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf2_t test_vslidedown_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+vuint8mf2_t test_vslidedown_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd,
+    vuint8mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u8mf2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vslidedown_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+vuint8m1_t test_vslidedown_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd,
+    vuint8m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_u8m1_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vslidedown_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+vuint8m2_t test_vslidedown_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd,
+    vuint8m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_u8m2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vslidedown_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+vuint8m4_t test_vslidedown_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd,
+    vuint8m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_u8m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vslidedown_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) {
+vuint8m8_t test_vslidedown_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd,
+    vuint8m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslidedown_vx_u8m8_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vslidedown_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vslidedown_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd,
+    vuint16mf4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u16mf4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vslidedown_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vslidedown_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd,
+    vuint16mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u16mf2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vslidedown_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vslidedown_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd,
+    vuint16m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u16m1_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vslidedown_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vslidedown_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd,
+    vuint16m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u16m2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vslidedown_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vslidedown_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd,
+    vuint16m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u16m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vslidedown_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) {
+vuint16m8_t test_vslidedown_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd,
+    vuint16m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u16m8_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vslidedown_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vslidedown_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+    vuint32mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u32mf2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vslidedown_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+vuint32m1_t test_vslidedown_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+    vuint32m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u32m1_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vslidedown_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vslidedown_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+    vuint32m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u32m2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vslidedown_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vslidedown_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+    vuint32m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u32m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vslidedown_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vslidedown_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+    vuint32m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u32m8_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vslidedown_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vslidedown_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+    vuint64m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u64m1_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vslidedown_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vslidedown_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+    vuint64m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vslidedown_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vslidedown_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+    vuint64m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vslidedown_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vslidedown_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+    vuint64m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslidedown_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vslideup.c b/auto-generated/policy_funcs/llvm-api-tests/vslideup.c
index 0cb8ee641..ae9f874e9 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vslideup.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vslideup.c
@@ -6,946 +6,1257 @@

 #include <riscv_vector.h>

-vfloat16mf4_t test_vslideup_vx_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t rs1, size_t vl) {
+vfloat16mf4_t test_vslideup_vx_f16mf4_tu(vfloat16mf4_t vd, vfloat16mf4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f16mf4_tu(vd, vs2, rs1, vl);
 }

-vfloat16mf2_t test_vslideup_vx_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t rs1, size_t vl) {
+vfloat16mf2_t test_vslideup_vx_f16mf2_tu(vfloat16mf2_t vd, vfloat16mf2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f16mf2_tu(vd, vs2, rs1, vl);
 }

-vfloat16m1_t test_vslideup_vx_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2, size_t rs1, size_t vl) {
+vfloat16m1_t test_vslideup_vx_f16m1_tu(vfloat16m1_t vd, vfloat16m1_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f16m1_tu(vd, vs2, rs1, vl);
 }

-vfloat16m2_t test_vslideup_vx_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2, size_t rs1, size_t vl) {
+vfloat16m2_t test_vslideup_vx_f16m2_tu(vfloat16m2_t vd, vfloat16m2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f16m2_tu(vd, vs2, rs1, vl);
 }

-vfloat16m4_t test_vslideup_vx_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2, size_t rs1, size_t vl) {
+vfloat16m4_t test_vslideup_vx_f16m4_tu(vfloat16m4_t vd, vfloat16m4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f16m4_tu(vd, vs2, rs1, vl);
 }

-vfloat16m8_t test_vslideup_vx_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2, size_t rs1, size_t vl) {
+vfloat16m8_t test_vslideup_vx_f16m8_tu(vfloat16m8_t vd, vfloat16m8_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f16m8_tu(vd, vs2, rs1, vl);
 }

-vfloat32mf2_t test_vslideup_vx_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t rs1, size_t vl) {
+vfloat32mf2_t test_vslideup_vx_f32mf2_tu(vfloat32mf2_t vd, vfloat32mf2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f32mf2_tu(vd, vs2, rs1, vl);
 }

-vfloat32m1_t test_vslideup_vx_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2, size_t rs1, size_t vl) {
+vfloat32m1_t test_vslideup_vx_f32m1_tu(vfloat32m1_t vd, vfloat32m1_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f32m1_tu(vd, vs2, rs1, vl);
 }

-vfloat32m2_t test_vslideup_vx_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2, size_t rs1, size_t vl) {
+vfloat32m2_t test_vslideup_vx_f32m2_tu(vfloat32m2_t vd, vfloat32m2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f32m2_tu(vd, vs2, rs1, vl);
 }

-vfloat32m4_t test_vslideup_vx_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2, size_t rs1, size_t vl) {
+vfloat32m4_t test_vslideup_vx_f32m4_tu(vfloat32m4_t vd, vfloat32m4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f32m4_tu(vd, vs2, rs1, vl);
 }

-vfloat32m8_t test_vslideup_vx_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2, size_t rs1, size_t vl) {
+vfloat32m8_t test_vslideup_vx_f32m8_tu(vfloat32m8_t vd, vfloat32m8_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f32m8_tu(vd, vs2, rs1, vl);
 }

-vfloat64m1_t test_vslideup_vx_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2, size_t rs1, size_t vl) {
+vfloat64m1_t test_vslideup_vx_f64m1_tu(vfloat64m1_t vd, vfloat64m1_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f64m1_tu(vd, vs2, rs1, vl);
 }

-vfloat64m2_t test_vslideup_vx_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2, size_t rs1, size_t vl) {
+vfloat64m2_t test_vslideup_vx_f64m2_tu(vfloat64m2_t vd, vfloat64m2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f64m2_tu(vd, vs2, rs1, vl);
 }

-vfloat64m4_t test_vslideup_vx_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2, size_t rs1, size_t vl) {
+vfloat64m4_t test_vslideup_vx_f64m4_tu(vfloat64m4_t vd, vfloat64m4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f64m4_tu(vd, vs2, rs1, vl);
 }

-vfloat64m8_t test_vslideup_vx_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2, size_t rs1, size_t vl) {
+vfloat64m8_t test_vslideup_vx_f64m8_tu(vfloat64m8_t vd, vfloat64m8_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_f64m8_tu(vd, vs2, rs1, vl);
 }

-vint8mf8_t test_vslideup_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) {
+vint8mf8_t test_vslideup_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i8mf8_tu(vd, vs2, rs1, vl);
 }

-vint8mf4_t test_vslideup_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) {
+vint8mf4_t test_vslideup_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i8mf4_tu(vd, vs2, rs1, vl);
 }

-vint8mf2_t test_vslideup_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) {
+vint8mf2_t test_vslideup_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i8mf2_tu(vd, vs2, rs1, vl);
 }

-vint8m1_t test_vslideup_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) {
+vint8m1_t test_vslideup_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i8m1_tu(vd, vs2, rs1, vl);
 }

-vint8m2_t test_vslideup_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) {
+vint8m2_t test_vslideup_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i8m2_tu(vd, vs2, rs1, vl);
 }

-vint8m4_t test_vslideup_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) {
+vint8m4_t test_vslideup_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i8m4_tu(vd, vs2, rs1, vl);
 }

-vint8m8_t test_vslideup_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) {
+vint8m8_t test_vslideup_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i8m8_tu(vd, vs2, rs1, vl);
 }

-vint16mf4_t test_vslideup_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) {
+vint16mf4_t test_vslideup_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i16mf4_tu(vd, vs2, rs1, vl);
 }

-vint16mf2_t test_vslideup_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) {
+vint16mf2_t test_vslideup_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i16mf2_tu(vd, vs2, rs1, vl);
 }

-vint16m1_t test_vslideup_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) {
+vint16m1_t test_vslideup_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i16m1_tu(vd, vs2, rs1, vl);
 }

-vint16m2_t test_vslideup_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) {
+vint16m2_t test_vslideup_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i16m2_tu(vd, vs2, rs1, vl);
 }

-vint16m4_t test_vslideup_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) {
+vint16m4_t test_vslideup_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i16m4_tu(vd, vs2, rs1, vl);
 }

-vint16m8_t test_vslideup_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) {
+vint16m8_t test_vslideup_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i16m8_tu(vd, vs2, rs1, vl);
 }

-vint32mf2_t test_vslideup_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) {
+vint32mf2_t test_vslideup_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i32mf2_tu(vd, vs2, rs1, vl);
 }

-vint32m1_t test_vslideup_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) {
+vint32m1_t test_vslideup_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i32m1_tu(vd, vs2, rs1, vl);
 }

-vint32m2_t test_vslideup_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) {
+vint32m2_t test_vslideup_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i32m2_tu(vd, vs2, rs1, vl);
 }

-vint32m4_t test_vslideup_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) {
+vint32m4_t test_vslideup_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i32m4_tu(vd, vs2, rs1, vl);
 }

-vint32m8_t test_vslideup_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) {
+vint32m8_t test_vslideup_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i32m8_tu(vd, vs2, rs1, vl);
 }

-vint64m1_t test_vslideup_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) {
+vint64m1_t test_vslideup_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i64m1_tu(vd, vs2, rs1, vl);
 }

-vint64m2_t test_vslideup_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) {
+vint64m2_t test_vslideup_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i64m2_tu(vd, vs2, rs1, vl);
 }

-vint64m4_t test_vslideup_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) {
+vint64m4_t test_vslideup_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i64m4_tu(vd, vs2, rs1, vl);
 }

-vint64m8_t test_vslideup_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) {
+vint64m8_t test_vslideup_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i64m8_tu(vd, vs2, rs1, vl);
 }

-vuint8mf8_t test_vslideup_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+vuint8mf8_t test_vslideup_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8mf8_tu(vd, vs2, rs1, vl);
 }

-vuint8mf4_t test_vslideup_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+vuint8mf4_t test_vslideup_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8mf4_tu(vd, vs2, rs1, vl);
 }

-vuint8mf2_t test_vslideup_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+vuint8mf2_t test_vslideup_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8mf2_tu(vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vslideup_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+vuint8m1_t test_vslideup_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u8m1_tu(vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vslideup_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+vuint8m2_t test_vslideup_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u8m2_tu(vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vslideup_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+vuint8m4_t test_vslideup_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u8m4_tu(vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vslideup_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) {
+vuint8m8_t test_vslideup_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u8m8_tu(vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vslideup_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vslideup_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u16mf4_tu(vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vslideup_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vslideup_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u16mf2_tu(vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vslideup_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vslideup_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u16m1_tu(vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vslideup_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vslideup_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u16m2_tu(vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vslideup_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vslideup_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u16m4_tu(vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vslideup_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) {
+vuint16m8_t test_vslideup_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u16m8_tu(vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vslideup_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vslideup_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u32mf2_tu(vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vslideup_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+vuint32m1_t test_vslideup_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u32m1_tu(vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vslideup_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vslideup_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u32m2_tu(vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vslideup_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vslideup_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u32m4_tu(vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vslideup_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vslideup_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u32m8_tu(vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vslideup_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vslideup_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u64m1_tu(vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vslideup_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vslideup_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u64m2_tu(vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vslideup_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vslideup_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u64m4_tu(vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vslideup_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vslideup_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u64m8_tu(vd, vs2, rs1, vl);
 }
-vfloat16mf4_t test_vslideup_vx_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t rs1, size_t vl) {
+vfloat16mf4_t test_vslideup_vx_f16mf4_tum(vbool64_t vm, vfloat16mf4_t vd,
+    vfloat16mf4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16mf4_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat16mf2_t test_vslideup_vx_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t rs1, size_t vl) {
+vfloat16mf2_t test_vslideup_vx_f16mf2_tum(vbool32_t vm, vfloat16mf2_t vd,
+    vfloat16mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat16m1_t test_vslideup_vx_f16m1_tum(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t rs1, size_t vl) {
+vfloat16m1_t test_vslideup_vx_f16m1_tum(vbool16_t vm, vfloat16m1_t vd,
+    vfloat16m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16m1_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat16m2_t test_vslideup_vx_f16m2_tum(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t rs1, size_t vl) {
+vfloat16m2_t test_vslideup_vx_f16m2_tum(vbool8_t vm, vfloat16m2_t vd,
+    vfloat16m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16m2_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat16m4_t test_vslideup_vx_f16m4_tum(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t rs1, size_t vl) {
+vfloat16m4_t test_vslideup_vx_f16m4_tum(vbool4_t vm, vfloat16m4_t vd,
+    vfloat16m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16m4_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat16m8_t test_vslideup_vx_f16m8_tum(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t rs1, size_t vl) {
+vfloat16m8_t test_vslideup_vx_f16m8_tum(vbool2_t vm, vfloat16m8_t vd,
+    vfloat16m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16m8_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat32mf2_t test_vslideup_vx_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t rs1, size_t vl) {
+vfloat32mf2_t test_vslideup_vx_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat32mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f32mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat32m1_t test_vslideup_vx_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t rs1, size_t vl) {
+vfloat32m1_t test_vslideup_vx_f32m1_tum(vbool32_t vm, vfloat32m1_t vd,
+    vfloat32m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f32m1_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat32m2_t test_vslideup_vx_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t rs1, size_t vl) {
+vfloat32m2_t test_vslideup_vx_f32m2_tum(vbool16_t vm, vfloat32m2_t vd,
+    vfloat32m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f32m2_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat32m4_t test_vslideup_vx_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t rs1, size_t vl) {
+vfloat32m4_t test_vslideup_vx_f32m4_tum(vbool8_t vm, vfloat32m4_t vd,
+    vfloat32m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f32m4_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat32m8_t test_vslideup_vx_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t rs1, size_t vl) {
+vfloat32m8_t test_vslideup_vx_f32m8_tum(vbool4_t vm, vfloat32m8_t vd,
+    vfloat32m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f32m8_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat64m1_t test_vslideup_vx_f64m1_tum(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t rs1, size_t vl) {
+vfloat64m1_t test_vslideup_vx_f64m1_tum(vbool64_t vm, vfloat64m1_t vd,
+    vfloat64m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f64m1_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat64m2_t test_vslideup_vx_f64m2_tum(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t rs1, size_t vl) {
+vfloat64m2_t test_vslideup_vx_f64m2_tum(vbool32_t vm, vfloat64m2_t vd,
+    vfloat64m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f64m2_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat64m4_t test_vslideup_vx_f64m4_tum(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t rs1, size_t vl) {
+vfloat64m4_t test_vslideup_vx_f64m4_tum(vbool16_t vm, vfloat64m4_t vd,
+    vfloat64m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f64m4_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat64m8_t test_vslideup_vx_f64m8_tum(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t rs1, size_t vl) {
+vfloat64m8_t test_vslideup_vx_f64m8_tum(vbool8_t vm, vfloat64m8_t vd,
+    vfloat64m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f64m8_tum(vm, vd, vs2, rs1, vl);
 }

-vint8mf8_t test_vslideup_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) {
+vint8mf8_t test_vslideup_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd,
+    vint8mf8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8mf8_tum(vm, vd, vs2, rs1, vl);
 }

-vint8mf4_t test_vslideup_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) {
+vint8mf4_t test_vslideup_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd,
+    vint8mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8mf4_tum(vm, vd, vs2, rs1, vl);
 }

-vint8mf2_t test_vslideup_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) {
+vint8mf2_t test_vslideup_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd,
+    vint8mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vint8m1_t test_vslideup_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) {
+vint8m1_t test_vslideup_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8m1_tum(vm, vd, vs2, rs1, vl);
 }

-vint8m2_t test_vslideup_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) {
+vint8m2_t test_vslideup_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8m2_tum(vm, vd, vs2, rs1, vl);
 }

-vint8m4_t test_vslideup_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) {
+vint8m4_t test_vslideup_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8m4_tum(vm, vd, vs2, rs1, vl);
 }

-vint8m8_t test_vslideup_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) {
+vint8m8_t test_vslideup_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8m8_tum(vm, vd, vs2, rs1, vl);
 }

-vint16mf4_t test_vslideup_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) {
+vint16mf4_t test_vslideup_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd,
+    vint16mf4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i16mf4_tum(vm, vd, vs2, rs1, vl);
 }

-vint16mf2_t test_vslideup_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) {
+vint16mf2_t test_vslideup_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd,
+    vint16mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i16mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vint16m1_t test_vslideup_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) {
+vint16m1_t test_vslideup_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd,
+    vint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i16m1_tum(vm, vd, vs2, rs1, vl);
 }

-vint16m2_t test_vslideup_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) {
+vint16m2_t test_vslideup_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd,
+    vint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i16m2_tum(vm, vd, vs2, rs1, vl);
 }

-vint16m4_t test_vslideup_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) {
+vint16m4_t test_vslideup_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd,
+    vint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i16m4_tum(vm, vd, vs2, rs1, vl);
 }

-vint16m8_t test_vslideup_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) {
+vint16m8_t test_vslideup_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd,
+    vint16m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i16m8_tum(vm, vd, vs2, rs1, vl);
 }

-vint32mf2_t test_vslideup_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) {
+vint32mf2_t test_vslideup_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd,
+    vint32mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i32mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vint32m1_t test_vslideup_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) {
+vint32m1_t test_vslideup_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd,
+    vint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i32m1_tum(vm, vd, vs2, rs1, vl);
 }

-vint32m2_t test_vslideup_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) {
+vint32m2_t test_vslideup_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd,
+    vint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i32m2_tum(vm, vd, vs2, rs1, vl);
 }

-vint32m4_t test_vslideup_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) {
+vint32m4_t test_vslideup_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd,
+    vint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i32m4_tum(vm, vd, vs2, rs1, vl);
 }

-vint32m8_t test_vslideup_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) {
+vint32m8_t test_vslideup_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd,
+    vint32m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i32m8_tum(vm, vd, vs2, rs1, vl);
 }

-vint64m1_t test_vslideup_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) {
+vint64m1_t test_vslideup_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd,
+    vint64m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i64m1_tum(vm, vd, vs2, rs1, vl);
 }

-vint64m2_t test_vslideup_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) {
+vint64m2_t test_vslideup_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd,
+    vint64m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i64m2_tum(vm, vd, vs2, rs1, vl);
 }

-vint64m4_t test_vslideup_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) {
+vint64m4_t test_vslideup_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd,
+    vint64m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i64m4_tum(vm, vd, vs2, rs1, vl);
 }

-vint64m8_t test_vslideup_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) {
+vint64m8_t test_vslideup_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd,
+    vint64m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i64m8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vslideup_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+vuint8mf8_t test_vslideup_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+    vuint8mf8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8mf8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8mf4_t test_vslideup_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+vuint8mf4_t test_vslideup_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+    vuint8mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8mf4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8mf2_t test_vslideup_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+vuint8mf2_t test_vslideup_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+    vuint8mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vslideup_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+vuint8m1_t test_vslideup_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vslideup_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+vuint8m2_t test_vslideup_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vslideup_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+vuint8m4_t test_vslideup_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vslideup_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) {
+vuint8m8_t test_vslideup_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8m8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vslideup_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vslideup_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+    vuint16mf4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u16mf4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vslideup_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vslideup_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+    vuint16mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u16mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vslideup_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vslideup_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+    vuint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u16m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vslideup_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vslideup_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd,
+    vuint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u16m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vslideup_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vslideup_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd,
+    vuint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u16m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vslideup_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) {
+vuint16m8_t test_vslideup_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd,
+    vuint16m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u16m8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vslideup_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vslideup_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+    vuint32mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u32mf2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vslideup_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
+vuint32m1_t test_vslideup_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+    vuint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u32m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m2_t test_vslideup_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) {
+vuint32m2_t test_vslideup_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+    vuint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u32m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m4_t test_vslideup_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) {
+vuint32m4_t test_vslideup_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd,
+    vuint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u32m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint32m8_t test_vslideup_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) {
+vuint32m8_t test_vslideup_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd,
+    vuint32m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u32m8_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m1_t test_vslideup_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) {
+vuint64m1_t test_vslideup_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+    vuint64m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u64m1_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m2_t test_vslideup_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) {
+vuint64m2_t test_vslideup_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+    vuint64m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u64m2_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m4_t test_vslideup_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) {
+vuint64m4_t test_vslideup_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+    vuint64m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u64m4_tum(vm, vd, vs2, rs1, vl);
 }

-vuint64m8_t test_vslideup_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) {
+vuint64m8_t test_vslideup_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+    vuint64m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u64m8_tum(vm, vd, vs2, rs1, vl);
 }

-vfloat16mf4_t test_vslideup_vx_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t rs1, size_t vl) {
+vfloat16mf4_t test_vslideup_vx_f16mf4_tumu(vbool64_t vm, vfloat16mf4_t vd,
+    vfloat16mf4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16mf4_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat16mf2_t test_vslideup_vx_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t rs1, size_t vl) {
+vfloat16mf2_t test_vslideup_vx_f16mf2_tumu(vbool32_t vm, vfloat16mf2_t vd,
+    vfloat16mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat16m1_t test_vslideup_vx_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t rs1, size_t vl) {
+vfloat16m1_t test_vslideup_vx_f16m1_tumu(vbool16_t vm, vfloat16m1_t vd,
+    vfloat16m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat16m2_t test_vslideup_vx_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t rs1, size_t vl) {
+vfloat16m2_t test_vslideup_vx_f16m2_tumu(vbool8_t vm, vfloat16m2_t vd,
+    vfloat16m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat16m4_t test_vslideup_vx_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t rs1, size_t vl) {
+vfloat16m4_t test_vslideup_vx_f16m4_tumu(vbool4_t vm, vfloat16m4_t vd,
+    vfloat16m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat16m8_t test_vslideup_vx_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t rs1, size_t vl) {
+vfloat16m8_t test_vslideup_vx_f16m8_tumu(vbool2_t vm, vfloat16m8_t vd,
+    vfloat16m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f16m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat32mf2_t test_vslideup_vx_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t rs1, size_t vl) {
+vfloat32mf2_t test_vslideup_vx_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd,
+    vfloat32mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f32mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m1_t test_vslideup_vx_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t rs1, size_t vl) {
+vfloat32m1_t test_vslideup_vx_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd,
+    vfloat32m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f32m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m2_t test_vslideup_vx_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t rs1, size_t vl) {
+vfloat32m2_t test_vslideup_vx_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd,
+    vfloat32m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f32m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m4_t test_vslideup_vx_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t rs1, size_t vl) {
+vfloat32m4_t test_vslideup_vx_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd,
+    vfloat32m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f32m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat32m8_t test_vslideup_vx_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t rs1, size_t vl) {
+vfloat32m8_t test_vslideup_vx_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd,
+    vfloat32m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f32m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m1_t test_vslideup_vx_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t rs1, size_t vl) {
+vfloat64m1_t test_vslideup_vx_f64m1_tumu(vbool64_t vm, vfloat64m1_t vd,
+    vfloat64m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f64m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m2_t test_vslideup_vx_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t rs1, size_t vl) {
+vfloat64m2_t test_vslideup_vx_f64m2_tumu(vbool32_t vm, vfloat64m2_t vd,
+    vfloat64m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f64m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m4_t test_vslideup_vx_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t rs1, size_t vl) {
+vfloat64m4_t test_vslideup_vx_f64m4_tumu(vbool16_t vm, vfloat64m4_t vd,
+    vfloat64m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f64m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vfloat64m8_t test_vslideup_vx_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t rs1, size_t vl) {
+vfloat64m8_t test_vslideup_vx_f64m8_tumu(vbool8_t vm, vfloat64m8_t vd,
+    vfloat64m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_f64m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vint8mf8_t test_vslideup_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) {
+vint8mf8_t test_vslideup_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd,
+    vint8mf8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl);
 }

-vint8mf4_t test_vslideup_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) {
+vint8mf4_t test_vslideup_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd,
+    vint8mf4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl);
 }

-vint8mf2_t test_vslideup_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) {
+vint8mf2_t test_vslideup_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd,
+    vint8mf2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vint8m1_t test_vslideup_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) {
+vint8m1_t test_vslideup_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vint8m2_t test_vslideup_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) {
+vint8m2_t test_vslideup_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vint8m4_t test_vslideup_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) {
+vint8m4_t test_vslideup_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vint8m8_t test_vslideup_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) {
+vint8m8_t test_vslideup_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2,
+    size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i8m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vint16mf4_t test_vslideup_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) {
+vint16mf4_t test_vslideup_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd,
+    vint16mf4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl);
 }

-vint16mf2_t test_vslideup_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) {
+vint16mf2_t test_vslideup_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd,
+    vint16mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vint16m1_t test_vslideup_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) {
+vint16m1_t test_vslideup_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd,
+    vint16m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i16m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vint16m2_t test_vslideup_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) {
+vint16m2_t test_vslideup_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd,
+    vint16m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i16m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vint16m4_t test_vslideup_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) {
+vint16m4_t test_vslideup_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd,
+    vint16m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i16m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vint16m8_t test_vslideup_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) {
+vint16m8_t test_vslideup_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd,
+    vint16m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i16m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vint32mf2_t test_vslideup_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) {
+vint32mf2_t test_vslideup_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd,
+    vint32mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vint32m1_t test_vslideup_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) {
+vint32m1_t test_vslideup_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd,
+    vint32m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i32m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vint32m2_t test_vslideup_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) {
+vint32m2_t test_vslideup_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd,
+    vint32m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i32m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vint32m4_t test_vslideup_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) {
+vint32m4_t test_vslideup_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd,
+    vint32m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i32m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vint32m8_t test_vslideup_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) {
+vint32m8_t test_vslideup_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd,
+    vint32m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i32m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vint64m1_t test_vslideup_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) {
+vint64m1_t test_vslideup_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd,
+    vint64m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i64m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vint64m2_t test_vslideup_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) {
+vint64m2_t test_vslideup_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd,
+    vint64m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i64m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vint64m4_t test_vslideup_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) {
+vint64m4_t test_vslideup_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd,
+    vint64m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i64m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vint64m8_t test_vslideup_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) {
+vint64m8_t test_vslideup_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd,
+    vint64m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_i64m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf8_t test_vslideup_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+vuint8mf8_t test_vslideup_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+    vuint8mf8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8mf4_t test_vslideup_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+vuint8mf4_t test_vslideup_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+    vuint8mf4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vslideup_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+vuint8mf2_t test_vslideup_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd,
+    vuint8mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m1_t test_vslideup_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) {
+vuint8m1_t test_vslideup_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd,
+    vuint8m1_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m2_t test_vslideup_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) {
+vuint8m2_t test_vslideup_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd,
+    vuint8m2_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m4_t test_vslideup_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) {
+vuint8m4_t test_vslideup_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd,
+    vuint8m4_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint8m8_t test_vslideup_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) {
+vuint8m8_t test_vslideup_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd,
+    vuint8m8_t vs2, size_t rs1, size_t vl) {
   return __riscv_vslideup_vx_u8m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf4_t test_vslideup_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) {
+vuint16mf4_t test_vslideup_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd,
+    vuint16mf4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16mf2_t test_vslideup_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) {
+vuint16mf2_t test_vslideup_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd,
+    vuint16mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m1_t test_vslideup_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) {
+vuint16m1_t test_vslideup_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd,
+    vuint16m1_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u16m1_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m2_t test_vslideup_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) {
+vuint16m2_t test_vslideup_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd,
+    vuint16m2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u16m2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m4_t test_vslideup_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) {
+vuint16m4_t test_vslideup_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd,
+    vuint16m4_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u16m4_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint16m8_t test_vslideup_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) {
+vuint16m8_t test_vslideup_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd,
+    vuint16m8_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u16m8_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32mf2_t test_vslideup_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) {
+vuint32mf2_t test_vslideup_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd,
+    vuint32mf2_t vs2, size_t rs1,
+    size_t vl) {
   return __riscv_vslideup_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl);
 }

-vuint32m1_t test_vslideup_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) {
vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vslideup_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vslideup_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vslideup_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vslideup_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vslideup_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vslideup_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vslideup_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vslideup_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vslideup_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vslideup_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vslideup_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vslideup_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vslideup_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vslideup_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vfloat16mf4_t test_vslideup_vx_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, vfloat16mf4_t vs2, size_t rs1, size_t vl) { +vfloat16mf4_t test_vslideup_vx_f16mf4_mu(vbool64_t vm, vfloat16mf4_t vd, + vfloat16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f16mf4_mu(vm, vd, vs2, rs1, vl); } -vfloat16mf2_t test_vslideup_vx_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, vfloat16mf2_t vs2, size_t rs1, size_t vl) { +vfloat16mf2_t test_vslideup_vx_f16mf2_mu(vbool32_t vm, vfloat16mf2_t vd, + vfloat16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f16mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m1_t test_vslideup_vx_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, vfloat16m1_t vs2, size_t rs1, size_t vl) { +vfloat16m1_t test_vslideup_vx_f16m1_mu(vbool16_t vm, vfloat16m1_t vd, + vfloat16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f16m1_mu(vm, vd, vs2, rs1, vl); } -vfloat16m2_t test_vslideup_vx_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, vfloat16m2_t vs2, size_t rs1, size_t vl) { +vfloat16m2_t test_vslideup_vx_f16m2_mu(vbool8_t vm, vfloat16m2_t vd, + vfloat16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f16m2_mu(vm, vd, vs2, rs1, vl); } -vfloat16m4_t test_vslideup_vx_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, vfloat16m4_t vs2, size_t rs1, size_t vl) { +vfloat16m4_t test_vslideup_vx_f16m4_mu(vbool4_t vm, vfloat16m4_t vd, + 
vfloat16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f16m4_mu(vm, vd, vs2, rs1, vl); } -vfloat16m8_t test_vslideup_vx_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, vfloat16m8_t vs2, size_t rs1, size_t vl) { +vfloat16m8_t test_vslideup_vx_f16m8_mu(vbool2_t vm, vfloat16m8_t vd, + vfloat16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f16m8_mu(vm, vd, vs2, rs1, vl); } -vfloat32mf2_t test_vslideup_vx_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vfloat32mf2_t vs2, size_t rs1, size_t vl) { +vfloat32mf2_t test_vslideup_vx_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vfloat32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f32mf2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m1_t test_vslideup_vx_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vfloat32m1_t vs2, size_t rs1, size_t vl) { +vfloat32m1_t test_vslideup_vx_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vfloat32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f32m1_mu(vm, vd, vs2, rs1, vl); } -vfloat32m2_t test_vslideup_vx_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vfloat32m2_t vs2, size_t rs1, size_t vl) { +vfloat32m2_t test_vslideup_vx_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vfloat32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f32m2_mu(vm, vd, vs2, rs1, vl); } -vfloat32m4_t test_vslideup_vx_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vfloat32m4_t vs2, size_t rs1, size_t vl) { +vfloat32m4_t test_vslideup_vx_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vfloat32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f32m4_mu(vm, vd, vs2, rs1, vl); } -vfloat32m8_t test_vslideup_vx_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vfloat32m8_t vs2, size_t rs1, size_t vl) { +vfloat32m8_t test_vslideup_vx_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vfloat32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f32m8_mu(vm, vd, vs2, rs1, vl); } -vfloat64m1_t test_vslideup_vx_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, vfloat64m1_t vs2, size_t rs1, size_t vl) { +vfloat64m1_t test_vslideup_vx_f64m1_mu(vbool64_t vm, vfloat64m1_t vd, + vfloat64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f64m1_mu(vm, vd, vs2, rs1, vl); } -vfloat64m2_t test_vslideup_vx_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, vfloat64m2_t vs2, size_t rs1, size_t vl) { +vfloat64m2_t test_vslideup_vx_f64m2_mu(vbool32_t vm, vfloat64m2_t vd, + vfloat64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f64m2_mu(vm, vd, vs2, rs1, vl); } -vfloat64m4_t test_vslideup_vx_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, vfloat64m4_t vs2, size_t rs1, size_t vl) { +vfloat64m4_t test_vslideup_vx_f64m4_mu(vbool16_t vm, vfloat64m4_t vd, + vfloat64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f64m4_mu(vm, vd, vs2, rs1, vl); } -vfloat64m8_t test_vslideup_vx_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, vfloat64m8_t vs2, size_t rs1, size_t vl) { +vfloat64m8_t test_vslideup_vx_f64m8_mu(vbool8_t vm, vfloat64m8_t vd, + vfloat64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_f64m8_mu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vslideup_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vslideup_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, + vint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vslideup_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vslideup_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, + vint8mf4_t vs2, size_t rs1, size_t vl) { return 
__riscv_vslideup_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vslideup_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vslideup_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, + vint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vslideup_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vslideup_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vslideup_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vslideup_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vslideup_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vslideup_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vslideup_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vslideup_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vslideup_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vslideup_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vslideup_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vslideup_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vslideup_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vslideup_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vslideup_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vslideup_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vslideup_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vslideup_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vslideup_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vslideup_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vslideup_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vslideup_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vslideup_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vslideup_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, 
+ vint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vslideup_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vslideup_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vslideup_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vslideup_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vslideup_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vslideup_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vslideup_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vslideup_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vslideup_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vslideup_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, + vint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vslideup_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vslideup_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, + vint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vslideup_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vslideup_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vslideup_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vslideup_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vslideup_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vslideup_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vslideup_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vslideup_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vslideup_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vslideup_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vslideup_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vslideup_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vslideup_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t 
vl) { +vuint8m4_t test_vslideup_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vslideup_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vslideup_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vslideup_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vslideup_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vslideup_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vslideup_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vslideup_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vslideup_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vslideup_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vslideup_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vslideup_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vslideup_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vslideup_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vslideup_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vslideup_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vslideup_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vslideup_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vslideup_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vslideup_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vslideup_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vslideup_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vslideup_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vslideup_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vslideup_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vslideup_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vslideup_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return 
__riscv_vslideup_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vslideup_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vslideup_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vslideup_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vslideup_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vslideup_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vslideup_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vslideup_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vslideup_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vslideup_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsll.c b/auto-generated/policy_funcs/llvm-api-tests/vsll.c index b63c36f55..9e1933279 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vsll.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vsll.c @@ -5,1410 +5,1804 @@ #include <riscv_vector.h> -vint8mf8_t test_vsll_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsll_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vsll_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vsll_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vsll_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vsll_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsll_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vsll_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vsll_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vsll_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vsll_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsll_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vsll_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vsll_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vsll_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vsll_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vsll_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vsll_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vsll_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vsll_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vsll_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, + 
size_t vl) { return __riscv_vsll_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vsll_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vsll_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vsll_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vsll_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vsll_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vsll_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vsll_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vsll_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vsll_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vsll_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vsll_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vsll_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vsll_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsll_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vsll_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vsll_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vsll_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vsll_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsll_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vsll_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vsll_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vsll_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vsll_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vsll_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vsll_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vsll_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vsll_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vsll_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vsll_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vsll_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vsll_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vsll_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vsll_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vsll_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vsll_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vsll_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vsll_vx_i16m4_tu(vint16m4_t vd, 
vint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vsll_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vsll_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vsll_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vsll_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vsll_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vsll_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsll_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsll_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vsll_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vsll_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vsll_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vsll_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vsll_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vsll_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vsll_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vsll_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vsll_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vsll_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vsll_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vsll_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vsll_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vsll_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vsll_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vsll_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vsll_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vsll_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vsll_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vsll_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vsll_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vsll_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vsll_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vsll_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vsll_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vsll_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vsll_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vsll_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vuint64m2_t 
vs1, size_t vl) { +vint64m2_t test_vsll_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vsll_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vsll_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vsll_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vsll_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vsll_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vsll_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vsll_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vsll_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vsll_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vsll_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vsll_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vsll_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vsll_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_i64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vsll_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsll_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsll_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vsll_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vsll_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vsll_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsll_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsll_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vsll_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vsll_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vsll_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsll_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsll_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vsll_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vsll_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vsll_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsll_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vsll_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vsll_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vsll_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vsll_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsll_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vsll_vv_u8m2_tu(vd, vs2, vs1, vl); } 
-vuint8m2_t test_vsll_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vsll_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vsll_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsll_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vsll_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vsll_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vsll_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vsll_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsll_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vsll_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vsll_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vsll_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsll_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vsll_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vsll_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsll_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vsll_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vsll_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vsll_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsll_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsll_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vsll_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vsll_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vsll_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsll_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsll_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vsll_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vsll_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vsll_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsll_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsll_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vsll_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t 
test_vsll_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vsll_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsll_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsll_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vsll_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vsll_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsll_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsll_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vsll_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vsll_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsll_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsll_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsll_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vsll_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vsll_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsll_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsll_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsll_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vsll_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vsll_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsll_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsll_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsll_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vsll_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vsll_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsll_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsll_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsll_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vsll_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vsll_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsll_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsll_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsll_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vsll_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl) { return 
__riscv_vsll_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vsll_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsll_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsll_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsll_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vsll_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vsll_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsll_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsll_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsll_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vsll_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vsll_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsll_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsll_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsll_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vsll_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsll_vx_u64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vsll_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsll_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsll_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsll_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vsll_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsll_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsll_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsll_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsll_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vsll_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsll_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsll_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsll_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsll_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vsll_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsll_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vsll_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsll_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vsll_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t 
test_vsll_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsll_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vsll_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsll_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vsll_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vsll_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vsll_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vsll_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsll_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vsll_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vsll_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsll_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vsll_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsll_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsll_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vsll_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsll_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsll_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsll_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsll_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vsll_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsll_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsll_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsll_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vsll_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsll_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vsll_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsll_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsll_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vsll_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vsll_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t 
test_vsll_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsll_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsll_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vsll_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsll_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vsll_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsll_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vsll_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vsll_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vsll_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vsll_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsll_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsll_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vsll_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsll_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsll_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vsll_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vsll_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsll_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vsll_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsll_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vsll_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vsll_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsll_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vsll_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsll_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsll_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vsll_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsll_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vsll_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsll_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsll_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t 
rs1, size_t vl) { +vint32m4_t test_vsll_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vsll_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vsll_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsll_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsll_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vsll_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vsll_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vsll_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsll_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vsll_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vsll_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsll_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vsll_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsll_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsll_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vsll_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsll_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vsll_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsll_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsll_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vsll_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsll_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vsll_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsll_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsll_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vsll_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vsll_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsll_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vsll_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsll_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vsll_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsll_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, 
vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsll_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vsll_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsll_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vsll_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsll_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsll_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vsll_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vsll_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsll_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsll_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsll_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsll_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vsll_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsll_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsll_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsll_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsll_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vsll_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsll_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsll_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsll_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vsll_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vsll_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsll_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsll_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsll_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsll_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vsll_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsll_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsll_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t 
test_vsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vsll_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsll_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vsll_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsll_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vsll_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsll_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vsll_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsll_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsll_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vsll_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsll_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsll_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vsll_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsll_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsll_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vsll_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsll_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vsll_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vsll_vx_u32mf2_tum(vbool64_t 
vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsll_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vsll_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vsll_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsll_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vsll_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vsll_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsll_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsll_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vsll_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsll_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsll_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vsll_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsll_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vsll_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vsll_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vsll_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsll_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vsll_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vsll_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vsll_vv_u64m4_tum(vbool16_t vm, 
vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsll_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vsll_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vsll_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsll_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsll_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vsll_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vsll_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsll_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsll_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsll_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vsll_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsll_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsll_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsll_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsll_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vsll_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsll_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsll_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsll_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsll_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vsll_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsll_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vsll_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsll_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vsll_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vsll_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsll_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vsll_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsll_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t 
test_vsll_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vsll_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vsll_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vsll_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsll_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vsll_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vsll_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsll_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vsll_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsll_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsll_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vsll_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsll_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsll_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsll_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsll_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vsll_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsll_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsll_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsll_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vsll_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsll_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vsll_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsll_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsll_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vsll_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vsll_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vsll_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsll_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsll_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vsll_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return 
__riscv_vsll_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsll_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vsll_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsll_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vsll_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vsll_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vsll_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vsll_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsll_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsll_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vsll_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsll_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsll_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vsll_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vsll_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsll_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vsll_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsll_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vsll_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vsll_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsll_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vsll_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsll_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsll_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vsll_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsll_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vsll_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsll_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsll_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vsll_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vsll_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vsll_vv_i32m8_tumu(vbool4_t 
vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsll_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsll_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vsll_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vsll_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vsll_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsll_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vsll_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vsll_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsll_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vsll_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsll_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsll_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vsll_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsll_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vsll_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsll_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsll_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vsll_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsll_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vsll_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsll_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsll_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vsll_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vsll_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsll_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vsll_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsll_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vsll_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsll_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsll_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vsll_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsll_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, 
vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vsll_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsll_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsll_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vsll_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vsll_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsll_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsll_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsll_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsll_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vsll_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsll_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsll_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsll_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsll_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vsll_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsll_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsll_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsll_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vsll_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vsll_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsll_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsll_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsll_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsll_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vsll_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsll_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsll_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vsll_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } 
-vuint16mf2_t test_vsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsll_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vsll_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsll_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vsll_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vsll_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsll_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vsll_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vsll_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsll_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vsll_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vsll_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsll_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsll_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vsll_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vsll_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsll_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vsll_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, 
size_t vl) { +vuint32m1_t test_vsll_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vsll_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vsll_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsll_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vsll_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vsll_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsll_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vsll_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vsll_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsll_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vsll_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vsll_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsll_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vsll_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vsll_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsll_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vsll_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vsll_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsll_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return 
__riscv_vsll_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vsll_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsll_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vsll_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vsll_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vsll_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsll_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsll_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsll_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vsll_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsll_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsll_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsll_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsll_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vsll_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsll_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsll_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsll_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsll_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vsll_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsll_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vsll_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsll_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vsll_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vsll_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsll_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vsll_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsll_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vsll_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vsll_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m2_mu(vm, vd, 
vs2, rs1, vl); } -vint8m4_t test_vsll_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vsll_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsll_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vsll_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vsll_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsll_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vsll_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsll_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsll_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vsll_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsll_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsll_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsll_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsll_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vsll_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsll_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsll_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsll_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vsll_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsll_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vsll_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsll_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsll_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vsll_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vsll_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vsll_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsll_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsll_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vsll_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsll_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vsll_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsll_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } 
-vint16m4_t test_vsll_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vsll_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vsll_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vsll_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsll_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsll_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vsll_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsll_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsll_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vsll_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vsll_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsll_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vsll_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsll_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vsll_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vsll_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsll_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vsll_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsll_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsll_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vsll_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsll_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vsll_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsll_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsll_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vsll_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vsll_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vsll_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsll_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsll_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vsll_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t 
test_vsll_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vsll_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsll_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vsll_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vsll_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsll_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vsll_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsll_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsll_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vsll_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsll_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vsll_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsll_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsll_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vsll_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsll_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vsll_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsll_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsll_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vsll_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vsll_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsll_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsll_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsll_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vsll_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsll_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsll_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsll_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsll_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vsll_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsll_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsll_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsll_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } 
-vuint8mf2_t test_vsll_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vsll_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsll_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsll_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsll_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsll_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vsll_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsll_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsll_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsll_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsll_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vsll_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsll_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsll_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsll_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vsll_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vsll_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsll_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsll_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsll_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsll_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vsll_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsll_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsll_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vsll_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsll_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vsll_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); 
} -vuint16m1_t test_vsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsll_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsll_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsll_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vsll_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsll_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsll_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vsll_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsll_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsll_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vsll_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsll_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsll_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vsll_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsll_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsll_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vsll_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsll_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsll_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsll_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vsll_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsll_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + 
vuint32m2_t vs1, size_t vl) { return __riscv_vsll_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vsll_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsll_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsll_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vsll_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsll_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsll_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vsll_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsll_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsll_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vsll_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsll_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsll_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vsll_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsll_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsll_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vsll_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vsll_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsll_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsll_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsll_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t 
test_vsll_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vsll_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsmul.c b/auto-generated/policy_funcs/llvm-api-tests/vsmul.c
index ad9178baa..af70b30d4 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vsmul.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vsmul.c
@@ -5,706 +5,891 @@

 #include <riscv_vector.h>

-vint8mf8_t test_vsmul_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vsmul_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i8mf8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint8mf8_t test_vsmul_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint8mf8_t test_vsmul_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i8mf8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint8mf4_t test_vsmul_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vsmul_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i8mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint8mf4_t test_vsmul_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) {
+vint8mf4_t test_vsmul_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i8mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint8mf2_t test_vsmul_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vsmul_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i8mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint8mf2_t test_vsmul_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) {
+vint8mf2_t test_vsmul_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i8mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint8m1_t test_vsmul_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vsmul_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1,
+                                size_t vl) {
   return __riscv_vsmul_vv_i8m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint8m1_t test_vsmul_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) {
+vint8m1_t test_vsmul_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1,
+                                size_t vl) {
   return __riscv_vsmul_vx_i8m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint8m2_t test_vsmul_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) {
+vint8m2_t test_vsmul_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1,
+                                size_t vl) {
   return __riscv_vsmul_vv_i8m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint8m2_t test_vsmul_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) {
+vint8m2_t test_vsmul_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1,
+                                size_t vl) {
   return __riscv_vsmul_vx_i8m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint8m4_t test_vsmul_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) {
+vint8m4_t test_vsmul_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1,
+                                size_t vl) {
   return __riscv_vsmul_vv_i8m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint8m4_t test_vsmul_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) {
+vint8m4_t test_vsmul_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1,
+                                size_t vl) {
   return __riscv_vsmul_vx_i8m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint8m8_t test_vsmul_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) {
+vint8m8_t test_vsmul_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1,
+                                size_t vl) {
   return __riscv_vsmul_vv_i8m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint8m8_t test_vsmul_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) {
+vint8m8_t test_vsmul_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1,
+                                size_t vl) {
   return __riscv_vsmul_vx_i8m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint16mf4_t test_vsmul_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) {
+vint16mf4_t test_vsmul_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2,
+                                    vint16mf4_t vs1, size_t vl) {
   return __riscv_vsmul_vv_i16mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint16mf4_t test_vsmul_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) {
+vint16mf4_t test_vsmul_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2,
+                                    int16_t rs1, size_t vl) {
   return __riscv_vsmul_vx_i16mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint16mf2_t test_vsmul_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) {
+vint16mf2_t test_vsmul_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2,
+                                    vint16mf2_t vs1, size_t vl) {
   return __riscv_vsmul_vv_i16mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint16mf2_t test_vsmul_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) {
+vint16mf2_t test_vsmul_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2,
+                                    int16_t rs1, size_t vl) {
   return __riscv_vsmul_vx_i16mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint16m1_t test_vsmul_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vsmul_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i16m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint16m1_t test_vsmul_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) {
+vint16m1_t test_vsmul_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i16m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint16m2_t test_vsmul_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) {
+vint16m2_t test_vsmul_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i16m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint16m2_t test_vsmul_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) {
+vint16m2_t test_vsmul_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i16m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint16m4_t test_vsmul_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) {
+vint16m4_t test_vsmul_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i16m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint16m4_t test_vsmul_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) {
+vint16m4_t test_vsmul_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i16m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint16m8_t test_vsmul_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) {
+vint16m8_t test_vsmul_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i16m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint16m8_t test_vsmul_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) {
+vint16m8_t test_vsmul_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i16m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint32mf2_t test_vsmul_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) {
+vint32mf2_t test_vsmul_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2,
+                                    vint32mf2_t vs1, size_t vl) {
   return __riscv_vsmul_vv_i32mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint32mf2_t test_vsmul_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) {
+vint32mf2_t test_vsmul_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2,
+                                    int32_t rs1, size_t vl) {
   return __riscv_vsmul_vx_i32mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint32m1_t test_vsmul_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vsmul_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i32m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint32m1_t test_vsmul_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) {
+vint32m1_t test_vsmul_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i32m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint32m2_t test_vsmul_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) {
+vint32m2_t test_vsmul_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i32m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint32m2_t test_vsmul_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) {
+vint32m2_t test_vsmul_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i32m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint32m4_t test_vsmul_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) {
+vint32m4_t test_vsmul_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i32m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint32m4_t test_vsmul_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) {
+vint32m4_t test_vsmul_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i32m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint32m8_t test_vsmul_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) {
+vint32m8_t test_vsmul_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i32m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint32m8_t test_vsmul_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) {
+vint32m8_t test_vsmul_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i32m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint64m1_t test_vsmul_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vsmul_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i64m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint64m1_t test_vsmul_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) {
+vint64m1_t test_vsmul_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1,
+                                  size_t vl) {
   return __riscv_vsmul_vx_i64m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint64m2_t test_vsmul_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) {
+vint64m2_t test_vsmul_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1,
+                                  size_t vl) {
   return __riscv_vsmul_vv_i64m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint64m2_t test_vsmul_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2,
int64_t rs1, size_t vl) { +vint64m2_t test_vsmul_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vsmul_vx_i64m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vsmul_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vsmul_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vsmul_vv_i64m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vsmul_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vsmul_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vsmul_vx_i64m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vsmul_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vsmul_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vsmul_vv_i64m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vsmul_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vsmul_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vsmul_vx_i64m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vsmul_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsmul_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vsmul_vv_i8mf8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vsmul_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vsmul_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8mf8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vsmul_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsmul_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vsmul_vv_i8mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vsmul_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vsmul_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vsmul_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsmul_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vsmul_vv_i8mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vsmul_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vsmul_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vsmul_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vsmul_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vsmul_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vsmul_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m1_tum(vm, vd, vs2, rs1, 
__RISCV_VXRM_RNU, vl); } -vint8m2_t test_vsmul_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vsmul_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vsmul_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vsmul_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vsmul_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vsmul_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vsmul_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vsmul_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vsmul_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vsmul_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vsmul_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vsmul_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vsmul_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsmul_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vsmul_vv_i16mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vsmul_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vsmul_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vsmul_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsmul_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vsmul_vv_i16mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vsmul_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vsmul_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vsmul_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vsmul_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vsmul_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vsmul_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t 
test_vsmul_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vsmul_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vsmul_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vsmul_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vsmul_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vsmul_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vsmul_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vsmul_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vsmul_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vsmul_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vsmul_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vsmul_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vsmul_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsmul_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vsmul_vv_i32mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vsmul_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vsmul_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vsmul_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vsmul_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vsmul_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vsmul_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vsmul_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vsmul_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vsmul_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vsmul_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } 
-vint32m4_t test_vsmul_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vsmul_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vsmul_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vsmul_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vsmul_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vsmul_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vsmul_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vsmul_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vsmul_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vsmul_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i64m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vsmul_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vsmul_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsmul_vx_i64m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vsmul_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vsmul_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i64m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vsmul_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vsmul_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsmul_vx_i64m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vsmul_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vsmul_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vsmul_vv_i64m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vsmul_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vsmul_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsmul_vx_i64m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vsmul_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vsmul_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vsmul_vv_i64m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vsmul_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vsmul_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsmul_vx_i64m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t 
test_vsmul_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsmul_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vsmul_vv_i8mf8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vsmul_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vsmul_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8mf8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vsmul_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsmul_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vsmul_vv_i8mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vsmul_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vsmul_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vsmul_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsmul_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vsmul_vv_i8mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vsmul_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vsmul_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vsmul_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vsmul_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vsmul_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vsmul_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vsmul_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vsmul_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vsmul_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vsmul_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vsmul_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vsmul_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vsmul_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vsmul_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vsmul_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, 
vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vsmul_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vsmul_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vsmul_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vsmul_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsmul_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vsmul_vv_i16mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vsmul_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vsmul_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vsmul_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsmul_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vsmul_vv_i16mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vsmul_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vsmul_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vsmul_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vsmul_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vsmul_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vsmul_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vsmul_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vsmul_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vsmul_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vsmul_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vsmul_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vsmul_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vsmul_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vsmul_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t 
test_vsmul_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vsmul_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vsmul_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vsmul_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vsmul_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsmul_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vsmul_vv_i32mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vsmul_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vsmul_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vsmul_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vsmul_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vsmul_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vsmul_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vsmul_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vsmul_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vsmul_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vsmul_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vsmul_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vsmul_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vsmul_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vsmul_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vsmul_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vsmul_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vsmul_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vsmul_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m8_tumu(vm, vd, 
vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vsmul_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vsmul_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i64m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vsmul_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vsmul_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsmul_vx_i64m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vsmul_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vsmul_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i64m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vsmul_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vsmul_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsmul_vx_i64m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vsmul_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vsmul_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vsmul_vv_i64m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vsmul_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vsmul_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsmul_vx_i64m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vsmul_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vsmul_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vsmul_vv_i64m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vsmul_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vsmul_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsmul_vx_i64m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vsmul_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsmul_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vsmul_vv_i8mf8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vsmul_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vsmul_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8mf8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vsmul_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsmul_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vsmul_vv_i8mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vsmul_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vsmul_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8mf4_mu(vm, 
vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vsmul_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsmul_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vsmul_vv_i8mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vsmul_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vsmul_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vsmul_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vsmul_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vsmul_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vsmul_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vsmul_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vsmul_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vsmul_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vsmul_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vsmul_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vsmul_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vsmul_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vsmul_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vsmul_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vsmul_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vsmul_vv_i8m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vsmul_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vsmul_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsmul_vx_i8m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vsmul_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsmul_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vsmul_vv_i16mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vsmul_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vsmul_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vsmul_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, 
vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsmul_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vsmul_vv_i16mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vsmul_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vsmul_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vsmul_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vsmul_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vsmul_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vsmul_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vsmul_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vsmul_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vsmul_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vsmul_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vsmul_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vsmul_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vsmul_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vsmul_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vsmul_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vsmul_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vsmul_vv_i16m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vsmul_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vsmul_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsmul_vx_i16m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vsmul_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsmul_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vsmul_vv_i32mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vsmul_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vsmul_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vsmul_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t 
vs1, size_t vl) { +vint32m1_t test_vsmul_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vsmul_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vsmul_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vsmul_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vsmul_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vsmul_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vsmul_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vsmul_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vsmul_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vsmul_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vsmul_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vsmul_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vsmul_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vsmul_vv_i32m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vsmul_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vsmul_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsmul_vx_i32m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vsmul_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vsmul_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vsmul_vv_i64m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vsmul_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vsmul_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsmul_vx_i64m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vsmul_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vsmul_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vsmul_vv_i64m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vsmul_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vsmul_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsmul_vx_i64m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vsmul_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t 
test_vsmul_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+                                  vint64m4_t vs1, size_t vl) {
   return __riscv_vsmul_vv_i64m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint64m4_t test_vsmul_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) {
+vint64m4_t test_vsmul_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+                                  int64_t rs1, size_t vl) {
   return __riscv_vsmul_vx_i64m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }

-vint64m8_t test_vsmul_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) {
+vint64m8_t test_vsmul_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+                                  vint64m8_t vs1, size_t vl) {
   return __riscv_vsmul_vv_i64m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }

-vint64m8_t test_vsmul_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) {
+vint64m8_t test_vsmul_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+                                  int64_t rs1, size_t vl) {
   return __riscv_vsmul_vx_i64m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsra.c b/auto-generated/policy_funcs/llvm-api-tests/vsra.c
index 8ae20e1c3..eaa2bd0af 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vsra.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vsra.c
@@ -5,706 +5,891 @@
 #include <riscv_vector.h>

-vint8mf8_t test_vsra_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vsra_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1,
+                                 size_t vl) {
   return __riscv_vsra_vv_i8mf8_tu(vd, vs2, vs1, vl);
 }

-vint8mf8_t test_vsra_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) {
+vint8mf8_t test_vsra_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, size_t rs1,
+                                 size_t vl) {
   return __riscv_vsra_vx_i8mf8_tu(vd, vs2, rs1, vl);
 }

-vint8mf4_t test_vsra_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vsra_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1,
+                                 size_t vl) {
   return __riscv_vsra_vv_i8mf4_tu(vd, vs2, vs1, vl);
 }

-vint8mf4_t test_vsra_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) {
+vint8mf4_t test_vsra_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, size_t rs1,
+                                 size_t vl) {
   return __riscv_vsra_vx_i8mf4_tu(vd, vs2, rs1, vl);
 }

-vint8mf2_t test_vsra_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vsra_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1,
+                                 size_t vl) {
   return __riscv_vsra_vv_i8mf2_tu(vd, vs2, vs1, vl);
 }

-vint8mf2_t test_vsra_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) {
+vint8mf2_t test_vsra_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, size_t rs1,
+                                 size_t vl) {
   return __riscv_vsra_vx_i8mf2_tu(vd, vs2, rs1, vl);
 }

-vint8m1_t test_vsra_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vint8m1_t test_vsra_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1,
+                               size_t vl) {
   return __riscv_vsra_vv_i8m1_tu(vd, vs2, vs1, vl);
 }

-vint8m1_t test_vsra_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) {
+vint8m1_t test_vsra_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t rs1,
+                               size_t vl) {
   return __riscv_vsra_vx_i8m1_tu(vd, vs2, rs1, vl);
 }

-vint8m2_t test_vsra_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vint8m2_t test_vsra_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1,
+                               size_t vl) {
   return __riscv_vsra_vv_i8m2_tu(vd, vs2, vs1, vl);
 }

-vint8m2_t test_vsra_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl)
{ +vint8m2_t test_vsra_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vsra_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vsra_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vsra_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vsra_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vsra_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vsra_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vsra_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vsra_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vsra_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vsra_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vsra_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsra_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vsra_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vsra_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vsra_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vsra_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsra_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vsra_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vsra_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vsra_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vsra_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vsra_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vsra_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vsra_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vsra_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vsra_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vsra_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vsra_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vsra_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vsra_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vsra_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vsra_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vsra_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vsra_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vsra_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vsra_vv_i16m8_tu(vint16m8_t vd, 
vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vsra_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vsra_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vsra_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vsra_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vsra_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsra_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsra_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vsra_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vsra_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vsra_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vsra_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vsra_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vsra_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vsra_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vsra_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vsra_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vsra_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vsra_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vsra_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vsra_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vsra_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vsra_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vsra_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vsra_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vsra_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vsra_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vsra_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vsra_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vsra_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vsra_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vsra_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vsra_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vsra_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vsra_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vsra_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vsra_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return 
__riscv_vsra_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vsra_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vsra_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vsra_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vsra_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vsra_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vsra_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vsra_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vsra_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vsra_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vsra_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vsra_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vsra_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsra_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vsra_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsra_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsra_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsra_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vsra_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsra_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsra_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsra_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsra_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vsra_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsra_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsra_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsra_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsra_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vsra_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsra_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vsra_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsra_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vsra_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vsra_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsra_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t 
test_vsra_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsra_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vsra_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vsra_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vsra_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vsra_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsra_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vsra_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vsra_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsra_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vsra_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsra_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsra_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vsra_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsra_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsra_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsra_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsra_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vsra_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsra_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsra_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsra_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsra_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsra_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vsra_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsra_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsra_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vsra_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsra_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsra_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vsra_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vsra_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vsra_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsra_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsra_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t 
test_vsra_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsra_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vsra_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsra_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vsra_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vsra_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vsra_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vsra_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsra_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsra_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vsra_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsra_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsra_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsra_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vsra_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vsra_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsra_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsra_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vsra_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsra_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vsra_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vsra_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsra_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vsra_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsra_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsra_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vsra_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsra_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vsra_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsra_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsra_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vsra_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vsra_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t 
vs1, size_t vl) { +vint32m8_t test_vsra_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsra_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsra_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vsra_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vsra_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vsra_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsra_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vsra_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vsra_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsra_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vsra_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsra_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsra_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vsra_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsra_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vsra_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsra_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsra_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vsra_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsra_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vsra_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsra_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsra_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vsra_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vsra_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsra_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsra_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsra_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vsra_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsra_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsra_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsra_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsra_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, 
vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vsra_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsra_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsra_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsra_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsra_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vsra_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsra_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vsra_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsra_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vsra_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vsra_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsra_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vsra_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsra_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vsra_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vsra_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vsra_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vsra_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsra_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vsra_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vsra_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsra_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vsra_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsra_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsra_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vsra_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsra_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsra_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsra_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsra_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vsra_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsra_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsra_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, 
vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsra_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsra_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsra_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vsra_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsra_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsra_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vsra_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsra_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsra_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vsra_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vsra_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vsra_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsra_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsra_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vsra_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsra_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vsra_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsra_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vsra_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vsra_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vsra_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vsra_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsra_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsra_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vsra_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsra_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsra_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsra_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vsra_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vsra_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsra_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsra_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vsra_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsra_vv_i32m1_tumu(vm, vd, vs2, 
vs1, vl); } -vint32m1_t test_vsra_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vsra_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsra_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vsra_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsra_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsra_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vsra_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsra_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vsra_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsra_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsra_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vsra_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vsra_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vsra_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsra_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsra_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vsra_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vsra_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vsra_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsra_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vsra_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vsra_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsra_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vsra_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsra_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsra_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vsra_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsra_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vsra_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsra_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsra_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vsra_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { 
return __riscv_vsra_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsra_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vsra_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsra_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsra_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vsra_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vsra_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsra_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsra_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsra_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vsra_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsra_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsra_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsra_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsra_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vsra_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsra_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsra_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsra_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsra_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vsra_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsra_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vsra_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsra_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vsra_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vsra_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsra_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vsra_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsra_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vsra_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vsra_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vsra_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vsra_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsra_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } 
-vint8m4_t test_vsra_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vsra_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsra_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vsra_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsra_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsra_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vsra_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsra_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsra_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsra_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsra_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vsra_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsra_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsra_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsra_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsra_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsra_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vsra_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsra_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsra_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vsra_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsra_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsra_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vsra_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vsra_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vsra_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsra_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsra_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vsra_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsra_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vsra_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsra_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vsra_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vsra_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t 
test_vsra_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vsra_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsra_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsra_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vsra_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsra_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsra_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsra_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vsra_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vsra_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsra_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsra_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vsra_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsra_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vsra_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vsra_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsra_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vsra_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsra_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsra_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vsra_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsra_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vsra_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsra_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsra_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vsra_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vsra_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vsra_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsra_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsra_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vsra_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vsra_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vsra_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsra_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t 
test_vsra_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vsra_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsra_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vsra_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsra_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsra_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vsra_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsra_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vsra_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsra_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsra_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vsra_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsra_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vsra_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsra_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsra_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vsra_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsra_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsrl.c b/auto-generated/policy_funcs/llvm-api-tests/vsrl.c index 36d4eaadb..c6eea5303 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vsrl.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vsrl.c @@ -5,706 +5,915 @@ #include <riscv_vector.h> -vuint8mf8_t test_vsrl_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsrl_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsrl_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vsrl_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vsrl_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vsrl_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsrl_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsrl_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vsrl_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vsrl_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vsrl_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsrl_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsrl_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vsrl_vx_u8mf2_tu(vuint8mf2_t vd,
vuint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vsrl_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsrl_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vsrl_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vsrl_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vsrl_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vsrl_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsrl_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vsrl_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vsrl_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vsrl_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsrl_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vsrl_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vsrl_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vsrl_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vsrl_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsrl_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vsrl_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vsrl_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vsrl_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vsrl_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsrl_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vsrl_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vsrl_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vsrl_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vsrl_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsrl_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vsrl_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vsrl_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vsrl_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vsrl_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsrl_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsrl_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vsrl_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vsrl_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vsrl_vv_u16m2_tu(vuint16m2_t vd, 
vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsrl_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsrl_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vsrl_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vsrl_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vsrl_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsrl_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsrl_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vsrl_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vsrl_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vsrl_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsrl_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsrl_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vsrl_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vsrl_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vsrl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsrl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsrl_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vsrl_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vsrl_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vsrl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsrl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsrl_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsrl_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vsrl_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vsrl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsrl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsrl_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsrl_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vsrl_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vsrl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsrl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsrl_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsrl_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vsrl_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vsrl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t 
test_vsrl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsrl_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsrl_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vsrl_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vsrl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsrl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsrl_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsrl_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vsrl_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vsrl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsrl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsrl_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsrl_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vsrl_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vsrl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsrl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsrl_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsrl_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vsrl_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vsrl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsrl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsrl_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsrl_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vsrl_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vsrl_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vsrl_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsrl_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vsrl_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsrl_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vsrl_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsrl_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsrl_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vsrl_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsrl_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vsrl_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsrl_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t 
vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsrl_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vsrl_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vsrl_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsrl_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsrl_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsrl_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vsrl_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsrl_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsrl_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsrl_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vsrl_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsrl_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsrl_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vsrl_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vsrl_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsrl_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsrl_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsrl_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vsrl_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsrl_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsrl_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsrl_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vsrl_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vsrl_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vsrl_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsrl_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t 
test_vsrl_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vsrl_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vsrl_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsrl_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vsrl_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsrl_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vsrl_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsrl_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsrl_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsrl_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsrl_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vsrl_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsrl_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsrl_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsrl_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vsrl_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vsrl_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsrl_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsrl_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsrl_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsrl_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vsrl_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsrl_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsrl_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vsrl_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vsrl_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsrl_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsrl_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vsrl_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsrl_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vsrl_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, 
+ vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vsrl_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsrl_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsrl_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vsrl_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsrl_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsrl_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsrl_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsrl_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vsrl_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsrl_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsrl_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsrl_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsrl_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vsrl_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vsrl_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsrl_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vsrl_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vsrl_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vsrl_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vsrl_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsrl_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vsrl_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vsrl_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vsrl_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsrl_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vsrl_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vsrl_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vsrl_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vsrl_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t 
vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsrl_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsrl_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vsrl_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vsrl_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vsrl_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsrl_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vsrl_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsrl_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vsrl_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsrl_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsrl_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vsrl_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsrl_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vsrl_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsrl_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsrl_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vsrl_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vsrl_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsrl_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsrl_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsrl_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vsrl_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsrl_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsrl_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsrl_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vsrl_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsrl_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsrl_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } 
-vuint8m4_t test_vsrl_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vsrl_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsrl_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsrl_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsrl_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vsrl_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsrl_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsrl_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsrl_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vsrl_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vsrl_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vsrl_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsrl_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vsrl_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vsrl_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vsrl_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsrl_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vsrl_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsrl_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vsrl_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsrl_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsrl_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsrl_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vsrl_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsrl_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsrl_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vsrl_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vsrl_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t 
test_vsrl_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsrl_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsrl_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vsrl_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsrl_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vsrl_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsrl_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsrl_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vsrl_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vsrl_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsrl_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsrl_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vsrl_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsrl_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vsrl_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vsrl_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsrl_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsrl_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vsrl_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsrl_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsrl_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vsrl_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsrl_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vsrl_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsrl_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsrl_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vsrl_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsrl_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vsrl_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return 
__riscv_vsrl_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vsrl_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsrl_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vsrl_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vsrl_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vsrl_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vsrl_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsrl_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vsrl_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vsrl_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vsrl_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsrl_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vsrl_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vsrl_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vsrl_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vsrl_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsrl_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vsrl_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vsrl_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vsrl_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vsrl_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsrl_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsrl_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsrl_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vsrl_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsrl_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsrl_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsrl_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsrl_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vsrl_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsrl_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { 
+vuint8mf2_t test_vsrl_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsrl_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vsrl_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vsrl_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsrl_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsrl_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsrl_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vsrl_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsrl_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsrl_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsrl_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vsrl_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsrl_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsrl_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vsrl_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vsrl_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsrl_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsrl_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsrl_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsrl_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vsrl_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsrl_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsrl_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsrl_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vsrl_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vsrl_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vsrl_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsrl_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vsrl_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) 
{ +vuint16mf2_t test_vsrl_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vsrl_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsrl_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsrl_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsrl_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vsrl_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsrl_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsrl_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsrl_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsrl_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vsrl_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsrl_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsrl_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsrl_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vsrl_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vsrl_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsrl_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsrl_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsrl_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsrl_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vsrl_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsrl_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsrl_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsrl_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vsrl_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vsrl_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsrl_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsrl_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsrl_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsrl_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vsrl_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t 
test_vsrl_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsrl_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsrl_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsrl_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vsrl_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsrl_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsrl_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsrl_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsrl_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vsrl_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsrl_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsrl_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsrl_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsrl_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vsrl_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vsrl_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsrl_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsrl_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vsrl_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vsrl_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vsrl_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsrl_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsrl_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vsrl_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vsrl_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vsrl_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsrl_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsrl_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vsrl_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vsrl_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vsrl_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsrl_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return 
__riscv_vsrl_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vsrl_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vsrl_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vsrl_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vssra.c b/auto-generated/policy_funcs/llvm-api-tests/vssra.c index 8a636f455..508444f8b 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vssra.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vssra.c @@ -5,706 +5,891 @@ #include <riscv_vector.h> -vint8mf8_t test_vssra_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vssra_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vssra_vv_i8mf8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vssra_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vssra_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i8mf8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vssra_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vssra_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vssra_vv_i8mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vssra_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vssra_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i8mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vssra_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vssra_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vssra_vv_i8mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vssra_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vssra_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i8mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vssra_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vssra_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vssra_vv_i8m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vssra_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vssra_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i8m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vssra_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vssra_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vssra_vv_i8m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vssra_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vssra_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i8m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vssra_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vssra_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vssra_vv_i8m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vssra_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t
test_vssra_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i8m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vssra_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vssra_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vssra_vv_i8m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vssra_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vssra_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i8m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vssra_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vssra_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vssra_vv_i16mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vssra_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vssra_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i16mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vssra_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vssra_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vssra_vv_i16mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vssra_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vssra_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i16mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vssra_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vssra_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vssra_vv_i16m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vssra_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vssra_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i16m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vssra_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vssra_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vssra_vv_i16m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vssra_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vssra_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i16m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vssra_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vssra_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vssra_vv_i16m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vssra_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vssra_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i16m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vssra_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vssra_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vssra_vv_i16m8_tu(vd, vs2, vs1, 
__RISCV_VXRM_RNU, vl); } -vint16m8_t test_vssra_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vssra_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i16m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vssra_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vssra_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vssra_vv_i32mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vssra_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vssra_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i32mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vssra_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vssra_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vssra_vv_i32m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vssra_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vssra_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i32m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vssra_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vssra_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vssra_vv_i32m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vssra_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vssra_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i32m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vssra_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vssra_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vssra_vv_i32m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vssra_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vssra_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i32m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vssra_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vssra_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vssra_vv_i32m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vssra_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vssra_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i32m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vssra_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vssra_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vssra_vv_i64m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vssra_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vssra_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i64m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vssra_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t 
test_vssra_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vssra_vv_i64m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vssra_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vssra_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i64m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vssra_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vssra_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vssra_vv_i64m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vssra_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vssra_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i64m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vssra_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vssra_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vssra_vv_i64m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vssra_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vssra_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vssra_vx_i64m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vssra_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vssra_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vssra_vv_i8mf8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vssra_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vssra_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8mf8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vssra_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vssra_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vssra_vv_i8mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vssra_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vssra_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vssra_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vssra_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vssra_vv_i8mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vssra_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vssra_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vssra_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vssra_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vssra_vv_i8m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t 
test_vssra_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vssra_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vssra_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vssra_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vssra_vv_i8m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vssra_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vssra_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vssra_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vssra_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vssra_vv_i8m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vssra_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vssra_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vssra_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vssra_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vssra_vv_i8m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vssra_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vssra_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vssra_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vssra_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vssra_vv_i16mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vssra_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vssra_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vssra_vx_i16mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vssra_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vssra_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vssra_vv_i16mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vssra_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vssra_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssra_vx_i16mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vssra_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vssra_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vssra_vv_i16m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vssra_vx_i16m1_tum(vbool16_t vm, 
vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vssra_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vssra_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vssra_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vssra_vv_i16m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vssra_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vssra_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vssra_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vssra_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vssra_vv_i16m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vssra_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vssra_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vssra_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vssra_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vssra_vv_i16m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vssra_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vssra_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vssra_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vssra_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vssra_vv_i32mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vssra_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vssra_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssra_vx_i32mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vssra_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vssra_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vssra_vv_i32m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vssra_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vssra_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vssra_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vssra_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vssra_vv_i32m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t 
test_vssra_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vssra_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vssra_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vssra_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vssra_vv_i32m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vssra_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vssra_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vssra_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vssra_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vssra_vv_i32m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vssra_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vssra_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vssra_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vssra_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vssra_vv_i64m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vssra_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vssra_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i64m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vssra_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vssra_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vssra_vv_i64m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vssra_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vssra_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i64m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vssra_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vssra_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vssra_vv_i64m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vssra_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vssra_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i64m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vssra_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vssra_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vssra_vv_i64m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t 
test_vssra_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vssra_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i64m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vssra_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vssra_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vssra_vv_i8mf8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vssra_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vssra_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8mf8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vssra_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vssra_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vssra_vv_i8mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vssra_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vssra_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vssra_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vssra_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vssra_vv_i8mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vssra_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vssra_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vssra_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vssra_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vssra_vv_i8m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vssra_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vssra_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vssra_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vssra_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vssra_vv_i8m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vssra_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vssra_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vssra_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vssra_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vssra_vv_i8m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vssra_vx_i8m4_tumu(vbool2_t 
vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vssra_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vssra_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vssra_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vssra_vv_i8m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vssra_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vssra_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vssra_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vssra_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vssra_vv_i16mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vssra_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vssra_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vssra_vx_i16mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vssra_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vssra_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vssra_vv_i16mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vssra_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vssra_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssra_vx_i16mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vssra_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vssra_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vssra_vv_i16m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vssra_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vssra_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vssra_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vssra_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vssra_vv_i16m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vssra_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vssra_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vssra_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vssra_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vssra_vv_i16m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } 
-vint16m4_t test_vssra_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vssra_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vssra_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vssra_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vssra_vv_i16m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vssra_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vssra_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vssra_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vssra_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vssra_vv_i32mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vssra_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vssra_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssra_vx_i32mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vssra_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vssra_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vssra_vv_i32m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vssra_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vssra_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vssra_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vssra_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vssra_vv_i32m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vssra_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vssra_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vssra_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vssra_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vssra_vv_i32m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vssra_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vssra_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vssra_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vssra_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return 
__riscv_vssra_vv_i32m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vssra_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vssra_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vssra_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vssra_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vssra_vv_i64m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vssra_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vssra_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i64m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vssra_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vssra_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vssra_vv_i64m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vssra_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) { +vint64m2_t test_vssra_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i64m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vssra_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vint64m4_t test_vssra_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vssra_vv_i64m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m4_t test_vssra_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) { +vint64m4_t test_vssra_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i64m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vssra_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vint64m8_t test_vssra_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vssra_vv_i64m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m8_t test_vssra_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) { +vint64m8_t test_vssra_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i64m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vssra_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vssra_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vssra_vv_i8mf8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf8_t test_vssra_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, size_t rs1, size_t vl) { +vint8mf8_t test_vssra_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8mf8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vssra_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vssra_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t 
vl) { return __riscv_vssra_vv_i8mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf4_t test_vssra_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, size_t rs1, size_t vl) { +vint8mf4_t test_vssra_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vssra_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vssra_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vssra_vv_i8mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8mf2_t test_vssra_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, size_t rs1, size_t vl) { +vint8mf2_t test_vssra_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vssra_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint8m1_t test_vssra_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vssra_vv_i8m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m1_t test_vssra_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, size_t rs1, size_t vl) { +vint8m1_t test_vssra_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vssra_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint8m2_t test_vssra_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vssra_vv_i8m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m2_t test_vssra_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, size_t rs1, size_t vl) { +vint8m2_t test_vssra_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vssra_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint8m4_t test_vssra_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vssra_vv_i8m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m4_t test_vssra_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, size_t rs1, size_t vl) { +vint8m4_t test_vssra_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vssra_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vint8m8_t test_vssra_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vssra_vv_i8m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint8m8_t test_vssra_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, size_t rs1, size_t vl) { +vint8m8_t test_vssra_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i8m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t test_vssra_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vssra_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vssra_vv_i16mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf4_t 
test_vssra_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, size_t rs1, size_t vl) { +vint16mf4_t test_vssra_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vssra_vx_i16mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vssra_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vssra_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vssra_vv_i16mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16mf2_t test_vssra_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, size_t rs1, size_t vl) { +vint16mf2_t test_vssra_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssra_vx_i16mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vssra_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint16m1_t test_vssra_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vssra_vv_i16m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m1_t test_vssra_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, size_t rs1, size_t vl) { +vint16m1_t test_vssra_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vssra_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint16m2_t test_vssra_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vssra_vv_i16m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m2_t test_vssra_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, size_t rs1, size_t vl) { +vint16m2_t test_vssra_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vssra_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint16m4_t test_vssra_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vssra_vv_i16m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m4_t test_vssra_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, size_t rs1, size_t vl) { +vint16m4_t test_vssra_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vssra_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vint16m8_t test_vssra_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vssra_vv_i16m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint16m8_t test_vssra_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, size_t rs1, size_t vl) { +vint16m8_t test_vssra_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i16m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t test_vssra_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vssra_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vssra_vv_i32mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32mf2_t 
test_vssra_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, size_t rs1, size_t vl) { +vint32mf2_t test_vssra_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssra_vx_i32mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vssra_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint32m1_t test_vssra_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vssra_vv_i32m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m1_t test_vssra_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, size_t rs1, size_t vl) { +vint32m1_t test_vssra_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vssra_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint32m2_t test_vssra_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vssra_vv_i32m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m2_t test_vssra_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, size_t rs1, size_t vl) { +vint32m2_t test_vssra_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vssra_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint32m4_t test_vssra_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vssra_vv_i32m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m4_t test_vssra_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, size_t rs1, size_t vl) { +vint32m4_t test_vssra_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vssra_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vint32m8_t test_vssra_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vssra_vv_i32m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint32m8_t test_vssra_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, size_t rs1, size_t vl) { +vint32m8_t test_vssra_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i32m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vssra_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vint64m1_t test_vssra_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vssra_vv_i64m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m1_t test_vssra_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, size_t rs1, size_t vl) { +vint64m1_t test_vssra_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssra_vx_i64m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vssra_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vint64m2_t test_vssra_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vssra_vv_i64m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vint64m2_t test_vssra_vx_i64m2_mu(vbool32_t vm, 
vint64m2_t vd, vint64m2_t vs2, size_t rs1, size_t vl) {
+vint64m2_t test_vssra_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vssra_vx_i64m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint64m4_t test_vssra_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vint64m4_t test_vssra_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+                                  vuint64m4_t vs1, size_t vl) {
   return __riscv_vssra_vv_i64m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint64m4_t test_vssra_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, size_t rs1, size_t vl) {
+vint64m4_t test_vssra_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vssra_vx_i64m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint64m8_t test_vssra_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vint64m8_t test_vssra_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+                                  vuint64m8_t vs1, size_t vl) {
   return __riscv_vssra_vv_i64m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vint64m8_t test_vssra_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, size_t rs1, size_t vl) {
+vint64m8_t test_vssra_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+                                  size_t rs1, size_t vl) {
   return __riscv_vssra_vx_i64m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vssrl.c b/auto-generated/policy_funcs/llvm-api-tests/vssrl.c
index e2f0644a3..1dbc1cf43 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vssrl.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vssrl.c
@@ -5,706 +5,933 @@
 #include <riscv_vector.h>
 
-vuint8mf8_t test_vssrl_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vssrl_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+                                   vuint8mf8_t vs1, size_t vl) {
   return __riscv_vssrl_vv_u8mf8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint8mf8_t test_vssrl_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) {
+vuint8mf8_t test_vssrl_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1,
+                                   size_t vl) {
   return __riscv_vssrl_vx_u8mf8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint8mf4_t test_vssrl_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vssrl_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+                                   vuint8mf4_t vs1, size_t vl) {
   return __riscv_vssrl_vv_u8mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint8mf4_t test_vssrl_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) {
+vuint8mf4_t test_vssrl_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1,
+                                   size_t vl) {
   return __riscv_vssrl_vx_u8mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint8mf2_t test_vssrl_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vssrl_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+                                   vuint8mf2_t vs1, size_t vl) {
   return __riscv_vssrl_vv_u8mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint8mf2_t test_vssrl_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) {
+vuint8mf2_t test_vssrl_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1,
+                                   size_t vl) {
   return __riscv_vssrl_vx_u8mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl);
 }
 
-vuint8m1_t test_vssrl_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vssrl_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1,
+                                 size_t vl) {
   return __riscv_vssrl_vv_u8m1_tu(vd, vs2, vs1,
__RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vssrl_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vssrl_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u8m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vssrl_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vssrl_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vssrl_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vssrl_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u8m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vssrl_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vssrl_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vssrl_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vssrl_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u8m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vssrl_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vssrl_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vssrl_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vssrl_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u8m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vssrl_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vssrl_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vssrl_vv_u16mf4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vssrl_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vssrl_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16mf4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vssrl_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vssrl_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vssrl_vv_u16mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vssrl_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vssrl_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vssrl_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vssrl_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vssrl_vv_u16m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vssrl_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vssrl_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u16m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vssrl_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t 
vl) { +vuint16m2_t test_vssrl_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vssrl_vv_u16m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vssrl_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vssrl_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u16m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vssrl_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vssrl_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vssrl_vv_u16m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vssrl_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vssrl_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u16m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vssrl_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vssrl_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vssrl_vv_u16m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vssrl_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vssrl_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u16m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vssrl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vssrl_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vssrl_vv_u32mf2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vssrl_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vssrl_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32mf2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vssrl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vssrl_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vssrl_vv_u32m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vssrl_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vssrl_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u32m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vssrl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vssrl_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vssrl_vv_u32m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vssrl_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vssrl_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u32m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vssrl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vssrl_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vssrl_vv_u32m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vssrl_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vssrl_vx_u32m4_tu(vuint32m4_t vd, 
vuint32m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u32m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vssrl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vssrl_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vssrl_vv_u32m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vssrl_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vssrl_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u32m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vssrl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vssrl_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vssrl_vv_u64m1_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vssrl_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vssrl_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u64m1_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vssrl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vssrl_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vssrl_vv_u64m2_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vssrl_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vssrl_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u64m2_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vssrl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vssrl_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vssrl_vv_u64m4_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vssrl_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vssrl_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u64m4_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vssrl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vssrl_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vssrl_vv_u64m8_tu(vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vssrl_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vssrl_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u64m8_tu(vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vssrl_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vssrl_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8mf8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vssrl_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vssrl_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8mf8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vssrl_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vssrl_vv_u8mf4_tum(vbool32_t 
vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vssrl_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vssrl_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vssrl_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vssrl_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vssrl_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vssrl_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vssrl_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vssrl_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vssrl_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vssrl_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vssrl_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vssrl_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vssrl_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vssrl_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vssrl_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vssrl_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vssrl_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vssrl_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vssrl_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vssrl_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vssrl_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vssrl_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vssrl_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vssrl_vv_u16mf4_tum(vbool64_t vm, 
vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16mf4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vssrl_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vssrl_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16mf4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vssrl_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vssrl_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vssrl_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vssrl_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vssrl_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vssrl_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vssrl_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vssrl_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vssrl_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vssrl_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vssrl_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vssrl_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vssrl_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vssrl_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vssrl_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vssrl_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vssrl_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vssrl_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vssrl_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vssrl_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vssrl_vv_u32mf2_tum(vbool64_t vm, 
vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vssrl_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32mf2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vssrl_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vssrl_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32mf2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vssrl_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vssrl_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vssrl_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vssrl_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vssrl_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vssrl_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vssrl_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vssrl_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vssrl_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vssrl_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vssrl_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vssrl_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vssrl_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vssrl_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vssrl_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vssrl_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vssrl_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vssrl_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vssrl_vv_u64m1_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vssrl_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vssrl_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return 
__riscv_vssrl_vx_u64m1_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vssrl_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vssrl_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u64m2_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vssrl_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vssrl_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u64m2_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vssrl_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vssrl_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u64m4_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vssrl_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vssrl_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u64m4_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vssrl_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vssrl_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vssrl_vv_u64m8_tum(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vssrl_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vssrl_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u64m8_tum(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vssrl_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vssrl_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8mf8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vssrl_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vssrl_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8mf8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vssrl_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vssrl_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vssrl_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vssrl_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vssrl_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vssrl_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vssrl_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t 
test_vssrl_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vssrl_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vssrl_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vssrl_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vssrl_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vssrl_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vssrl_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vssrl_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vssrl_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vssrl_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vssrl_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vssrl_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vssrl_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vssrl_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vssrl_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vssrl_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vssrl_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vssrl_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vssrl_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16mf4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vssrl_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vssrl_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u16mf4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vssrl_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vssrl_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vssrl_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, 
vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vssrl_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u16mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vssrl_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vssrl_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vssrl_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vssrl_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vssrl_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vssrl_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vssrl_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vssrl_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vssrl_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vssrl_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vssrl_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vssrl_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vssrl_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vssrl_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vssrl_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vssrl_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vssrl_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vssrl_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32mf2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vssrl_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vssrl_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, + size_t vl) { return __riscv_vssrl_vx_u32mf2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vssrl_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vssrl_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return 
__riscv_vssrl_vv_u32m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vssrl_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vssrl_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vssrl_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vssrl_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vssrl_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vssrl_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vssrl_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vssrl_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vssrl_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vssrl_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vssrl_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vssrl_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vssrl_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vssrl_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vssrl_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vssrl_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vssrl_vv_u64m1_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vssrl_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vssrl_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u64m1_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vssrl_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vssrl_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u64m2_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vssrl_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vssrl_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u64m2_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vssrl_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t 
test_vssrl_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u64m4_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vssrl_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vssrl_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u64m4_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vssrl_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vssrl_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vssrl_vv_u64m8_tumu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vssrl_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vssrl_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u64m8_tumu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vssrl_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vssrl_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8mf8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf8_t test_vssrl_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, size_t rs1, size_t vl) { +vuint8mf8_t test_vssrl_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8mf8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vssrl_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vssrl_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf4_t test_vssrl_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, size_t rs1, size_t vl) { +vuint8mf4_t test_vssrl_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vssrl_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vssrl_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u8mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8mf2_t test_vssrl_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, size_t rs1, size_t vl) { +vuint8mf2_t test_vssrl_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vssrl_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vssrl_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m1_t test_vssrl_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, size_t rs1, size_t vl) { +vuint8m1_t test_vssrl_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vssrl_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t 
vs1, size_t vl) { +vuint8m2_t test_vssrl_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m2_t test_vssrl_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, size_t rs1, size_t vl) { +vuint8m2_t test_vssrl_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vssrl_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vssrl_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m4_t test_vssrl_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, size_t rs1, size_t vl) { +vuint8m4_t test_vssrl_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vssrl_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vssrl_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vssrl_vv_u8m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint8m8_t test_vssrl_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, size_t rs1, size_t vl) { +vuint8m8_t test_vssrl_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u8m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vssrl_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vssrl_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16mf4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf4_t test_vssrl_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, size_t rs1, size_t vl) { +vuint16mf4_t test_vssrl_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16mf4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vssrl_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vssrl_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16mf2_t test_vssrl_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, size_t rs1, size_t vl) { +vuint16mf2_t test_vssrl_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vssrl_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vssrl_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vssrl_vv_u16m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m1_t test_vssrl_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, size_t rs1, size_t vl) { +vuint16m1_t test_vssrl_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vssrl_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, 
vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vssrl_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vssrl_vv_u16m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m2_t test_vssrl_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, size_t rs1, size_t vl) { +vuint16m2_t test_vssrl_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vssrl_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vssrl_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vssrl_vv_u16m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m4_t test_vssrl_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, size_t rs1, size_t vl) { +vuint16m4_t test_vssrl_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vssrl_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vssrl_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vssrl_vv_u16m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint16m8_t test_vssrl_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, size_t rs1, size_t vl) { +vuint16m8_t test_vssrl_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u16m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vssrl_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vssrl_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32mf2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32mf2_t test_vssrl_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, size_t rs1, size_t vl) { +vuint32mf2_t test_vssrl_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32mf2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vssrl_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vssrl_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m1_t test_vssrl_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, size_t rs1, size_t vl) { +vuint32m1_t test_vssrl_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vssrl_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vssrl_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u32m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m2_t test_vssrl_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, size_t rs1, size_t vl) { +vuint32m2_t test_vssrl_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t 
test_vssrl_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vssrl_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vssrl_vv_u32m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m4_t test_vssrl_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, size_t rs1, size_t vl) { +vuint32m4_t test_vssrl_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vssrl_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vssrl_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vssrl_vv_u32m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint32m8_t test_vssrl_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, size_t rs1, size_t vl) { +vuint32m8_t test_vssrl_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u32m8_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vssrl_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vssrl_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vssrl_vv_u64m1_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m1_t test_vssrl_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, size_t rs1, size_t vl) { +vuint64m1_t test_vssrl_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u64m1_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vssrl_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vssrl_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vssrl_vv_u64m2_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m2_t test_vssrl_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, size_t rs1, size_t vl) { +vuint64m2_t test_vssrl_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u64m2_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vssrl_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vssrl_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vssrl_vv_u64m4_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m4_t test_vssrl_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, size_t rs1, size_t vl) { +vuint64m4_t test_vssrl_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, size_t rs1, size_t vl) { return __riscv_vssrl_vx_u64m4_mu(vm, vd, vs2, rs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vssrl_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vssrl_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vssrl_vv_u64m8_mu(vm, vd, vs2, vs1, __RISCV_VXRM_RNU, vl); } -vuint64m8_t test_vssrl_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, size_t rs1, size_t vl) { +vuint64m8_t test_vssrl_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + size_t rs1, size_t vl) { return __riscv_vssrl_vx_u64m8_mu(vm, vd, vs2, rs1, 
__RISCV_VXRM_RNU, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vssub.c b/auto-generated/policy_funcs/llvm-api-tests/vssub.c index 2abb1a006..e57019f8c 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vssub.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vssub.c @@ -5,706 +5,891 @@ #include <riscv_vector.h> -vint8mf8_t test_vssub_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vssub_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vssub_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vssub_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vssub_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vssub_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vssub_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vssub_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vssub_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vssub_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vssub_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vssub_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vssub_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vssub_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vssub_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vssub_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vssub_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vssub_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vssub_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vssub_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vssub_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vssub_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vssub_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vssub_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vssub_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vssub_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vssub_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vssub_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vssub_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vssub_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vssub_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vssub_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vssub_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vssub_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vssub_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vssub_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vssub_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vssub_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vssub_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vssub_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t
rs1, + size_t vl) { return __riscv_vssub_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vssub_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vssub_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vssub_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vssub_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vssub_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vssub_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vssub_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vssub_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vssub_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vssub_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vssub_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vssub_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vssub_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vssub_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vssub_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vssub_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vssub_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vssub_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vssub_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vssub_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vssub_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vssub_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vssub_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vssub_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vssub_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vssub_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vssub_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vssub_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vssub_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vssub_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vssub_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vssub_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vssub_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vssub_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vssub_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vssub_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vssub_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vssub_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vssub_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t 
test_vssub_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vssub_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vssub_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vssub_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vssub_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vssub_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vssub_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vssub_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vssub_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vssub_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vssub_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vssub_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vssub_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vssub_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vssub_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vssub_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vssub_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vssub_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vssub_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vssub_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vssub_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vssub_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vssub_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vssub_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vssub_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vssub_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vssub_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vssub_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vssub_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vssub_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vssub_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vssub_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vssub_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vssub_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vssub_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vssub_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vssub_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vssub_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vssub_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vssub_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vssub_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vssub_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vssub_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vssub_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, 
vint64m8_t vs1, + size_t vl) { return __riscv_vssub_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vssub_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vssub_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vssub_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vssub_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vssub_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vssub_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vssub_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vssub_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vssub_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vssub_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vssub_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vssub_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vssub_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vssub_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vssub_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vssub_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vssub_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vssub_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vssub_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vssub_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vssub_vv_i8m1_tum(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vssub_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vssub_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vssub_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vssub_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vssub_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vssub_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vssub_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vssub_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vssub_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vssub_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vssub_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vssub_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return 
__riscv_vssub_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vssub_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vssub_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vssub_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vssub_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vssub_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vssub_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vssub_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vssub_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vssub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vssub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vssub_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vssub_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vssub_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vssub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vssub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vssub_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vssub_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vssub_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vssub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vssub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vssub_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vssub_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vssub_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vssub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vssub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vssub_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vssub_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vssub_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vssub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vssub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vssub_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vssub_vv_i16m8_tum(vbool2_t vm, 
vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vssub_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vssub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vssub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vssub_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vssub_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vssub_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vssub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vssub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vssub_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vssub_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vssub_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vssub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vssub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vssub_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vssub_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vssub_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vssub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vssub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vssub_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vssub_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vssub_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vssub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vssub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vssub_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vssub_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vssub_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vssub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vssub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vssub_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vssub_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vssub_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vssub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, 
int64_t rs1, size_t vl) { +vint64m1_t test_vssub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vssub_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vssub_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vssub_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vssub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vssub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vssub_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vssub_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vssub_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vssub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vssub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vssub_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vssub_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vssub_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vssub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vssub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vssub_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vssub_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vssub_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vssub_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vssub_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vssub_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vssub_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vssub_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vssub_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vssub_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vssub_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vssub_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vssub_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vssub_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vssub_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t 
test_vssub_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vssub_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vssub_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vssub_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vssub_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vssub_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vssub_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vssub_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vssub_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vssub_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vssub_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vssub_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vssub_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vssub_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vssub_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vssub_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vssub_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vssub_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vssub_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vssub_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vssub_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vssub_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vssub_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vssub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vssub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vssub_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vssub_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vssub_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vssub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vssub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vssub_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vssub_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return 
__riscv_vssub_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vssub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vssub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vssub_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vssub_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vssub_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vssub_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vssub_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vssub_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vssub_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vssub_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vssub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vssub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vssub_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vssub_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vssub_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vssub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vssub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vssub_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vssub_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vssub_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vssub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vssub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vssub_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vssub_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vssub_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vssub_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vssub_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vssub_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vssub_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vssub_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vssub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t 
test_vssub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vssub_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vssub_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vssub_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vssub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vssub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vssub_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vssub_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vssub_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vssub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vssub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vssub_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vssub_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vssub_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vssub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vssub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vssub_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vssub_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vssub_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vssub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vssub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vssub_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vssub_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vssub_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vssub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vssub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vssub_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vssub_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vssub_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vssub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vssub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t 
test_vssub_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vssub_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vssub_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vssub_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vssub_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vssub_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vssub_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vssub_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vssub_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vssub_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vssub_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vssub_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vssub_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vssub_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vssub_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vssub_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vssub_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vssub_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vssub_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vssub_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vssub_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vssub_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vssub_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vssub_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vssub_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vssub_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vssub_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vssub_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vssub_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vssub_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vssub_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vssub_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vssub_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vssub_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t 
rs1, size_t vl) { +vint8m8_t test_vssub_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vssub_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vssub_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vssub_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vssub_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vssub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vssub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vssub_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vssub_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vssub_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vssub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vssub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vssub_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vssub_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vssub_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vssub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vssub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vssub_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vssub_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vssub_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vssub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vssub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vssub_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vssub_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vssub_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vssub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vssub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vssub_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vssub_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vssub_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vssub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vssub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vssub_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t 
test_vssub_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vssub_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vssub_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vssub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vssub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vssub_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vssub_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vssub_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vssub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vssub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vssub_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vssub_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vssub_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vssub_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vssub_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vssub_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vssub_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vssub_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vssub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vssub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vssub_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vssub_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vssub_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vssub_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vssub_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vssub_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vssub_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vssub_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vssub_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vssub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vssub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vssub_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vssub_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return 
__riscv_vssub_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vssub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vssub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vssub_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vssub_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vssub_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vssub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vssub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vssub_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vssub_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vssub_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vssub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vssub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vssub_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vssubu.c b/auto-generated/policy_funcs/llvm-api-tests/vssubu.c index ad110849e..398d76664 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vssubu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vssubu.c @@ -5,706 +5,957 @@ #include <riscv_vector.h> -vuint8mf8_t test_vssubu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vssubu_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vssubu_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vssubu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vssubu_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vssubu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vssubu_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vssubu_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vssubu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vssubu_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vssubu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vssubu_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vssubu_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vssubu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vssubu_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vssubu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vssubu_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vssubu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t
rs1, size_t vl) { +vuint8m1_t test_vssubu_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vssubu_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vssubu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vssubu_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vssubu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vssubu_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vssubu_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vssubu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vssubu_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vssubu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vssubu_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vssubu_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vssubu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vssubu_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vssubu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vssubu_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vssubu_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vssubu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vssubu_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vssubu_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vssubu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vssubu_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vssubu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vssubu_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vssubu_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vssubu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vssubu_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vssubu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vssubu_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vssubu_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vssubu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vssubu_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vssubu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vssubu_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vssubu_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vssubu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t 
test_vssubu_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vssubu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vssubu_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vssubu_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vssubu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vssubu_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vssubu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vssubu_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vssubu_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vssubu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vssubu_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vssubu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vssubu_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vssubu_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vssubu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vssubu_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vssubu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vssubu_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vssubu_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vssubu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vssubu_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vssubu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vssubu_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vssubu_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vssubu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vssubu_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vssubu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vssubu_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vssubu_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vssubu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vssubu_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vssubu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vssubu_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vssubu_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vssubu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, 
size_t vl) { +vuint32m8_t test_vssubu_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vssubu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vssubu_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vssubu_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vssubu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vssubu_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vssubu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vssubu_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vssubu_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vssubu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vssubu_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vssubu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vssubu_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vssubu_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vssubu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vssubu_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vssubu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vssubu_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vssubu_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vssubu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vssubu_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vssubu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vssubu_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vssubu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vssubu_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vssubu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vssubu_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vssubu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vssubu_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vssubu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vssubu_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t 
vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vssubu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vssubu_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vssubu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vssubu_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vssubu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vssubu_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vssubu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vssubu_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vssubu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vssubu_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vssubu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vssubu_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vssubu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vssubu_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vssubu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vssubu_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vssubu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vssubu_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vssubu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vssubu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vssubu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vssubu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vssubu_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vssubu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vssubu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vssubu_vx_u16mf2_tum(vbool32_t vm, 
vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vssubu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vssubu_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vssubu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vssubu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vssubu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vssubu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vssubu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vssubu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vssubu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vssubu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vssubu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vssubu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vssubu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vssubu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vssubu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vssubu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vssubu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vssubu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vssubu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vssubu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vssubu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vssubu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vssubu_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vssubu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vssubu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vssubu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t 
vl) { +vuint32m1_t test_vssubu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vssubu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vssubu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vssubu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vssubu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vssubu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vssubu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vssubu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vssubu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vssubu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vssubu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vssubu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vssubu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vssubu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vssubu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vssubu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vssubu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vssubu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vssubu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vssubu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vssubu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vssubu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vssubu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vssubu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vssubu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + 
vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vssubu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vssubu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vssubu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vssubu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vssubu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vssubu_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vssubu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vssubu_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vssubu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vssubu_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vssubu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vssubu_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vssubu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vssubu_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vssubu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vssubu_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vssubu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vssubu_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vssubu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vssubu_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vssubu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vssubu_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vssubu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vssubu_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } 
-vuint8m4_t test_vssubu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vssubu_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vssubu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vssubu_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vssubu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vssubu_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vssubu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vssubu_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vssubu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vssubu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vssubu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vssubu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vssubu_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vssubu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vssubu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vssubu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vssubu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vssubu_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vssubu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vssubu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vssubu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vssubu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vssubu_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vssubu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vssubu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vssubu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vssubu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vssubu_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vssubu_vv_u16m4_tumu(vbool4_t 
vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vssubu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vssubu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vssubu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vssubu_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vssubu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vssubu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vssubu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vssubu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vssubu_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vssubu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vssubu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vssubu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vssubu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vssubu_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vssubu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vssubu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vssubu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vssubu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vssubu_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vssubu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vssubu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vssubu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vssubu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vssubu_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vssubu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vssubu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vssubu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vssubu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vssubu_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vssubu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t 
vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vssubu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vssubu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vssubu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vssubu_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vssubu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vssubu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vssubu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vssubu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vssubu_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vssubu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vssubu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vssubu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vssubu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vssubu_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vssubu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vssubu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vssubu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vssubu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vssubu_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vssubu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vssubu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vssubu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vssubu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vssubu_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vssubu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vssubu_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vssubu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vssubu_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vssubu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { 
+vuint8mf4_t test_vssubu_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vssubu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vssubu_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vssubu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vssubu_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vssubu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vssubu_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vssubu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vssubu_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vssubu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vssubu_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vssubu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vssubu_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vssubu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vssubu_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vssubu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vssubu_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vssubu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vssubu_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vssubu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vssubu_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vssubu_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vssubu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vssubu_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vssubu_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vssubu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vssubu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t 
test_vssubu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vssubu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vssubu_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vssubu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vssubu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vssubu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vssubu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vssubu_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vssubu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vssubu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vssubu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vssubu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vssubu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vssubu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vssubu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vssubu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vssubu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vssubu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vssubu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vssubu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vssubu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vssubu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vssubu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vssubu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vssubu_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vssubu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vssubu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vssubu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, 
uint32_t rs1, size_t vl) { +vuint32mf2_t test_vssubu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vssubu_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vssubu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vssubu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vssubu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vssubu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vssubu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vssubu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vssubu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vssubu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vssubu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vssubu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vssubu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vssubu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vssubu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vssubu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vssubu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vssubu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vssubu_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vssubu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vssubu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vssubu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vssubu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vssubu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vssubu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vssubu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vssubu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t 
vs2, uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vssubu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vssubu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vssubu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vssubu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vssubu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vssubu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vssubu_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vssubu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vssubu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vssubu_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vsub.c b/auto-generated/policy_funcs/llvm-api-tests/vsub.c index 7a28b0251..62d512bbc 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vsub.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vsub.c @@ -5,1410 +5,1810 @@ #include <riscv_vector.h> -vint8mf8_t test_vsub_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsub_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vsub_vv_i8mf8_tu(vd, vs2, vs1, vl); } -vint8mf8_t test_vsub_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vsub_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsub_vx_i8mf8_tu(vd, vs2, rs1, vl); } -vint8mf4_t test_vsub_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsub_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vsub_vv_i8mf4_tu(vd, vs2, vs1, vl); } -vint8mf4_t test_vsub_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vsub_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsub_vx_i8mf4_tu(vd, vs2, rs1, vl); } -vint8mf2_t test_vsub_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsub_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_i8mf2_tu(vd, vs2, vs1, vl); } -vint8mf2_t test_vsub_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vsub_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsub_vx_i8mf2_tu(vd, vs2, rs1, vl); } -vint8m1_t test_vsub_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vsub_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vsub_vv_i8m1_tu(vd, vs2, vs1, vl); } -vint8m1_t test_vsub_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vsub_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsub_vx_i8m1_tu(vd, vs2, rs1, vl); } -vint8m2_t test_vsub_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vsub_vv_i8m2_tu(vint8m2_t vd, 
vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vsub_vv_i8m2_tu(vd, vs2, vs1, vl); } -vint8m2_t test_vsub_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vsub_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsub_vx_i8m2_tu(vd, vs2, rs1, vl); } -vint8m4_t test_vsub_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vsub_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vsub_vv_i8m4_tu(vd, vs2, vs1, vl); } -vint8m4_t test_vsub_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vsub_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsub_vx_i8m4_tu(vd, vs2, rs1, vl); } -vint8m8_t test_vsub_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vsub_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, + size_t vl) { return __riscv_vsub_vv_i8m8_tu(vd, vs2, vs1, vl); } -vint8m8_t test_vsub_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vsub_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vsub_vx_i8m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vsub_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsub_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vsub_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vsub_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vsub_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vsub_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vsub_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsub_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vsub_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vsub_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vsub_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vsub_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vsub_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vsub_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vsub_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vsub_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vsub_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vsub_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vsub_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vsub_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vsub_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vsub_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vsub_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vsub_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vsub_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vsub_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vsub_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vsub_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t 
test_vsub_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vsub_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vsub_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vsub_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, + size_t vl) { return __riscv_vsub_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vsub_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vsub_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vsub_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vsub_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsub_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vsub_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vsub_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vsub_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vsub_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vsub_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vsub_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vsub_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vsub_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vsub_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vsub_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vsub_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vsub_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vsub_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vsub_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vsub_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vsub_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vsub_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vsub_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vsub_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vsub_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vsub_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vsub_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vsub_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vsub_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, + size_t vl) { return __riscv_vsub_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vsub_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vsub_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vsub_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vsub_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vsub_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vsub_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vsub_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vsub_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, + size_t vl) { return __riscv_vsub_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t 
test_vsub_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vsub_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, + size_t vl) { return __riscv_vsub_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vsub_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vsub_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, + size_t vl) { return __riscv_vsub_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vsub_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vsub_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, + size_t vl) { return __riscv_vsub_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vsub_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vsub_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, + size_t vl) { return __riscv_vsub_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vsub_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vsub_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, + size_t vl) { return __riscv_vsub_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vsub_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vsub_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, + size_t vl) { return __riscv_vsub_vx_i64m8_tu(vd, vs2, rs1, vl); } -vuint8mf8_t test_vsub_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsub_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsub_vv_u8mf8_tu(vd, vs2, vs1, vl); } -vuint8mf8_t test_vsub_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vsub_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vsub_vx_u8mf8_tu(vd, vs2, rs1, vl); } -vuint8mf4_t test_vsub_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsub_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsub_vv_u8mf4_tu(vd, vs2, vs1, vl); } -vuint8mf4_t test_vsub_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vsub_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vsub_vx_u8mf4_tu(vd, vs2, rs1, vl); } -vuint8mf2_t test_vsub_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsub_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsub_vv_u8mf2_tu(vd, vs2, vs1, vl); } -vuint8mf2_t test_vsub_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vsub_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vsub_vx_u8mf2_tu(vd, vs2, rs1, vl); } -vuint8m1_t test_vsub_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsub_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vsub_vv_u8m1_tu(vd, vs2, vs1, vl); } -vuint8m1_t test_vsub_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vsub_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vsub_vx_u8m1_tu(vd, vs2, rs1, vl); } -vuint8m2_t test_vsub_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsub_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t 
vs1, + size_t vl) { return __riscv_vsub_vv_u8m2_tu(vd, vs2, vs1, vl); } -vuint8m2_t test_vsub_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vsub_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vsub_vx_u8m2_tu(vd, vs2, rs1, vl); } -vuint8m4_t test_vsub_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsub_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vsub_vv_u8m4_tu(vd, vs2, vs1, vl); } -vuint8m4_t test_vsub_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vsub_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vsub_vx_u8m4_tu(vd, vs2, rs1, vl); } -vuint8m8_t test_vsub_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsub_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, + size_t vl) { return __riscv_vsub_vv_u8m8_tu(vd, vs2, vs1, vl); } -vuint8m8_t test_vsub_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vsub_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vsub_vx_u8m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vsub_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsub_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vsub_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vsub_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vsub_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vsub_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsub_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vsub_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vsub_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vsub_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vsub_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsub_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsub_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vsub_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vsub_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsub_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vsub_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsub_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsub_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vsub_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vsub_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsub_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vsub_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsub_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsub_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t 
test_vsub_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vsub_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsub_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vsub_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsub_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsub_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vsub_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vsub_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsub_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vsub_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsub_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vsub_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vsub_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vsub_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vsub_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsub_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsub_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vsub_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vsub_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsub_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vsub_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsub_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsub_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vsub_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vsub_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsub_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vsub_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsub_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsub_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vsub_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vsub_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsub_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vsub_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsub_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsub_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vsub_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vsub_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsub_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vsub_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsub_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsub_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vsub_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, 
uint64_t rs1, size_t vl) { +vuint64m1_t test_vsub_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vsub_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vsub_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsub_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsub_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vsub_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vsub_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vsub_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vsub_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsub_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsub_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vsub_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vsub_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vsub_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vsub_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsub_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsub_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vsub_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vsub_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, + size_t vl) { return __riscv_vsub_vx_u64m8_tu(vd, vs2, rs1, vl); } -vint8mf8_t test_vsub_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsub_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vsub_vv_i8mf8_tum(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsub_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vsub_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8mf8_tum(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsub_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsub_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vsub_vv_i8mf4_tum(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsub_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vsub_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8mf4_tum(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsub_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsub_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vsub_vv_i8mf2_tum(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsub_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vsub_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8mf2_tum(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsub_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vsub_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vsub_vv_i8m1_tum(vm, 
vd, vs2, vs1, vl); } -vint8m1_t test_vsub_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vsub_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m1_tum(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsub_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vsub_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vsub_vv_i8m2_tum(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vsub_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vsub_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m2_tum(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vsub_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vsub_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vsub_vv_i8m4_tum(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vsub_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vsub_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m4_tum(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsub_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vsub_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vsub_vv_i8m8_tum(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsub_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vsub_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsub_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsub_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vsub_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vsub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsub_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsub_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vsub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsub_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vsub_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vsub_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vsub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } 
-vint16m2_t test_vsub_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vsub_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vsub_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vsub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsub_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vsub_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vsub_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vsub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vsub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vsub_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vsub_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vsub_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vsub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsub_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsub_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vsub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vsub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsub_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vsub_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vsub_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vsub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vsub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsub_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vsub_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vsub_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vsub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsub_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vsub_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return 
__riscv_vsub_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vsub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vsub_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vsub_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vsub_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vsub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vsub_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vsub_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vsub_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vsub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vsub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsub_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vsub_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vsub_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vsub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsub_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vsub_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vsub_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vsub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsub_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vsub_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vsub_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vsub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vsub_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsub_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vsub_vv_u8mf8_tum(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsub_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vsub_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t 
rs1, size_t vl) { return __riscv_vsub_vx_u8mf8_tum(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsub_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsub_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vsub_vv_u8mf4_tum(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsub_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vsub_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8mf4_tum(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsub_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsub_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_u8mf2_tum(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vsub_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vsub_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8mf2_tum(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsub_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsub_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsub_vv_u8m1_tum(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsub_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vsub_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m1_tum(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsub_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsub_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsub_vv_u8m2_tum(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsub_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vsub_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m2_tum(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsub_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsub_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsub_vv_u8m4_tum(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vsub_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vsub_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m4_tum(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsub_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsub_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsub_vv_u8m8_tum(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsub_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vsub_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsub_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsub_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, 
+ vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsub_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vsub_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vsub_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsub_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vsub_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsub_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vsub_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vsub_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsub_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vsub_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsub_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vsub_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsub_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vsub_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsub_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsub_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsub_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsub_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vsub_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsub_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsub_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsub_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vsub_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vsub_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsub_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsub_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsub_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsub_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vsub_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsub_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsub_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } 
-vuint32mf2_t test_vsub_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vsub_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsub_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsub_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsub_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vsub_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsub_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vsub_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vsub_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsub_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vsub_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsub_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vsub_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsub_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsub_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsub_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsub_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vsub_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsub_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsub_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsub_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsub_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vsub_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vsub_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsub_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vsub_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vsub_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vsub_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vsub_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vsub_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsub_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vsub_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vsub_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t 
test_vsub_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vsub_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vsub_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsub_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vsub_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vsub_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vsub_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vsub_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vsub_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsub_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vsub_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vsub_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vsub_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vsub_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vsub_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsub_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vsub_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsub_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vsub_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsub_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsub_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vsub_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsub_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vsub_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsub_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsub_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vsub_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsub_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vsub_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsub_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vsub_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vsub_vv_i8m1_tumu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vsub_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vsub_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m1_tumu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsub_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, 
vint8m2_t vs1, size_t vl) { +vint8m2_t test_vsub_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vsub_vv_i8m2_tumu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vsub_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vsub_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m2_tumu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vsub_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vsub_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vsub_vv_i8m4_tumu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vsub_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vsub_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m4_tumu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsub_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vsub_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vsub_vv_i8m8_tumu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsub_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vsub_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsub_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsub_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vsub_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vsub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsub_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsub_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vsub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsub_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vsub_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vsub_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vsub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vsub_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vsub_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vsub_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsub_vx_i16m2_tumu(vbool8_t vm, 
vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vsub_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsub_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vsub_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vsub_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vsub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vsub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vsub_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vsub_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vsub_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vsub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsub_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsub_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vsub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vsub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsub_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vsub_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vsub_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vsub_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vsub_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsub_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vsub_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vsub_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vsub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsub_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vsub_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vsub_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vsub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m4_tumu(vm, vd, 
vs2, rs1, vl); } -vint32m8_t test_vsub_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vsub_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vsub_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vsub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vsub_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vsub_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vsub_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vsub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vsub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsub_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vsub_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vsub_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vsub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsub_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vsub_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vsub_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vsub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsub_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vsub_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vsub_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vsub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vsub_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsub_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vsub_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsub_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vsub_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsub_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsub_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + 
vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vsub_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsub_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vsub_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsub_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsub_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vsub_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vsub_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsub_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsub_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsub_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsub_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vsub_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsub_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsub_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsub_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsub_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vsub_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsub_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsub_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsub_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vsub_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vsub_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsub_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsub_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsub_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsub_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vsub_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsub_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsub_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsub_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vsub_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, 
size_t vl) { +vuint16mf4_t test_vsub_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsub_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vsub_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsub_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vsub_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vsub_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vsub_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vsub_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsub_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vsub_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsub_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vsub_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsub_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsub_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vsub_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsub_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vsub_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsub_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsub_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vsub_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vsub_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vsub_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsub_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsub_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vsub_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsub_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vsub_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsub_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsub_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vsub_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vsub_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t 
vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vsub_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsub_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsub_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vsub_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsub_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vsub_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vsub_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsub_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vsub_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsub_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vsub_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsub_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsub_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vsub_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsub_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vsub_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsub_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsub_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vsub_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsub_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vsub_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vsub_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsub_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vsub_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vsub_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vsub_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vsub_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vsub_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsub_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vsub_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vsub_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vsub_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vsub_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } 
-vuint64m4_t test_vsub_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsub_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vsub_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vsub_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vsub_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vsub_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vsub_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vsub_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vsub_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vsub_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vsub_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vsub_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vsub_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vsub_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vsub_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vsub_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vsub_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vsub_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vsub_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vsub_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vsub_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vsub_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vsub_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vsub_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vsub_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vsub_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vsub_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vsub_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vsub_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vsub_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vsub_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vsub_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vsub_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vsub_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vsub_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } 
-vint8m2_t test_vsub_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vsub_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vsub_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vsub_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vsub_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vsub_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vsub_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vsub_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vsub_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vsub_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vsub_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vsub_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vsub_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vsub_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vsub_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vsub_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vsub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vsub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vsub_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vsub_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vsub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vsub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vsub_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vsub_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vsub_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vsub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vsub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vsub_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vsub_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vsub_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vsub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vsub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vsub_vv_i16m4_mu(vbool4_t vm, 
vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vsub_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vsub_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vsub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vsub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vsub_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vsub_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vsub_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vsub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vsub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vsub_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vsub_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vsub_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vsub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vsub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vsub_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vsub_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vsub_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vsub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vsub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vsub_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vsub_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vsub_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vsub_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vsub_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vsub_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vsub_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vsub_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vsub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vsub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vsub_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vsub_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vsub_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vsub_vx_i32m8_mu(vbool4_t vm, 
vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vsub_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vsub_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vsub_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vsub_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vsub_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vsub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vsub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vsub_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vsub_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vsub_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vsub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vsub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vsub_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vsub_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vsub_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vsub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vsub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vsub_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vsub_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vsub_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vsub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vsub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vsub_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vsub_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vsub_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vsub_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vsub_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vsub_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vsub_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vsub_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vsub_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vsub_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vsub_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vsub_vv_u8mf2_mu(vbool16_t 
vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vsub_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vsub_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vsub_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vsub_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vsub_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vsub_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vsub_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vsub_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vsub_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vsub_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vsub_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vsub_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vsub_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vsub_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vsub_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vsub_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vsub_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vsub_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vsub_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vsub_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vsub_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vsub_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vsub_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vsub_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vsub_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vsub_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vsub_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vsub_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vsub_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vsub_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vsub_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vsub_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t 
test_vsub_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vsub_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vsub_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vsub_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vsub_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vsub_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vsub_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vsub_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vsub_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vsub_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vsub_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vsub_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vsub_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vsub_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vsub_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vsub_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vsub_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vsub_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vsub_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vsub_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vsub_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vsub_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vsub_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vsub_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vsub_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vsub_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vsub_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vsub_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vsub_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vsub_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vsub_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vsub_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vsub_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t 
vs2, + uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vsub_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vsub_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vsub_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vsub_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vsub_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vsub_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vsub_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vsub_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vsub_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vsub_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vsub_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vsub_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vsub_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vsub_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vsub_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vsub_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vsub_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vsub_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vsub_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vsub_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vsub_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vsub_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vsub_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vsub_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vsub_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vsub_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vsub_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vsub_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vsub_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vsub_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vsub_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vsub_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vsub_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vsub_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vsub_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { 
+vuint64m8_t test_vsub_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  vuint64m8_t vs1, size_t vl) {
   return __riscv_vsub_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vsub_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vsub_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+                                  uint64_t rs1, size_t vl) {
   return __riscv_vsub_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwadd.c b/auto-generated/policy_funcs/llvm-api-tests/vwadd.c
index 545ccb4c5..1f20817e6 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vwadd.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vwadd.c
@@ -5,962 +5,1220 @@
 
 #include <riscv_vector.h>
 
-vint16mf4_t test_vwadd_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint16mf4_t test_vwadd_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2,
+                                    vint8mf8_t vs1, size_t vl) {
   return __riscv_vwadd_vv_i16mf4_tu(vd, vs2, vs1, vl);
 }
 
-vint16mf4_t test_vwadd_vx_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint16mf4_t test_vwadd_vx_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1,
+                                    size_t vl) {
   return __riscv_vwadd_vx_i16mf4_tu(vd, vs2, rs1, vl);
 }
 
-vint16mf4_t test_vwadd_wv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint8mf8_t vs1, size_t vl) {
+vint16mf4_t test_vwadd_wv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2,
+                                    vint8mf8_t vs1, size_t vl) {
   return __riscv_vwadd_wv_i16mf4_tu(vd, vs2, vs1, vl);
 }
 
-vint16mf4_t test_vwadd_wx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int8_t rs1, size_t vl) {
+vint16mf4_t test_vwadd_wx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int8_t rs1,
+                                    size_t vl) {
   return __riscv_vwadd_wx_i16mf4_tu(vd, vs2, rs1, vl);
 }
 
-vint16mf2_t test_vwadd_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) {
+vint16mf2_t test_vwadd_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2,
+                                    vint8mf4_t vs1, size_t vl) {
   return __riscv_vwadd_vv_i16mf2_tu(vd, vs2, vs1, vl);
 }
 
-vint16mf2_t test_vwadd_vx_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) {
+vint16mf2_t test_vwadd_vx_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1,
+                                    size_t vl) {
   return __riscv_vwadd_vx_i16mf2_tu(vd, vs2, rs1, vl);
 }
 
-vint16mf2_t test_vwadd_wv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint8mf4_t vs1, size_t vl) {
+vint16mf2_t test_vwadd_wv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2,
+                                    vint8mf4_t vs1, size_t vl) {
   return __riscv_vwadd_wv_i16mf2_tu(vd, vs2, vs1, vl);
 }
 
-vint16mf2_t test_vwadd_wx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int8_t rs1, size_t vl) {
+vint16mf2_t test_vwadd_wx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int8_t rs1,
+                                    size_t vl) {
   return __riscv_vwadd_wx_i16mf2_tu(vd, vs2, rs1, vl);
 }
 
-vint16m1_t test_vwadd_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) {
+vint16m1_t test_vwadd_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1,
+                                  size_t vl) {
   return __riscv_vwadd_vv_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vwadd_vx_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) {
+vint16m1_t test_vwadd_vx_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, int8_t rs1,
+                                  size_t vl) {
   return __riscv_vwadd_vx_i16m1_tu(vd, vs2, rs1, vl);
 }
 
-vint16m1_t test_vwadd_wv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint8mf2_t vs1, size_t vl) {
+vint16m1_t test_vwadd_wv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint8mf2_t vs1,
+                                  size_t vl) {
   return __riscv_vwadd_wv_i16m1_tu(vd, vs2, vs1, vl);
 }
 
-vint16m1_t test_vwadd_wx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int8_t rs1, size_t
vl) { +vint16m1_t test_vwadd_wx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwadd_wx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vwadd_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwadd_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vwadd_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vwadd_vx_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwadd_vx_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwadd_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vwadd_wv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwadd_wv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vwadd_wv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vwadd_wx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwadd_wx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwadd_wx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vwadd_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwadd_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vwadd_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vwadd_vx_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwadd_vx_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwadd_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vwadd_wv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwadd_wv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vwadd_wv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vwadd_wx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwadd_wx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwadd_wx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vwadd_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwadd_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vwadd_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vwadd_vx_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwadd_vx_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwadd_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vwadd_wv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwadd_wv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vwadd_wv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vwadd_wx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwadd_wx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwadd_wx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vwadd_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwadd_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vwadd_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vwadd_vx_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwadd_vx_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32mf2_t 
test_vwadd_wv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwadd_wv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vwadd_wv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vwadd_wx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwadd_wx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vwadd_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwadd_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwadd_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwadd_vx_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwadd_vx_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwadd_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vwadd_wv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwadd_wv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwadd_wx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwadd_wx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwadd_wx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vwadd_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwadd_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwadd_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vwadd_vx_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwadd_vx_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwadd_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vwadd_wv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwadd_wv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwadd_wv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vwadd_wx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwadd_wx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwadd_wx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vwadd_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwadd_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vwadd_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vwadd_vx_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwadd_vx_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwadd_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vwadd_wv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwadd_wv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vwadd_wv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vwadd_wx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwadd_wx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwadd_wx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vwadd_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t 
test_vwadd_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vwadd_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vwadd_vx_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwadd_vx_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwadd_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vwadd_wv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwadd_wv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vwadd_wv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vwadd_wx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwadd_wx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwadd_wx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vwadd_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwadd_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vwadd_vx_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwadd_vx_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwadd_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vwadd_wv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwadd_wv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vwadd_wx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwadd_wx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwadd_wx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vwadd_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwadd_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vwadd_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vwadd_vx_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwadd_vx_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwadd_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vwadd_wv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwadd_wv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vwadd_wv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vwadd_wx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwadd_wx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwadd_wx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vwadd_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwadd_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vwadd_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vwadd_vx_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwadd_vx_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwadd_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vwadd_wv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwadd_wv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vwadd_wv_i64m4_tu(vd, vs2, vs1, vl); } 
-vint64m4_t test_vwadd_wx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwadd_wx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwadd_wx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vwadd_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwadd_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vwadd_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vwadd_vx_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwadd_vx_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwadd_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vwadd_wv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwadd_wv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vwadd_wv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vwadd_wx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwadd_wx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwadd_wx_i64m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vwadd_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwadd_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vwadd_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwadd_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwadd_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwadd_wv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwadd_wv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vwadd_wv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwadd_wx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwadd_wx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwadd_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwadd_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwadd_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwadd_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwadd_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwadd_wv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwadd_wv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwadd_wv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwadd_wx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwadd_wx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwadd_vv_i16m1_tum(vbool16_t vm, 
vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwadd_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwadd_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwadd_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwadd_wv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwadd_wv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwadd_wv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwadd_wx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwadd_wx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwadd_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwadd_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwadd_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwadd_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwadd_wv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwadd_wv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwadd_wv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwadd_wx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwadd_wx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwadd_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwadd_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwadd_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwadd_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwadd_wv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwadd_wv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwadd_wv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwadd_wx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwadd_wx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwadd_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwadd_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t 
test_vwadd_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwadd_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwadd_wv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwadd_wv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwadd_wv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwadd_wx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwadd_wx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwadd_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwadd_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwadd_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwadd_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwadd_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwadd_wv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwadd_wv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwadd_wv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwadd_wx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwadd_wx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwadd_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwadd_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwadd_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwadd_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwadd_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwadd_wv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwadd_wv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwadd_wx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwadd_wx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwadd_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwadd_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwadd_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwadd_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwadd_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + 
int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwadd_wv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwadd_wv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwadd_wx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwadd_wx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwadd_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwadd_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwadd_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwadd_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwadd_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwadd_wv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwadd_wv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwadd_wx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwadd_wx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwadd_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwadd_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwadd_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwadd_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwadd_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwadd_wv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwadd_wv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwadd_wx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwadd_wx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwadd_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwadd_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwadd_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwadd_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwadd_wv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t 
test_vwadd_wv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwadd_wx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwadd_wx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwadd_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwadd_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwadd_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwadd_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwadd_wv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwadd_wv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwadd_wx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwadd_wx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwadd_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwadd_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwadd_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwadd_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwadd_wv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwadd_wv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwadd_wx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwadd_wx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwadd_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwadd_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwadd_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwadd_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwadd_wv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwadd_wv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwadd_wx_i64m8_tum(vbool8_t vm, vint64m8_t vd, 
vint64m8_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwadd_wx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwadd_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwadd_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vwadd_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwadd_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwadd_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwadd_wv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwadd_wv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vwadd_wv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwadd_wx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwadd_wx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwadd_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwadd_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwadd_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwadd_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwadd_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwadd_wv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwadd_wv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwadd_wv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwadd_wx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwadd_wx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwadd_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwadd_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwadd_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwadd_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwadd_wv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwadd_wv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwadd_wv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwadd_wx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwadd_wx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, 
+ int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwadd_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwadd_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwadd_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwadd_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwadd_wv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwadd_wv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwadd_wv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwadd_wx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwadd_wx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwadd_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwadd_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwadd_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwadd_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwadd_wv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwadd_wv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwadd_wv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwadd_wx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwadd_wx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwadd_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwadd_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwadd_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwadd_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwadd_wv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwadd_wv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwadd_wv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwadd_wx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwadd_wx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwadd_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t 
test_vwadd_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwadd_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwadd_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwadd_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwadd_wv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwadd_wv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwadd_wv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwadd_wx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwadd_wx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwadd_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwadd_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vwadd_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwadd_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwadd_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwadd_wv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwadd_wv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwadd_wx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwadd_wx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwadd_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwadd_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwadd_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwadd_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwadd_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwadd_wv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwadd_wv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwadd_wx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwadd_wx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwadd_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwadd_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return 
__riscv_vwadd_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwadd_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwadd_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwadd_wv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwadd_wv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwadd_wx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwadd_wx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwadd_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwadd_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwadd_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwadd_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwadd_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwadd_wv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwadd_wv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwadd_wx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwadd_wx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwadd_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwadd_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vwadd_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwadd_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwadd_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwadd_wv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwadd_wv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwadd_wx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwadd_wx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwadd_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwadd_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwadd_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t 
test_vwadd_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwadd_wv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwadd_wv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwadd_wx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwadd_wx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwadd_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwadd_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwadd_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwadd_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwadd_wv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwadd_wv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwadd_wx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwadd_wx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwadd_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwadd_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwadd_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwadd_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwadd_wv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwadd_wv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwadd_wx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwadd_wx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwadd_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwadd_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { return __riscv_vwadd_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwadd_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwadd_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t 
test_vwadd_wv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwadd_wv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vwadd_wv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwadd_wx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwadd_wx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwadd_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwadd_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { return __riscv_vwadd_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwadd_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwadd_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwadd_wv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwadd_wv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwadd_wv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwadd_wx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwadd_wx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwadd_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwadd_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwadd_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwadd_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwadd_wv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwadd_wv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwadd_wv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwadd_wx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwadd_wx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwadd_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwadd_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwadd_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwadd_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwadd_wv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwadd_wv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint8m1_t vs1, size_t vl) { return 
__riscv_vwadd_wv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwadd_wx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwadd_wx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwadd_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwadd_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwadd_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwadd_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwadd_wv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwadd_wv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwadd_wv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwadd_wx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwadd_wx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwadd_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwadd_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwadd_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwadd_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwadd_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwadd_wv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwadd_wv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwadd_wv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwadd_wx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwadd_wx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwadd_wx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwadd_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwadd_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwadd_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwadd_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwadd_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwadd_wv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwadd_wv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwadd_wv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwadd_wx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwadd_wx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int16_t rs1, 
size_t vl) { return __riscv_vwadd_wx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwadd_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwadd_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwadd_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwadd_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwadd_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwadd_wv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwadd_wv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwadd_wx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwadd_wx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwadd_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwadd_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwadd_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwadd_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwadd_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwadd_wv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwadd_wv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwadd_wx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwadd_wx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwadd_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwadd_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwadd_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwadd_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwadd_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwadd_wv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwadd_wv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwadd_wx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwadd_wx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwadd_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwadd_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t 
vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwadd_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwadd_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwadd_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwadd_wv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwadd_wv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwadd_wv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwadd_wx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwadd_wx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwadd_wx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwadd_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwadd_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwadd_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwadd_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwadd_wv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwadd_wv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwadd_wx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwadd_wx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwadd_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwadd_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwadd_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwadd_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwadd_wv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwadd_wv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwadd_wx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwadd_wx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwadd_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwadd_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwadd_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwadd_vx_i64m4_mu(vbool16_t vm, 
vint64m4_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwadd_wv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwadd_wv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwadd_wx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwadd_wx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwadd_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwadd_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwadd_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwadd_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwadd_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwadd_wv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwadd_wv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwadd_wv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwadd_wx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwadd_wx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwadd_wx_i64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwaddu.c b/auto-generated/policy_funcs/llvm-api-tests/vwaddu.c index f2eeb50af..1c7d29299 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vwaddu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vwaddu.c @@ -5,962 +5,1323 @@ #include <riscv_vector.h> -vuint16mf4_t test_vwaddu_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwaddu_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vwaddu_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vwaddu_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwaddu_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vwaddu_wv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwaddu_wv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vwaddu_wv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vwaddu_wx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwaddu_wx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vwaddu_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwaddu_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vwaddu_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vwaddu_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwaddu_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + uint8_t rs1,
size_t vl) { return __riscv_vwaddu_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vwaddu_wv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwaddu_wv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vwaddu_wv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vwaddu_wx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwaddu_wx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vwaddu_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwaddu_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vwaddu_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwaddu_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwaddu_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vwaddu_wv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwaddu_wv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vwaddu_wv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwaddu_wx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwaddu_wx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vwaddu_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwaddu_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwaddu_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vwaddu_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwaddu_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vwaddu_wv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwaddu_wv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwaddu_wv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vwaddu_wx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwaddu_wx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vwaddu_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwaddu_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwaddu_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vwaddu_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwaddu_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vwaddu_wv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwaddu_wv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwaddu_wv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vwaddu_wx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwaddu_wx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint8_t rs1, size_t 
vl) { return __riscv_vwaddu_wx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vwaddu_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwaddu_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwaddu_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vwaddu_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwaddu_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vwaddu_wv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwaddu_wv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwaddu_wv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vwaddu_wx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwaddu_wx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vwaddu_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwaddu_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vwaddu_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vwaddu_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwaddu_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwaddu_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vwaddu_wv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwaddu_wv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vwaddu_wv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vwaddu_wx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwaddu_wx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vwaddu_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwaddu_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vwaddu_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwaddu_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwaddu_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwaddu_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vwaddu_wv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwaddu_wv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vwaddu_wv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwaddu_wx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwaddu_wx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vwaddu_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwaddu_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwaddu_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vwaddu_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwaddu_vx_u32m2_tu(vuint32m2_t vd, 
vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwaddu_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vwaddu_wv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwaddu_wv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwaddu_wv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vwaddu_wx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwaddu_wx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vwaddu_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwaddu_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwaddu_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vwaddu_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwaddu_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwaddu_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vwaddu_wv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwaddu_wv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwaddu_wv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vwaddu_wx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwaddu_wx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vwaddu_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwaddu_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwaddu_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vwaddu_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwaddu_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwaddu_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vwaddu_wv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwaddu_wv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwaddu_wv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vwaddu_wx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwaddu_wx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vwaddu_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwaddu_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vwaddu_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwaddu_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwaddu_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwaddu_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vwaddu_wv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwaddu_wv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vwaddu_wv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwaddu_wx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t 
test_vwaddu_wx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwaddu_wx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vwaddu_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwaddu_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwaddu_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vwaddu_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwaddu_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwaddu_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vwaddu_wv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwaddu_wv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwaddu_wv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vwaddu_wx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwaddu_wx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwaddu_wx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vwaddu_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwaddu_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vwaddu_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vwaddu_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwaddu_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwaddu_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vwaddu_wv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwaddu_wv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vwaddu_wv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vwaddu_wx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwaddu_wx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwaddu_wx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vwaddu_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwaddu_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwaddu_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vwaddu_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwaddu_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwaddu_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vwaddu_wv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwaddu_wv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwaddu_wv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vwaddu_wx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwaddu_wx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwaddu_wx_u64m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vwaddu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwaddu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwaddu_vx_u16mf4_tum(vbool64_t vm, 
vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwaddu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwaddu_wv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwaddu_wv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwaddu_wx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwaddu_wx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwaddu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwaddu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwaddu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwaddu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwaddu_wv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwaddu_wv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwaddu_wx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwaddu_wx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwaddu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwaddu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwaddu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwaddu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwaddu_wv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwaddu_wv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwaddu_wx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwaddu_wx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwaddu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwaddu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwaddu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t 
vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwaddu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwaddu_wv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwaddu_wv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwaddu_wx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwaddu_wx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwaddu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwaddu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwaddu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwaddu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwaddu_wv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwaddu_wv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwaddu_wx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwaddu_wx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwaddu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwaddu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwaddu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwaddu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwaddu_wv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwaddu_wv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwaddu_wx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwaddu_wx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwaddu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwaddu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwaddu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwaddu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + 
vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwaddu_wv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwaddu_wv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwaddu_wx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwaddu_wx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwaddu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwaddu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwaddu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwaddu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwaddu_wv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwaddu_wv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwaddu_wx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwaddu_wx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwaddu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwaddu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwaddu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwaddu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwaddu_wv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwaddu_wv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwaddu_wx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwaddu_wx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwaddu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwaddu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwaddu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwaddu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) 
{ return __riscv_vwaddu_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwaddu_wv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwaddu_wv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwaddu_wx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwaddu_wx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwaddu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwaddu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwaddu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwaddu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwaddu_wv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwaddu_wv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwaddu_wx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwaddu_wx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwaddu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwaddu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwaddu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwaddu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwaddu_wv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwaddu_wv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwaddu_wx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwaddu_wx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwaddu_wx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwaddu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwaddu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwaddu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwaddu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwaddu_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t 
test_vwaddu_wv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwaddu_wv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwaddu_wx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwaddu_wx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwaddu_wx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwaddu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwaddu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwaddu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwaddu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwaddu_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwaddu_wv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwaddu_wv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwaddu_wx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwaddu_wx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwaddu_wx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwaddu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwaddu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwaddu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwaddu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwaddu_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwaddu_wv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwaddu_wv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwaddu_wx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwaddu_wx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwaddu_wx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwaddu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwaddu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwaddu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwaddu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwaddu_wv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, 
vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwaddu_wv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwaddu_wx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwaddu_wx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwaddu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwaddu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwaddu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwaddu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwaddu_wv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwaddu_wv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwaddu_wx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwaddu_wx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwaddu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwaddu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwaddu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwaddu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwaddu_wv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwaddu_wv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwaddu_wx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwaddu_wx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwaddu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwaddu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwaddu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwaddu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwaddu_wv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, 
vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwaddu_wv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwaddu_wx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwaddu_wx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwaddu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwaddu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwaddu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwaddu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwaddu_wv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwaddu_wv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwaddu_wx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwaddu_wx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwaddu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwaddu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwaddu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwaddu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwaddu_wv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwaddu_wv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwaddu_wx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwaddu_wx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwaddu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwaddu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwaddu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwaddu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwaddu_wv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { 
+vuint32mf2_t test_vwaddu_wv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwaddu_wx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwaddu_wx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwaddu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwaddu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwaddu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwaddu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwaddu_wv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwaddu_wv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwaddu_wx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwaddu_wx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwaddu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwaddu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwaddu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwaddu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwaddu_wv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwaddu_wv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwaddu_wx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwaddu_wx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwaddu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwaddu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwaddu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwaddu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwaddu_wv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { 
+vuint32m4_t test_vwaddu_wv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwaddu_wx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwaddu_wx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwaddu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwaddu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwaddu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwaddu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwaddu_wv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwaddu_wv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwaddu_wx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwaddu_wx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwaddu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwaddu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwaddu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwaddu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwaddu_wv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwaddu_wv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwaddu_wx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwaddu_wx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwaddu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwaddu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwaddu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwaddu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwaddu_wv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t 
test_vwaddu_wv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwaddu_wx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwaddu_wx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwaddu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwaddu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwaddu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwaddu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwaddu_wv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwaddu_wv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwaddu_wx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwaddu_wx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwaddu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwaddu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwaddu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwaddu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwaddu_wv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwaddu_wv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwaddu_wx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwaddu_wx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwaddu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwaddu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwaddu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwaddu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwaddu_wv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t 
test_vwaddu_wv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwaddu_wx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwaddu_wx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwaddu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwaddu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwaddu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwaddu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwaddu_wv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwaddu_wv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwaddu_wx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwaddu_wx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwaddu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwaddu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwaddu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwaddu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwaddu_wv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwaddu_wv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwaddu_wx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwaddu_wx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwaddu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwaddu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwaddu_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwaddu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwaddu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwaddu_wv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwaddu_wv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint8m1_t vs1, 
+ size_t vl) { return __riscv_vwaddu_wv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwaddu_wx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwaddu_wx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwaddu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwaddu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwaddu_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwaddu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwaddu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwaddu_wv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwaddu_wv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwaddu_wx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwaddu_wx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwaddu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwaddu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwaddu_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwaddu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwaddu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwaddu_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwaddu_wv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwaddu_wv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwaddu_wx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwaddu_wx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwaddu_wx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwaddu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwaddu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwaddu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwaddu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwaddu_wv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwaddu_wv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwaddu_wx_u32mf2_mu(vbool64_t 
vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwaddu_wx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwaddu_wx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwaddu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwaddu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwaddu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwaddu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwaddu_wv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwaddu_wv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwaddu_wx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwaddu_wx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwaddu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwaddu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwaddu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwaddu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwaddu_wv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwaddu_wv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwaddu_wx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwaddu_wx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwaddu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwaddu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwaddu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwaddu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwaddu_wv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwaddu_wv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwaddu_wx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t 
test_vwaddu_wx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwaddu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwaddu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwaddu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwaddu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwaddu_wv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwaddu_wv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwaddu_wx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwaddu_wx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwaddu_wx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwaddu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwaddu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwaddu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwaddu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwaddu_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwaddu_wv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwaddu_wv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwaddu_wx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwaddu_wx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwaddu_wx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwaddu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwaddu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwaddu_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwaddu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwaddu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwaddu_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwaddu_wv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwaddu_wv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwaddu_wv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwaddu_wx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwaddu_wx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint32_t rs1, size_t vl) { return 
__riscv_vwaddu_wx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m4_t test_vwaddu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint64m4_t test_vwaddu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint32m2_t vs2, vuint32m2_t vs1,
+                                    size_t vl) {
   return __riscv_vwaddu_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vwaddu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+vuint64m4_t test_vwaddu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint32m2_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vwaddu_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m4_t test_vwaddu_wv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint64m4_t test_vwaddu_wv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, vuint32m2_t vs1,
+                                    size_t vl) {
   return __riscv_vwaddu_wv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m4_t test_vwaddu_wx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint32_t rs1, size_t vl) {
+vuint64m4_t test_vwaddu_wx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint64m4_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vwaddu_wx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m8_t test_vwaddu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint64m8_t test_vwaddu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint32m4_t vs2, vuint32m4_t vs1,
+                                    size_t vl) {
   return __riscv_vwaddu_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vwaddu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+vuint64m8_t test_vwaddu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint32m4_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vwaddu_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
 
-vuint64m8_t test_vwaddu_wv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint64m8_t test_vwaddu_wv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, vuint32m4_t vs1,
+                                    size_t vl) {
   return __riscv_vwaddu_wv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }
 
-vuint64m8_t test_vwaddu_wx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint32_t rs1, size_t vl) {
+vuint64m8_t test_vwaddu_wx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                    vuint64m8_t vs2, uint32_t rs1, size_t vl) {
   return __riscv_vwaddu_wx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwcvt.c b/auto-generated/policy_funcs/llvm-api-tests/vwcvt.c
index 361a28ac9..4bf476f55 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vwcvt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vwcvt.c
@@ -5,11 +5,13 @@
 #include <riscv_vector.h>
 
-vint16mf4_t test_vwcvt_x_x_v_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, size_t vl) {
+vint16mf4_t test_vwcvt_x_x_v_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2,
+                                       size_t vl) {
   return __riscv_vwcvt_x_x_v_i16mf4_tu(vd, vs2, vl);
 }
 
-vint16mf2_t test_vwcvt_x_x_v_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, size_t vl) {
+vint16mf2_t test_vwcvt_x_x_v_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2,
+                                       size_t vl) {
   return __riscv_vwcvt_x_x_v_i16mf2_tu(vd, vs2, vl);
 }
 
@@ -29,11 +31,13 @@ vint16m8_t test_vwcvt_x_x_v_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, size_t vl) {
   return __riscv_vwcvt_x_x_v_i16m8_tu(vd, vs2, vl);
 }
 
-vint32mf2_t test_vwcvt_x_x_v_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, size_t vl) {
+vint32mf2_t test_vwcvt_x_x_v_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2,
+                                       size_t vl) {
   return __riscv_vwcvt_x_x_v_i32mf2_tu(vd, vs2, vl);
 }
 
-vint32m1_t test_vwcvt_x_x_v_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, size_t vl) {
+vint32m1_t test_vwcvt_x_x_v_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, + size_t vl) { return __riscv_vwcvt_x_x_v_i32m1_tu(vd, vs2, vl); } @@ -49,7 +53,8 @@ vint32m8_t test_vwcvt_x_x_v_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32m8_tu(vd, vs2, vl); } -vint64m1_t test_vwcvt_x_x_v_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwcvt_x_x_v_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, + size_t vl) { return __riscv_vwcvt_x_x_v_i64m1_tu(vd, vs2, vl); } @@ -65,182 +70,227 @@ vint64m8_t test_vwcvt_x_x_v_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i64m8_tu(vd, vs2, vl); } -vint16mf4_t test_vwcvt_x_x_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwcvt_x_x_v_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16mf4_tum(vm, vd, vs2, vl); } -vint16mf2_t test_vwcvt_x_x_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwcvt_x_x_v_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16mf2_tum(vm, vd, vs2, vl); } -vint16m1_t test_vwcvt_x_x_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwcvt_x_x_v_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16m1_tum(vm, vd, vs2, vl); } -vint16m2_t test_vwcvt_x_x_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwcvt_x_x_v_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + size_t vl) { return __riscv_vwcvt_x_x_v_i16m2_tum(vm, vd, vs2, vl); } -vint16m4_t test_vwcvt_x_x_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwcvt_x_x_v_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + size_t vl) { return __riscv_vwcvt_x_x_v_i16m4_tum(vm, vd, vs2, vl); } -vint16m8_t test_vwcvt_x_x_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwcvt_x_x_v_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + size_t vl) { return __riscv_vwcvt_x_x_v_i16m8_tum(vm, vd, vs2, vl); } -vint32mf2_t test_vwcvt_x_x_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwcvt_x_x_v_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32mf2_tum(vm, vd, vs2, vl); } -vint32m1_t test_vwcvt_x_x_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwcvt_x_x_v_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32m1_tum(vm, vd, vs2, vl); } -vint32m2_t test_vwcvt_x_x_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwcvt_x_x_v_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32m2_tum(vm, vd, vs2, vl); } -vint32m4_t test_vwcvt_x_x_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwcvt_x_x_v_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vint16m2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32m4_tum(vm, vd, vs2, vl); } -vint32m8_t test_vwcvt_x_x_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwcvt_x_x_v_i32m8_tum(vbool4_t vm, vint32m8_t vd, + vint16m4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32m8_tum(vm, vd, vs2, vl); } -vint64m1_t test_vwcvt_x_x_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, size_t vl) { 
+vint64m1_t test_vwcvt_x_x_v_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i64m1_tum(vm, vd, vs2, vl); } -vint64m2_t test_vwcvt_x_x_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwcvt_x_x_v_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i64m2_tum(vm, vd, vs2, vl); } -vint64m4_t test_vwcvt_x_x_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwcvt_x_x_v_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i64m4_tum(vm, vd, vs2, vl); } -vint64m8_t test_vwcvt_x_x_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwcvt_x_x_v_i64m8_tum(vbool8_t vm, vint64m8_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i64m8_tum(vm, vd, vs2, vl); } -vint16mf4_t test_vwcvt_x_x_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwcvt_x_x_v_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16mf4_tumu(vm, vd, vs2, vl); } -vint16mf2_t test_vwcvt_x_x_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwcvt_x_x_v_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16mf2_tumu(vm, vd, vs2, vl); } -vint16m1_t test_vwcvt_x_x_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwcvt_x_x_v_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16m1_tumu(vm, vd, vs2, vl); } -vint16m2_t test_vwcvt_x_x_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwcvt_x_x_v_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vint8m1_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16m2_tumu(vm, vd, vs2, vl); } -vint16m4_t test_vwcvt_x_x_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwcvt_x_x_v_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vint8m2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16m4_tumu(vm, vd, vs2, vl); } -vint16m8_t test_vwcvt_x_x_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwcvt_x_x_v_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vint8m4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16m8_tumu(vm, vd, vs2, vl); } -vint32mf2_t test_vwcvt_x_x_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwcvt_x_x_v_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32mf2_tumu(vm, vd, vs2, vl); } -vint32m1_t test_vwcvt_x_x_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwcvt_x_x_v_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32m1_tumu(vm, vd, vs2, vl); } -vint32m2_t test_vwcvt_x_x_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwcvt_x_x_v_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32m2_tumu(vm, vd, vs2, vl); } -vint32m4_t test_vwcvt_x_x_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwcvt_x_x_v_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint16m2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32m4_tumu(vm, vd, vs2, vl); } -vint32m8_t test_vwcvt_x_x_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, 
vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwcvt_x_x_v_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint16m4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32m8_tumu(vm, vd, vs2, vl); } -vint64m1_t test_vwcvt_x_x_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwcvt_x_x_v_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i64m1_tumu(vm, vd, vs2, vl); } -vint64m2_t test_vwcvt_x_x_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwcvt_x_x_v_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint32m1_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i64m2_tumu(vm, vd, vs2, vl); } -vint64m4_t test_vwcvt_x_x_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwcvt_x_x_v_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint32m2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i64m4_tumu(vm, vd, vs2, vl); } -vint64m8_t test_vwcvt_x_x_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwcvt_x_x_v_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint32m4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i64m8_tumu(vm, vd, vs2, vl); } -vint16mf4_t test_vwcvt_x_x_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwcvt_x_x_v_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16mf4_mu(vm, vd, vs2, vl); } -vint16mf2_t test_vwcvt_x_x_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwcvt_x_x_v_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16mf2_mu(vm, vd, vs2, vl); } -vint16m1_t test_vwcvt_x_x_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwcvt_x_x_v_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i16m1_mu(vm, vd, vs2, vl); } -vint16m2_t test_vwcvt_x_x_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwcvt_x_x_v_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + size_t vl) { return __riscv_vwcvt_x_x_v_i16m2_mu(vm, vd, vs2, vl); } -vint16m4_t test_vwcvt_x_x_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwcvt_x_x_v_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + size_t vl) { return __riscv_vwcvt_x_x_v_i16m4_mu(vm, vd, vs2, vl); } -vint16m8_t test_vwcvt_x_x_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwcvt_x_x_v_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + size_t vl) { return __riscv_vwcvt_x_x_v_i16m8_mu(vm, vd, vs2, vl); } -vint32mf2_t test_vwcvt_x_x_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwcvt_x_x_v_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32mf2_mu(vm, vd, vs2, vl); } -vint32m1_t test_vwcvt_x_x_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwcvt_x_x_v_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32m1_mu(vm, vd, vs2, vl); } -vint32m2_t test_vwcvt_x_x_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwcvt_x_x_v_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs2, size_t vl) { return __riscv_vwcvt_x_x_v_i32m2_mu(vm, vd, vs2, vl); } -vint32m4_t test_vwcvt_x_x_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, 
size_t vl) {
+vint32m4_t test_vwcvt_x_x_v_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2,
+                                     size_t vl) {
   return __riscv_vwcvt_x_x_v_i32m4_mu(vm, vd, vs2, vl);
 }
 
-vint32m8_t test_vwcvt_x_x_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, size_t vl) {
+vint32m8_t test_vwcvt_x_x_v_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2,
+                                     size_t vl) {
   return __riscv_vwcvt_x_x_v_i32m8_mu(vm, vd, vs2, vl);
 }
 
-vint64m1_t test_vwcvt_x_x_v_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, size_t vl) {
+vint64m1_t test_vwcvt_x_x_v_i64m1_mu(vbool64_t vm, vint64m1_t vd,
+                                     vint32mf2_t vs2, size_t vl) {
   return __riscv_vwcvt_x_x_v_i64m1_mu(vm, vd, vs2, vl);
 }
 
-vint64m2_t test_vwcvt_x_x_v_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, size_t vl) {
+vint64m2_t test_vwcvt_x_x_v_i64m2_mu(vbool32_t vm, vint64m2_t vd,
+                                     vint32m1_t vs2, size_t vl) {
   return __riscv_vwcvt_x_x_v_i64m2_mu(vm, vd, vs2, vl);
 }
 
-vint64m4_t test_vwcvt_x_x_v_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, size_t vl) {
+vint64m4_t test_vwcvt_x_x_v_i64m4_mu(vbool16_t vm, vint64m4_t vd,
+                                     vint32m2_t vs2, size_t vl) {
   return __riscv_vwcvt_x_x_v_i64m4_mu(vm, vd, vs2, vl);
 }
 
-vint64m8_t test_vwcvt_x_x_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, size_t vl) {
+vint64m8_t test_vwcvt_x_x_v_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2,
+                                     size_t vl) {
   return __riscv_vwcvt_x_x_v_i64m8_mu(vm, vd, vs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwcvtu.c b/auto-generated/policy_funcs/llvm-api-tests/vwcvtu.c
index f00414526..a828bf41b 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vwcvtu.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vwcvtu.c
@@ -5,242 +5,302 @@
 #include <riscv_vector.h>
 
-vuint16mf4_t test_vwcvtu_x_x_v_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint16mf4_t test_vwcvtu_x_x_v_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2,
+                                         size_t vl) {
   return __riscv_vwcvtu_x_x_v_u16mf4_tu(vd, vs2, vl);
 }
 
-vuint16mf2_t test_vwcvtu_x_x_v_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint16mf2_t test_vwcvtu_x_x_v_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2,
+                                         size_t vl) {
   return __riscv_vwcvtu_x_x_v_u16mf2_tu(vd, vs2, vl);
 }
 
-vuint16m1_t test_vwcvtu_x_x_v_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint16m1_t test_vwcvtu_x_x_v_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vwcvtu_x_x_v_u16m1_tu(vd, vs2, vl);
 }
 
-vuint16m2_t test_vwcvtu_x_x_v_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, size_t vl) {
+vuint16m2_t test_vwcvtu_x_x_v_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2,
+                                       size_t vl) {
   return __riscv_vwcvtu_x_x_v_u16m2_tu(vd, vs2, vl);
 }
 
-vuint16m4_t test_vwcvtu_x_x_v_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, size_t vl) {
+vuint16m4_t test_vwcvtu_x_x_v_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2,
+                                       size_t vl) {
   return __riscv_vwcvtu_x_x_v_u16m4_tu(vd, vs2, vl);
 }
 
-vuint16m8_t test_vwcvtu_x_x_v_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t vl) {
+vuint16m8_t test_vwcvtu_x_x_v_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2,
+                                       size_t vl) {
   return __riscv_vwcvtu_x_x_v_u16m8_tu(vd, vs2, vl);
 }
 
-vuint32mf2_t test_vwcvtu_x_x_v_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint32mf2_t test_vwcvtu_x_x_v_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2,
+                                         size_t vl) {
   return __riscv_vwcvtu_x_x_v_u32mf2_tu(vd, vs2, vl);
 }
 
-vuint32m1_t test_vwcvtu_x_x_v_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint32m1_t test_vwcvtu_x_x_v_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2,
+                                       size_t vl) {
   return __riscv_vwcvtu_x_x_v_u32m1_tu(vd, vs2, vl);
 }
-vuint32m2_t test_vwcvtu_x_x_v_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vwcvtu_x_x_v_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vwcvtu_x_x_v_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vwcvtu_x_x_v_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vwcvtu_x_x_v_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vwcvtu_x_x_v_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vwcvtu_x_x_v_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vwcvtu_x_x_v_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vwcvtu_x_x_v_u32m8_tu(vd, vs2, vl); } -vuint64m1_t test_vwcvtu_x_x_v_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vwcvtu_x_x_v_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vwcvtu_x_x_v_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vwcvtu_x_x_v_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vwcvtu_x_x_v_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vwcvtu_x_x_v_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vwcvtu_x_x_v_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vwcvtu_x_x_v_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vwcvtu_x_x_v_u64m4_tu(vd, vs2, vl); } -vuint64m8_t test_vwcvtu_x_x_v_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vwcvtu_x_x_v_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vwcvtu_x_x_v_u64m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vwcvtu_x_x_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vwcvtu_x_x_v_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vwcvtu_x_x_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vwcvtu_x_x_v_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vwcvtu_x_x_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vwcvtu_x_x_v_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vwcvtu_x_x_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vwcvtu_x_x_v_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vwcvtu_x_x_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vwcvtu_x_x_v_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vwcvtu_x_x_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vwcvtu_x_x_v_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vwcvtu_x_x_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vwcvtu_x_x_v_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vwcvtu_x_x_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t 
test_vwcvtu_x_x_v_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vwcvtu_x_x_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vwcvtu_x_x_v_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vwcvtu_x_x_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vwcvtu_x_x_v_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vwcvtu_x_x_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vwcvtu_x_x_v_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vwcvtu_x_x_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vwcvtu_x_x_v_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vwcvtu_x_x_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vwcvtu_x_x_v_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vwcvtu_x_x_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vwcvtu_x_x_v_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vwcvtu_x_x_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vwcvtu_x_x_v_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u64m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vwcvtu_x_x_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vwcvtu_x_x_v_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vwcvtu_x_x_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vwcvtu_x_x_v_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vwcvtu_x_x_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vwcvtu_x_x_v_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vwcvtu_x_x_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vwcvtu_x_x_v_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vwcvtu_x_x_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vwcvtu_x_x_v_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vwcvtu_x_x_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vwcvtu_x_x_v_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16m8_tumu(vm, 
vd, vs2, vl); } -vuint32mf2_t test_vwcvtu_x_x_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vwcvtu_x_x_v_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vwcvtu_x_x_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vwcvtu_x_x_v_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vwcvtu_x_x_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vwcvtu_x_x_v_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vwcvtu_x_x_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vwcvtu_x_x_v_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vwcvtu_x_x_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vwcvtu_x_x_v_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vwcvtu_x_x_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vwcvtu_x_x_v_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vwcvtu_x_x_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vwcvtu_x_x_v_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vwcvtu_x_x_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vwcvtu_x_x_v_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vwcvtu_x_x_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vwcvtu_x_x_v_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u64m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vwcvtu_x_x_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vwcvtu_x_x_v_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vwcvtu_x_x_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vwcvtu_x_x_v_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vwcvtu_x_x_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vwcvtu_x_x_v_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vwcvtu_x_x_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vwcvtu_x_x_v_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t vl) { return __riscv_vwcvtu_x_x_v_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vwcvtu_x_x_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t vl) { 
+vuint16m4_t test_vwcvtu_x_x_v_u16m4_mu(vbool4_t vm, vuint16m4_t vd,
+                                       vuint8m2_t vs2, size_t vl) {
   return __riscv_vwcvtu_x_x_v_u16m4_mu(vm, vd, vs2, vl);
 }
 
-vuint16m8_t test_vwcvtu_x_x_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t vl) {
+vuint16m8_t test_vwcvtu_x_x_v_u16m8_mu(vbool2_t vm, vuint16m8_t vd,
+                                       vuint8m4_t vs2, size_t vl) {
   return __riscv_vwcvtu_x_x_v_u16m8_mu(vm, vd, vs2, vl);
 }
 
-vuint32mf2_t test_vwcvtu_x_x_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t vl) {
+vuint32mf2_t test_vwcvtu_x_x_v_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+                                         vuint16mf4_t vs2, size_t vl) {
   return __riscv_vwcvtu_x_x_v_u32mf2_mu(vm, vd, vs2, vl);
 }
 
-vuint32m1_t test_vwcvtu_x_x_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t vl) {
+vuint32m1_t test_vwcvtu_x_x_v_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+                                       vuint16mf2_t vs2, size_t vl) {
   return __riscv_vwcvtu_x_x_v_u32m1_mu(vm, vd, vs2, vl);
 }
 
-vuint32m2_t test_vwcvtu_x_x_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t vl) {
+vuint32m2_t test_vwcvtu_x_x_v_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+                                       vuint16m1_t vs2, size_t vl) {
   return __riscv_vwcvtu_x_x_v_u32m2_mu(vm, vd, vs2, vl);
 }
 
-vuint32m4_t test_vwcvtu_x_x_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t vl) {
+vuint32m4_t test_vwcvtu_x_x_v_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+                                       vuint16m2_t vs2, size_t vl) {
   return __riscv_vwcvtu_x_x_v_u32m4_mu(vm, vd, vs2, vl);
 }
 
-vuint32m8_t test_vwcvtu_x_x_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t vl) {
+vuint32m8_t test_vwcvtu_x_x_v_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+                                       vuint16m4_t vs2, size_t vl) {
   return __riscv_vwcvtu_x_x_v_u32m8_mu(vm, vd, vs2, vl);
 }
 
-vuint64m1_t test_vwcvtu_x_x_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t vl) {
+vuint64m1_t test_vwcvtu_x_x_v_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                       vuint32mf2_t vs2, size_t vl) {
   return __riscv_vwcvtu_x_x_v_u64m1_mu(vm, vd, vs2, vl);
 }
 
-vuint64m2_t test_vwcvtu_x_x_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t vl) {
+vuint64m2_t test_vwcvtu_x_x_v_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                       vuint32m1_t vs2, size_t vl) {
   return __riscv_vwcvtu_x_x_v_u64m2_mu(vm, vd, vs2, vl);
 }
 
-vuint64m4_t test_vwcvtu_x_x_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t vl) {
+vuint64m4_t test_vwcvtu_x_x_v_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                       vuint32m2_t vs2, size_t vl) {
   return __riscv_vwcvtu_x_x_v_u64m4_mu(vm, vd, vs2, vl);
 }
 
-vuint64m8_t test_vwcvtu_x_x_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t vl) {
+vuint64m8_t test_vwcvtu_x_x_v_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+                                       vuint32m4_t vs2, size_t vl) {
   return __riscv_vwcvtu_x_x_v_u64m8_mu(vm, vd, vs2, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vwmacc.c
index 891185d56..f88c26dcd 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vwmacc.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vwmacc.c
@@ -6,482 +6,620 @@
 #include <riscv_vector.h>
 
-vint16mf4_t test_vwmacc_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) {
+vint16mf4_t test_vwmacc_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs1,
+                                     vint8mf8_t vs2, size_t vl) {
   return __riscv_vwmacc_vv_i16mf4_tu(vd, vs1, vs2, vl);
 }
 
-vint16mf4_t test_vwmacc_vx_i16mf4_tu(vint16mf4_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) {
+vint16mf4_t test_vwmacc_vx_i16mf4_tu(vint16mf4_t vd, int8_t rs1, vint8mf8_t vs2,
+                                     size_t vl) {
   return __riscv_vwmacc_vx_i16mf4_tu(vd, rs1, vs2, vl);
 }
-vint16mf2_t test_vwmacc_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmacc_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16mf2_tu(vd, vs1, vs2, vl); } -vint16mf2_t test_vwmacc_vx_i16mf2_tu(vint16mf2_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmacc_vx_i16mf2_tu(vint16mf2_t vd, int8_t rs1, vint8mf4_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i16mf2_tu(vd, rs1, vs2, vl); } -vint16m1_t test_vwmacc_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmacc_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m1_tu(vd, vs1, vs2, vl); } -vint16m1_t test_vwmacc_vx_i16m1_tu(vint16m1_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmacc_vx_i16m1_tu(vint16m1_t vd, int8_t rs1, vint8mf2_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i16m1_tu(vd, rs1, vs2, vl); } -vint16m2_t test_vwmacc_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmacc_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs1, vint8m1_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i16m2_tu(vd, vs1, vs2, vl); } -vint16m2_t test_vwmacc_vx_i16m2_tu(vint16m2_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmacc_vx_i16m2_tu(vint16m2_t vd, int8_t rs1, vint8m1_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i16m2_tu(vd, rs1, vs2, vl); } -vint16m4_t test_vwmacc_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmacc_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs1, vint8m2_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i16m4_tu(vd, vs1, vs2, vl); } -vint16m4_t test_vwmacc_vx_i16m4_tu(vint16m4_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmacc_vx_i16m4_tu(vint16m4_t vd, int8_t rs1, vint8m2_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i16m4_tu(vd, rs1, vs2, vl); } -vint16m8_t test_vwmacc_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmacc_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs1, vint8m4_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i16m8_tu(vd, vs1, vs2, vl); } -vint16m8_t test_vwmacc_vx_i16m8_tu(vint16m8_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmacc_vx_i16m8_tu(vint16m8_t vd, int8_t rs1, vint8m4_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i16m8_tu(vd, rs1, vs2, vl); } -vint32mf2_t test_vwmacc_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmacc_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32mf2_tu(vd, vs1, vs2, vl); } -vint32mf2_t test_vwmacc_vx_i32mf2_tu(vint32mf2_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmacc_vx_i32mf2_tu(vint32mf2_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32mf2_tu(vd, rs1, vs2, vl); } -vint32m1_t test_vwmacc_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmacc_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m1_tu(vd, vs1, vs2, vl); } -vint32m1_t test_vwmacc_vx_i32m1_tu(vint32m1_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmacc_vx_i32m1_tu(vint32m1_t vd, int16_t rs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i32m1_tu(vd, rs1, vs2, vl); } -vint32m2_t test_vwmacc_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { 
+vint32m2_t test_vwmacc_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m2_tu(vd, vs1, vs2, vl); } -vint32m2_t test_vwmacc_vx_i32m2_tu(vint32m2_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmacc_vx_i32m2_tu(vint32m2_t vd, int16_t rs1, vint16m1_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i32m2_tu(vd, rs1, vs2, vl); } -vint32m4_t test_vwmacc_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmacc_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m4_tu(vd, vs1, vs2, vl); } -vint32m4_t test_vwmacc_vx_i32m4_tu(vint32m4_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmacc_vx_i32m4_tu(vint32m4_t vd, int16_t rs1, vint16m2_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i32m4_tu(vd, rs1, vs2, vl); } -vint32m8_t test_vwmacc_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmacc_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m8_tu(vd, vs1, vs2, vl); } -vint32m8_t test_vwmacc_vx_i32m8_tu(vint32m8_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmacc_vx_i32m8_tu(vint32m8_t vd, int16_t rs1, vint16m4_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i32m8_tu(vd, rs1, vs2, vl); } -vint64m1_t test_vwmacc_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmacc_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i64m1_tu(vd, vs1, vs2, vl); } -vint64m1_t test_vwmacc_vx_i64m1_tu(vint64m1_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmacc_vx_i64m1_tu(vint64m1_t vd, int32_t rs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i64m1_tu(vd, rs1, vs2, vl); } -vint64m2_t test_vwmacc_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmacc_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vwmacc_vv_i64m2_tu(vd, vs1, vs2, vl); } -vint64m2_t test_vwmacc_vx_i64m2_tu(vint64m2_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmacc_vx_i64m2_tu(vint64m2_t vd, int32_t rs1, vint32m1_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i64m2_tu(vd, rs1, vs2, vl); } -vint64m4_t test_vwmacc_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmacc_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i64m4_tu(vd, vs1, vs2, vl); } -vint64m4_t test_vwmacc_vx_i64m4_tu(vint64m4_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmacc_vx_i64m4_tu(vint64m4_t vd, int32_t rs1, vint32m2_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i64m4_tu(vd, rs1, vs2, vl); } -vint64m8_t test_vwmacc_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmacc_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vwmacc_vv_i64m8_tu(vd, vs1, vs2, vl); } -vint64m8_t test_vwmacc_vx_i64m8_tu(vint64m8_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmacc_vx_i64m8_tu(vint64m8_t vd, int32_t rs1, vint32m4_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i64m8_tu(vd, rs1, vs2, vl); } -vint16mf4_t test_vwmacc_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmacc_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs1, 
vint8mf8_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i16mf4_tum(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vwmacc_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmacc_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, int8_t rs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16mf4_tum(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vwmacc_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmacc_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs1, vint8mf4_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i16mf2_tum(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vwmacc_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmacc_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, int8_t rs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16mf2_tum(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vwmacc_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmacc_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m1_tum(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vwmacc_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmacc_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, int8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m1_tum(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vwmacc_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmacc_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs1, + vint8m1_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m2_tum(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vwmacc_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmacc_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, int8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m2_tum(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vwmacc_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmacc_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs1, + vint8m2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m4_tum(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vwmacc_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmacc_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, int8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m4_tum(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vwmacc_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmacc_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs1, + vint8m4_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m8_tum(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vwmacc_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmacc_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, int8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m8_tum(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vwmacc_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmacc_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i32mf2_tum(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vwmacc_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, int16_t rs1, vint16mf4_t 
vs2, size_t vl) { +vint32mf2_t test_vwmacc_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32mf2_tum(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vwmacc_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmacc_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i32m1_tum(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vwmacc_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmacc_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m1_tum(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vwmacc_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmacc_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m2_tum(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vwmacc_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmacc_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, int16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m2_tum(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vwmacc_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmacc_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m4_tum(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vwmacc_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmacc_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, int16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m4_tum(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vwmacc_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmacc_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m8_tum(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vwmacc_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmacc_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, int16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m8_tum(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vwmacc_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmacc_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i64m1_tum(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vwmacc_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmacc_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m1_tum(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vwmacc_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmacc_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vwmacc_vv_i64m2_tum(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vwmacc_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmacc_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, int32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m2_tum(vm, 
vd, rs1, vs2, vl); } -vint64m4_t test_vwmacc_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmacc_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i64m4_tum(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vwmacc_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmacc_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, int32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m4_tum(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vwmacc_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmacc_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vwmacc_vv_i64m8_tum(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vwmacc_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmacc_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, int32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m8_tum(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vwmacc_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmacc_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs1, vint8mf8_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i16mf4_tumu(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vwmacc_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmacc_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, int8_t rs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16mf4_tumu(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vwmacc_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmacc_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs1, vint8mf4_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i16mf2_tumu(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vwmacc_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, int8_t rs1, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmacc_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, int8_t rs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16mf2_tumu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vwmacc_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmacc_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs1, vint8mf2_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i16m1_tumu(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vwmacc_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmacc_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m1_tumu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vwmacc_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmacc_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs1, + vint8m1_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m2_tumu(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vwmacc_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmacc_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m2_tumu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vwmacc_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint16m4_t 
test_vwmacc_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs1, + vint8m2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m4_tumu(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vwmacc_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmacc_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m4_tumu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vwmacc_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmacc_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs1, + vint8m4_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m8_tumu(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vwmacc_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmacc_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m8_tumu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vwmacc_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmacc_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i32mf2_tumu(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vwmacc_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmacc_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + int16_t rs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vwmacc_vx_i32mf2_tumu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vwmacc_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmacc_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i32m1_tumu(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vwmacc_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmacc_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m1_tumu(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vwmacc_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmacc_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs1, vint16m1_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i32m2_tumu(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vwmacc_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmacc_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m2_tumu(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vwmacc_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmacc_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m4_tumu(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vwmacc_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmacc_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m4_tumu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vwmacc_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmacc_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return 
__riscv_vwmacc_vv_i32m8_tumu(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vwmacc_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmacc_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m8_tumu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vwmacc_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmacc_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i64m1_tumu(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vwmacc_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmacc_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m1_tumu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vwmacc_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmacc_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint32m1_t vs1, vint32m1_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i64m2_tumu(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vwmacc_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmacc_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m2_tumu(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vwmacc_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmacc_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint32m2_t vs1, vint32m2_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i64m4_tumu(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vwmacc_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmacc_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, int32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m4_tumu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vwmacc_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmacc_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return __riscv_vwmacc_vv_i64m8_tumu(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vwmacc_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmacc_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m8_tumu(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vwmacc_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs1, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmacc_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs1, vint8mf8_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i16mf4_mu(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vwmacc_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, int8_t rs1, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmacc_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, int8_t rs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16mf4_mu(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vwmacc_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs1, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmacc_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs1, vint8mf4_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i16mf2_mu(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vwmacc_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, int8_t rs1, 
vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmacc_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, int8_t rs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16mf2_mu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vwmacc_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmacc_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m1_mu(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vwmacc_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, int8_t rs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmacc_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, int8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m1_mu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vwmacc_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmacc_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs1, + vint8m1_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m2_mu(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vwmacc_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, int8_t rs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmacc_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, int8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m2_mu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vwmacc_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs1, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmacc_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs1, + vint8m2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m4_mu(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vwmacc_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, int8_t rs1, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmacc_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, int8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m4_mu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vwmacc_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs1, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmacc_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs1, + vint8m4_t vs2, size_t vl) { return __riscv_vwmacc_vv_i16m8_mu(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vwmacc_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, int8_t rs1, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmacc_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, int8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i16m8_mu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vwmacc_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs1, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmacc_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vwmacc_vv_i32mf2_mu(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vwmacc_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, int16_t rs1, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmacc_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, int16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32mf2_mu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vwmacc_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmacc_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m1_mu(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vwmacc_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, int16_t rs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmacc_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, int16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m1_mu(vm, vd, rs1, vs2, vl); } -vint32m2_t 
test_vwmacc_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs1, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmacc_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs1, + vint16m1_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m2_mu(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vwmacc_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, int16_t rs1, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmacc_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, int16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m2_mu(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vwmacc_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmacc_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs1, + vint16m2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m4_mu(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vwmacc_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, int16_t rs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmacc_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, int16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m4_mu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vwmacc_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmacc_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs1, + vint16m4_t vs2, size_t vl) { return __riscv_vwmacc_vv_i32m8_mu(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vwmacc_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, int16_t rs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmacc_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, int16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i32m8_mu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vwmacc_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmacc_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i64m1_mu(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vwmacc_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, int32_t rs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmacc_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, int32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m1_mu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vwmacc_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs1, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmacc_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs1, + vint32m1_t vs2, size_t vl) { return __riscv_vwmacc_vv_i64m2_mu(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vwmacc_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, int32_t rs1, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmacc_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, int32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m2_mu(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vwmacc_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmacc_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs1, + vint32m2_t vs2, size_t vl) { return __riscv_vwmacc_vv_i64m4_mu(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vwmacc_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, int32_t rs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmacc_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, int32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m4_mu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vwmacc_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmacc_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs1, + vint32m4_t vs2, size_t vl) { return 
__riscv_vwmacc_vv_i64m8_mu(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vwmacc_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, int32_t rs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmacc_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, int32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vwmacc_vx_i64m8_mu(vm, vd, rs1, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwmaccsu.c b/auto-generated/policy_funcs/llvm-api-tests/vwmaccsu.c index e497c86c7..072b9ba12 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vwmaccsu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vwmaccsu.c @@ -6,482 +6,645 @@ #include <riscv_vector.h> -vint16mf4_t test_vwmaccsu_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccsu_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16mf4_tu(vd, vs1, vs2, vl); } -vint16mf4_t test_vwmaccsu_vx_i16mf4_tu(vint16mf4_t vd, int8_t rs1, vuint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccsu_vx_i16mf4_tu(vint16mf4_t vd, int8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16mf4_tu(vd, rs1, vs2, vl); } -vint16mf2_t test_vwmaccsu_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmaccsu_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16mf2_tu(vd, vs1, vs2, vl); } -vint16mf2_t test_vwmaccsu_vx_i16mf2_tu(vint16mf2_t vd, int8_t rs1, vuint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmaccsu_vx_i16mf2_tu(vint16mf2_t vd, int8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16mf2_tu(vd, rs1, vs2, vl); } -vint16m1_t test_vwmaccsu_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccsu_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16m1_tu(vd, vs1, vs2, vl); } -vint16m1_t test_vwmaccsu_vx_i16m1_tu(vint16m1_t vd, int8_t rs1, vuint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccsu_vx_i16m1_tu(vint16m1_t vd, int8_t rs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vx_i16m1_tu(vd, rs1, vs2, vl); } -vint16m2_t test_vwmaccsu_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccsu_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16m2_tu(vd, vs1, vs2, vl); } -vint16m2_t test_vwmaccsu_vx_i16m2_tu(vint16m2_t vd, int8_t rs1, vuint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccsu_vx_i16m2_tu(vint16m2_t vd, int8_t rs1, vuint8m1_t vs2, + size_t vl) { return __riscv_vwmaccsu_vx_i16m2_tu(vd, rs1, vs2, vl); } -vint16m4_t test_vwmaccsu_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccsu_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16m4_tu(vd, vs1, vs2, vl); } -vint16m4_t test_vwmaccsu_vx_i16m4_tu(vint16m4_t vd, int8_t rs1, vuint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccsu_vx_i16m4_tu(vint16m4_t vd, int8_t rs1, vuint8m2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vx_i16m4_tu(vd, rs1, vs2, vl); } -vint16m8_t test_vwmaccsu_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmaccsu_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16m8_tu(vd, vs1, vs2, vl); } -vint16m8_t test_vwmaccsu_vx_i16m8_tu(vint16m8_t vd, int8_t rs1, vuint8m4_t vs2, size_t vl) { +vint16m8_t
test_vwmaccsu_vx_i16m8_tu(vint16m8_t vd, int8_t rs1, vuint8m4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vx_i16m8_tu(vd, rs1, vs2, vl); } -vint32mf2_t test_vwmaccsu_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccsu_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs1, + vuint16mf4_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i32mf2_tu(vd, vs1, vs2, vl); } -vint32mf2_t test_vwmaccsu_vx_i32mf2_tu(vint32mf2_t vd, int16_t rs1, vuint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccsu_vx_i32mf2_tu(vint32mf2_t vd, int16_t rs1, + vuint16mf4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32mf2_tu(vd, rs1, vs2, vl); } -vint32m1_t test_vwmaccsu_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccsu_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i32m1_tu(vd, vs1, vs2, vl); } -vint32m1_t test_vwmaccsu_vx_i32m1_tu(vint32m1_t vd, int16_t rs1, vuint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccsu_vx_i32m1_tu(vint32m1_t vd, int16_t rs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m1_tu(vd, rs1, vs2, vl); } -vint32m2_t test_vwmaccsu_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccsu_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i32m2_tu(vd, vs1, vs2, vl); } -vint32m2_t test_vwmaccsu_vx_i32m2_tu(vint32m2_t vd, int16_t rs1, vuint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccsu_vx_i32m2_tu(vint32m2_t vd, int16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m2_tu(vd, rs1, vs2, vl); } -vint32m4_t test_vwmaccsu_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccsu_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i32m4_tu(vd, vs1, vs2, vl); } -vint32m4_t test_vwmaccsu_vx_i32m4_tu(vint32m4_t vd, int16_t rs1, vuint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccsu_vx_i32m4_tu(vint32m4_t vd, int16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m4_tu(vd, rs1, vs2, vl); } -vint32m8_t test_vwmaccsu_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmaccsu_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i32m8_tu(vd, vs1, vs2, vl); } -vint32m8_t test_vwmaccsu_vx_i32m8_tu(vint32m8_t vd, int16_t rs1, vuint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmaccsu_vx_i32m8_tu(vint32m8_t vd, int16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m8_tu(vd, rs1, vs2, vl); } -vint64m1_t test_vwmaccsu_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccsu_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i64m1_tu(vd, vs1, vs2, vl); } -vint64m1_t test_vwmaccsu_vx_i64m1_tu(vint64m1_t vd, int32_t rs1, vuint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccsu_vx_i64m1_tu(vint64m1_t vd, int32_t rs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m1_tu(vd, rs1, vs2, vl); } -vint64m2_t test_vwmaccsu_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmaccsu_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i64m2_tu(vd, vs1, vs2, vl); } -vint64m2_t test_vwmaccsu_vx_i64m2_tu(vint64m2_t vd, int32_t rs1, 
vuint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmaccsu_vx_i64m2_tu(vint64m2_t vd, int32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m2_tu(vd, rs1, vs2, vl); } -vint64m4_t test_vwmaccsu_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccsu_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i64m4_tu(vd, vs1, vs2, vl); } -vint64m4_t test_vwmaccsu_vx_i64m4_tu(vint64m4_t vd, int32_t rs1, vuint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccsu_vx_i64m4_tu(vint64m4_t vd, int32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m4_tu(vd, rs1, vs2, vl); } -vint64m8_t test_vwmaccsu_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccsu_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i64m8_tu(vd, vs1, vs2, vl); } -vint64m8_t test_vwmaccsu_vx_i64m8_tu(vint64m8_t vd, int32_t rs1, vuint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccsu_vx_i64m8_tu(vint64m8_t vd, int32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m8_tu(vd, rs1, vs2, vl); } -vint16mf4_t test_vwmaccsu_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccsu_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16mf4_tum(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vwmaccsu_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, int8_t rs1, vuint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccsu_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + int8_t rs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccsu_vx_i16mf4_tum(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vwmaccsu_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmaccsu_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16mf2_tum(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vwmaccsu_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, int8_t rs1, vuint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmaccsu_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + int8_t rs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vx_i16mf2_tum(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vwmaccsu_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccsu_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16m1_tum(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vwmaccsu_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, int8_t rs1, vuint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccsu_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, int8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m1_tum(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vwmaccsu_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccsu_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16m2_tum(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vwmaccsu_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, int8_t rs1, vuint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccsu_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, int8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m2_tum(vm, vd, rs1, vs2, vl); } -vint16m4_t 
test_vwmaccsu_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccsu_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16m4_tum(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vwmaccsu_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, int8_t rs1, vuint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccsu_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, int8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m4_tum(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vwmaccsu_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmaccsu_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16m8_tum(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vwmaccsu_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, int8_t rs1, vuint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmaccsu_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, int8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m8_tum(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vwmaccsu_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccsu_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32mf2_tum(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vwmaccsu_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, int16_t rs1, vuint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccsu_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + int16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vx_i32mf2_tum(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vwmaccsu_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccsu_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32m1_tum(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vwmaccsu_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, int16_t rs1, vuint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccsu_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, int16_t rs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m1_tum(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vwmaccsu_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccsu_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32m2_tum(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vwmaccsu_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, int16_t rs1, vuint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccsu_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, int16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m2_tum(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vwmaccsu_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccsu_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, + vint16m2_t vs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32m4_tum(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vwmaccsu_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, int16_t rs1, vuint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccsu_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, int16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m4_tum(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vwmaccsu_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs1, 
vuint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmaccsu_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, + vint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32m8_tum(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vwmaccsu_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, int16_t rs1, vuint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmaccsu_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, int16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m8_tum(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vwmaccsu_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccsu_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i64m1_tum(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vwmaccsu_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, int32_t rs1, vuint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccsu_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, int32_t rs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m1_tum(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vwmaccsu_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmaccsu_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i64m2_tum(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vwmaccsu_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, int32_t rs1, vuint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmaccsu_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, int32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m2_tum(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vwmaccsu_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccsu_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i64m4_tum(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vwmaccsu_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, int32_t rs1, vuint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccsu_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, int32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m4_tum(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vwmaccsu_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccsu_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, + vint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i64m8_tum(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vwmaccsu_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, int32_t rs1, vuint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccsu_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, int32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m8_tum(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vwmaccsu_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccsu_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16mf4_tumu(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vwmaccsu_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, int8_t rs1, vuint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccsu_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + int8_t rs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccsu_vx_i16mf4_tumu(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vwmaccsu_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vint16mf2_t 
test_vwmaccsu_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16mf2_tumu(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vwmaccsu_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, int8_t rs1, vuint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmaccsu_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + int8_t rs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vx_i16mf2_tumu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vwmaccsu_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccsu_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16m1_tumu(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vwmaccsu_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int8_t rs1, vuint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccsu_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, int8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m1_tumu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vwmaccsu_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccsu_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, + vint8m1_t vs1, vuint8m1_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16m2_tumu(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vwmaccsu_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int8_t rs1, vuint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccsu_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, int8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m2_tumu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vwmaccsu_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccsu_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, + vint8m2_t vs1, vuint8m2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16m4_tumu(vm, vd, vs1, vs2, vl); } -vint16m4_t test_vwmaccsu_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int8_t rs1, vuint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccsu_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, int8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m4_tumu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vwmaccsu_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmaccsu_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, + vint8m4_t vs1, vuint8m4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16m8_tumu(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vwmaccsu_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int8_t rs1, vuint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmaccsu_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, int8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m8_tumu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vwmaccsu_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccsu_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32mf2_tumu(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vwmaccsu_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, int16_t rs1, vuint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccsu_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + int16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vx_i32mf2_tumu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vwmaccsu_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccsu_vv_i32m1_tumu(vbool32_t vm, 
vint32m1_t vd, + vint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32m1_tumu(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vwmaccsu_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int16_t rs1, vuint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccsu_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, int16_t rs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m1_tumu(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vwmaccsu_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccsu_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32m2_tumu(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vwmaccsu_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int16_t rs1, vuint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccsu_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, int16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m2_tumu(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vwmaccsu_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccsu_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint16m2_t vs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32m4_tumu(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vwmaccsu_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int16_t rs1, vuint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccsu_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, int16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m4_tumu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vwmaccsu_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmaccsu_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32m8_tumu(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vwmaccsu_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int16_t rs1, vuint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmaccsu_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, int16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m8_tumu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vwmaccsu_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccsu_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i64m1_tumu(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vwmaccsu_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int32_t rs1, vuint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccsu_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, int32_t rs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m1_tumu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vwmaccsu_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmaccsu_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i64m2_tumu(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vwmaccsu_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int32_t rs1, vuint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmaccsu_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, int32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m2_tumu(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vwmaccsu_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccsu_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint32m2_t vs1, 
vuint32m2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i64m4_tumu(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vwmaccsu_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, int32_t rs1, vuint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccsu_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, int32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m4_tumu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vwmaccsu_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccsu_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i64m8_tumu(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vwmaccsu_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int32_t rs1, vuint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccsu_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, int32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m8_tumu(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vwmaccsu_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccsu_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16mf4_mu(vm, vd, vs1, vs2, vl); } -vint16mf4_t test_vwmaccsu_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, int8_t rs1, vuint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccsu_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, int8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16mf4_mu(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vwmaccsu_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmaccsu_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16mf2_mu(vm, vd, vs1, vs2, vl); } -vint16mf2_t test_vwmaccsu_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, int8_t rs1, vuint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmaccsu_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, int8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16mf2_mu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vwmaccsu_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccsu_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i16m1_mu(vm, vd, vs1, vs2, vl); } -vint16m1_t test_vwmaccsu_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, int8_t rs1, vuint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccsu_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, int8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m1_mu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vwmaccsu_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccsu_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16m2_mu(vm, vd, vs1, vs2, vl); } -vint16m2_t test_vwmaccsu_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, int8_t rs1, vuint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccsu_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, int8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m2_mu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vwmaccsu_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccsu_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16m4_mu(vm, vd, vs1, vs2, vl); } 
-vint16m4_t test_vwmaccsu_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, int8_t rs1, vuint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccsu_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, int8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m4_mu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vwmaccsu_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmaccsu_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i16m8_mu(vm, vd, vs1, vs2, vl); } -vint16m8_t test_vwmaccsu_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, int8_t rs1, vuint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmaccsu_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, int8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i16m8_mu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vwmaccsu_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccsu_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32mf2_mu(vm, vd, vs1, vs2, vl); } -vint32mf2_t test_vwmaccsu_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, int16_t rs1, vuint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccsu_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + int16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccsu_vx_i32mf2_mu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vwmaccsu_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccsu_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32m1_mu(vm, vd, vs1, vs2, vl); } -vint32m1_t test_vwmaccsu_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, int16_t rs1, vuint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccsu_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, int16_t rs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m1_mu(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vwmaccsu_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccsu_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i32m2_mu(vm, vd, vs1, vs2, vl); } -vint32m2_t test_vwmaccsu_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, int16_t rs1, vuint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccsu_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, int16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m2_mu(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vwmaccsu_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccsu_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i32m4_mu(vm, vd, vs1, vs2, vl); } -vint32m4_t test_vwmaccsu_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, int16_t rs1, vuint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccsu_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, int16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m4_mu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vwmaccsu_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmaccsu_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i32m8_mu(vm, vd, vs1, vs2, vl); } -vint32m8_t test_vwmaccsu_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, int16_t rs1, vuint16m4_t vs2, size_t vl) { 
+vint32m8_t test_vwmaccsu_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, int16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i32m8_mu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vwmaccsu_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccsu_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i64m1_mu(vm, vd, vs1, vs2, vl); } -vint64m1_t test_vwmaccsu_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, int32_t rs1, vuint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccsu_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, int32_t rs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m1_mu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vwmaccsu_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmaccsu_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, + vint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i64m2_mu(vm, vd, vs1, vs2, vl); } -vint64m2_t test_vwmaccsu_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, int32_t rs1, vuint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmaccsu_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, int32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m2_mu(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vwmaccsu_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccsu_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, + vint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vwmaccsu_vv_i64m4_mu(vm, vd, vs1, vs2, vl); } -vint64m4_t test_vwmaccsu_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, int32_t rs1, vuint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccsu_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, int32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m4_mu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vwmaccsu_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccsu_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vv_i64m8_mu(vm, vd, vs1, vs2, vl); } -vint64m8_t test_vwmaccsu_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, int32_t rs1, vuint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccsu_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, int32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vwmaccsu_vx_i64m8_mu(vm, vd, rs1, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwmaccu.c b/auto-generated/policy_funcs/llvm-api-tests/vwmaccu.c index 33b6f8c04..4dbbc6107 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vwmaccu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vwmaccu.c @@ -6,482 +6,670 @@ #include <riscv_vector.h> -vuint16mf4_t test_vwmaccu_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vwmaccu_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u16mf4_tu(vd, vs1, vs2, vl); } -vuint16mf4_t test_vwmaccu_vx_u16mf4_tu(vuint16mf4_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vwmaccu_vx_u16mf4_tu(vuint16mf4_t vd, uint8_t rs1, + vuint8mf8_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16mf4_tu(vd, rs1, vs2, vl); } -vuint16mf2_t test_vwmaccu_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vwmaccu_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u16mf2_tu(vd, vs1,
vs2, vl); } -vuint16mf2_t test_vwmaccu_vx_u16mf2_tu(vuint16mf2_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vwmaccu_vx_u16mf2_tu(vuint16mf2_t vd, uint8_t rs1, + vuint8mf4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16mf2_tu(vd, rs1, vs2, vl); } -vuint16m1_t test_vwmaccu_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vwmaccu_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u16m1_tu(vd, vs1, vs2, vl); } -vuint16m1_t test_vwmaccu_vx_u16m1_tu(vuint16m1_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vwmaccu_vx_u16m1_tu(vuint16m1_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m1_tu(vd, rs1, vs2, vl); } -vuint16m2_t test_vwmaccu_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vwmaccu_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u16m2_tu(vd, vs1, vs2, vl); } -vuint16m2_t test_vwmaccu_vx_u16m2_tu(vuint16m2_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vwmaccu_vx_u16m2_tu(vuint16m2_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m2_tu(vd, rs1, vs2, vl); } -vuint16m4_t test_vwmaccu_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vwmaccu_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u16m4_tu(vd, vs1, vs2, vl); } -vuint16m4_t test_vwmaccu_vx_u16m4_tu(vuint16m4_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vwmaccu_vx_u16m4_tu(vuint16m4_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m4_tu(vd, rs1, vs2, vl); } -vuint16m8_t test_vwmaccu_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vwmaccu_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u16m8_tu(vd, vs1, vs2, vl); } -vuint16m8_t test_vwmaccu_vx_u16m8_tu(vuint16m8_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vwmaccu_vx_u16m8_tu(vuint16m8_t vd, uint8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m8_tu(vd, rs1, vs2, vl); } -vuint32mf2_t test_vwmaccu_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vwmaccu_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs1, + vuint16mf4_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u32mf2_tu(vd, vs1, vs2, vl); } -vuint32mf2_t test_vwmaccu_vx_u32mf2_tu(vuint32mf2_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vwmaccu_vx_u32mf2_tu(vuint32mf2_t vd, uint16_t rs1, + vuint16mf4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u32mf2_tu(vd, rs1, vs2, vl); } -vuint32m1_t test_vwmaccu_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vwmaccu_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u32m1_tu(vd, vs1, vs2, vl); } -vuint32m1_t test_vwmaccu_vx_u32m1_tu(vuint32m1_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vwmaccu_vx_u32m1_tu(vuint32m1_t vd, uint16_t rs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u32m1_tu(vd, rs1, vs2, vl); } -vuint32m2_t test_vwmaccu_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vwmaccu_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs1, + vuint16m1_t vs2, size_t 
vl) { return __riscv_vwmaccu_vv_u32m2_tu(vd, vs1, vs2, vl); } -vuint32m2_t test_vwmaccu_vx_u32m2_tu(vuint32m2_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vwmaccu_vx_u32m2_tu(vuint32m2_t vd, uint16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u32m2_tu(vd, rs1, vs2, vl); } -vuint32m4_t test_vwmaccu_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vwmaccu_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u32m4_tu(vd, vs1, vs2, vl); } -vuint32m4_t test_vwmaccu_vx_u32m4_tu(vuint32m4_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vwmaccu_vx_u32m4_tu(vuint32m4_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u32m4_tu(vd, rs1, vs2, vl); } -vuint32m8_t test_vwmaccu_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vwmaccu_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u32m8_tu(vd, vs1, vs2, vl); } -vuint32m8_t test_vwmaccu_vx_u32m8_tu(vuint32m8_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vwmaccu_vx_u32m8_tu(vuint32m8_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u32m8_tu(vd, rs1, vs2, vl); } -vuint64m1_t test_vwmaccu_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vwmaccu_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u64m1_tu(vd, vs1, vs2, vl); } -vuint64m1_t test_vwmaccu_vx_u64m1_tu(vuint64m1_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vwmaccu_vx_u64m1_tu(vuint64m1_t vd, uint32_t rs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u64m1_tu(vd, rs1, vs2, vl); } -vuint64m2_t test_vwmaccu_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vwmaccu_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u64m2_tu(vd, vs1, vs2, vl); } -vuint64m2_t test_vwmaccu_vx_u64m2_tu(vuint64m2_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vwmaccu_vx_u64m2_tu(vuint64m2_t vd, uint32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u64m2_tu(vd, rs1, vs2, vl); } -vuint64m4_t test_vwmaccu_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vwmaccu_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u64m4_tu(vd, vs1, vs2, vl); } -vuint64m4_t test_vwmaccu_vx_u64m4_tu(vuint64m4_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vwmaccu_vx_u64m4_tu(vuint64m4_t vd, uint32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u64m4_tu(vd, rs1, vs2, vl); } -vuint64m8_t test_vwmaccu_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vwmaccu_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vwmaccu_vv_u64m8_tu(vd, vs1, vs2, vl); } -vuint64m8_t test_vwmaccu_vx_u64m8_tu(vuint64m8_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vwmaccu_vx_u64m8_tu(vuint64m8_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u64m8_tu(vd, rs1, vs2, vl); } -vuint16mf4_t test_vwmaccu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t 
test_vwmaccu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16mf4_tum(vm, vd, vs1, vs2, vl); } -vuint16mf4_t test_vwmaccu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vwmaccu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + uint8_t rs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u16mf4_tum(vm, vd, rs1, vs2, vl); } -vuint16mf2_t test_vwmaccu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vwmaccu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16mf2_tum(vm, vd, vs1, vs2, vl); } -vuint16mf2_t test_vwmaccu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vwmaccu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + uint8_t rs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u16mf2_tum(vm, vd, rs1, vs2, vl); } -vuint16m1_t test_vwmaccu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vwmaccu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m1_tum(vm, vd, vs1, vs2, vl); } -vuint16m1_t test_vwmaccu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vwmaccu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m1_tum(vm, vd, rs1, vs2, vl); } -vuint16m2_t test_vwmaccu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vwmaccu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs1, vuint8m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m2_tum(vm, vd, vs1, vs2, vl); } -vuint16m2_t test_vwmaccu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vwmaccu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m2_tum(vm, vd, rs1, vs2, vl); } -vuint16m4_t test_vwmaccu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vwmaccu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs1, vuint8m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m4_tum(vm, vd, vs1, vs2, vl); } -vuint16m4_t test_vwmaccu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vwmaccu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m4_tum(vm, vd, rs1, vs2, vl); } -vuint16m8_t test_vwmaccu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vwmaccu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs1, vuint8m4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m8_tum(vm, vd, vs1, vs2, vl); } -vuint16m8_t test_vwmaccu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vwmaccu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, uint8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m8_tum(vm, vd, rs1, vs2, vl); } -vuint32mf2_t test_vwmaccu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vwmaccu_vv_u32mf2_tum(vbool64_t vm, 
vuint32mf2_t vd, + vuint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32mf2_tum(vm, vd, vs1, vs2, vl); } -vuint32mf2_t test_vwmaccu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vwmaccu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + uint16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u32mf2_tum(vm, vd, rs1, vs2, vl); } -vuint32m1_t test_vwmaccu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vwmaccu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m1_tum(vm, vd, vs1, vs2, vl); } -vuint32m1_t test_vwmaccu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vwmaccu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + uint16_t rs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u32m1_tum(vm, vd, rs1, vs2, vl); } -vuint32m2_t test_vwmaccu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vwmaccu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m2_tum(vm, vd, vs1, vs2, vl); } -vuint32m2_t test_vwmaccu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vwmaccu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + uint16_t rs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u32m2_tum(vm, vd, rs1, vs2, vl); } -vuint32m4_t test_vwmaccu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vwmaccu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m4_tum(vm, vd, vs1, vs2, vl); } -vuint32m4_t test_vwmaccu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vwmaccu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u32m4_tum(vm, vd, rs1, vs2, vl); } -vuint32m8_t test_vwmaccu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vwmaccu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m8_tum(vm, vd, vs1, vs2, vl); } -vuint32m8_t test_vwmaccu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vwmaccu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u32m8_tum(vm, vd, rs1, vs2, vl); } -vuint64m1_t test_vwmaccu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vwmaccu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u64m1_tum(vm, vd, vs1, vs2, vl); } -vuint64m1_t test_vwmaccu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vwmaccu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + uint32_t rs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u64m1_tum(vm, vd, rs1, vs2, vl); } -vuint64m2_t test_vwmaccu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vwmaccu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t 
vd, + vuint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u64m2_tum(vm, vd, vs1, vs2, vl); } -vuint64m2_t test_vwmaccu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vwmaccu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + uint32_t rs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u64m2_tum(vm, vd, rs1, vs2, vl); } -vuint64m4_t test_vwmaccu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vwmaccu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u64m4_tum(vm, vd, vs1, vs2, vl); } -vuint64m4_t test_vwmaccu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vwmaccu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + uint32_t rs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u64m4_tum(vm, vd, rs1, vs2, vl); } -vuint64m8_t test_vwmaccu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vwmaccu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u64m8_tum(vm, vd, vs1, vs2, vl); } -vuint64m8_t test_vwmaccu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vwmaccu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u64m8_tum(vm, vd, rs1, vs2, vl); } -vuint16mf4_t test_vwmaccu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vwmaccu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16mf4_tumu(vm, vd, vs1, vs2, vl); } -vuint16mf4_t test_vwmaccu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vwmaccu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + uint8_t rs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u16mf4_tumu(vm, vd, rs1, vs2, vl); } -vuint16mf2_t test_vwmaccu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vwmaccu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16mf2_tumu(vm, vd, vs1, vs2, vl); } -vuint16mf2_t test_vwmaccu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vwmaccu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + uint8_t rs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u16mf2_tumu(vm, vd, rs1, vs2, vl); } -vuint16m1_t test_vwmaccu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vwmaccu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m1_tumu(vm, vd, vs1, vs2, vl); } -vuint16m1_t test_vwmaccu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vwmaccu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + uint8_t rs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u16m1_tumu(vm, vd, rs1, vs2, vl); } -vuint16m2_t test_vwmaccu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vwmaccu_vv_u16m2_tumu(vbool8_t vm, 
vuint16m2_t vd, + vuint8m1_t vs1, vuint8m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m2_tumu(vm, vd, vs1, vs2, vl); } -vuint16m2_t test_vwmaccu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vwmaccu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m2_tumu(vm, vd, rs1, vs2, vl); } -vuint16m4_t test_vwmaccu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vwmaccu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs1, vuint8m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m4_tumu(vm, vd, vs1, vs2, vl); } -vuint16m4_t test_vwmaccu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vwmaccu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m4_tumu(vm, vd, rs1, vs2, vl); } -vuint16m8_t test_vwmaccu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vwmaccu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs1, vuint8m4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m8_tumu(vm, vd, vs1, vs2, vl); } -vuint16m8_t test_vwmaccu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vwmaccu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, uint8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m8_tumu(vm, vd, rs1, vs2, vl); } -vuint32mf2_t test_vwmaccu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vwmaccu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32mf2_tumu(vm, vd, vs1, vs2, vl); } -vuint32mf2_t test_vwmaccu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vwmaccu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + uint16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u32mf2_tumu(vm, vd, rs1, vs2, vl); } -vuint32m1_t test_vwmaccu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vwmaccu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m1_tumu(vm, vd, vs1, vs2, vl); } -vuint32m1_t test_vwmaccu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vwmaccu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + uint16_t rs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u32m1_tumu(vm, vd, rs1, vs2, vl); } -vuint32m2_t test_vwmaccu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vwmaccu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m2_tumu(vm, vd, vs1, vs2, vl); } -vuint32m2_t test_vwmaccu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vwmaccu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + uint16_t rs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u32m2_tumu(vm, vd, rs1, vs2, vl); } -vuint32m4_t test_vwmaccu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vwmaccu_vv_u32m4_tumu(vbool8_t vm, 
vuint32m4_t vd, + vuint16m2_t vs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m4_tumu(vm, vd, vs1, vs2, vl); } -vuint32m4_t test_vwmaccu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vwmaccu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + uint16_t rs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u32m4_tumu(vm, vd, rs1, vs2, vl); } -vuint32m8_t test_vwmaccu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vwmaccu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m8_tumu(vm, vd, vs1, vs2, vl); } -vuint32m8_t test_vwmaccu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vwmaccu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + uint16_t rs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u32m8_tumu(vm, vd, rs1, vs2, vl); } -vuint64m1_t test_vwmaccu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vwmaccu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u64m1_tumu(vm, vd, vs1, vs2, vl); } -vuint64m1_t test_vwmaccu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vwmaccu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + uint32_t rs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u64m1_tumu(vm, vd, rs1, vs2, vl); } -vuint64m2_t test_vwmaccu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vwmaccu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u64m2_tumu(vm, vd, vs1, vs2, vl); } -vuint64m2_t test_vwmaccu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vwmaccu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + uint32_t rs1, vuint32m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u64m2_tumu(vm, vd, rs1, vs2, vl); } -vuint64m4_t test_vwmaccu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vwmaccu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u64m4_tumu(vm, vd, vs1, vs2, vl); } -vuint64m4_t test_vwmaccu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vwmaccu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + uint32_t rs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u64m4_tumu(vm, vd, rs1, vs2, vl); } -vuint64m8_t test_vwmaccu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vwmaccu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u64m8_tumu(vm, vd, vs1, vs2, vl); } -vuint64m8_t test_vwmaccu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vwmaccu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + uint32_t rs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u64m8_tumu(vm, vd, rs1, vs2, vl); } -vuint16mf4_t test_vwmaccu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs1, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t 
test_vwmaccu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16mf4_mu(vm, vd, vs1, vs2, vl); } -vuint16mf4_t test_vwmaccu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, uint8_t rs1, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vwmaccu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + uint8_t rs1, vuint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u16mf4_mu(vm, vd, rs1, vs2, vl); } -vuint16mf2_t test_vwmaccu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs1, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vwmaccu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16mf2_mu(vm, vd, vs1, vs2, vl); } -vuint16mf2_t test_vwmaccu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, uint8_t rs1, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vwmaccu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + uint8_t rs1, vuint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u16mf2_mu(vm, vd, rs1, vs2, vl); } -vuint16m1_t test_vwmaccu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs1, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vwmaccu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs1, vuint8mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m1_mu(vm, vd, vs1, vs2, vl); } -vuint16m1_t test_vwmaccu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, uint8_t rs1, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vwmaccu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, uint8_t rs1, + vuint8mf2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m1_mu(vm, vd, rs1, vs2, vl); } -vuint16m2_t test_vwmaccu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs1, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vwmaccu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs1, vuint8m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m2_mu(vm, vd, vs1, vs2, vl); } -vuint16m2_t test_vwmaccu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, uint8_t rs1, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vwmaccu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, uint8_t rs1, + vuint8m1_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m2_mu(vm, vd, rs1, vs2, vl); } -vuint16m4_t test_vwmaccu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs1, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vwmaccu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs1, vuint8m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m4_mu(vm, vd, vs1, vs2, vl); } -vuint16m4_t test_vwmaccu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, uint8_t rs1, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vwmaccu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, uint8_t rs1, + vuint8m2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m4_mu(vm, vd, rs1, vs2, vl); } -vuint16m8_t test_vwmaccu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs1, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vwmaccu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs1, vuint8m4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u16m8_mu(vm, vd, vs1, vs2, vl); } -vuint16m8_t test_vwmaccu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, uint8_t rs1, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vwmaccu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, uint8_t rs1, + vuint8m4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u16m8_mu(vm, vd, rs1, vs2, vl); } -vuint32mf2_t test_vwmaccu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs1, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vwmaccu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs1, 
vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32mf2_mu(vm, vd, vs1, vs2, vl); } -vuint32mf2_t test_vwmaccu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, uint16_t rs1, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vwmaccu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + uint16_t rs1, vuint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccu_vx_u32mf2_mu(vm, vd, rs1, vs2, vl); } -vuint32m1_t test_vwmaccu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs1, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vwmaccu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs1, vuint16mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m1_mu(vm, vd, vs1, vs2, vl); } -vuint32m1_t test_vwmaccu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, uint16_t rs1, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vwmaccu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, uint16_t rs1, + vuint16mf2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u32m1_mu(vm, vd, rs1, vs2, vl); } -vuint32m2_t test_vwmaccu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs1, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vwmaccu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs1, vuint16m1_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m2_mu(vm, vd, vs1, vs2, vl); } -vuint32m2_t test_vwmaccu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, uint16_t rs1, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vwmaccu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, uint16_t rs1, + vuint16m1_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u32m2_mu(vm, vd, rs1, vs2, vl); } -vuint32m4_t test_vwmaccu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs1, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vwmaccu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs1, vuint16m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m4_mu(vm, vd, vs1, vs2, vl); } -vuint32m4_t test_vwmaccu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, uint16_t rs1, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vwmaccu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, uint16_t rs1, + vuint16m2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u32m4_mu(vm, vd, rs1, vs2, vl); } -vuint32m8_t test_vwmaccu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs1, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vwmaccu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs1, vuint16m4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u32m8_mu(vm, vd, vs1, vs2, vl); } -vuint32m8_t test_vwmaccu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, uint16_t rs1, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vwmaccu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, uint16_t rs1, + vuint16m4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u32m8_mu(vm, vd, rs1, vs2, vl); } -vuint64m1_t test_vwmaccu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs1, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vwmaccu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs1, vuint32mf2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u64m1_mu(vm, vd, vs1, vs2, vl); } -vuint64m1_t test_vwmaccu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, uint32_t rs1, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vwmaccu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, uint32_t rs1, + vuint32mf2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u64m1_mu(vm, vd, rs1, vs2, vl); } -vuint64m2_t test_vwmaccu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs1, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vwmaccu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs1, vuint32m1_t vs2, + size_t vl) { return 
__riscv_vwmaccu_vv_u64m2_mu(vm, vd, vs1, vs2, vl); } -vuint64m2_t test_vwmaccu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, uint32_t rs1, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vwmaccu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, uint32_t rs1, + vuint32m1_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u64m2_mu(vm, vd, rs1, vs2, vl); } -vuint64m4_t test_vwmaccu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs1, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vwmaccu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs1, vuint32m2_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u64m4_mu(vm, vd, vs1, vs2, vl); } -vuint64m4_t test_vwmaccu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, uint32_t rs1, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vwmaccu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, uint32_t rs1, + vuint32m2_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u64m4_mu(vm, vd, rs1, vs2, vl); } -vuint64m8_t test_vwmaccu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs1, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vwmaccu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs1, vuint32m4_t vs2, + size_t vl) { return __riscv_vwmaccu_vv_u64m8_mu(vm, vd, vs1, vs2, vl); } -vuint64m8_t test_vwmaccu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, uint32_t rs1, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vwmaccu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, uint32_t rs1, + vuint32m4_t vs2, size_t vl) { return __riscv_vwmaccu_vx_u64m8_mu(vm, vd, rs1, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwmaccus.c b/auto-generated/policy_funcs/llvm-api-tests/vwmaccus.c index 508919bab..56be15183 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vwmaccus.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vwmaccus.c @@ -6,242 +6,314 @@ #include -vint16mf4_t test_vwmaccus_vx_i16mf4_tu(vint16mf4_t vd, uint8_t rs1, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccus_vx_i16mf4_tu(vint16mf4_t vd, uint8_t rs1, + vint8mf8_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16mf4_tu(vd, rs1, vs2, vl); } -vint16mf2_t test_vwmaccus_vx_i16mf2_tu(vint16mf2_t vd, uint8_t rs1, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmaccus_vx_i16mf2_tu(vint16mf2_t vd, uint8_t rs1, + vint8mf4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16mf2_tu(vd, rs1, vs2, vl); } -vint16m1_t test_vwmaccus_vx_i16m1_tu(vint16m1_t vd, uint8_t rs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccus_vx_i16m1_tu(vint16m1_t vd, uint8_t rs1, vint8mf2_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i16m1_tu(vd, rs1, vs2, vl); } -vint16m2_t test_vwmaccus_vx_i16m2_tu(vint16m2_t vd, uint8_t rs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccus_vx_i16m2_tu(vint16m2_t vd, uint8_t rs1, vint8m1_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i16m2_tu(vd, rs1, vs2, vl); } -vint16m4_t test_vwmaccus_vx_i16m4_tu(vint16m4_t vd, uint8_t rs1, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccus_vx_i16m4_tu(vint16m4_t vd, uint8_t rs1, vint8m2_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i16m4_tu(vd, rs1, vs2, vl); } -vint16m8_t test_vwmaccus_vx_i16m8_tu(vint16m8_t vd, uint8_t rs1, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmaccus_vx_i16m8_tu(vint16m8_t vd, uint8_t rs1, vint8m4_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i16m8_tu(vd, rs1, vs2, vl); } -vint32mf2_t test_vwmaccus_vx_i32mf2_tu(vint32mf2_t vd, uint16_t rs1, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccus_vx_i32mf2_tu(vint32mf2_t vd, uint16_t rs1, + vint16mf4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32mf2_tu(vd, rs1, 
vs2, vl); } -vint32m1_t test_vwmaccus_vx_i32m1_tu(vint32m1_t vd, uint16_t rs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccus_vx_i32m1_tu(vint32m1_t vd, uint16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m1_tu(vd, rs1, vs2, vl); } -vint32m2_t test_vwmaccus_vx_i32m2_tu(vint32m2_t vd, uint16_t rs1, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccus_vx_i32m2_tu(vint32m2_t vd, uint16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m2_tu(vd, rs1, vs2, vl); } -vint32m4_t test_vwmaccus_vx_i32m4_tu(vint32m4_t vd, uint16_t rs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccus_vx_i32m4_tu(vint32m4_t vd, uint16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m4_tu(vd, rs1, vs2, vl); } -vint32m8_t test_vwmaccus_vx_i32m8_tu(vint32m8_t vd, uint16_t rs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmaccus_vx_i32m8_tu(vint32m8_t vd, uint16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m8_tu(vd, rs1, vs2, vl); } -vint64m1_t test_vwmaccus_vx_i64m1_tu(vint64m1_t vd, uint32_t rs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccus_vx_i64m1_tu(vint64m1_t vd, uint32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m1_tu(vd, rs1, vs2, vl); } -vint64m2_t test_vwmaccus_vx_i64m2_tu(vint64m2_t vd, uint32_t rs1, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmaccus_vx_i64m2_tu(vint64m2_t vd, uint32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m2_tu(vd, rs1, vs2, vl); } -vint64m4_t test_vwmaccus_vx_i64m4_tu(vint64m4_t vd, uint32_t rs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccus_vx_i64m4_tu(vint64m4_t vd, uint32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m4_tu(vd, rs1, vs2, vl); } -vint64m8_t test_vwmaccus_vx_i64m8_tu(vint64m8_t vd, uint32_t rs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccus_vx_i64m8_tu(vint64m8_t vd, uint32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m8_tu(vd, rs1, vs2, vl); } -vint16mf4_t test_vwmaccus_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, uint8_t rs1, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccus_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + uint8_t rs1, vint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i16mf4_tum(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vwmaccus_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, uint8_t rs1, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmaccus_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + uint8_t rs1, vint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i16mf2_tum(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vwmaccus_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, uint8_t rs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccus_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, uint8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m1_tum(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vwmaccus_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, uint8_t rs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccus_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, uint8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m2_tum(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vwmaccus_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, uint8_t rs1, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccus_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, uint8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m4_tum(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vwmaccus_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, uint8_t rs1, vint8m4_t vs2, 
size_t vl) { +vint16m8_t test_vwmaccus_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, uint8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m8_tum(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vwmaccus_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, uint16_t rs1, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccus_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + uint16_t rs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i32mf2_tum(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vwmaccus_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, uint16_t rs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccus_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, uint16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m1_tum(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vwmaccus_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, uint16_t rs1, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccus_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, uint16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m2_tum(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vwmaccus_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, uint16_t rs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccus_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, uint16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m4_tum(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vwmaccus_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, uint16_t rs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmaccus_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, uint16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m8_tum(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vwmaccus_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, uint32_t rs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccus_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, uint32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m1_tum(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vwmaccus_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, uint32_t rs1, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmaccus_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, uint32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m2_tum(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vwmaccus_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, uint32_t rs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccus_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, uint32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m4_tum(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vwmaccus_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, uint32_t rs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccus_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, uint32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m8_tum(vm, vd, rs1, vs2, vl); } -vint16mf4_t test_vwmaccus_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, uint8_t rs1, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccus_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + uint8_t rs1, vint8mf8_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i16mf4_tumu(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vwmaccus_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, uint8_t rs1, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmaccus_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + uint8_t rs1, vint8mf4_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i16mf2_tumu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vwmaccus_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, uint8_t rs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccus_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, uint8_t rs1, + 
vint8mf2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m1_tumu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vwmaccus_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, uint8_t rs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccus_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, uint8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m2_tumu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vwmaccus_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, uint8_t rs1, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccus_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, uint8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m4_tumu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vwmaccus_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, uint8_t rs1, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmaccus_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, uint8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m8_tumu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vwmaccus_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, uint16_t rs1, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccus_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + uint16_t rs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i32mf2_tumu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vwmaccus_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, uint16_t rs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccus_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + uint16_t rs1, vint16mf2_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i32m1_tumu(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vwmaccus_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, uint16_t rs1, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccus_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + uint16_t rs1, vint16m1_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i32m2_tumu(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vwmaccus_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, uint16_t rs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccus_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, uint16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m4_tumu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vwmaccus_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, uint16_t rs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmaccus_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, uint16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m8_tumu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vwmaccus_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, uint32_t rs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccus_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + uint32_t rs1, vint32mf2_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i64m1_tumu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vwmaccus_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, uint32_t rs1, vint32m1_t vs2, size_t vl) { +vint64m2_t test_vwmaccus_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + uint32_t rs1, vint32m1_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i64m2_tumu(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vwmaccus_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, uint32_t rs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccus_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + uint32_t rs1, vint32m2_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i64m4_tumu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vwmaccus_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, uint32_t rs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccus_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, uint32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m8_tumu(vm, vd, rs1, vs2, 
vl); } -vint16mf4_t test_vwmaccus_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, uint8_t rs1, vint8mf8_t vs2, size_t vl) { +vint16mf4_t test_vwmaccus_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + uint8_t rs1, vint8mf8_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16mf4_mu(vm, vd, rs1, vs2, vl); } -vint16mf2_t test_vwmaccus_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, uint8_t rs1, vint8mf4_t vs2, size_t vl) { +vint16mf2_t test_vwmaccus_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + uint8_t rs1, vint8mf4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16mf2_mu(vm, vd, rs1, vs2, vl); } -vint16m1_t test_vwmaccus_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, uint8_t rs1, vint8mf2_t vs2, size_t vl) { +vint16m1_t test_vwmaccus_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, uint8_t rs1, + vint8mf2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m1_mu(vm, vd, rs1, vs2, vl); } -vint16m2_t test_vwmaccus_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, uint8_t rs1, vint8m1_t vs2, size_t vl) { +vint16m2_t test_vwmaccus_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, uint8_t rs1, + vint8m1_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m2_mu(vm, vd, rs1, vs2, vl); } -vint16m4_t test_vwmaccus_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, uint8_t rs1, vint8m2_t vs2, size_t vl) { +vint16m4_t test_vwmaccus_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, uint8_t rs1, + vint8m2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m4_mu(vm, vd, rs1, vs2, vl); } -vint16m8_t test_vwmaccus_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, uint8_t rs1, vint8m4_t vs2, size_t vl) { +vint16m8_t test_vwmaccus_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, uint8_t rs1, + vint8m4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i16m8_mu(vm, vd, rs1, vs2, vl); } -vint32mf2_t test_vwmaccus_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, uint16_t rs1, vint16mf4_t vs2, size_t vl) { +vint32mf2_t test_vwmaccus_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + uint16_t rs1, vint16mf4_t vs2, + size_t vl) { return __riscv_vwmaccus_vx_i32mf2_mu(vm, vd, rs1, vs2, vl); } -vint32m1_t test_vwmaccus_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, uint16_t rs1, vint16mf2_t vs2, size_t vl) { +vint32m1_t test_vwmaccus_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, uint16_t rs1, + vint16mf2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m1_mu(vm, vd, rs1, vs2, vl); } -vint32m2_t test_vwmaccus_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, uint16_t rs1, vint16m1_t vs2, size_t vl) { +vint32m2_t test_vwmaccus_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, uint16_t rs1, + vint16m1_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m2_mu(vm, vd, rs1, vs2, vl); } -vint32m4_t test_vwmaccus_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, uint16_t rs1, vint16m2_t vs2, size_t vl) { +vint32m4_t test_vwmaccus_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, uint16_t rs1, + vint16m2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m4_mu(vm, vd, rs1, vs2, vl); } -vint32m8_t test_vwmaccus_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, uint16_t rs1, vint16m4_t vs2, size_t vl) { +vint32m8_t test_vwmaccus_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, uint16_t rs1, + vint16m4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i32m8_mu(vm, vd, rs1, vs2, vl); } -vint64m1_t test_vwmaccus_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, uint32_t rs1, vint32mf2_t vs2, size_t vl) { +vint64m1_t test_vwmaccus_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, uint32_t rs1, + vint32mf2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m1_mu(vm, vd, rs1, vs2, vl); } -vint64m2_t test_vwmaccus_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, uint32_t rs1, vint32m1_t vs2, size_t vl) { +vint64m2_t 
test_vwmaccus_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, uint32_t rs1, + vint32m1_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m2_mu(vm, vd, rs1, vs2, vl); } -vint64m4_t test_vwmaccus_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, uint32_t rs1, vint32m2_t vs2, size_t vl) { +vint64m4_t test_vwmaccus_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, uint32_t rs1, + vint32m2_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m4_mu(vm, vd, rs1, vs2, vl); } -vint64m8_t test_vwmaccus_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, uint32_t rs1, vint32m4_t vs2, size_t vl) { +vint64m8_t test_vwmaccus_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, uint32_t rs1, + vint32m4_t vs2, size_t vl) { return __riscv_vwmaccus_vx_i64m8_mu(vm, vd, rs1, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwmul.c b/auto-generated/policy_funcs/llvm-api-tests/vwmul.c index 72876a034..b8d015ad5 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vwmul.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vwmul.c @@ -5,482 +5,611 @@ #include -vint16mf4_t test_vwmul_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwmul_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vwmul_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vwmul_vx_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwmul_vx_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwmul_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vwmul_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwmul_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vwmul_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vwmul_vx_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwmul_vx_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwmul_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vwmul_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwmul_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vwmul_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vwmul_vx_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwmul_vx_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwmul_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vwmul_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwmul_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vwmul_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vwmul_vx_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwmul_vx_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwmul_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vwmul_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwmul_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vwmul_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vwmul_vx_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwmul_vx_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwmul_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vwmul_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { 
+vint16m8_t test_vwmul_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vwmul_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vwmul_vx_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwmul_vx_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwmul_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vwmul_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwmul_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vwmul_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vwmul_vx_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwmul_vx_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vwmul_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwmul_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwmul_vx_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwmul_vx_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwmul_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vwmul_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwmul_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwmul_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vwmul_vx_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwmul_vx_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwmul_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vwmul_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwmul_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vwmul_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vwmul_vx_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwmul_vx_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwmul_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vwmul_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwmul_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vwmul_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vwmul_vx_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwmul_vx_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwmul_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vwmul_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwmul_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vwmul_vx_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwmul_vx_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwmul_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vwmul_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwmul_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return 
__riscv_vwmul_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vwmul_vx_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwmul_vx_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwmul_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vwmul_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwmul_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vwmul_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vwmul_vx_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwmul_vx_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwmul_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vwmul_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwmul_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vwmul_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vwmul_vx_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwmul_vx_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwmul_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vwmul_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwmul_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vwmul_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwmul_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwmul_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwmul_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwmul_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwmul_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwmul_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwmul_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwmul_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwmul_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwmul_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwmul_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwmul_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwmul_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwmul_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwmul_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwmul_vv_i16m4_tum(vbool4_t vm, 
vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwmul_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwmul_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwmul_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwmul_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwmul_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwmul_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwmul_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwmul_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwmul_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwmul_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwmul_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwmul_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwmul_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwmul_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwmul_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwmul_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwmul_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwmul_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwmul_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwmul_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwmul_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwmul_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwmul_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwmul_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwmul_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwmul_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m8_tum(vm, vd, vs2, 
vs1, vl); } -vint32m8_t test_vwmul_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwmul_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwmul_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwmul_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwmul_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwmul_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwmul_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwmul_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwmul_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwmul_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwmul_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwmul_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwmul_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwmul_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwmul_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwmul_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwmul_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwmul_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwmul_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwmul_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vwmul_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwmul_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwmul_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwmul_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwmul_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwmul_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwmul_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwmul_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t 
vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwmul_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwmul_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwmul_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwmul_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwmul_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwmul_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwmul_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwmul_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwmul_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwmul_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwmul_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwmul_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwmul_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwmul_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwmul_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwmul_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwmul_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwmul_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwmul_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwmul_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwmul_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwmul_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwmul_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vwmul_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwmul_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwmul_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwmul_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, 
vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwmul_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwmul_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwmul_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwmul_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwmul_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwmul_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwmul_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwmul_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwmul_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwmul_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwmul_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwmul_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwmul_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vwmul_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwmul_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwmul_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwmul_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwmul_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwmul_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwmul_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwmul_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwmul_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwmul_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwmul_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwmul_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwmul_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m8_tumu(vm, vd, 
vs2, vs1, vl); } -vint64m8_t test_vwmul_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwmul_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwmul_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwmul_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { return __riscv_vwmul_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwmul_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwmul_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwmul_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwmul_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { return __riscv_vwmul_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwmul_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwmul_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwmul_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwmul_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwmul_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwmul_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwmul_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwmul_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwmul_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwmul_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwmul_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwmul_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwmul_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwmul_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwmul_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwmul_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwmul_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwmul_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwmul_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwmul_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return 
__riscv_vwmul_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwmul_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwmul_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwmul_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwmul_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwmul_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwmul_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwmul_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwmul_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwmul_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwmul_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwmul_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwmul_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwmul_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwmul_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwmul_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwmul_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwmul_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwmul_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwmul_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwmul_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwmul_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwmul_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwmul_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwmul_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwmul_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwmul_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwmul_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwmul_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwmul_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, 
vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwmul_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwmul_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwmul_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwmul_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwmul_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwmul_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwmul_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwmul_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwmul_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwmul_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwmul_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwmul_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwmulsu.c b/auto-generated/policy_funcs/llvm-api-tests/vwmulsu.c index 4d0a884b2..d3f726780 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vwmulsu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vwmulsu.c @@ -5,482 +5,635 @@ #include -vint16mf4_t test_vwmulsu_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwmulsu_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vwmulsu_vx_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, uint8_t rs1, size_t vl) { +vint16mf4_t test_vwmulsu_vx_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vwmulsu_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwmulsu_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vwmulsu_vx_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, uint8_t rs1, size_t vl) { +vint16mf2_t test_vwmulsu_vx_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vwmulsu_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwmulsu_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vwmulsu_vx_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, uint8_t rs1, size_t vl) { +vint16m1_t test_vwmulsu_vx_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vwmulsu_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint16m2_t test_vwmulsu_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m2_tu(vd, vs2, vs1, vl); } 
-vint16m2_t test_vwmulsu_vx_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, uint8_t rs1, size_t vl) { +vint16m2_t test_vwmulsu_vx_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vwmulsu_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint16m4_t test_vwmulsu_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vwmulsu_vx_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, uint8_t rs1, size_t vl) { +vint16m4_t test_vwmulsu_vx_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vwmulsu_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint16m8_t test_vwmulsu_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vwmulsu_vx_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, uint8_t rs1, size_t vl) { +vint16m8_t test_vwmulsu_vx_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vwmulsu_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwmulsu_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vwmulsu_vx_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, uint16_t rs1, size_t vl) { +vint32mf2_t test_vwmulsu_vx_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vwmulsu_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwmulsu_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwmulsu_vx_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, uint16_t rs1, size_t vl) { +vint32m1_t test_vwmulsu_vx_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vwmulsu_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint32m2_t test_vwmulsu_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vwmulsu_vx_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, uint16_t rs1, size_t vl) { +vint32m2_t test_vwmulsu_vx_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vwmulsu_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint32m4_t test_vwmulsu_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vwmulsu_vx_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, uint16_t rs1, size_t vl) { +vint32m4_t test_vwmulsu_vx_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vwmulsu_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint32m8_t test_vwmulsu_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t 
test_vwmulsu_vx_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, uint16_t rs1, size_t vl) { +vint32m8_t test_vwmulsu_vx_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vwmulsu_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwmulsu_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vwmulsu_vx_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, uint32_t rs1, size_t vl) { +vint64m1_t test_vwmulsu_vx_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vwmulsu_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint64m2_t test_vwmulsu_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vwmulsu_vx_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, uint32_t rs1, size_t vl) { +vint64m2_t test_vwmulsu_vx_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vwmulsu_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint64m4_t test_vwmulsu_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vwmulsu_vx_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, uint32_t rs1, size_t vl) { +vint64m4_t test_vwmulsu_vx_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vwmulsu_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint64m8_t test_vwmulsu_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vwmulsu_vx_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, uint32_t rs1, size_t vl) { +vint64m8_t test_vwmulsu_vx_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vwmulsu_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwmulsu_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwmulsu_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, uint8_t rs1, size_t vl) { +vint16mf4_t test_vwmulsu_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwmulsu_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwmulsu_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwmulsu_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, uint8_t rs1, size_t vl) { +vint16mf2_t test_vwmulsu_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwmulsu_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint16m1_t 
test_vwmulsu_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwmulsu_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, uint8_t rs1, size_t vl) { +vint16m1_t test_vwmulsu_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwmulsu_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint16m2_t test_vwmulsu_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwmulsu_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, uint8_t rs1, size_t vl) { +vint16m2_t test_vwmulsu_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwmulsu_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint16m4_t test_vwmulsu_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwmulsu_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, uint8_t rs1, size_t vl) { +vint16m4_t test_vwmulsu_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwmulsu_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint16m8_t test_vwmulsu_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwmulsu_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, uint8_t rs1, size_t vl) { +vint16m8_t test_vwmulsu_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwmulsu_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwmulsu_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwmulsu_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, uint16_t rs1, size_t vl) { +vint32mf2_t test_vwmulsu_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwmulsu_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwmulsu_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwmulsu_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, uint16_t rs1, size_t vl) { +vint32m1_t test_vwmulsu_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwmulsu_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint32m2_t test_vwmulsu_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return 
__riscv_vwmulsu_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwmulsu_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, uint16_t rs1, size_t vl) { +vint32m2_t test_vwmulsu_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwmulsu_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint32m4_t test_vwmulsu_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwmulsu_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, uint16_t rs1, size_t vl) { +vint32m4_t test_vwmulsu_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwmulsu_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint32m8_t test_vwmulsu_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwmulsu_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, uint16_t rs1, size_t vl) { +vint32m8_t test_vwmulsu_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwmulsu_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwmulsu_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwmulsu_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, uint32_t rs1, size_t vl) { +vint64m1_t test_vwmulsu_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwmulsu_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint64m2_t test_vwmulsu_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwmulsu_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, uint32_t rs1, size_t vl) { +vint64m2_t test_vwmulsu_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, + vint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwmulsu_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint64m4_t test_vwmulsu_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwmulsu_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, uint32_t rs1, size_t vl) { +vint64m4_t test_vwmulsu_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, + vint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwmulsu_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint64m8_t test_vwmulsu_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwmulsu_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, 
vint32m4_t vs2, uint32_t rs1, size_t vl) { +vint64m8_t test_vwmulsu_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwmulsu_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwmulsu_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwmulsu_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, uint8_t rs1, size_t vl) { +vint16mf4_t test_vwmulsu_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwmulsu_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwmulsu_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwmulsu_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, uint8_t rs1, size_t vl) { +vint16mf2_t test_vwmulsu_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwmulsu_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwmulsu_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwmulsu_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, uint8_t rs1, size_t vl) { +vint16m1_t test_vwmulsu_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwmulsu_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint16m2_t test_vwmulsu_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwmulsu_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, uint8_t rs1, size_t vl) { +vint16m2_t test_vwmulsu_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwmulsu_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint16m4_t test_vwmulsu_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwmulsu_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, uint8_t rs1, size_t vl) { +vint16m4_t test_vwmulsu_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwmulsu_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint16m8_t test_vwmulsu_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwmulsu_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, uint8_t rs1, size_t vl) { +vint16m8_t test_vwmulsu_vx_i16m8_tumu(vbool2_t 
vm, vint16m8_t vd, vint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwmulsu_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwmulsu_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwmulsu_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, uint16_t rs1, size_t vl) { +vint32mf2_t test_vwmulsu_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwmulsu_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwmulsu_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwmulsu_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, uint16_t rs1, size_t vl) { +vint32m1_t test_vwmulsu_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwmulsu_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint32m2_t test_vwmulsu_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwmulsu_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, uint16_t rs1, size_t vl) { +vint32m2_t test_vwmulsu_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, + vint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwmulsu_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint32m4_t test_vwmulsu_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwmulsu_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, uint16_t rs1, size_t vl) { +vint32m4_t test_vwmulsu_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, + vint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwmulsu_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint32m8_t test_vwmulsu_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwmulsu_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, uint16_t rs1, size_t vl) { +vint32m8_t test_vwmulsu_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, + vint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwmulsu_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwmulsu_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwmulsu_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, uint32_t rs1, size_t vl) { +vint64m1_t test_vwmulsu_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, uint32_t 
rs1, + size_t vl) { return __riscv_vwmulsu_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwmulsu_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint64m2_t test_vwmulsu_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwmulsu_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, uint32_t rs1, size_t vl) { +vint64m2_t test_vwmulsu_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, + vint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwmulsu_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint64m4_t test_vwmulsu_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwmulsu_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, uint32_t rs1, size_t vl) { +vint64m4_t test_vwmulsu_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, + vint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwmulsu_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint64m8_t test_vwmulsu_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwmulsu_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, uint32_t rs1, size_t vl) { +vint64m8_t test_vwmulsu_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, + vint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwmulsu_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwmulsu_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwmulsu_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, uint8_t rs1, size_t vl) { +vint16mf4_t test_vwmulsu_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwmulsu_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwmulsu_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwmulsu_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, uint8_t rs1, size_t vl) { +vint16mf2_t test_vwmulsu_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwmulsu_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwmulsu_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwmulsu_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, uint8_t rs1, size_t vl) { +vint16m1_t test_vwmulsu_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t 
test_vwmulsu_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vint16m2_t test_vwmulsu_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwmulsu_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, uint8_t rs1, size_t vl) { +vint16m2_t test_vwmulsu_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwmulsu_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vint16m4_t test_vwmulsu_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwmulsu_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, uint8_t rs1, size_t vl) { +vint16m4_t test_vwmulsu_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwmulsu_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vint16m8_t test_vwmulsu_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwmulsu_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, uint8_t rs1, size_t vl) { +vint16m8_t test_vwmulsu_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwmulsu_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwmulsu_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwmulsu_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, uint16_t rs1, size_t vl) { +vint32mf2_t test_vwmulsu_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulsu_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwmulsu_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwmulsu_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwmulsu_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, uint16_t rs1, size_t vl) { +vint32m1_t test_vwmulsu_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwmulsu_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vint32m2_t test_vwmulsu_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwmulsu_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, uint16_t rs1, size_t vl) { +vint32m2_t test_vwmulsu_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwmulsu_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vint32m4_t test_vwmulsu_vv_i32m4_mu(vbool8_t vm, vint32m4_t 
vd, vint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwmulsu_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, uint16_t rs1, size_t vl) { +vint32m4_t test_vwmulsu_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwmulsu_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vint32m8_t test_vwmulsu_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwmulsu_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, uint16_t rs1, size_t vl) { +vint32m8_t test_vwmulsu_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwmulsu_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwmulsu_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwmulsu_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwmulsu_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, uint32_t rs1, size_t vl) { +vint64m1_t test_vwmulsu_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwmulsu_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vint64m2_t test_vwmulsu_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwmulsu_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, uint32_t rs1, size_t vl) { +vint64m2_t test_vwmulsu_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwmulsu_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vint64m4_t test_vwmulsu_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwmulsu_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, uint32_t rs1, size_t vl) { +vint64m4_t test_vwmulsu_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwmulsu_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vint64m8_t test_vwmulsu_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwmulsu_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwmulsu_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, uint32_t rs1, size_t vl) { +vint64m8_t test_vwmulsu_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwmulsu_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwmulu.c b/auto-generated/policy_funcs/llvm-api-tests/vwmulu.c index 4ac20c220..373536210 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vwmulu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vwmulu.c @@ -5,482 +5,661 @@ #include -vuint16mf4_t 
test_vwmulu_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwmulu_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vwmulu_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vwmulu_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwmulu_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vwmulu_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwmulu_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vwmulu_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vwmulu_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwmulu_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vwmulu_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwmulu_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vwmulu_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwmulu_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwmulu_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vwmulu_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwmulu_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwmulu_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vwmulu_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwmulu_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vwmulu_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwmulu_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwmulu_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vwmulu_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwmulu_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vwmulu_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwmulu_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwmulu_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vwmulu_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwmulu_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vwmulu_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwmulu_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vwmulu_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vwmulu_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwmulu_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32mf2_tu(vd, vs2, rs1, 
vl); } -vuint32m1_t test_vwmulu_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwmulu_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vwmulu_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwmulu_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwmulu_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vwmulu_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwmulu_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwmulu_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vwmulu_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwmulu_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vwmulu_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwmulu_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwmulu_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vwmulu_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwmulu_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vwmulu_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwmulu_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwmulu_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vwmulu_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwmulu_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vwmulu_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwmulu_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vwmulu_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwmulu_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwmulu_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwmulu_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vwmulu_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwmulu_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwmulu_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vwmulu_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwmulu_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwmulu_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vwmulu_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwmulu_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vwmulu_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vwmulu_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwmulu_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return 
__riscv_vwmulu_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vwmulu_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwmulu_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwmulu_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vwmulu_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwmulu_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwmulu_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vwmulu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwmulu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwmulu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwmulu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwmulu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwmulu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwmulu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwmulu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwmulu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwmulu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwmulu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwmulu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwmulu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwmulu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwmulu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwmulu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwmulu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwmulu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwmulu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwmulu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwmulu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t 
vs1, size_t vl) { +vuint16m8_t test_vwmulu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwmulu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwmulu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwmulu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwmulu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwmulu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwmulu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwmulu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwmulu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwmulu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwmulu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwmulu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwmulu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwmulu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwmulu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwmulu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwmulu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwmulu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwmulu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwmulu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwmulu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwmulu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwmulu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwmulu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t 
test_vwmulu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwmulu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwmulu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwmulu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwmulu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwmulu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwmulu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulu_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwmulu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwmulu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwmulu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwmulu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulu_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwmulu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwmulu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwmulu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwmulu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulu_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwmulu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwmulu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwmulu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwmulu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwmulu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwmulu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwmulu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwmulu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwmulu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t 
test_vwmulu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwmulu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwmulu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwmulu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwmulu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwmulu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwmulu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwmulu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwmulu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwmulu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwmulu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwmulu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwmulu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwmulu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwmulu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwmulu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwmulu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwmulu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwmulu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwmulu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwmulu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwmulu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwmulu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwmulu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwmulu_vv_u32m2_tumu(vbool16_t vm, 
vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwmulu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwmulu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwmulu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwmulu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwmulu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwmulu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwmulu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwmulu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwmulu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwmulu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwmulu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwmulu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwmulu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwmulu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwmulu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwmulu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwmulu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwmulu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwmulu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwmulu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwmulu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwmulu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwmulu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwmulu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t 
vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwmulu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwmulu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwmulu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwmulu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwmulu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwmulu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwmulu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwmulu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwmulu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwmulu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwmulu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwmulu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwmulu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwmulu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwmulu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwmulu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwmulu_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwmulu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwmulu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwmulu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwmulu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwmulu_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwmulu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwmulu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwmulu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwmulu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwmulu_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t 
test_vwmulu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwmulu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwmulu_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwmulu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwmulu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwmulu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwmulu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwmulu_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwmulu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwmulu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwmulu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwmulu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwmulu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwmulu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwmulu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwmulu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwmulu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwmulu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwmulu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwmulu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwmulu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwmulu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwmulu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwmulu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwmulu_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwmulu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwmulu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwmulu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t 
vl) { +vuint64m1_t test_vwmulu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulu_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwmulu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwmulu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwmulu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwmulu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulu_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwmulu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwmulu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwmulu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwmulu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulu_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwmulu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwmulu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwmulu_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwmulu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwmulu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwmulu_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwredsum.c b/auto-generated/policy_funcs/llvm-api-tests/vwredsum.c index 5d9cfb5d8..6f40234d2 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vwredsum.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vwredsum.c @@ -5,146 +5,200 @@ #include <riscv_vector.h> -vint16m1_t test_vwredsum_vs_i8mf8_i16m1_tu(vint16m1_t vd, vint8mf8_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8mf8_i16m1_tu(vint16m1_t vd, vint8mf8_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i8mf8_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vwredsum_vs_i8mf4_i16m1_tu(vint16m1_t vd, vint8mf4_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8mf4_i16m1_tu(vint16m1_t vd, vint8mf4_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i8mf4_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vwredsum_vs_i8mf2_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8mf2_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i8mf2_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vwredsum_vs_i8m1_i16m1_tu(vint16m1_t vd, vint8m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8m1_i16m1_tu(vint16m1_t vd, vint8m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i8m1_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vwredsum_vs_i8m2_i16m1_tu(vint16m1_t vd, vint8m2_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8m2_i16m1_tu(vint16m1_t vd, vint8m2_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i8m2_i16m1_tu(vd, vs2, vs1,
vl); } -vint16m1_t test_vwredsum_vs_i8m4_i16m1_tu(vint16m1_t vd, vint8m4_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8m4_i16m1_tu(vint16m1_t vd, vint8m4_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i8m4_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vwredsum_vs_i8m8_i16m1_tu(vint16m1_t vd, vint8m8_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8m8_i16m1_tu(vint16m1_t vd, vint8m8_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i8m8_i16m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16mf4_i32m1_tu(vint32m1_t vd, vint16mf4_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vwredsum_vs_i16mf4_i32m1_tu(vint32m1_t vd, vint16mf4_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i16mf4_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16mf2_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vwredsum_vs_i16mf2_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i16mf2_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16m1_i32m1_tu(vint32m1_t vd, vint16m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vwredsum_vs_i16m1_i32m1_tu(vint32m1_t vd, vint16m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i16m1_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16m2_i32m1_tu(vint32m1_t vd, vint16m2_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vwredsum_vs_i16m2_i32m1_tu(vint32m1_t vd, vint16m2_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i16m2_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16m4_i32m1_tu(vint32m1_t vd, vint16m4_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vwredsum_vs_i16m4_i32m1_tu(vint32m1_t vd, vint16m4_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i16m4_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16m8_i32m1_tu(vint32m1_t vd, vint16m8_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vwredsum_vs_i16m8_i32m1_tu(vint32m1_t vd, vint16m8_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i16m8_i32m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vwredsum_vs_i32mf2_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vwredsum_vs_i32mf2_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i32mf2_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vwredsum_vs_i32m1_i64m1_tu(vint64m1_t vd, vint32m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vwredsum_vs_i32m1_i64m1_tu(vint64m1_t vd, vint32m1_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i32m1_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vwredsum_vs_i32m2_i64m1_tu(vint64m1_t vd, vint32m2_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vwredsum_vs_i32m2_i64m1_tu(vint64m1_t vd, vint32m2_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i32m2_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vwredsum_vs_i32m4_i64m1_tu(vint64m1_t vd, vint32m4_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vwredsum_vs_i32m4_i64m1_tu(vint64m1_t vd, vint32m4_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i32m4_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vwredsum_vs_i32m8_i64m1_tu(vint64m1_t vd, vint32m8_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vwredsum_vs_i32m8_i64m1_tu(vint64m1_t vd, vint32m8_t vs2, + vint64m1_t vs1, size_t vl) { return __riscv_vwredsum_vs_i32m8_i64m1_tu(vd, vs2, vs1, 
vl); } -vint16m1_t test_vwredsum_vs_i8mf8_i16m1_tum(vbool64_t vm, vint16m1_t vd, vint8mf8_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8mf8_i16m1_tum(vbool64_t vm, vint16m1_t vd, + vint8mf8_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i8mf8_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwredsum_vs_i8mf4_i16m1_tum(vbool32_t vm, vint16m1_t vd, vint8mf4_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8mf4_i16m1_tum(vbool32_t vm, vint16m1_t vd, + vint8mf4_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i8mf4_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwredsum_vs_i8mf2_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8mf2_i16m1_tum(vbool16_t vm, vint16m1_t vd, + vint8mf2_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i8mf2_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwredsum_vs_i8m1_i16m1_tum(vbool8_t vm, vint16m1_t vd, vint8m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8m1_i16m1_tum(vbool8_t vm, vint16m1_t vd, + vint8m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i8m1_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwredsum_vs_i8m2_i16m1_tum(vbool4_t vm, vint16m1_t vd, vint8m2_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8m2_i16m1_tum(vbool4_t vm, vint16m1_t vd, + vint8m2_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i8m2_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwredsum_vs_i8m4_i16m1_tum(vbool2_t vm, vint16m1_t vd, vint8m4_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8m4_i16m1_tum(vbool2_t vm, vint16m1_t vd, + vint8m4_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i8m4_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwredsum_vs_i8m8_i16m1_tum(vbool1_t vm, vint16m1_t vd, vint8m8_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vwredsum_vs_i8m8_i16m1_tum(vbool1_t vm, vint16m1_t vd, + vint8m8_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i8m8_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16mf4_i32m1_tum(vbool64_t vm, vint32m1_t vd, vint16mf4_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vwredsum_vs_i16mf4_i32m1_tum(vbool64_t vm, vint32m1_t vd, + vint16mf4_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i16mf4_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16mf2_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vwredsum_vs_i16mf2_i32m1_tum(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i16mf2_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16m1_i32m1_tum(vbool16_t vm, vint32m1_t vd, vint16m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vwredsum_vs_i16m1_i32m1_tum(vbool16_t vm, vint32m1_t vd, + vint16m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i16m1_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16m2_i32m1_tum(vbool8_t vm, vint32m1_t vd, vint16m2_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vwredsum_vs_i16m2_i32m1_tum(vbool8_t vm, vint32m1_t vd, + vint16m2_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i16m2_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16m4_i32m1_tum(vbool4_t vm, vint32m1_t vd, vint16m4_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t 
test_vwredsum_vs_i16m4_i32m1_tum(vbool4_t vm, vint32m1_t vd, + vint16m4_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i16m4_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwredsum_vs_i16m8_i32m1_tum(vbool2_t vm, vint32m1_t vd, vint16m8_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vwredsum_vs_i16m8_i32m1_tum(vbool2_t vm, vint32m1_t vd, + vint16m8_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i16m8_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwredsum_vs_i32mf2_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vwredsum_vs_i32mf2_i64m1_tum(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i32mf2_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwredsum_vs_i32m1_i64m1_tum(vbool32_t vm, vint64m1_t vd, vint32m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vwredsum_vs_i32m1_i64m1_tum(vbool32_t vm, vint64m1_t vd, + vint32m1_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i32m1_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwredsum_vs_i32m2_i64m1_tum(vbool16_t vm, vint64m1_t vd, vint32m2_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vwredsum_vs_i32m2_i64m1_tum(vbool16_t vm, vint64m1_t vd, + vint32m2_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i32m2_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwredsum_vs_i32m4_i64m1_tum(vbool8_t vm, vint64m1_t vd, vint32m4_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vwredsum_vs_i32m4_i64m1_tum(vbool8_t vm, vint64m1_t vd, + vint32m4_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i32m4_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwredsum_vs_i32m8_i64m1_tum(vbool4_t vm, vint64m1_t vd, vint32m8_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vwredsum_vs_i32m8_i64m1_tum(vbool4_t vm, vint64m1_t vd, + vint32m8_t vs2, vint64m1_t vs1, + size_t vl) { return __riscv_vwredsum_vs_i32m8_i64m1_tum(vm, vd, vs2, vs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwredsumu.c b/auto-generated/policy_funcs/llvm-api-tests/vwredsumu.c index ddfc811ac..487d0a2d5 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vwredsumu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vwredsumu.c @@ -5,146 +5,200 @@ #include <riscv_vector.h> -vuint16m1_t test_vwredsumu_vs_u8mf8_u16m1_tu(vuint16m1_t vd, vuint8mf8_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8mf8_u16m1_tu(vuint16m1_t vd, vuint8mf8_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u8mf8_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8mf4_u16m1_tu(vuint16m1_t vd, vuint8mf4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8mf4_u16m1_tu(vuint16m1_t vd, vuint8mf4_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u8mf4_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8mf2_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8mf2_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u8mf2_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8m1_u16m1_tu(vuint16m1_t vd, vuint8m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8m1_u16m1_tu(vuint16m1_t vd, vuint8m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u8m1_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8m2_u16m1_tu(vuint16m1_t vd, vuint8m2_t vs2,
vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8m2_u16m1_tu(vuint16m1_t vd, vuint8m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u8m2_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8m4_u16m1_tu(vuint16m1_t vd, vuint8m4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8m4_u16m1_tu(vuint16m1_t vd, vuint8m4_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u8m4_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8m8_u16m1_tu(vuint16m1_t vd, vuint8m8_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8m8_u16m1_tu(vuint16m1_t vd, vuint8m8_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u8m8_u16m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwredsumu_vs_u16mf4_u32m1_tu(vuint32m1_t vd, vuint16mf4_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16mf4_u32m1_tu(vuint32m1_t vd, vuint16mf4_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u16mf4_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwredsumu_vs_u16mf2_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16mf2_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u16mf2_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwredsumu_vs_u16m1_u32m1_tu(vuint32m1_t vd, vuint16m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16m1_u32m1_tu(vuint32m1_t vd, vuint16m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u16m1_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwredsumu_vs_u16m2_u32m1_tu(vuint32m1_t vd, vuint16m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16m2_u32m1_tu(vuint32m1_t vd, vuint16m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u16m2_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwredsumu_vs_u16m4_u32m1_tu(vuint32m1_t vd, vuint16m4_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16m4_u32m1_tu(vuint32m1_t vd, vuint16m4_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u16m4_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwredsumu_vs_u16m8_u32m1_tu(vuint32m1_t vd, vuint16m8_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16m8_u32m1_tu(vuint32m1_t vd, vuint16m8_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u16m8_u32m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwredsumu_vs_u32mf2_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vwredsumu_vs_u32mf2_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u32mf2_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwredsumu_vs_u32m1_u64m1_tu(vuint64m1_t vd, vuint32m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vwredsumu_vs_u32m1_u64m1_tu(vuint64m1_t vd, vuint32m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u32m1_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwredsumu_vs_u32m2_u64m1_tu(vuint64m1_t vd, vuint32m2_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vwredsumu_vs_u32m2_u64m1_tu(vuint64m1_t vd, vuint32m2_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u32m2_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwredsumu_vs_u32m4_u64m1_tu(vuint64m1_t vd, vuint32m4_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vwredsumu_vs_u32m4_u64m1_tu(vuint64m1_t vd, vuint32m4_t vs2, + vuint64m1_t vs1, size_t vl) { 
return __riscv_vwredsumu_vs_u32m4_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwredsumu_vs_u32m8_u64m1_tu(vuint64m1_t vd, vuint32m8_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vwredsumu_vs_u32m8_u64m1_tu(vuint64m1_t vd, vuint32m8_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u32m8_u64m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8mf8_u16m1_tum(vbool64_t vm, vuint16m1_t vd, vuint8mf8_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8mf8_u16m1_tum(vbool64_t vm, vuint16m1_t vd, + vuint8mf8_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u8mf8_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8mf4_u16m1_tum(vbool32_t vm, vuint16m1_t vd, vuint8mf4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8mf4_u16m1_tum(vbool32_t vm, vuint16m1_t vd, + vuint8mf4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u8mf4_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8mf2_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8mf2_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u8mf2_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8m1_u16m1_tum(vbool8_t vm, vuint16m1_t vd, vuint8m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8m1_u16m1_tum(vbool8_t vm, vuint16m1_t vd, + vuint8m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u8m1_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8m2_u16m1_tum(vbool4_t vm, vuint16m1_t vd, vuint8m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8m2_u16m1_tum(vbool4_t vm, vuint16m1_t vd, + vuint8m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u8m2_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8m4_u16m1_tum(vbool2_t vm, vuint16m1_t vd, vuint8m4_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8m4_u16m1_tum(vbool2_t vm, vuint16m1_t vd, + vuint8m4_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u8m4_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwredsumu_vs_u8m8_u16m1_tum(vbool1_t vm, vuint16m1_t vd, vuint8m8_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vwredsumu_vs_u8m8_u16m1_tum(vbool1_t vm, vuint16m1_t vd, + vuint8m8_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u8m8_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwredsumu_vs_u16mf4_u32m1_tum(vbool64_t vm, vuint32m1_t vd, vuint16mf4_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16mf4_u32m1_tum(vbool64_t vm, vuint32m1_t vd, + vuint16mf4_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u16mf4_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwredsumu_vs_u16mf2_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16mf2_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u16mf2_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwredsumu_vs_u16m1_u32m1_tum(vbool16_t vm, vuint32m1_t vd, vuint16m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16m1_u32m1_tum(vbool16_t vm, vuint32m1_t vd, + vuint16m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u16m1_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t 
test_vwredsumu_vs_u16m2_u32m1_tum(vbool8_t vm, vuint32m1_t vd, vuint16m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16m2_u32m1_tum(vbool8_t vm, vuint32m1_t vd, + vuint16m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u16m2_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwredsumu_vs_u16m4_u32m1_tum(vbool4_t vm, vuint32m1_t vd, vuint16m4_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16m4_u32m1_tum(vbool4_t vm, vuint32m1_t vd, + vuint16m4_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u16m4_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwredsumu_vs_u16m8_u32m1_tum(vbool2_t vm, vuint32m1_t vd, vuint16m8_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vwredsumu_vs_u16m8_u32m1_tum(vbool2_t vm, vuint32m1_t vd, + vuint16m8_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u16m8_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwredsumu_vs_u32mf2_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vwredsumu_vs_u32mf2_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vwredsumu_vs_u32mf2_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwredsumu_vs_u32m1_u64m1_tum(vbool32_t vm, vuint64m1_t vd, vuint32m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vwredsumu_vs_u32m1_u64m1_tum(vbool32_t vm, vuint64m1_t vd, + vuint32m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u32m1_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwredsumu_vs_u32m2_u64m1_tum(vbool16_t vm, vuint64m1_t vd, vuint32m2_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vwredsumu_vs_u32m2_u64m1_tum(vbool16_t vm, vuint64m1_t vd, + vuint32m2_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u32m2_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwredsumu_vs_u32m4_u64m1_tum(vbool8_t vm, vuint64m1_t vd, vuint32m4_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vwredsumu_vs_u32m4_u64m1_tum(vbool8_t vm, vuint64m1_t vd, + vuint32m4_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u32m4_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwredsumu_vs_u32m8_u64m1_tum(vbool4_t vm, vuint64m1_t vd, vuint32m8_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vwredsumu_vs_u32m8_u64m1_tum(vbool4_t vm, vuint64m1_t vd, + vuint32m8_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vwredsumu_vs_u32m8_u64m1_tum(vm, vd, vs2, vs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwsub.c b/auto-generated/policy_funcs/llvm-api-tests/vwsub.c index f48b90d1f..a7d33ee79 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vwsub.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vwsub.c @@ -5,962 +5,1220 @@ #include <riscv_vector.h> -vint16mf4_t test_vwsub_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwsub_vv_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vwsub_vv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vwsub_vx_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwsub_vx_i16mf4_tu(vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_vx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vwsub_wv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwsub_wv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, + vint8mf8_t vs1, size_t vl) { return
__riscv_vwsub_wv_i16mf4_tu(vd, vs2, vs1, vl); } -vint16mf4_t test_vwsub_wx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwsub_wx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_wx_i16mf4_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vwsub_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwsub_vv_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vwsub_vv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vwsub_vx_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwsub_vx_i16mf2_tu(vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_vx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16mf2_t test_vwsub_wv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwsub_wv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vwsub_wv_i16mf2_tu(vd, vs2, vs1, vl); } -vint16mf2_t test_vwsub_wx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwsub_wx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_wx_i16mf2_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vwsub_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwsub_vv_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vwsub_vv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vwsub_vx_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwsub_vx_i16m1_tu(vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_vx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m1_t test_vwsub_wv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwsub_wv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint8mf2_t vs1, + size_t vl) { return __riscv_vwsub_wv_i16m1_tu(vd, vs2, vs1, vl); } -vint16m1_t test_vwsub_wx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwsub_wx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_wx_i16m1_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vwsub_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwsub_vv_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vwsub_vv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vwsub_vx_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwsub_vx_i16m2_tu(vint16m2_t vd, vint8m1_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_vx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m2_t test_vwsub_wv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwsub_wv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint8m1_t vs1, + size_t vl) { return __riscv_vwsub_wv_i16m2_tu(vd, vs2, vs1, vl); } -vint16m2_t test_vwsub_wx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwsub_wx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_wx_i16m2_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vwsub_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwsub_vv_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vwsub_vv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vwsub_vx_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { 
+vint16m4_t test_vwsub_vx_i16m4_tu(vint16m4_t vd, vint8m2_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_vx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m4_t test_vwsub_wv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwsub_wv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint8m2_t vs1, + size_t vl) { return __riscv_vwsub_wv_i16m4_tu(vd, vs2, vs1, vl); } -vint16m4_t test_vwsub_wx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwsub_wx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_wx_i16m4_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vwsub_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwsub_vv_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vwsub_vv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vwsub_vx_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwsub_vx_i16m8_tu(vint16m8_t vd, vint8m4_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_vx_i16m8_tu(vd, vs2, rs1, vl); } -vint16m8_t test_vwsub_wv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwsub_wv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint8m4_t vs1, + size_t vl) { return __riscv_vwsub_wv_i16m8_tu(vd, vs2, vs1, vl); } -vint16m8_t test_vwsub_wx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwsub_wx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int8_t rs1, + size_t vl) { return __riscv_vwsub_wx_i16m8_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vwsub_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwsub_vv_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vwsub_vv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vwsub_vx_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwsub_vx_i32mf2_tu(vint32mf2_t vd, vint16mf4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32mf2_t test_vwsub_wv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwsub_wv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + vint16mf4_t vs1, size_t vl) { return __riscv_vwsub_wv_i32mf2_tu(vd, vs2, vs1, vl); } -vint32mf2_t test_vwsub_wx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwsub_wx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32mf2_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vwsub_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwsub_vv_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwsub_vx_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwsub_vx_i32m1_tu(vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwsub_vx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m1_t test_vwsub_wv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwsub_wv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m1_tu(vd, vs2, vs1, vl); } -vint32m1_t test_vwsub_wx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwsub_wx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int16_t rs1, + size_t vl) { return 
__riscv_vwsub_wx_i32m1_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vwsub_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwsub_vv_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwsub_vv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vwsub_vx_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwsub_vx_i32m2_tu(vint32m2_t vd, vint16m1_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwsub_vx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m2_t test_vwsub_wv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwsub_wv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint16m1_t vs1, + size_t vl) { return __riscv_vwsub_wv_i32m2_tu(vd, vs2, vs1, vl); } -vint32m2_t test_vwsub_wx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwsub_wx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwsub_wx_i32m2_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vwsub_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwsub_vv_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vwsub_vv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vwsub_vx_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwsub_vx_i32m4_tu(vint32m4_t vd, vint16m2_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwsub_vx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m4_t test_vwsub_wv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwsub_wv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint16m2_t vs1, + size_t vl) { return __riscv_vwsub_wv_i32m4_tu(vd, vs2, vs1, vl); } -vint32m4_t test_vwsub_wx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwsub_wx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwsub_wx_i32m4_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vwsub_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwsub_vv_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vwsub_vv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vwsub_vx_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwsub_vx_i32m8_tu(vint32m8_t vd, vint16m4_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwsub_vx_i32m8_tu(vd, vs2, rs1, vl); } -vint32m8_t test_vwsub_wv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwsub_wv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint16m4_t vs1, + size_t vl) { return __riscv_vwsub_wv_i32m8_tu(vd, vs2, vs1, vl); } -vint32m8_t test_vwsub_wx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwsub_wx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int16_t rs1, + size_t vl) { return __riscv_vwsub_wx_i32m8_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vwsub_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwsub_vv_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vwsub_vx_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwsub_vx_i64m1_tu(vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwsub_vx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m1_t test_vwsub_wv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint32mf2_t vs1, size_t vl) { 
+vint64m1_t test_vwsub_wv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m1_tu(vd, vs2, vs1, vl); } -vint64m1_t test_vwsub_wx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwsub_wx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwsub_wx_i64m1_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vwsub_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwsub_vv_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vwsub_vv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vwsub_vx_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwsub_vx_i64m2_tu(vint64m2_t vd, vint32m1_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwsub_vx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m2_t test_vwsub_wv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwsub_wv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint32m1_t vs1, + size_t vl) { return __riscv_vwsub_wv_i64m2_tu(vd, vs2, vs1, vl); } -vint64m2_t test_vwsub_wx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwsub_wx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwsub_wx_i64m2_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vwsub_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwsub_vv_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vwsub_vv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vwsub_vx_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwsub_vx_i64m4_tu(vint64m4_t vd, vint32m2_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwsub_vx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m4_t test_vwsub_wv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwsub_wv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint32m2_t vs1, + size_t vl) { return __riscv_vwsub_wv_i64m4_tu(vd, vs2, vs1, vl); } -vint64m4_t test_vwsub_wx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwsub_wx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwsub_wx_i64m4_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vwsub_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwsub_vv_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vwsub_vv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vwsub_vx_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwsub_vx_i64m8_tu(vint64m8_t vd, vint32m4_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwsub_vx_i64m8_tu(vd, vs2, rs1, vl); } -vint64m8_t test_vwsub_wv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwsub_wv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint32m4_t vs1, + size_t vl) { return __riscv_vwsub_wv_i64m8_tu(vd, vs2, vs1, vl); } -vint64m8_t test_vwsub_wx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwsub_wx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int32_t rs1, + size_t vl) { return __riscv_vwsub_wx_i64m8_tu(vd, vs2, rs1, vl); } -vint16mf4_t test_vwsub_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwsub_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return 
__riscv_vwsub_vv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwsub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwsub_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwsub_wv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwsub_wv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vwsub_wv_i16mf4_tum(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwsub_wx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwsub_wx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16mf4_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwsub_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwsub_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwsub_vv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwsub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwsub_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwsub_wv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwsub_wv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwsub_wv_i16mf2_tum(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwsub_wx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwsub_wx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16mf2_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwsub_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwsub_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwsub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwsub_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwsub_wv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwsub_wv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m1_tum(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwsub_wx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwsub_wx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m1_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwsub_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwsub_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwsub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t 
test_vwsub_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwsub_wv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwsub_wv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m2_tum(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwsub_wx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwsub_wx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m2_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwsub_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwsub_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwsub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwsub_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwsub_wv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwsub_wv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m4_tum(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwsub_wx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwsub_wx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m4_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwsub_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwsub_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwsub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwsub_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwsub_wv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwsub_wv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m8_tum(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwsub_wx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwsub_wx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m8_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwsub_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwsub_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwsub_vv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwsub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwsub_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwsub_wv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, 
vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwsub_wv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwsub_wv_i32mf2_tum(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwsub_wx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwsub_wx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32mf2_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwsub_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwsub_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwsub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwsub_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwsub_wv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwsub_wv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m1_tum(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwsub_wx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwsub_wx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m1_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwsub_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwsub_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwsub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwsub_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwsub_wv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwsub_wv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m2_tum(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwsub_wx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwsub_wx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m2_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwsub_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwsub_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m4_tum(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwsub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwsub_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwsub_wv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwsub_wv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m4_tum(vm, vd, vs2, vs1, vl); 
} -vint32m4_t test_vwsub_wx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwsub_wx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m4_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwsub_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwsub_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwsub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwsub_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwsub_wv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwsub_wv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m8_tum(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwsub_wx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwsub_wx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m8_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwsub_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwsub_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwsub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwsub_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwsub_wv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwsub_wv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m1_tum(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwsub_wx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwsub_wx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_wx_i64m1_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwsub_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwsub_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwsub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwsub_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwsub_wv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwsub_wv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m2_tum(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwsub_wx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwsub_wx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int32_t rs1, size_t vl) { 
return __riscv_vwsub_wx_i64m2_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwsub_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwsub_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwsub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwsub_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwsub_wv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwsub_wv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m4_tum(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwsub_wx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwsub_wx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_wx_i64m4_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwsub_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwsub_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwsub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwsub_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwsub_wv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwsub_wv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m8_tum(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwsub_wx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwsub_wx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_wx_i64m8_tum(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwsub_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwsub_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vwsub_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwsub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwsub_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwsub_wv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwsub_wv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vwsub_wv_i16mf4_tumu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwsub_wx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwsub_wx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16mf4_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwsub_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { 
+vint16mf2_t test_vwsub_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwsub_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwsub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwsub_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwsub_wv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwsub_wv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwsub_wv_i16mf2_tumu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwsub_wx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwsub_wx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16mf2_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwsub_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwsub_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwsub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwsub_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwsub_wv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwsub_wv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m1_tumu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwsub_wx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwsub_wx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m1_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwsub_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwsub_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwsub_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwsub_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwsub_wv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwsub_wv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m2_tumu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwsub_wx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwsub_wx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m2_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwsub_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwsub_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m4_tumu(vm, vd, vs2, vs1, vl); } 
-vint16m4_t test_vwsub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwsub_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwsub_wv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwsub_wv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m4_tumu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwsub_wx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwsub_wx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m4_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwsub_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwsub_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwsub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwsub_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwsub_wv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwsub_wv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m8_tumu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwsub_wx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwsub_wx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m8_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwsub_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwsub_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwsub_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwsub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwsub_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwsub_wv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwsub_wv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwsub_wv_i32mf2_tumu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwsub_wx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwsub_wx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32mf2_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwsub_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwsub_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vwsub_vv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwsub_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwsub_vx_i32m1_tumu(vbool32_t vm, 
vint32m1_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwsub_wv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwsub_wv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m1_tumu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwsub_wx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwsub_wx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m1_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwsub_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwsub_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwsub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwsub_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwsub_wv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwsub_wv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m2_tumu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwsub_wx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwsub_wx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m2_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwsub_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwsub_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwsub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwsub_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwsub_wv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwsub_wv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m4_tumu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwsub_wx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwsub_wx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m4_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwsub_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwsub_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwsub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwsub_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwsub_wv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, 
vint32m8_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwsub_wv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m8_tumu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwsub_wx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwsub_wx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwsub_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwsub_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vwsub_vv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwsub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwsub_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwsub_wv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwsub_wv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m1_tumu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwsub_wx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwsub_wx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_wx_i64m1_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwsub_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwsub_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwsub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwsub_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwsub_wv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwsub_wv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m2_tumu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwsub_wx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwsub_wx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_wx_i64m2_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwsub_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwsub_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwsub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwsub_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwsub_wv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwsub_wv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint32m2_t vs1, size_t vl) { return 
__riscv_vwsub_wv_i64m4_tumu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwsub_wx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwsub_wx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_wx_i64m4_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwsub_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwsub_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwsub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwsub_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwsub_wv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwsub_wv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m8_tumu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwsub_wx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwsub_wx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_wx_i64m8_tumu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwsub_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwsub_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { return __riscv_vwsub_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwsub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwsub_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint8mf8_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vwsub_wv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint8mf8_t vs1, size_t vl) { +vint16mf4_t test_vwsub_wv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint8mf8_t vs1, + size_t vl) { return __riscv_vwsub_wv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vwsub_wx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf4_t test_vwsub_wx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwsub_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwsub_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { return __riscv_vwsub_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwsub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t test_vwsub_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint8mf4_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vwsub_wv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint8mf4_t vs1, size_t vl) { +vint16mf2_t test_vwsub_wv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint8mf4_t vs1, + size_t vl) { return __riscv_vwsub_wv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vwsub_wx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int8_t rs1, size_t vl) { +vint16mf2_t 
test_vwsub_wx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwsub_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwsub_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwsub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwsub_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vwsub_wv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint8mf2_t vs1, size_t vl) { +vint16m1_t test_vwsub_wv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vwsub_wx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int8_t rs1, size_t vl) { +vint16m1_t test_vwsub_wx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwsub_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwsub_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwsub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwsub_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vwsub_wv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint8m1_t vs1, size_t vl) { +vint16m2_t test_vwsub_wv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vwsub_wx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int8_t rs1, size_t vl) { +vint16m2_t test_vwsub_wx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwsub_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwsub_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwsub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwsub_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vwsub_wv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint8m2_t vs1, size_t vl) { +vint16m4_t test_vwsub_wv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vwsub_wx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int8_t rs1, size_t vl) { +vint16m4_t test_vwsub_wx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwsub_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t 
test_vwsub_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwsub_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwsub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwsub_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vwsub_wv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint8m4_t vs1, size_t vl) { +vint16m8_t test_vwsub_wv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vwsub_wv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vwsub_wx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int8_t rs1, size_t vl) { +vint16m8_t test_vwsub_wx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vwsub_wx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwsub_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwsub_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwsub_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwsub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwsub_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vwsub_wv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint16mf4_t vs1, size_t vl) { +vint32mf2_t test_vwsub_wv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vwsub_wv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vwsub_wx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int16_t rs1, size_t vl) { +vint32mf2_t test_vwsub_wx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwsub_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwsub_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwsub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwsub_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint16mf2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vwsub_wv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint16mf2_t vs1, size_t vl) { +vint32m1_t test_vwsub_wv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint16mf2_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vwsub_wx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int16_t rs1, size_t vl) { +vint32m1_t test_vwsub_wx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwsub_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwsub_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwsub_vx_i32m2_mu(vbool16_t vm, vint32m2_t 
vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwsub_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vwsub_wv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint16m1_t vs1, size_t vl) { +vint32m2_t test_vwsub_wv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vwsub_wx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int16_t rs1, size_t vl) { +vint32m2_t test_vwsub_wx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwsub_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwsub_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwsub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwsub_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vwsub_wv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint16m2_t vs1, size_t vl) { +vint32m4_t test_vwsub_wv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vwsub_wx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int16_t rs1, size_t vl) { +vint32m4_t test_vwsub_wx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwsub_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwsub_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwsub_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwsub_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwsub_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint16m4_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vwsub_wv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint16m4_t vs1, size_t vl) { +vint32m8_t test_vwsub_wv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vwsub_wv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vwsub_wx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int16_t rs1, size_t vl) { +vint32m8_t test_vwsub_wx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vwsub_wx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwsub_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwsub_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwsub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwsub_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint32mf2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vwsub_wv_i64m1_mu(vbool64_t vm, 
vint64m1_t vd, vint64m1_t vs2, vint32mf2_t vs1, size_t vl) { +vint64m1_t test_vwsub_wv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + vint32mf2_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vwsub_wx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int32_t rs1, size_t vl) { +vint64m1_t test_vwsub_wx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_wx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwsub_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwsub_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwsub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwsub_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vwsub_wv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint32m1_t vs1, size_t vl) { +vint64m2_t test_vwsub_wv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vwsub_wx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int32_t rs1, size_t vl) { +vint64m2_t test_vwsub_wx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_wx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwsub_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwsub_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwsub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwsub_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vwsub_wv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint32m2_t vs1, size_t vl) { +vint64m4_t test_vwsub_wv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vwsub_wx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int32_t rs1, size_t vl) { +vint64m4_t test_vwsub_wx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_wx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwsub_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwsub_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwsub_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vwsub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwsub_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vwsub_wv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint32m4_t vs1, size_t vl) { +vint64m8_t test_vwsub_wv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vwsub_wv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t 
test_vwsub_wx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int32_t rs1, size_t vl) { +vint64m8_t test_vwsub_wx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vwsub_wx_i64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwsubu.c b/auto-generated/policy_funcs/llvm-api-tests/vwsubu.c index f05fdf3bd..11fdc7f11 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vwsubu.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vwsubu.c @@ -5,962 +5,1323 @@ #include <riscv_vector.h> -vuint16mf4_t test_vwsubu_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsubu_vv_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vwsubu_vv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsubu_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwsubu_vx_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsubu_wv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsubu_wv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vwsubu_wv_u16mf4_tu(vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsubu_wx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwsubu_wx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16mf4_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsubu_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsubu_vv_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vwsubu_vv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsubu_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwsubu_vx_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsubu_wv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsubu_wv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vwsubu_wv_u16mf2_tu(vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsubu_wx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwsubu_wx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16mf2_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vwsubu_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsubu_vv_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vwsubu_vv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwsubu_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwsubu_vx_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m1_t test_vwsubu_wv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsubu_wv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vwsubu_wv_u16m1_tu(vd, vs2, vs1, vl); } -vuint16m1_t test_vwsubu_wx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwsubu_wx_u16m1_tu(vuint16m1_t
vd, vuint16m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m1_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vwsubu_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsubu_vv_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwsubu_vv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vwsubu_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwsubu_vx_u16m2_tu(vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m2_t test_vwsubu_wv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsubu_wv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwsubu_wv_u16m2_tu(vd, vs2, vs1, vl); } -vuint16m2_t test_vwsubu_wx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwsubu_wx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m2_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vwsubu_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsubu_vv_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwsubu_vv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vwsubu_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwsubu_vx_u16m4_tu(vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m4_t test_vwsubu_wv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsubu_wv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwsubu_wv_u16m4_tu(vd, vs2, vs1, vl); } -vuint16m4_t test_vwsubu_wx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwsubu_wx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m4_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vwsubu_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsubu_vv_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwsubu_vv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vwsubu_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwsubu_vx_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u16m8_tu(vd, vs2, rs1, vl); } -vuint16m8_t test_vwsubu_wv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsubu_wv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vwsubu_wv_u16m8_tu(vd, vs2, vs1, vl); } -vuint16m8_t test_vwsubu_wx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwsubu_wx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m8_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsubu_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsubu_vv_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vwsubu_vv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsubu_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwsubu_vx_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t 
vs2, + uint16_t rs1, size_t vl) { return __riscv_vwsubu_vx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsubu_wv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsubu_wv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + vuint16mf4_t vs1, size_t vl) { return __riscv_vwsubu_wv_u32mf2_tu(vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsubu_wx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwsubu_wx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwsubu_wx_u32mf2_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vwsubu_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsubu_vv_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vwsubu_vv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwsubu_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwsubu_vx_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwsubu_vx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m1_t test_vwsubu_wv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsubu_wv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + vuint16mf2_t vs1, size_t vl) { return __riscv_vwsubu_wv_u32m1_tu(vd, vs2, vs1, vl); } -vuint32m1_t test_vwsubu_wx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwsubu_wx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwsubu_wx_u32m1_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vwsubu_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsubu_vv_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwsubu_vv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vwsubu_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwsubu_vx_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwsubu_vx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m2_t test_vwsubu_wv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsubu_wv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vwsubu_wv_u32m2_tu(vd, vs2, vs1, vl); } -vuint32m2_t test_vwsubu_wx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwsubu_wx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwsubu_wx_u32m2_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vwsubu_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsubu_vv_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwsubu_vv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vwsubu_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwsubu_vx_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwsubu_vx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m4_t test_vwsubu_wv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsubu_wv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vwsubu_wv_u32m4_tu(vd, vs2, vs1, vl); } -vuint32m4_t test_vwsubu_wx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t 
test_vwsubu_wx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwsubu_wx_u32m4_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vwsubu_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsubu_vv_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwsubu_vv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vwsubu_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwsubu_vx_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwsubu_vx_u32m8_tu(vd, vs2, rs1, vl); } -vuint32m8_t test_vwsubu_wv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsubu_wv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vwsubu_wv_u32m8_tu(vd, vs2, vs1, vl); } -vuint32m8_t test_vwsubu_wx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwsubu_wx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vwsubu_wx_u32m8_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vwsubu_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsubu_vv_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vwsubu_vv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwsubu_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwsubu_vx_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwsubu_vx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m1_t test_vwsubu_wv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsubu_wv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + vuint32mf2_t vs1, size_t vl) { return __riscv_vwsubu_wv_u64m1_tu(vd, vs2, vs1, vl); } -vuint64m1_t test_vwsubu_wx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwsubu_wx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwsubu_wx_u64m1_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vwsubu_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsubu_vv_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwsubu_vv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vwsubu_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwsubu_vx_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwsubu_vx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m2_t test_vwsubu_wv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsubu_wv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vwsubu_wv_u64m2_tu(vd, vs2, vs1, vl); } -vuint64m2_t test_vwsubu_wx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwsubu_wx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwsubu_wx_u64m2_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vwsubu_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsubu_vv_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vwsubu_vv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vwsubu_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { 
+vuint64m4_t test_vwsubu_vx_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwsubu_vx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m4_t test_vwsubu_wv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsubu_wv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vwsubu_wv_u64m4_tu(vd, vs2, vs1, vl); } -vuint64m4_t test_vwsubu_wx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwsubu_wx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwsubu_wx_u64m4_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vwsubu_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsubu_vv_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwsubu_vv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vwsubu_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwsubu_vx_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwsubu_vx_u64m8_tu(vd, vs2, rs1, vl); } -vuint64m8_t test_vwsubu_wv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsubu_wv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vwsubu_wv_u64m8_tu(vd, vs2, vs1, vl); } -vuint64m8_t test_vwsubu_wx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwsubu_wx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vwsubu_wx_u64m8_tu(vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsubu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsubu_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsubu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwsubu_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsubu_wv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsubu_wv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16mf4_tum(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsubu_wx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwsubu_wx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u16mf4_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsubu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsubu_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsubu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwsubu_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsubu_wv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, 
vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsubu_wv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16mf2_tum(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsubu_wx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwsubu_wx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u16mf2_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsubu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsubu_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsubu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwsubu_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsubu_wv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsubu_wv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16m1_tum(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsubu_wx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwsubu_wx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m1_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsubu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsubu_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsubu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwsubu_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsubu_wv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsubu_wv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16m2_tum(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsubu_wx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwsubu_wx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m2_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsubu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsubu_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsubu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwsubu_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsubu_wv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsubu_wv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + 
vuint16m4_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16m4_tum(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsubu_wx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwsubu_wx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m4_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsubu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsubu_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsubu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwsubu_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsubu_wv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsubu_wv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16m8_tum(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsubu_wx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwsubu_wx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m8_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsubu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsubu_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsubu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwsubu_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsubu_wv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsubu_wv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u32mf2_tum(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsubu_wx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwsubu_wx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u32mf2_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsubu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsubu_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsubu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwsubu_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsubu_wv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsubu_wv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t 
vl) { return __riscv_vwsubu_wv_u32m1_tum(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsubu_wx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwsubu_wx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwsubu_wx_u32m1_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsubu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsubu_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsubu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwsubu_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwsubu_vx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsubu_wv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsubu_wv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u32m2_tum(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsubu_wx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwsubu_wx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwsubu_wx_u32m2_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsubu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsubu_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsubu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwsubu_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwsubu_vx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsubu_wv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsubu_wv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u32m4_tum(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsubu_wx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwsubu_wx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwsubu_wx_u32m4_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsubu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsubu_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsubu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwsubu_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwsubu_vx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsubu_wv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsubu_wv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u32m8_tum(vm, vd, vs2, vs1, vl); } -vuint32m8_t 
test_vwsubu_wx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwsubu_wx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vwsubu_wx_u32m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsubu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsubu_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsubu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwsubu_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsubu_wv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsubu_wv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u64m1_tum(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsubu_wx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwsubu_wx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwsubu_wx_u64m1_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsubu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsubu_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsubu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwsubu_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwsubu_vx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsubu_wv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsubu_wv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u64m2_tum(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsubu_wx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwsubu_wx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwsubu_wx_u64m2_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsubu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsubu_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsubu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwsubu_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwsubu_vx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsubu_wv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsubu_wv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u64m4_tum(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsubu_wx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t 
vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwsubu_wx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwsubu_wx_u64m4_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsubu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsubu_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsubu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwsubu_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwsubu_vx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsubu_wv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsubu_wv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u64m8_tum(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsubu_wx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwsubu_wx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vwsubu_wx_u64m8_tum(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsubu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsubu_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsubu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwsubu_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsubu_wv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsubu_wv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsubu_wx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwsubu_wx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsubu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsubu_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsubu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwsubu_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsubu_wv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsubu_wv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsubu_wx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t 
vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwsubu_wx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsubu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsubu_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsubu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwsubu_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsubu_wv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsubu_wv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsubu_wx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwsubu_wx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsubu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsubu_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsubu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwsubu_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsubu_wv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsubu_wv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsubu_wx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwsubu_wx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsubu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsubu_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsubu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwsubu_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsubu_wv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsubu_wv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsubu_wx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t 
test_vwsubu_wx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsubu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsubu_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsubu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwsubu_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsubu_wv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) { +vuint16m8_t test_vwsubu_wv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint8m4_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vwsubu_wx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint8_t rs1, size_t vl) { +vuint16m8_t test_vwsubu_wx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsubu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsubu_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsubu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwsubu_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vwsubu_wv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint32mf2_t test_vwsubu_wv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vwsubu_wx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32mf2_t test_vwsubu_wx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsubu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsubu_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsubu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t test_vwsubu_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vwsubu_wv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint32m1_t test_vwsubu_wv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u32m1_tumu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vwsubu_wx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m1_t 
test_vwsubu_wx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsubu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsubu_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsubu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwsubu_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vwsubu_wv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) { +vuint32m2_t test_vwsubu_wv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vwsubu_wx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m2_t test_vwsubu_wx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsubu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsubu_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsubu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwsubu_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vwsubu_wv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) { +vuint32m4_t test_vwsubu_wv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vwsubu_wx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m4_t test_vwsubu_wx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsubu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsubu_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsubu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwsubu_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vwsubu_wv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) { +vuint32m8_t test_vwsubu_wv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vwsubu_wx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint16_t rs1, size_t vl) { +vuint32m8_t test_vwsubu_wx_u32m8_tumu(vbool4_t vm, 
vuint32m8_t vd, + vuint32m8_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsubu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsubu_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsubu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwsubu_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vwsubu_wv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint64m1_t test_vwsubu_wv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vwsubu_wx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m1_t test_vwsubu_wx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsubu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsubu_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsubu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwsubu_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vwsubu_wv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) { +vuint64m2_t test_vwsubu_wv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vwsubu_wx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m2_t test_vwsubu_wx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsubu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsubu_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsubu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwsubu_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vwsubu_wv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) { +vuint64m4_t test_vwsubu_wv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vwsubu_wx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m4_t test_vwsubu_wx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + 
vuint64m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsubu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsubu_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsubu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwsubu_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwsubu_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vwsubu_wv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) { +vuint64m8_t test_vwsubu_wv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vwsubu_wx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint32_t rs1, size_t vl) { +vuint64m8_t test_vwsubu_wx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsubu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsubu_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsubu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwsubu_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vwsubu_wv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint16mf4_t test_vwsubu_wv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint8mf8_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vwsubu_wx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf4_t test_vwsubu_wx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint8_t rs1, + size_t vl) { return __riscv_vwsubu_wx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsubu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsubu_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsubu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwsubu_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vwsubu_wv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint16mf2_t test_vwsubu_wv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint8mf4_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vwsubu_wx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16mf2_t test_vwsubu_wx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint8_t 
rs1, + size_t vl) { return __riscv_vwsubu_wx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsubu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsubu_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsubu_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsubu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwsubu_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vwsubu_wv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint16m1_t test_vwsubu_wv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vwsubu_wx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m1_t test_vwsubu_wx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsubu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsubu_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vwsubu_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsubu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwsubu_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vwsubu_wv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint8m1_t vs1, size_t vl) { +vuint16m2_t test_vwsubu_wv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint8m1_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vwsubu_wx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m2_t test_vwsubu_wx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsubu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsubu_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vwsubu_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsubu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwsubu_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vwsubu_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vwsubu_wv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint8m2_t vs1, size_t vl) { +vuint16m4_t test_vwsubu_wv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint8m2_t vs1, + size_t vl) { return __riscv_vwsubu_wv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vwsubu_wx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint8_t rs1, size_t vl) { +vuint16m4_t test_vwsubu_wx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint8_t rs1, size_t vl) { return __riscv_vwsubu_wx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vwsubu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t 
vs2, vuint8m4_t vs1, size_t vl) {
+vuint16m8_t test_vwsubu_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2,
+ vuint8m4_t vs1, size_t vl) {
 return __riscv_vwsubu_vv_u16m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vwsubu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+vuint16m8_t test_vwsubu_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2,
+ uint8_t rs1, size_t vl) {
 return __riscv_vwsubu_vx_u16m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vwsubu_wv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint16m8_t test_vwsubu_wv_u16m8_mu(vbool2_t vm, vuint16m8_t vd,
+ vuint16m8_t vs2, vuint8m4_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_wv_u16m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vwsubu_wx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint8_t rs1, size_t vl) {
+vuint16m8_t test_vwsubu_wx_u16m8_mu(vbool2_t vm, vuint16m8_t vd,
+ vuint16m8_t vs2, uint8_t rs1, size_t vl) {
 return __riscv_vwsubu_wx_u16m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vwsubu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint32mf2_t test_vwsubu_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+ vuint16mf4_t vs2, vuint16mf4_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_vv_u32mf2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vwsubu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+vuint32mf2_t test_vwsubu_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+ vuint16mf4_t vs2, uint16_t rs1,
+ size_t vl) {
 return __riscv_vwsubu_vx_u32mf2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vwsubu_wv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint32mf2_t test_vwsubu_wv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+ vuint32mf2_t vs2, vuint16mf4_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_wv_u32mf2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vwsubu_wx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint16_t rs1, size_t vl) {
+vuint32mf2_t test_vwsubu_wx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd,
+ vuint32mf2_t vs2, uint16_t rs1,
+ size_t vl) {
 return __riscv_vwsubu_wx_u32mf2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vwsubu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint32m1_t test_vwsubu_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+ vuint16mf2_t vs2, vuint16mf2_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_vv_u32m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vwsubu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+vuint32m1_t test_vwsubu_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+ vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
 return __riscv_vwsubu_vx_u32m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vwsubu_wv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint32m1_t test_vwsubu_wv_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+ vuint32m1_t vs2, vuint16mf2_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_wv_u32m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vwsubu_wx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint16_t rs1, size_t vl) {
+vuint32m1_t test_vwsubu_wx_u32m1_mu(vbool32_t vm, vuint32m1_t vd,
+ vuint32m1_t vs2, uint16_t rs1, size_t vl) {
 return __riscv_vwsubu_wx_u32m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vwsubu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint32m2_t test_vwsubu_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+ vuint16m1_t vs2, vuint16m1_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_vv_u32m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vwsubu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+vuint32m2_t test_vwsubu_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+ vuint16m1_t vs2, uint16_t rs1, size_t vl) {
 return __riscv_vwsubu_vx_u32m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vwsubu_wv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint32m2_t test_vwsubu_wv_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+ vuint32m2_t vs2, vuint16m1_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_wv_u32m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vwsubu_wx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint16_t rs1, size_t vl) {
+vuint32m2_t test_vwsubu_wx_u32m2_mu(vbool16_t vm, vuint32m2_t vd,
+ vuint32m2_t vs2, uint16_t rs1, size_t vl) {
 return __riscv_vwsubu_wx_u32m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vwsubu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint32m4_t test_vwsubu_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+ vuint16m2_t vs2, vuint16m2_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_vv_u32m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vwsubu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+vuint32m4_t test_vwsubu_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+ vuint16m2_t vs2, uint16_t rs1, size_t vl) {
 return __riscv_vwsubu_vx_u32m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vwsubu_wv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint32m4_t test_vwsubu_wv_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+ vuint32m4_t vs2, vuint16m2_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_wv_u32m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vwsubu_wx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint16_t rs1, size_t vl) {
+vuint32m4_t test_vwsubu_wx_u32m4_mu(vbool8_t vm, vuint32m4_t vd,
+ vuint32m4_t vs2, uint16_t rs1, size_t vl) {
 return __riscv_vwsubu_wx_u32m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vwsubu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint32m8_t test_vwsubu_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+ vuint16m4_t vs2, vuint16m4_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_vv_u32m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vwsubu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+vuint32m8_t test_vwsubu_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+ vuint16m4_t vs2, uint16_t rs1, size_t vl) {
 return __riscv_vwsubu_vx_u32m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vwsubu_wv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint32m8_t test_vwsubu_wv_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+ vuint32m8_t vs2, vuint16m4_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_wv_u32m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vwsubu_wx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint16_t rs1, size_t vl) {
+vuint32m8_t test_vwsubu_wx_u32m8_mu(vbool4_t vm, vuint32m8_t vd,
+ vuint32m8_t vs2, uint16_t rs1, size_t vl) {
 return __riscv_vwsubu_wx_u32m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vwsubu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint64m1_t test_vwsubu_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+ vuint32mf2_t vs2, vuint32mf2_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_vv_u64m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vwsubu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+vuint64m1_t test_vwsubu_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+ vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
 return __riscv_vwsubu_vx_u64m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vwsubu_wv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint64m1_t test_vwsubu_wv_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+ vuint64m1_t vs2, vuint32mf2_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_wv_u64m1_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vwsubu_wx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint32_t rs1, size_t vl) {
+vuint64m1_t test_vwsubu_wx_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+ vuint64m1_t vs2, uint32_t rs1, size_t vl) {
 return __riscv_vwsubu_wx_u64m1_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vwsubu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint64m2_t test_vwsubu_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+ vuint32m1_t vs2, vuint32m1_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_vv_u64m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vwsubu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+vuint64m2_t test_vwsubu_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+ vuint32m1_t vs2, uint32_t rs1, size_t vl) {
 return __riscv_vwsubu_vx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vwsubu_wv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint64m2_t test_vwsubu_wv_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+ vuint64m2_t vs2, vuint32m1_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_wv_u64m2_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vwsubu_wx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint32_t rs1, size_t vl) {
+vuint64m2_t test_vwsubu_wx_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+ vuint64m2_t vs2, uint32_t rs1, size_t vl) {
 return __riscv_vwsubu_wx_u64m2_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vwsubu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint64m4_t test_vwsubu_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+ vuint32m2_t vs2, vuint32m2_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_vv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vwsubu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+vuint64m4_t test_vwsubu_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+ vuint32m2_t vs2, uint32_t rs1, size_t vl) {
 return __riscv_vwsubu_vx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vwsubu_wv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint64m4_t test_vwsubu_wv_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+ vuint64m4_t vs2, vuint32m2_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_wv_u64m4_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vwsubu_wx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint32_t rs1, size_t vl) {
+vuint64m4_t test_vwsubu_wx_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+ vuint64m4_t vs2, uint32_t rs1, size_t vl) {
 return __riscv_vwsubu_wx_u64m4_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vwsubu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint64m8_t test_vwsubu_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+ vuint32m4_t vs2, vuint32m4_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_vv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vwsubu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+vuint64m8_t test_vwsubu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+ vuint32m4_t vs2, uint32_t rs1, size_t vl) {
 return __riscv_vwsubu_vx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vwsubu_wv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint64m8_t test_vwsubu_wv_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+ vuint64m8_t vs2, vuint32m4_t vs1,
+ size_t vl) {
 return __riscv_vwsubu_wv_u64m8_mu(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vwsubu_wx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint32_t rs1, size_t vl) {
+vuint64m8_t test_vwsubu_wx_u64m8_mu(vbool8_t vm, vuint64m8_t vd,
+ vuint64m8_t vs2, uint32_t rs1, size_t vl) {
 return __riscv_vwsubu_wx_u64m8_mu(vm, vd, vs2, rs1, vl);
 }
diff --git a/auto-generated/policy_funcs/llvm-api-tests/vxor.c b/auto-generated/policy_funcs/llvm-api-tests/vxor.c
index 84fd40fa4..e14c94abf 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vxor.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vxor.c
@@ -5,1410 +5,1810 @@
 #include <riscv_vector.h>
-vint8mf8_t test_vxor_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vxor_vv_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i8mf8_tu(vd, vs2, vs1, vl);
 }
-vint8mf8_t test_vxor_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint8mf8_t test_vxor_vx_i8mf8_tu(vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i8mf8_tu(vd, vs2, rs1, vl);
 }
-vint8mf4_t test_vxor_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vxor_vv_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i8mf4_tu(vd, vs2, vs1, vl);
 }
-vint8mf4_t test_vxor_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) {
+vint8mf4_t test_vxor_vx_i8mf4_tu(vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i8mf4_tu(vd, vs2, rs1, vl);
 }
-vint8mf2_t test_vxor_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vxor_vv_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i8mf2_tu(vd, vs2, vs1, vl);
 }
-vint8mf2_t test_vxor_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) {
+vint8mf2_t test_vxor_vx_i8mf2_tu(vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i8mf2_tu(vd, vs2, rs1, vl);
 }
-vint8m1_t test_vxor_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vxor_vv_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i8m1_tu(vd, vs2, vs1, vl);
 }
-vint8m1_t test_vxor_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) {
+vint8m1_t test_vxor_vx_i8m1_tu(vint8m1_t vd, vint8m1_t vs2, int8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i8m1_tu(vd, vs2, rs1, vl);
 }
-vint8m2_t test_vxor_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) {
+vint8m2_t test_vxor_vv_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i8m2_tu(vd, vs2, vs1, vl);
 }
-vint8m2_t test_vxor_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) {
+vint8m2_t test_vxor_vx_i8m2_tu(vint8m2_t vd, vint8m2_t vs2, int8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i8m2_tu(vd, vs2, rs1, vl);
 }
-vint8m4_t test_vxor_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) {
+vint8m4_t test_vxor_vv_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i8m4_tu(vd, vs2, vs1, vl);
 }
-vint8m4_t test_vxor_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) {
+vint8m4_t test_vxor_vx_i8m4_tu(vint8m4_t vd, vint8m4_t vs2, int8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i8m4_tu(vd, vs2, rs1, vl);
 }
-vint8m8_t test_vxor_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) {
+vint8m8_t test_vxor_vv_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i8m8_tu(vd, vs2, vs1, vl);
 }
-vint8m8_t test_vxor_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) {
+vint8m8_t test_vxor_vx_i8m8_tu(vint8m8_t vd, vint8m8_t vs2, int8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i8m8_tu(vd, vs2, rs1, vl);
 }
-vint16mf4_t test_vxor_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) {
+vint16mf4_t test_vxor_vv_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2,
+ vint16mf4_t vs1, size_t vl) {
 return __riscv_vxor_vv_i16mf4_tu(vd, vs2, vs1, vl);
 }
-vint16mf4_t test_vxor_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) {
+vint16mf4_t test_vxor_vx_i16mf4_tu(vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i16mf4_tu(vd, vs2, rs1, vl);
 }
-vint16mf2_t test_vxor_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) {
+vint16mf2_t test_vxor_vv_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2,
+ vint16mf2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i16mf2_tu(vd, vs2, vs1, vl);
 }
-vint16mf2_t test_vxor_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) {
+vint16mf2_t test_vxor_vx_i16mf2_tu(vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i16mf2_tu(vd, vs2, rs1, vl);
 }
-vint16m1_t test_vxor_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vxor_vv_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i16m1_tu(vd, vs2, vs1, vl);
 }
-vint16m1_t test_vxor_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) {
+vint16m1_t test_vxor_vx_i16m1_tu(vint16m1_t vd, vint16m1_t vs2, int16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i16m1_tu(vd, vs2, rs1, vl);
 }
-vint16m2_t test_vxor_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) {
+vint16m2_t test_vxor_vv_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i16m2_tu(vd, vs2, vs1, vl);
 }
-vint16m2_t test_vxor_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) {
+vint16m2_t test_vxor_vx_i16m2_tu(vint16m2_t vd, vint16m2_t vs2, int16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i16m2_tu(vd, vs2, rs1, vl);
 }
-vint16m4_t test_vxor_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) {
+vint16m4_t test_vxor_vv_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i16m4_tu(vd, vs2, vs1, vl);
 }
-vint16m4_t test_vxor_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) {
+vint16m4_t test_vxor_vx_i16m4_tu(vint16m4_t vd, vint16m4_t vs2, int16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i16m4_tu(vd, vs2, rs1, vl);
 }
-vint16m8_t test_vxor_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) {
+vint16m8_t test_vxor_vv_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i16m8_tu(vd, vs2, vs1, vl);
 }
-vint16m8_t test_vxor_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) {
+vint16m8_t test_vxor_vx_i16m8_tu(vint16m8_t vd, vint16m8_t vs2, int16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i16m8_tu(vd, vs2, rs1, vl);
 }
-vint32mf2_t test_vxor_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) {
+vint32mf2_t test_vxor_vv_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2,
+ vint32mf2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i32mf2_tu(vd, vs2, vs1, vl);
 }
-vint32mf2_t test_vxor_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) {
+vint32mf2_t test_vxor_vx_i32mf2_tu(vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i32mf2_tu(vd, vs2, rs1, vl);
 }
-vint32m1_t test_vxor_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vxor_vv_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i32m1_tu(vd, vs2, vs1, vl);
 }
-vint32m1_t test_vxor_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) {
+vint32m1_t test_vxor_vx_i32m1_tu(vint32m1_t vd, vint32m1_t vs2, int32_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i32m1_tu(vd, vs2, rs1, vl);
 }
-vint32m2_t test_vxor_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) {
+vint32m2_t test_vxor_vv_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i32m2_tu(vd, vs2, vs1, vl);
 }
-vint32m2_t test_vxor_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) {
+vint32m2_t test_vxor_vx_i32m2_tu(vint32m2_t vd, vint32m2_t vs2, int32_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i32m2_tu(vd, vs2, rs1, vl);
 }
-vint32m4_t test_vxor_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) {
+vint32m4_t test_vxor_vv_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i32m4_tu(vd, vs2, vs1, vl);
 }
-vint32m4_t test_vxor_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) {
+vint32m4_t test_vxor_vx_i32m4_tu(vint32m4_t vd, vint32m4_t vs2, int32_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i32m4_tu(vd, vs2, rs1, vl);
 }
-vint32m8_t test_vxor_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) {
+vint32m8_t test_vxor_vv_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i32m8_tu(vd, vs2, vs1, vl);
 }
-vint32m8_t test_vxor_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) {
+vint32m8_t test_vxor_vx_i32m8_tu(vint32m8_t vd, vint32m8_t vs2, int32_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i32m8_tu(vd, vs2, rs1, vl);
 }
-vint64m1_t test_vxor_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vxor_vv_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i64m1_tu(vd, vs2, vs1, vl);
 }
-vint64m1_t test_vxor_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) {
+vint64m1_t test_vxor_vx_i64m1_tu(vint64m1_t vd, vint64m1_t vs2, int64_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i64m1_tu(vd, vs2, rs1, vl);
 }
-vint64m2_t test_vxor_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) {
+vint64m2_t test_vxor_vv_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i64m2_tu(vd, vs2, vs1, vl);
 }
-vint64m2_t test_vxor_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) {
+vint64m2_t test_vxor_vx_i64m2_tu(vint64m2_t vd, vint64m2_t vs2, int64_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i64m2_tu(vd, vs2, rs1, vl);
 }
-vint64m4_t test_vxor_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) {
+vint64m4_t test_vxor_vv_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i64m4_tu(vd, vs2, vs1, vl);
 }
-vint64m4_t test_vxor_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) {
+vint64m4_t test_vxor_vx_i64m4_tu(vint64m4_t vd, vint64m4_t vs2, int64_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i64m4_tu(vd, vs2, rs1, vl);
 }
-vint64m8_t test_vxor_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) {
+vint64m8_t test_vxor_vv_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i64m8_tu(vd, vs2, vs1, vl);
 }
-vint64m8_t test_vxor_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) {
+vint64m8_t test_vxor_vx_i64m8_tu(vint64m8_t vd, vint64m8_t vs2, int64_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_i64m8_tu(vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vxor_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vxor_vv_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2,
+ vuint8mf8_t vs1, size_t vl) {
 return __riscv_vxor_vv_u8mf8_tu(vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vxor_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf8_t test_vxor_vx_u8mf8_tu(vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u8mf8_tu(vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vxor_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vxor_vv_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2,
+ vuint8mf4_t vs1, size_t vl) {
 return __riscv_vxor_vv_u8mf4_tu(vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vxor_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf4_t test_vxor_vx_u8mf4_tu(vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u8mf4_tu(vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vxor_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vxor_vv_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2,
+ vuint8mf2_t vs1, size_t vl) {
 return __riscv_vxor_vv_u8mf2_tu(vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vxor_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf2_t test_vxor_vx_u8mf2_tu(vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u8mf2_tu(vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vxor_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vxor_vv_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u8m1_tu(vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vxor_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+vuint8m1_t test_vxor_vx_u8m1_tu(vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u8m1_tu(vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vxor_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint8m2_t test_vxor_vv_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u8m2_tu(vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vxor_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+vuint8m2_t test_vxor_vx_u8m2_tu(vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u8m2_tu(vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vxor_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint8m4_t test_vxor_vv_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u8m4_tu(vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vxor_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+vuint8m4_t test_vxor_vx_u8m4_tu(vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u8m4_tu(vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vxor_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vuint8m8_t test_vxor_vv_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u8m8_tu(vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vxor_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+vuint8m8_t test_vxor_vx_u8m8_tu(vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u8m8_tu(vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vxor_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint16mf4_t test_vxor_vv_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+ vuint16mf4_t vs1, size_t vl) {
 return __riscv_vxor_vv_u16mf4_tu(vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vxor_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf4_t test_vxor_vx_u16mf4_tu(vuint16mf4_t vd, vuint16mf4_t vs2,
+ uint16_t rs1, size_t vl) {
 return __riscv_vxor_vx_u16mf4_tu(vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vxor_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint16mf2_t test_vxor_vv_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+ vuint16mf2_t vs1, size_t vl) {
 return __riscv_vxor_vv_u16mf2_tu(vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vxor_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf2_t test_vxor_vx_u16mf2_tu(vuint16mf2_t vd, vuint16mf2_t vs2,
+ uint16_t rs1, size_t vl) {
 return __riscv_vxor_vx_u16mf2_tu(vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vxor_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vxor_vv_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2,
+ vuint16m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_u16m1_tu(vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vxor_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+vuint16m1_t test_vxor_vx_u16m1_tu(vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u16m1_tu(vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vxor_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint16m2_t test_vxor_vv_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2,
+ vuint16m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_u16m2_tu(vd, vs2, vs1, vl);
 }
-vuint16m2_t test_vxor_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+vuint16m2_t test_vxor_vx_u16m2_tu(vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u16m2_tu(vd, vs2, rs1, vl);
 }
-vuint16m4_t test_vxor_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint16m4_t test_vxor_vv_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2,
+ vuint16m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_u16m4_tu(vd, vs2, vs1, vl);
 }
-vuint16m4_t test_vxor_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+vuint16m4_t test_vxor_vx_u16m4_tu(vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u16m4_tu(vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vxor_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+vuint16m8_t test_vxor_vv_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2,
+ vuint16m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_u16m8_tu(vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vxor_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+vuint16m8_t test_vxor_vx_u16m8_tu(vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u16m8_tu(vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vxor_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vxor_vv_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+ vuint32mf2_t vs1, size_t vl) {
 return __riscv_vxor_vv_u32mf2_tu(vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vxor_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+vuint32mf2_t test_vxor_vx_u32mf2_tu(vuint32mf2_t vd, vuint32mf2_t vs2,
+ uint32_t rs1, size_t vl) {
 return __riscv_vxor_vx_u32mf2_tu(vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vxor_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vxor_vv_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2,
+ vuint32m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_u32m1_tu(vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vxor_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+vuint32m1_t test_vxor_vx_u32m1_tu(vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u32m1_tu(vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vxor_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vxor_vv_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2,
+ vuint32m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_u32m2_tu(vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vxor_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+vuint32m2_t test_vxor_vx_u32m2_tu(vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u32m2_tu(vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vxor_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vxor_vv_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2,
+ vuint32m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_u32m4_tu(vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vxor_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+vuint32m4_t test_vxor_vx_u32m4_tu(vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u32m4_tu(vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vxor_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vxor_vv_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2,
+ vuint32m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_u32m8_tu(vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vxor_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+vuint32m8_t test_vxor_vx_u32m8_tu(vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u32m8_tu(vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vxor_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vxor_vv_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2,
+ vuint64m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_u64m1_tu(vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vxor_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vxor_vx_u64m1_tu(vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u64m1_tu(vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vxor_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vxor_vv_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2,
+ vuint64m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_u64m2_tu(vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vxor_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vxor_vx_u64m2_tu(vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u64m2_tu(vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vxor_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vxor_vv_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2,
+ vuint64m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_u64m4_tu(vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vxor_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vxor_vx_u64m4_tu(vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u64m4_tu(vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vxor_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vxor_vv_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2,
+ vuint64m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_u64m8_tu(vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vxor_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vxor_vx_u64m8_tu(vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u64m8_tu(vd, vs2, rs1, vl);
 }
-vint8mf8_t test_vxor_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vxor_vv_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2,
+ vint8mf8_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8mf8_tum(vm, vd, vs2, vs1, vl);
 }
-vint8mf8_t test_vxor_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint8mf8_t test_vxor_vx_i8mf8_tum(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8mf8_tum(vm, vd, vs2, rs1, vl);
 }
-vint8mf4_t test_vxor_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vxor_vv_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2,
+ vint8mf4_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8mf4_tum(vm, vd, vs2, vs1, vl);
 }
-vint8mf4_t test_vxor_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) {
+vint8mf4_t test_vxor_vx_i8mf4_tum(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8mf4_tum(vm, vd, vs2, rs1, vl);
 }
-vint8mf2_t test_vxor_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vxor_vv_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2,
+ vint8mf2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8mf2_tum(vm, vd, vs2, vs1, vl);
 }
-vint8mf2_t test_vxor_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) {
+vint8mf2_t test_vxor_vx_i8mf2_tum(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8mf2_tum(vm, vd, vs2, rs1, vl);
 }
-vint8m1_t test_vxor_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vxor_vv_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+ vint8m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vxor_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) {
+vint8m1_t test_vxor_vx_i8m1_tum(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8m1_tum(vm, vd, vs2, rs1, vl);
 }
-vint8m2_t test_vxor_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) {
+vint8m2_t test_vxor_vv_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2,
+ vint8m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8m2_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m2_t test_vxor_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) {
+vint8m2_t test_vxor_vx_i8m2_tum(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8m2_tum(vm, vd, vs2, rs1, vl);
 }
-vint8m4_t test_vxor_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) {
+vint8m4_t test_vxor_vv_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2,
+ vint8m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8m4_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m4_t test_vxor_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) {
+vint8m4_t test_vxor_vx_i8m4_tum(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8m4_tum(vm, vd, vs2, rs1, vl);
 }
-vint8m8_t test_vxor_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) {
+vint8m8_t test_vxor_vv_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2,
+ vint8m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8m8_tum(vm, vd, vs2, vs1, vl);
 }
-vint8m8_t test_vxor_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) {
+vint8m8_t test_vxor_vx_i8m8_tum(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8m8_tum(vm, vd, vs2, rs1, vl);
 }
-vint16mf4_t test_vxor_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) {
+vint16mf4_t test_vxor_vv_i16mf4_tum(vbool64_t vm, vint16mf4_t vd,
+ vint16mf4_t vs2, vint16mf4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i16mf4_tum(vm, vd, vs2, vs1, vl);
 }
-vint16mf4_t test_vxor_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) {
+vint16mf4_t test_vxor_vx_i16mf4_tum(vbool64_t vm, vint16mf4_t vd,
+ vint16mf4_t vs2, int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16mf4_tum(vm, vd, vs2, rs1, vl);
 }
-vint16mf2_t test_vxor_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) {
+vint16mf2_t test_vxor_vv_i16mf2_tum(vbool32_t vm, vint16mf2_t vd,
+ vint16mf2_t vs2, vint16mf2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i16mf2_tum(vm, vd, vs2, vs1, vl);
 }
-vint16mf2_t test_vxor_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) {
+vint16mf2_t test_vxor_vx_i16mf2_tum(vbool32_t vm, vint16mf2_t vd,
+ vint16mf2_t vs2, int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16mf2_tum(vm, vd, vs2, rs1, vl);
 }
-vint16m1_t test_vxor_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vxor_vv_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2,
+ vint16m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_i16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vxor_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) {
+vint16m1_t test_vxor_vx_i16m1_tum(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2,
+ int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16m1_tum(vm, vd, vs2, rs1, vl);
 }
-vint16m2_t test_vxor_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) {
+vint16m2_t test_vxor_vv_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2,
+ vint16m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i16m2_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m2_t test_vxor_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) {
+vint16m2_t test_vxor_vx_i16m2_tum(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2,
+ int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16m2_tum(vm, vd, vs2, rs1, vl);
 }
-vint16m4_t test_vxor_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) {
+vint16m4_t test_vxor_vv_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2,
+ vint16m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_i16m4_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m4_t test_vxor_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) {
+vint16m4_t test_vxor_vx_i16m4_tum(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2,
+ int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16m4_tum(vm, vd, vs2, rs1, vl);
 }
-vint16m8_t test_vxor_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) {
+vint16m8_t test_vxor_vv_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2,
+ vint16m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_i16m8_tum(vm, vd, vs2, vs1, vl);
 }
-vint16m8_t test_vxor_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) {
+vint16m8_t test_vxor_vx_i16m8_tum(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2,
+ int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16m8_tum(vm, vd, vs2, rs1, vl);
 }
-vint32mf2_t test_vxor_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) {
+vint32mf2_t test_vxor_vv_i32mf2_tum(vbool64_t vm, vint32mf2_t vd,
+ vint32mf2_t vs2, vint32mf2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i32mf2_tum(vm, vd, vs2, vs1, vl);
 }
-vint32mf2_t test_vxor_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) {
+vint32mf2_t test_vxor_vx_i32mf2_tum(vbool64_t vm, vint32mf2_t vd,
+ vint32mf2_t vs2, int32_t rs1, size_t vl) {
 return __riscv_vxor_vx_i32mf2_tum(vm, vd, vs2, rs1, vl);
 }
-vint32m1_t test_vxor_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vxor_vv_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2,
+ vint32m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_i32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vxor_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) {
+vint32m1_t test_vxor_vx_i32m1_tum(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2,
+ int32_t rs1, size_t vl) {
 return __riscv_vxor_vx_i32m1_tum(vm, vd, vs2, rs1, vl);
 }
-vint32m2_t test_vxor_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) {
+vint32m2_t test_vxor_vv_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2,
+ vint32m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i32m2_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m2_t test_vxor_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) {
+vint32m2_t test_vxor_vx_i32m2_tum(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2,
+ int32_t rs1, size_t vl) {
 return __riscv_vxor_vx_i32m2_tum(vm, vd, vs2, rs1, vl);
 }
-vint32m4_t test_vxor_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) {
+vint32m4_t test_vxor_vv_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2,
+ vint32m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_i32m4_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m4_t test_vxor_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) {
+vint32m4_t test_vxor_vx_i32m4_tum(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2,
+ int32_t rs1, size_t vl) {
 return __riscv_vxor_vx_i32m4_tum(vm, vd, vs2, rs1, vl);
 }
-vint32m8_t test_vxor_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) {
+vint32m8_t test_vxor_vv_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2,
+ vint32m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_i32m8_tum(vm, vd, vs2, vs1, vl);
 }
-vint32m8_t test_vxor_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) {
+vint32m8_t test_vxor_vx_i32m8_tum(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2,
+ int32_t rs1, size_t vl) {
 return __riscv_vxor_vx_i32m8_tum(vm, vd, vs2, rs1, vl);
 }
-vint64m1_t test_vxor_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vxor_vv_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2,
+ vint64m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_i64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m1_t test_vxor_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) {
+vint64m1_t test_vxor_vx_i64m1_tum(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2,
+ int64_t rs1, size_t vl) {
 return __riscv_vxor_vx_i64m1_tum(vm, vd, vs2, rs1, vl);
 }
-vint64m2_t test_vxor_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) {
+vint64m2_t test_vxor_vv_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2,
+ vint64m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i64m2_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m2_t test_vxor_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) {
+vint64m2_t test_vxor_vx_i64m2_tum(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2,
+ int64_t rs1, size_t vl) {
 return __riscv_vxor_vx_i64m2_tum(vm, vd, vs2, rs1, vl);
 }
-vint64m4_t test_vxor_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) {
+vint64m4_t test_vxor_vv_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+ vint64m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_i64m4_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m4_t test_vxor_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) {
+vint64m4_t test_vxor_vx_i64m4_tum(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+ int64_t rs1, size_t vl) {
 return __riscv_vxor_vx_i64m4_tum(vm, vd, vs2, rs1, vl);
 }
-vint64m8_t test_vxor_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) {
+vint64m8_t test_vxor_vv_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+ vint64m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_i64m8_tum(vm, vd, vs2, vs1, vl);
 }
-vint64m8_t test_vxor_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) {
+vint64m8_t test_vxor_vx_i64m8_tum(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+ int64_t rs1, size_t vl) {
 return __riscv_vxor_vx_i64m8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vxor_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vxor_vv_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+ vuint8mf8_t vs2, vuint8mf8_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u8mf8_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vxor_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf8_t test_vxor_vx_u8mf8_tum(vbool64_t vm, vuint8mf8_t vd,
+ vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
 return __riscv_vxor_vx_u8mf8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vxor_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vxor_vv_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+ vuint8mf4_t vs2, vuint8mf4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u8mf4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vxor_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf4_t test_vxor_vx_u8mf4_tum(vbool32_t vm, vuint8mf4_t vd,
+ vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
 return __riscv_vxor_vx_u8mf4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8mf2_t test_vxor_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) {
+vuint8mf2_t test_vxor_vv_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+ vuint8mf2_t vs2, vuint8mf2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u8mf2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8mf2_t test_vxor_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf2_t test_vxor_vx_u8mf2_tum(vbool16_t vm, vuint8mf2_t vd,
+ vuint8mf2_t vs2, uint8_t rs1, size_t vl) {
 return __riscv_vxor_vx_u8mf2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m1_t test_vxor_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) {
+vuint8m1_t test_vxor_vv_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+ vuint8m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_u8m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m1_t test_vxor_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) {
+vuint8m1_t test_vxor_vx_u8m1_tum(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2,
+ uint8_t rs1, size_t vl) {
 return __riscv_vxor_vx_u8m1_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m2_t test_vxor_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) {
+vuint8m2_t test_vxor_vv_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+ vuint8m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_u8m2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m2_t test_vxor_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) {
+vuint8m2_t test_vxor_vx_u8m2_tum(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2,
+ uint8_t rs1, size_t vl) {
 return __riscv_vxor_vx_u8m2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m4_t test_vxor_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) {
+vuint8m4_t test_vxor_vv_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+ vuint8m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_u8m4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m4_t test_vxor_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) {
+vuint8m4_t test_vxor_vx_u8m4_tum(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2,
+ uint8_t rs1, size_t vl) {
 return __riscv_vxor_vx_u8m4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint8m8_t test_vxor_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) {
+vuint8m8_t test_vxor_vv_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+ vuint8m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_u8m8_tum(vm, vd, vs2, vs1, vl);
 }
-vuint8m8_t test_vxor_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) {
+vuint8m8_t test_vxor_vx_u8m8_tum(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2,
+ uint8_t rs1, size_t vl) {
 return __riscv_vxor_vx_u8m8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16mf4_t test_vxor_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) {
+vuint16mf4_t test_vxor_vv_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+ vuint16mf4_t vs2, vuint16mf4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u16mf4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16mf4_t test_vxor_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf4_t test_vxor_vx_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd,
+ vuint16mf4_t vs2, uint16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u16mf4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16mf2_t test_vxor_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) {
+vuint16mf2_t test_vxor_vv_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+ vuint16mf2_t vs2, vuint16mf2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u16mf2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16mf2_t test_vxor_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) {
+vuint16mf2_t test_vxor_vx_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd,
+ vuint16mf2_t vs2, uint16_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u16mf2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m1_t test_vxor_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) {
+vuint16m1_t test_vxor_vv_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+ vuint16m1_t vs2, vuint16m1_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u16m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m1_t test_vxor_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) {
+vuint16m1_t test_vxor_vx_u16m1_tum(vbool16_t vm, vuint16m1_t vd,
+ vuint16m1_t vs2, uint16_t rs1, size_t vl) {
 return __riscv_vxor_vx_u16m1_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m2_t test_vxor_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) {
+vuint16m2_t test_vxor_vv_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+ vuint16m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_u16m2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m2_t test_vxor_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) {
+vuint16m2_t test_vxor_vx_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2,
+ uint16_t rs1, size_t vl) {
 return __riscv_vxor_vx_u16m2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m4_t test_vxor_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) {
+vuint16m4_t test_vxor_vv_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+ vuint16m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_u16m4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m4_t test_vxor_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) {
+vuint16m4_t test_vxor_vx_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2,
+ uint16_t rs1, size_t vl) {
 return __riscv_vxor_vx_u16m4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint16m8_t test_vxor_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) {
+vuint16m8_t test_vxor_vv_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+ vuint16m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_u16m8_tum(vm, vd, vs2, vs1, vl);
 }
-vuint16m8_t test_vxor_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) {
+vuint16m8_t test_vxor_vx_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2,
+ uint16_t rs1, size_t vl) {
 return __riscv_vxor_vx_u16m8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32mf2_t test_vxor_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) {
+vuint32mf2_t test_vxor_vv_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+ vuint32mf2_t vs2, vuint32mf2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u32mf2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32mf2_t test_vxor_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) {
+vuint32mf2_t test_vxor_vx_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd,
+ vuint32mf2_t vs2, uint32_t rs1,
+ size_t vl) {
 return __riscv_vxor_vx_u32mf2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m1_t test_vxor_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) {
+vuint32m1_t test_vxor_vv_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+ vuint32m1_t vs2, vuint32m1_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u32m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m1_t test_vxor_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) {
+vuint32m1_t test_vxor_vx_u32m1_tum(vbool32_t vm, vuint32m1_t vd,
+ vuint32m1_t vs2, uint32_t rs1, size_t vl) {
 return __riscv_vxor_vx_u32m1_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m2_t test_vxor_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) {
+vuint32m2_t test_vxor_vv_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+ vuint32m2_t vs2, vuint32m2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u32m2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m2_t test_vxor_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) {
+vuint32m2_t test_vxor_vx_u32m2_tum(vbool16_t vm, vuint32m2_t vd,
+ vuint32m2_t vs2, uint32_t rs1, size_t vl) {
 return __riscv_vxor_vx_u32m2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m4_t test_vxor_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) {
+vuint32m4_t test_vxor_vv_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+ vuint32m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_u32m4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m4_t test_vxor_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) {
+vuint32m4_t test_vxor_vx_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2,
+ uint32_t rs1, size_t vl) {
 return __riscv_vxor_vx_u32m4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint32m8_t test_vxor_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) {
+vuint32m8_t test_vxor_vv_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+ vuint32m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_u32m8_tum(vm, vd, vs2, vs1, vl);
 }
-vuint32m8_t test_vxor_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) {
+vuint32m8_t test_vxor_vx_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2,
+ uint32_t rs1, size_t vl) {
 return __riscv_vxor_vx_u32m8_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m1_t test_vxor_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) {
+vuint64m1_t test_vxor_vv_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+ vuint64m1_t vs2, vuint64m1_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u64m1_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m1_t test_vxor_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) {
+vuint64m1_t test_vxor_vx_u64m1_tum(vbool64_t vm, vuint64m1_t vd,
+ vuint64m1_t vs2, uint64_t rs1, size_t vl) {
 return __riscv_vxor_vx_u64m1_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m2_t test_vxor_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) {
+vuint64m2_t test_vxor_vv_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+ vuint64m2_t vs2, vuint64m2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u64m2_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m2_t test_vxor_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) {
+vuint64m2_t test_vxor_vx_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+ vuint64m2_t vs2, uint64_t rs1, size_t vl) {
 return __riscv_vxor_vx_u64m2_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m4_t test_vxor_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) {
+vuint64m4_t test_vxor_vv_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+ vuint64m4_t vs2, vuint64m4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u64m4_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m4_t test_vxor_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) {
+vuint64m4_t test_vxor_vx_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+ vuint64m4_t vs2, uint64_t rs1, size_t vl) {
 return __riscv_vxor_vx_u64m4_tum(vm, vd, vs2, rs1, vl);
 }
-vuint64m8_t test_vxor_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) {
+vuint64m8_t test_vxor_vv_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+ vuint64m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_u64m8_tum(vm, vd, vs2, vs1, vl);
 }
-vuint64m8_t test_vxor_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) {
+vuint64m8_t test_vxor_vx_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2,
+ uint64_t rs1, size_t vl) {
 return __riscv_vxor_vx_u64m8_tum(vm, vd, vs2, rs1, vl);
 }
-vint8mf8_t test_vxor_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) {
+vint8mf8_t test_vxor_vv_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2,
+ vint8mf8_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8mf8_tumu(vm, vd, vs2, vs1, vl);
 }
-vint8mf8_t test_vxor_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) {
+vint8mf8_t test_vxor_vx_i8mf8_tumu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8mf8_tumu(vm, vd, vs2, rs1, vl);
 }
-vint8mf4_t test_vxor_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) {
+vint8mf4_t test_vxor_vv_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2,
+ vint8mf4_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8mf4_tumu(vm, vd, vs2, vs1, vl);
 }
-vint8mf4_t test_vxor_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) {
+vint8mf4_t test_vxor_vx_i8mf4_tumu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8mf4_tumu(vm, vd, vs2, rs1, vl);
 }
-vint8mf2_t test_vxor_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) {
+vint8mf2_t test_vxor_vv_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2,
+ vint8mf2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8mf2_tumu(vm, vd, vs2, vs1, vl);
 }
-vint8mf2_t test_vxor_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) {
+vint8mf2_t test_vxor_vx_i8mf2_tumu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8mf2_tumu(vm, vd, vs2, rs1, vl);
 }
-vint8m1_t test_vxor_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) {
+vint8m1_t test_vxor_vv_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+ vint8m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8m1_tumu(vm, vd, vs2, vs1, vl);
 }
-vint8m1_t test_vxor_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) {
+vint8m1_t test_vxor_vx_i8m1_tumu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8m1_tumu(vm, vd, vs2, rs1, vl);
 }
-vint8m2_t test_vxor_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) {
+vint8m2_t test_vxor_vv_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2,
+ vint8m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8m2_tumu(vm, vd, vs2, vs1, vl);
 }
-vint8m2_t test_vxor_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) {
+vint8m2_t test_vxor_vx_i8m2_tumu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8m2_tumu(vm, vd, vs2, rs1, vl);
 }
-vint8m4_t test_vxor_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) {
+vint8m4_t test_vxor_vv_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2,
+ vint8m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vint8m4_t test_vxor_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) {
+vint8m4_t test_vxor_vx_i8m4_tumu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8m4_tumu(vm, vd, vs2, rs1, vl);
 }
-vint8m8_t test_vxor_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) {
+vint8m8_t test_vxor_vv_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2,
+ vint8m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_i8m8_tumu(vm, vd, vs2, vs1, vl);
 }
-vint8m8_t test_vxor_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) {
+vint8m8_t test_vxor_vx_i8m8_tumu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2,
+ int8_t rs1, size_t vl) {
 return __riscv_vxor_vx_i8m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vint16mf4_t test_vxor_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) {
+vint16mf4_t test_vxor_vv_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd,
+ vint16mf4_t vs2, vint16mf4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i16mf4_tumu(vm, vd, vs2, vs1, vl);
 }
-vint16mf4_t test_vxor_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) {
+vint16mf4_t test_vxor_vx_i16mf4_tumu(vbool64_t vm, vint16mf4_t vd,
+ vint16mf4_t vs2, int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16mf4_tumu(vm, vd, vs2, rs1, vl);
 }
-vint16mf2_t test_vxor_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) {
+vint16mf2_t test_vxor_vv_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd,
+ vint16mf2_t vs2, vint16mf2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i16mf2_tumu(vm, vd, vs2, vs1, vl);
 }
-vint16mf2_t test_vxor_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) {
+vint16mf2_t test_vxor_vx_i16mf2_tumu(vbool32_t vm, vint16mf2_t vd,
+ vint16mf2_t vs2, int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16mf2_tumu(vm, vd, vs2, rs1, vl);
 }
-vint16m1_t test_vxor_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) {
+vint16m1_t test_vxor_vv_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2,
+ vint16m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_i16m1_tumu(vm, vd, vs2, vs1, vl);
 }
-vint16m1_t test_vxor_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) {
+vint16m1_t test_vxor_vx_i16m1_tumu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2,
+ int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16m1_tumu(vm, vd, vs2, rs1, vl);
 }
-vint16m2_t test_vxor_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) {
+vint16m2_t test_vxor_vv_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2,
+ vint16m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i16m2_tumu(vm, vd, vs2, vs1, vl);
 }
-vint16m2_t test_vxor_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) {
+vint16m2_t test_vxor_vx_i16m2_tumu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2,
+ int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16m2_tumu(vm, vd, vs2, rs1, vl);
 }
-vint16m4_t test_vxor_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) {
+vint16m4_t test_vxor_vv_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2,
+ vint16m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_i16m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vint16m4_t test_vxor_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) {
+vint16m4_t test_vxor_vx_i16m4_tumu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2,
+ int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16m4_tumu(vm, vd, vs2, rs1, vl);
 }
-vint16m8_t test_vxor_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) {
+vint16m8_t test_vxor_vv_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2,
+ vint16m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_i16m8_tumu(vm, vd, vs2, vs1, vl);
 }
-vint16m8_t test_vxor_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) {
+vint16m8_t test_vxor_vx_i16m8_tumu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2,
+ int16_t rs1, size_t vl) {
 return __riscv_vxor_vx_i16m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vint32mf2_t test_vxor_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) {
+vint32mf2_t test_vxor_vv_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd,
+ vint32mf2_t vs2, vint32mf2_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_i32mf2_tumu(vm, vd, vs2, vs1, vl);
 }
-vint32mf2_t test_vxor_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) {
+vint32mf2_t test_vxor_vx_i32mf2_tumu(vbool64_t vm, vint32mf2_t vd,
+ vint32mf2_t vs2, int32_t rs1, size_t vl) {
 return __riscv_vxor_vx_i32mf2_tumu(vm, vd, vs2, rs1, vl);
 }
-vint32m1_t test_vxor_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+vint32m1_t test_vxor_vv_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2,
+ vint32m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_i32m1_tumu(vm, vd, vs2, vs1, vl);
 }
-vint32m1_t test_vxor_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) {
+vint32m1_t test_vxor_vx_i32m1_tumu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2,
+ int32_t rs1, size_t vl) {
 return __riscv_vxor_vx_i32m1_tumu(vm, vd, vs2, rs1, vl);
 }
-vint32m2_t test_vxor_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) {
+vint32m2_t test_vxor_vv_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2,
+ vint32m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i32m2_tumu(vm, vd, vs2, vs1, vl);
 }
-vint32m2_t test_vxor_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) {
+vint32m2_t test_vxor_vx_i32m2_tumu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2,
+ int32_t rs1, size_t vl) {
 return __riscv_vxor_vx_i32m2_tumu(vm, vd, vs2, rs1, vl);
 }
-vint32m4_t test_vxor_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) {
+vint32m4_t test_vxor_vv_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2,
+ vint32m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_i32m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vint32m4_t test_vxor_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) {
+vint32m4_t test_vxor_vx_i32m4_tumu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2,
+ int32_t rs1, size_t vl) {
 return __riscv_vxor_vx_i32m4_tumu(vm, vd, vs2, rs1, vl);
 }
-vint32m8_t test_vxor_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) {
+vint32m8_t test_vxor_vv_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2,
+ vint32m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_i32m8_tumu(vm, vd, vs2, vs1, vl);
 }
-vint32m8_t test_vxor_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) {
+vint32m8_t test_vxor_vx_i32m8_tumu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2,
+ int32_t rs1, size_t vl) {
 return __riscv_vxor_vx_i32m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vint64m1_t test_vxor_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) {
+vint64m1_t test_vxor_vv_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2,
+ vint64m1_t vs1, size_t vl) {
 return __riscv_vxor_vv_i64m1_tumu(vm, vd, vs2, vs1, vl);
 }
-vint64m1_t test_vxor_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) {
+vint64m1_t test_vxor_vx_i64m1_tumu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2,
+ int64_t rs1, size_t vl) {
 return __riscv_vxor_vx_i64m1_tumu(vm, vd, vs2, rs1, vl);
 }
-vint64m2_t test_vxor_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) {
+vint64m2_t test_vxor_vv_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2,
+ vint64m2_t vs1, size_t vl) {
 return __riscv_vxor_vv_i64m2_tumu(vm, vd, vs2, vs1, vl);
 }
-vint64m2_t test_vxor_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) {
+vint64m2_t test_vxor_vx_i64m2_tumu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2,
+ int64_t rs1, size_t vl) {
 return __riscv_vxor_vx_i64m2_tumu(vm, vd, vs2, rs1, vl);
 }
-vint64m4_t test_vxor_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) {
+vint64m4_t test_vxor_vv_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+ vint64m4_t vs1, size_t vl) {
 return __riscv_vxor_vv_i64m4_tumu(vm, vd, vs2, vs1, vl);
 }
-vint64m4_t test_vxor_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) {
+vint64m4_t test_vxor_vx_i64m4_tumu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2,
+ int64_t rs1, size_t vl) {
 return __riscv_vxor_vx_i64m4_tumu(vm, vd, vs2, rs1, vl);
 }
-vint64m8_t test_vxor_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) {
+vint64m8_t test_vxor_vv_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+ vint64m8_t vs1, size_t vl) {
 return __riscv_vxor_vv_i64m8_tumu(vm, vd, vs2, vs1, vl);
 }
-vint64m8_t test_vxor_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) {
+vint64m8_t test_vxor_vx_i64m8_tumu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2,
+ int64_t rs1, size_t vl) {
 return __riscv_vxor_vx_i64m8_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf8_t test_vxor_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) {
+vuint8mf8_t test_vxor_vv_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+ vuint8mf8_t vs2, vuint8mf8_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u8mf8_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf8_t test_vxor_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf8_t test_vxor_vx_u8mf8_tumu(vbool64_t vm, vuint8mf8_t vd,
+ vuint8mf8_t vs2, uint8_t rs1, size_t vl) {
 return __riscv_vxor_vx_u8mf8_tumu(vm, vd, vs2, rs1, vl);
 }
-vuint8mf4_t test_vxor_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) {
+vuint8mf4_t test_vxor_vv_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+ vuint8mf4_t vs2, vuint8mf4_t vs1,
+ size_t vl) {
 return __riscv_vxor_vv_u8mf4_tumu(vm, vd, vs2, vs1, vl);
 }
-vuint8mf4_t test_vxor_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
+vuint8mf4_t test_vxor_vx_u8mf4_tumu(vbool32_t vm, vuint8mf4_t vd,
+ vuint8mf4_t vs2, uint8_t rs1, size_t vl) {
 return __riscv_vxor_vx_u8mf4_tumu(vm, vd, vs2, rs1, vl);
 }
test_vxor_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vxor_vv_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, vuint8mf2_t vs1, + size_t vl) { return __riscv_vxor_vv_u8mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vxor_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vxor_vx_u8mf2_tumu(vbool16_t vm, vuint8mf2_t vd, + vuint8mf2_t vs2, uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vxor_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vxor_vv_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vxor_vv_u8m1_tumu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vxor_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vxor_vx_u8m1_tumu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8m1_tumu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vxor_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vxor_vv_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vxor_vv_u8m2_tumu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vxor_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vxor_vx_u8m2_tumu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8m2_tumu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vxor_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vxor_vv_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vxor_vv_u8m4_tumu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vxor_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vxor_vx_u8m4_tumu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8m4_tumu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vxor_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vxor_vv_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vxor_vv_u8m8_tumu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vxor_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vxor_vx_u8m8_tumu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8m8_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vxor_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vxor_vv_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vxor_vv_u16mf4_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vxor_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vxor_vx_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vxor_vx_u16mf4_tumu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vxor_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vxor_vv_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, 
vuint16mf2_t vs1, + size_t vl) { return __riscv_vxor_vv_u16mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vxor_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vxor_vx_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, + size_t vl) { return __riscv_vxor_vx_u16mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vxor_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t test_vxor_vv_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, vuint16m1_t vs1, + size_t vl) { return __riscv_vxor_vv_u16m1_tumu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vxor_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vxor_vx_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint16m1_t vs2, uint16_t rs1, size_t vl) { return __riscv_vxor_vx_u16m1_tumu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vxor_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vxor_vv_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, vuint16m2_t vs1, + size_t vl) { return __riscv_vxor_vv_u16m2_tumu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vxor_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vxor_vx_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint16m2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vxor_vx_u16m2_tumu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vxor_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vxor_vv_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, vuint16m4_t vs1, + size_t vl) { return __riscv_vxor_vv_u16m4_tumu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vxor_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vxor_vx_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint16m4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vxor_vx_u16m4_tumu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vxor_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vxor_vv_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, vuint16m8_t vs1, + size_t vl) { return __riscv_vxor_vv_u16m8_tumu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vxor_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vxor_vx_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint16m8_t vs2, uint16_t rs1, size_t vl) { return __riscv_vxor_vx_u16m8_tumu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vxor_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vxor_vv_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vxor_vv_u32mf2_tumu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vxor_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vxor_vx_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, + size_t vl) { return __riscv_vxor_vx_u32mf2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vxor_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vxor_vv_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, vuint32m1_t vs1, + size_t vl) { return __riscv_vxor_vv_u32m1_tumu(vm, vd, vs2, vs1, 
vl); } -vuint32m1_t test_vxor_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vxor_vx_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint32m1_t vs2, uint32_t rs1, size_t vl) { return __riscv_vxor_vx_u32m1_tumu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vxor_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vxor_vv_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, vuint32m2_t vs1, + size_t vl) { return __riscv_vxor_vv_u32m2_tumu(vm, vd, vs2, vs1, vl); } -vuint32m2_t test_vxor_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vxor_vx_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint32m2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vxor_vx_u32m2_tumu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vxor_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vxor_vv_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, vuint32m4_t vs1, + size_t vl) { return __riscv_vxor_vv_u32m4_tumu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vxor_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vxor_vx_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint32m4_t vs2, uint32_t rs1, size_t vl) { return __riscv_vxor_vx_u32m4_tumu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vxor_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vxor_vv_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, vuint32m8_t vs1, + size_t vl) { return __riscv_vxor_vv_u32m8_tumu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vxor_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vxor_vx_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint32m8_t vs2, uint32_t rs1, size_t vl) { return __riscv_vxor_vx_u32m8_tumu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vxor_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vxor_vv_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, vuint64m1_t vs1, + size_t vl) { return __riscv_vxor_vv_u64m1_tumu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vxor_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vxor_vx_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint64m1_t vs2, uint64_t rs1, size_t vl) { return __riscv_vxor_vx_u64m1_tumu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vxor_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vxor_vv_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, vuint64m2_t vs1, + size_t vl) { return __riscv_vxor_vv_u64m2_tumu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vxor_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vxor_vx_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint64m2_t vs2, uint64_t rs1, size_t vl) { return __riscv_vxor_vx_u64m2_tumu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vxor_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vxor_vv_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, vuint64m4_t vs1, + size_t vl) { return __riscv_vxor_vv_u64m4_tumu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vxor_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { 
+vuint64m4_t test_vxor_vx_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint64m4_t vs2, uint64_t rs1, size_t vl) { return __riscv_vxor_vx_u64m4_tumu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vxor_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vxor_vv_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, vuint64m8_t vs1, + size_t vl) { return __riscv_vxor_vv_u64m8_tumu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vxor_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vxor_vx_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint64m8_t vs2, uint64_t rs1, size_t vl) { return __riscv_vxor_vx_u64m8_tumu(vm, vd, vs2, rs1, vl); } -vint8mf8_t test_vxor_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, vint8mf8_t vs1, size_t vl) { +vint8mf8_t test_vxor_vv_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + vint8mf8_t vs1, size_t vl) { return __riscv_vxor_vv_i8mf8_mu(vm, vd, vs2, vs1, vl); } -vint8mf8_t test_vxor_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, int8_t rs1, size_t vl) { +vint8mf8_t test_vxor_vx_i8mf8_mu(vbool64_t vm, vint8mf8_t vd, vint8mf8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vxor_vx_i8mf8_mu(vm, vd, vs2, rs1, vl); } -vint8mf4_t test_vxor_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, vint8mf4_t vs1, size_t vl) { +vint8mf4_t test_vxor_vv_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + vint8mf4_t vs1, size_t vl) { return __riscv_vxor_vv_i8mf4_mu(vm, vd, vs2, vs1, vl); } -vint8mf4_t test_vxor_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, int8_t rs1, size_t vl) { +vint8mf4_t test_vxor_vx_i8mf4_mu(vbool32_t vm, vint8mf4_t vd, vint8mf4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vxor_vx_i8mf4_mu(vm, vd, vs2, rs1, vl); } -vint8mf2_t test_vxor_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, vint8mf2_t vs1, size_t vl) { +vint8mf2_t test_vxor_vv_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + vint8mf2_t vs1, size_t vl) { return __riscv_vxor_vv_i8mf2_mu(vm, vd, vs2, vs1, vl); } -vint8mf2_t test_vxor_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, int8_t rs1, size_t vl) { +vint8mf2_t test_vxor_vx_i8mf2_mu(vbool16_t vm, vint8mf2_t vd, vint8mf2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vxor_vx_i8mf2_mu(vm, vd, vs2, rs1, vl); } -vint8m1_t test_vxor_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, vint8m1_t vs1, size_t vl) { +vint8m1_t test_vxor_vv_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + vint8m1_t vs1, size_t vl) { return __riscv_vxor_vv_i8m1_mu(vm, vd, vs2, vs1, vl); } -vint8m1_t test_vxor_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, int8_t rs1, size_t vl) { +vint8m1_t test_vxor_vx_i8m1_mu(vbool8_t vm, vint8m1_t vd, vint8m1_t vs2, + int8_t rs1, size_t vl) { return __riscv_vxor_vx_i8m1_mu(vm, vd, vs2, rs1, vl); } -vint8m2_t test_vxor_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, vint8m2_t vs1, size_t vl) { +vint8m2_t test_vxor_vv_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + vint8m2_t vs1, size_t vl) { return __riscv_vxor_vv_i8m2_mu(vm, vd, vs2, vs1, vl); } -vint8m2_t test_vxor_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, int8_t rs1, size_t vl) { +vint8m2_t test_vxor_vx_i8m2_mu(vbool4_t vm, vint8m2_t vd, vint8m2_t vs2, + int8_t rs1, size_t vl) { return __riscv_vxor_vx_i8m2_mu(vm, vd, vs2, rs1, vl); } -vint8m4_t test_vxor_vv_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, vint8m4_t vs1, size_t vl) { +vint8m4_t test_vxor_vv_i8m4_mu(vbool2_t vm, 
vint8m4_t vd, vint8m4_t vs2, + vint8m4_t vs1, size_t vl) { return __riscv_vxor_vv_i8m4_mu(vm, vd, vs2, vs1, vl); } -vint8m4_t test_vxor_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, int8_t rs1, size_t vl) { +vint8m4_t test_vxor_vx_i8m4_mu(vbool2_t vm, vint8m4_t vd, vint8m4_t vs2, + int8_t rs1, size_t vl) { return __riscv_vxor_vx_i8m4_mu(vm, vd, vs2, rs1, vl); } -vint8m8_t test_vxor_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, vint8m8_t vs1, size_t vl) { +vint8m8_t test_vxor_vv_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + vint8m8_t vs1, size_t vl) { return __riscv_vxor_vv_i8m8_mu(vm, vd, vs2, vs1, vl); } -vint8m8_t test_vxor_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, int8_t rs1, size_t vl) { +vint8m8_t test_vxor_vx_i8m8_mu(vbool1_t vm, vint8m8_t vd, vint8m8_t vs2, + int8_t rs1, size_t vl) { return __riscv_vxor_vx_i8m8_mu(vm, vd, vs2, rs1, vl); } -vint16mf4_t test_vxor_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, vint16mf4_t vs1, size_t vl) { +vint16mf4_t test_vxor_vv_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, vint16mf4_t vs1, + size_t vl) { return __riscv_vxor_vv_i16mf4_mu(vm, vd, vs2, vs1, vl); } -vint16mf4_t test_vxor_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, vint16mf4_t vs2, int16_t rs1, size_t vl) { +vint16mf4_t test_vxor_vx_i16mf4_mu(vbool64_t vm, vint16mf4_t vd, + vint16mf4_t vs2, int16_t rs1, size_t vl) { return __riscv_vxor_vx_i16mf4_mu(vm, vd, vs2, rs1, vl); } -vint16mf2_t test_vxor_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, vint16mf2_t vs1, size_t vl) { +vint16mf2_t test_vxor_vv_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, vint16mf2_t vs1, + size_t vl) { return __riscv_vxor_vv_i16mf2_mu(vm, vd, vs2, vs1, vl); } -vint16mf2_t test_vxor_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, vint16mf2_t vs2, int16_t rs1, size_t vl) { +vint16mf2_t test_vxor_vx_i16mf2_mu(vbool32_t vm, vint16mf2_t vd, + vint16mf2_t vs2, int16_t rs1, size_t vl) { return __riscv_vxor_vx_i16mf2_mu(vm, vd, vs2, rs1, vl); } -vint16m1_t test_vxor_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, vint16m1_t vs1, size_t vl) { +vint16m1_t test_vxor_vv_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + vint16m1_t vs1, size_t vl) { return __riscv_vxor_vv_i16m1_mu(vm, vd, vs2, vs1, vl); } -vint16m1_t test_vxor_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, int16_t rs1, size_t vl) { +vint16m1_t test_vxor_vx_i16m1_mu(vbool16_t vm, vint16m1_t vd, vint16m1_t vs2, + int16_t rs1, size_t vl) { return __riscv_vxor_vx_i16m1_mu(vm, vd, vs2, rs1, vl); } -vint16m2_t test_vxor_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, vint16m2_t vs1, size_t vl) { +vint16m2_t test_vxor_vv_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + vint16m2_t vs1, size_t vl) { return __riscv_vxor_vv_i16m2_mu(vm, vd, vs2, vs1, vl); } -vint16m2_t test_vxor_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, int16_t rs1, size_t vl) { +vint16m2_t test_vxor_vx_i16m2_mu(vbool8_t vm, vint16m2_t vd, vint16m2_t vs2, + int16_t rs1, size_t vl) { return __riscv_vxor_vx_i16m2_mu(vm, vd, vs2, rs1, vl); } -vint16m4_t test_vxor_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, vint16m4_t vs1, size_t vl) { +vint16m4_t test_vxor_vv_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, + vint16m4_t vs1, size_t vl) { return __riscv_vxor_vv_i16m4_mu(vm, vd, vs2, vs1, vl); } -vint16m4_t test_vxor_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t vs2, int16_t rs1, size_t vl) { +vint16m4_t test_vxor_vx_i16m4_mu(vbool4_t vm, vint16m4_t vd, vint16m4_t 
vs2, + int16_t rs1, size_t vl) { return __riscv_vxor_vx_i16m4_mu(vm, vd, vs2, rs1, vl); } -vint16m8_t test_vxor_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, vint16m8_t vs1, size_t vl) { +vint16m8_t test_vxor_vv_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + vint16m8_t vs1, size_t vl) { return __riscv_vxor_vv_i16m8_mu(vm, vd, vs2, vs1, vl); } -vint16m8_t test_vxor_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, int16_t rs1, size_t vl) { +vint16m8_t test_vxor_vx_i16m8_mu(vbool2_t vm, vint16m8_t vd, vint16m8_t vs2, + int16_t rs1, size_t vl) { return __riscv_vxor_vx_i16m8_mu(vm, vd, vs2, rs1, vl); } -vint32mf2_t test_vxor_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, vint32mf2_t vs1, size_t vl) { +vint32mf2_t test_vxor_vv_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, vint32mf2_t vs1, + size_t vl) { return __riscv_vxor_vv_i32mf2_mu(vm, vd, vs2, vs1, vl); } -vint32mf2_t test_vxor_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, vint32mf2_t vs2, int32_t rs1, size_t vl) { +vint32mf2_t test_vxor_vx_i32mf2_mu(vbool64_t vm, vint32mf2_t vd, + vint32mf2_t vs2, int32_t rs1, size_t vl) { return __riscv_vxor_vx_i32mf2_mu(vm, vd, vs2, rs1, vl); } -vint32m1_t test_vxor_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, vint32m1_t vs1, size_t vl) { +vint32m1_t test_vxor_vv_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + vint32m1_t vs1, size_t vl) { return __riscv_vxor_vv_i32m1_mu(vm, vd, vs2, vs1, vl); } -vint32m1_t test_vxor_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, int32_t rs1, size_t vl) { +vint32m1_t test_vxor_vx_i32m1_mu(vbool32_t vm, vint32m1_t vd, vint32m1_t vs2, + int32_t rs1, size_t vl) { return __riscv_vxor_vx_i32m1_mu(vm, vd, vs2, rs1, vl); } -vint32m2_t test_vxor_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, vint32m2_t vs1, size_t vl) { +vint32m2_t test_vxor_vv_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + vint32m2_t vs1, size_t vl) { return __riscv_vxor_vv_i32m2_mu(vm, vd, vs2, vs1, vl); } -vint32m2_t test_vxor_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, int32_t rs1, size_t vl) { +vint32m2_t test_vxor_vx_i32m2_mu(vbool16_t vm, vint32m2_t vd, vint32m2_t vs2, + int32_t rs1, size_t vl) { return __riscv_vxor_vx_i32m2_mu(vm, vd, vs2, rs1, vl); } -vint32m4_t test_vxor_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, vint32m4_t vs1, size_t vl) { +vint32m4_t test_vxor_vv_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + vint32m4_t vs1, size_t vl) { return __riscv_vxor_vv_i32m4_mu(vm, vd, vs2, vs1, vl); } -vint32m4_t test_vxor_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, int32_t rs1, size_t vl) { +vint32m4_t test_vxor_vx_i32m4_mu(vbool8_t vm, vint32m4_t vd, vint32m4_t vs2, + int32_t rs1, size_t vl) { return __riscv_vxor_vx_i32m4_mu(vm, vd, vs2, rs1, vl); } -vint32m8_t test_vxor_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, vint32m8_t vs1, size_t vl) { +vint32m8_t test_vxor_vv_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + vint32m8_t vs1, size_t vl) { return __riscv_vxor_vv_i32m8_mu(vm, vd, vs2, vs1, vl); } -vint32m8_t test_vxor_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, int32_t rs1, size_t vl) { +vint32m8_t test_vxor_vx_i32m8_mu(vbool4_t vm, vint32m8_t vd, vint32m8_t vs2, + int32_t rs1, size_t vl) { return __riscv_vxor_vx_i32m8_mu(vm, vd, vs2, rs1, vl); } -vint64m1_t test_vxor_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, vint64m1_t vs1, size_t vl) { +vint64m1_t test_vxor_vv_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + 
vint64m1_t vs1, size_t vl) { return __riscv_vxor_vv_i64m1_mu(vm, vd, vs2, vs1, vl); } -vint64m1_t test_vxor_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, int64_t rs1, size_t vl) { +vint64m1_t test_vxor_vx_i64m1_mu(vbool64_t vm, vint64m1_t vd, vint64m1_t vs2, + int64_t rs1, size_t vl) { return __riscv_vxor_vx_i64m1_mu(vm, vd, vs2, rs1, vl); } -vint64m2_t test_vxor_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, vint64m2_t vs1, size_t vl) { +vint64m2_t test_vxor_vv_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + vint64m2_t vs1, size_t vl) { return __riscv_vxor_vv_i64m2_mu(vm, vd, vs2, vs1, vl); } -vint64m2_t test_vxor_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, int64_t rs1, size_t vl) { +vint64m2_t test_vxor_vx_i64m2_mu(vbool32_t vm, vint64m2_t vd, vint64m2_t vs2, + int64_t rs1, size_t vl) { return __riscv_vxor_vx_i64m2_mu(vm, vd, vs2, rs1, vl); } -vint64m4_t test_vxor_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, vint64m4_t vs1, size_t vl) { +vint64m4_t test_vxor_vv_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + vint64m4_t vs1, size_t vl) { return __riscv_vxor_vv_i64m4_mu(vm, vd, vs2, vs1, vl); } -vint64m4_t test_vxor_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, int64_t rs1, size_t vl) { +vint64m4_t test_vxor_vx_i64m4_mu(vbool16_t vm, vint64m4_t vd, vint64m4_t vs2, + int64_t rs1, size_t vl) { return __riscv_vxor_vx_i64m4_mu(vm, vd, vs2, rs1, vl); } -vint64m8_t test_vxor_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, vint64m8_t vs1, size_t vl) { +vint64m8_t test_vxor_vv_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + vint64m8_t vs1, size_t vl) { return __riscv_vxor_vv_i64m8_mu(vm, vd, vs2, vs1, vl); } -vint64m8_t test_vxor_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, int64_t rs1, size_t vl) { +vint64m8_t test_vxor_vx_i64m8_mu(vbool8_t vm, vint64m8_t vd, vint64m8_t vs2, + int64_t rs1, size_t vl) { return __riscv_vxor_vx_i64m8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf8_t test_vxor_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, vuint8mf8_t vs1, size_t vl) { +vuint8mf8_t test_vxor_vv_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + vuint8mf8_t vs1, size_t vl) { return __riscv_vxor_vv_u8mf8_mu(vm, vd, vs2, vs1, vl); } -vuint8mf8_t test_vxor_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, uint8_t rs1, size_t vl) { +vuint8mf8_t test_vxor_vx_u8mf8_mu(vbool64_t vm, vuint8mf8_t vd, vuint8mf8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8mf8_mu(vm, vd, vs2, rs1, vl); } -vuint8mf4_t test_vxor_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, vuint8mf4_t vs1, size_t vl) { +vuint8mf4_t test_vxor_vv_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + vuint8mf4_t vs1, size_t vl) { return __riscv_vxor_vv_u8mf4_mu(vm, vd, vs2, vs1, vl); } -vuint8mf4_t test_vxor_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, uint8_t rs1, size_t vl) { +vuint8mf4_t test_vxor_vx_u8mf4_mu(vbool32_t vm, vuint8mf4_t vd, vuint8mf4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8mf4_mu(vm, vd, vs2, rs1, vl); } -vuint8mf2_t test_vxor_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, vuint8mf2_t vs1, size_t vl) { +vuint8mf2_t test_vxor_vv_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, + vuint8mf2_t vs1, size_t vl) { return __riscv_vxor_vv_u8mf2_mu(vm, vd, vs2, vs1, vl); } -vuint8mf2_t test_vxor_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, vuint8mf2_t vs2, uint8_t rs1, size_t vl) { +vuint8mf2_t test_vxor_vx_u8mf2_mu(vbool16_t vm, vuint8mf2_t vd, 
vuint8mf2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8mf2_mu(vm, vd, vs2, rs1, vl); } -vuint8m1_t test_vxor_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, vuint8m1_t vs1, size_t vl) { +vuint8m1_t test_vxor_vv_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + vuint8m1_t vs1, size_t vl) { return __riscv_vxor_vv_u8m1_mu(vm, vd, vs2, vs1, vl); } -vuint8m1_t test_vxor_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, uint8_t rs1, size_t vl) { +vuint8m1_t test_vxor_vx_u8m1_mu(vbool8_t vm, vuint8m1_t vd, vuint8m1_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8m1_mu(vm, vd, vs2, rs1, vl); } -vuint8m2_t test_vxor_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, vuint8m2_t vs1, size_t vl) { +vuint8m2_t test_vxor_vv_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + vuint8m2_t vs1, size_t vl) { return __riscv_vxor_vv_u8m2_mu(vm, vd, vs2, vs1, vl); } -vuint8m2_t test_vxor_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, uint8_t rs1, size_t vl) { +vuint8m2_t test_vxor_vx_u8m2_mu(vbool4_t vm, vuint8m2_t vd, vuint8m2_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8m2_mu(vm, vd, vs2, rs1, vl); } -vuint8m4_t test_vxor_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, vuint8m4_t vs1, size_t vl) { +vuint8m4_t test_vxor_vv_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + vuint8m4_t vs1, size_t vl) { return __riscv_vxor_vv_u8m4_mu(vm, vd, vs2, vs1, vl); } -vuint8m4_t test_vxor_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, uint8_t rs1, size_t vl) { +vuint8m4_t test_vxor_vx_u8m4_mu(vbool2_t vm, vuint8m4_t vd, vuint8m4_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8m4_mu(vm, vd, vs2, rs1, vl); } -vuint8m8_t test_vxor_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, vuint8m8_t vs1, size_t vl) { +vuint8m8_t test_vxor_vv_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + vuint8m8_t vs1, size_t vl) { return __riscv_vxor_vv_u8m8_mu(vm, vd, vs2, vs1, vl); } -vuint8m8_t test_vxor_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, uint8_t rs1, size_t vl) { +vuint8m8_t test_vxor_vx_u8m8_mu(vbool1_t vm, vuint8m8_t vd, vuint8m8_t vs2, + uint8_t rs1, size_t vl) { return __riscv_vxor_vx_u8m8_mu(vm, vd, vs2, rs1, vl); } -vuint16mf4_t test_vxor_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, vuint16mf4_t vs1, size_t vl) { +vuint16mf4_t test_vxor_vv_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, vuint16mf4_t vs1, + size_t vl) { return __riscv_vxor_vv_u16mf4_mu(vm, vd, vs2, vs1, vl); } -vuint16mf4_t test_vxor_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint16mf4_t vs2, uint16_t rs1, size_t vl) { +vuint16mf4_t test_vxor_vx_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint16mf4_t vs2, uint16_t rs1, size_t vl) { return __riscv_vxor_vx_u16mf4_mu(vm, vd, vs2, rs1, vl); } -vuint16mf2_t test_vxor_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, vuint16mf2_t vs1, size_t vl) { +vuint16mf2_t test_vxor_vv_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, vuint16mf2_t vs1, + size_t vl) { return __riscv_vxor_vv_u16mf2_mu(vm, vd, vs2, vs1, vl); } -vuint16mf2_t test_vxor_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint16mf2_t vs2, uint16_t rs1, size_t vl) { +vuint16mf2_t test_vxor_vx_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint16mf2_t vs2, uint16_t rs1, size_t vl) { return __riscv_vxor_vx_u16mf2_mu(vm, vd, vs2, rs1, vl); } -vuint16m1_t test_vxor_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, vuint16m1_t vs1, size_t vl) { +vuint16m1_t 
test_vxor_vv_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + vuint16m1_t vs1, size_t vl) { return __riscv_vxor_vv_u16m1_mu(vm, vd, vs2, vs1, vl); } -vuint16m1_t test_vxor_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, uint16_t rs1, size_t vl) { +vuint16m1_t test_vxor_vx_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint16m1_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vxor_vx_u16m1_mu(vm, vd, vs2, rs1, vl); } -vuint16m2_t test_vxor_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, vuint16m2_t vs1, size_t vl) { +vuint16m2_t test_vxor_vv_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + vuint16m2_t vs1, size_t vl) { return __riscv_vxor_vv_u16m2_mu(vm, vd, vs2, vs1, vl); } -vuint16m2_t test_vxor_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, uint16_t rs1, size_t vl) { +vuint16m2_t test_vxor_vx_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint16m2_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vxor_vx_u16m2_mu(vm, vd, vs2, rs1, vl); } -vuint16m4_t test_vxor_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, vuint16m4_t vs1, size_t vl) { +vuint16m4_t test_vxor_vv_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + vuint16m4_t vs1, size_t vl) { return __riscv_vxor_vv_u16m4_mu(vm, vd, vs2, vs1, vl); } -vuint16m4_t test_vxor_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, uint16_t rs1, size_t vl) { +vuint16m4_t test_vxor_vx_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint16m4_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vxor_vx_u16m4_mu(vm, vd, vs2, rs1, vl); } -vuint16m8_t test_vxor_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, vuint16m8_t vs1, size_t vl) { +vuint16m8_t test_vxor_vv_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + vuint16m8_t vs1, size_t vl) { return __riscv_vxor_vv_u16m8_mu(vm, vd, vs2, vs1, vl); } -vuint16m8_t test_vxor_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, uint16_t rs1, size_t vl) { +vuint16m8_t test_vxor_vx_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint16m8_t vs2, + uint16_t rs1, size_t vl) { return __riscv_vxor_vx_u16m8_mu(vm, vd, vs2, rs1, vl); } -vuint32mf2_t test_vxor_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, vuint32mf2_t vs1, size_t vl) { +vuint32mf2_t test_vxor_vv_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, vuint32mf2_t vs1, + size_t vl) { return __riscv_vxor_vv_u32mf2_mu(vm, vd, vs2, vs1, vl); } -vuint32mf2_t test_vxor_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint32mf2_t vs2, uint32_t rs1, size_t vl) { +vuint32mf2_t test_vxor_vx_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint32mf2_t vs2, uint32_t rs1, size_t vl) { return __riscv_vxor_vx_u32mf2_mu(vm, vd, vs2, rs1, vl); } -vuint32m1_t test_vxor_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, vuint32m1_t vs1, size_t vl) { +vuint32m1_t test_vxor_vv_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + vuint32m1_t vs1, size_t vl) { return __riscv_vxor_vv_u32m1_mu(vm, vd, vs2, vs1, vl); } -vuint32m1_t test_vxor_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, uint32_t rs1, size_t vl) { +vuint32m1_t test_vxor_vx_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint32m1_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vxor_vx_u32m1_mu(vm, vd, vs2, rs1, vl); } -vuint32m2_t test_vxor_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, vuint32m2_t vs1, size_t vl) { +vuint32m2_t test_vxor_vv_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + vuint32m2_t vs1, size_t vl) { return __riscv_vxor_vv_u32m2_mu(vm, vd, vs2, vs1, vl); } -vuint32m2_t 
test_vxor_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, uint32_t rs1, size_t vl) { +vuint32m2_t test_vxor_vx_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint32m2_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vxor_vx_u32m2_mu(vm, vd, vs2, rs1, vl); } -vuint32m4_t test_vxor_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, vuint32m4_t vs1, size_t vl) { +vuint32m4_t test_vxor_vv_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + vuint32m4_t vs1, size_t vl) { return __riscv_vxor_vv_u32m4_mu(vm, vd, vs2, vs1, vl); } -vuint32m4_t test_vxor_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, uint32_t rs1, size_t vl) { +vuint32m4_t test_vxor_vx_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint32m4_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vxor_vx_u32m4_mu(vm, vd, vs2, rs1, vl); } -vuint32m8_t test_vxor_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, vuint32m8_t vs1, size_t vl) { +vuint32m8_t test_vxor_vv_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + vuint32m8_t vs1, size_t vl) { return __riscv_vxor_vv_u32m8_mu(vm, vd, vs2, vs1, vl); } -vuint32m8_t test_vxor_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, uint32_t rs1, size_t vl) { +vuint32m8_t test_vxor_vx_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint32m8_t vs2, + uint32_t rs1, size_t vl) { return __riscv_vxor_vx_u32m8_mu(vm, vd, vs2, rs1, vl); } -vuint64m1_t test_vxor_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, vuint64m1_t vs1, size_t vl) { +vuint64m1_t test_vxor_vv_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + vuint64m1_t vs1, size_t vl) { return __riscv_vxor_vv_u64m1_mu(vm, vd, vs2, vs1, vl); } -vuint64m1_t test_vxor_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, uint64_t rs1, size_t vl) { +vuint64m1_t test_vxor_vx_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint64m1_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vxor_vx_u64m1_mu(vm, vd, vs2, rs1, vl); } -vuint64m2_t test_vxor_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, vuint64m2_t vs1, size_t vl) { +vuint64m2_t test_vxor_vv_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + vuint64m2_t vs1, size_t vl) { return __riscv_vxor_vv_u64m2_mu(vm, vd, vs2, vs1, vl); } -vuint64m2_t test_vxor_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, uint64_t rs1, size_t vl) { +vuint64m2_t test_vxor_vx_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint64m2_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vxor_vx_u64m2_mu(vm, vd, vs2, rs1, vl); } -vuint64m4_t test_vxor_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, vuint64m4_t vs1, size_t vl) { +vuint64m4_t test_vxor_vv_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + vuint64m4_t vs1, size_t vl) { return __riscv_vxor_vv_u64m4_mu(vm, vd, vs2, vs1, vl); } -vuint64m4_t test_vxor_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, uint64_t rs1, size_t vl) { +vuint64m4_t test_vxor_vx_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint64m4_t vs2, + uint64_t rs1, size_t vl) { return __riscv_vxor_vx_u64m4_mu(vm, vd, vs2, rs1, vl); } -vuint64m8_t test_vxor_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, vuint64m8_t vs1, size_t vl) { +vuint64m8_t test_vxor_vv_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + vuint64m8_t vs1, size_t vl) { return __riscv_vxor_vv_u64m8_mu(vm, vd, vs2, vs1, vl); } -vuint64m8_t test_vxor_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, uint64_t rs1, size_t vl) { +vuint64m8_t test_vxor_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, + uint64_t rs1, size_t vl) { 
return __riscv_vxor_vx_u64m8_mu(vm, vd, vs2, rs1, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vzext_vf2.c b/auto-generated/policy_funcs/llvm-api-tests/vzext_vf2.c index d323476dd..f7d9e6e7f 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vzext_vf2.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vzext_vf2.c @@ -5,15 +5,18 @@ #include <riscv_vector.h> -vuint16mf4_t test_vzext_vf2_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vzext_vf2_u16mf4_tu(vuint16mf4_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vzext_vf2_u16mf4_tu(vd, vs2, vl); } -vuint16mf2_t test_vzext_vf2_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vzext_vf2_u16mf2_tu(vuint16mf2_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vzext_vf2_u16mf2_tu(vd, vs2, vl); } -vuint16m1_t test_vzext_vf2_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vzext_vf2_u16m1_tu(vuint16m1_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vzext_vf2_u16m1_tu(vd, vs2, vl); } @@ -29,218 +32,272 @@ vuint16m8_t test_vzext_vf2_u16m8_tu(vuint16m8_t vd, vuint8m4_t vs2, size_t vl) { return __riscv_vzext_vf2_u16m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vzext_vf2_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vzext_vf2_u32mf2_tu(vuint32mf2_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vzext_vf2_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vzext_vf2_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vzext_vf2_u32m1_tu(vuint32m1_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vzext_vf2_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vzext_vf2_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vzext_vf2_u32m2_tu(vuint32m2_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vzext_vf2_u32m2_tu(vd, vs2, vl); } -vuint32m4_t test_vzext_vf2_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vzext_vf2_u32m4_tu(vuint32m4_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vzext_vf2_u32m4_tu(vd, vs2, vl); } -vuint32m8_t test_vzext_vf2_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vzext_vf2_u32m8_tu(vuint32m8_t vd, vuint16m4_t vs2, + size_t vl) { return __riscv_vzext_vf2_u32m8_tu(vd, vs2, vl); } -vuint64m1_t test_vzext_vf2_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vzext_vf2_u64m1_tu(vuint64m1_t vd, vuint32mf2_t vs2, + size_t vl) { return __riscv_vzext_vf2_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vzext_vf2_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vzext_vf2_u64m2_tu(vuint64m2_t vd, vuint32m1_t vs2, + size_t vl) { return __riscv_vzext_vf2_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vzext_vf2_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vzext_vf2_u64m4_tu(vuint64m4_t vd, vuint32m2_t vs2, + size_t vl) { return __riscv_vzext_vf2_u64m4_tu(vd, vs2, vl); } -vuint64m8_t test_vzext_vf2_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vzext_vf2_u64m8_tu(vuint64m8_t vd, vuint32m4_t vs2, + size_t vl) { return __riscv_vzext_vf2_u64m8_tu(vd, vs2, vl); } -vuint16mf4_t test_vzext_vf2_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vzext_vf2_u16mf4_tum(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vzext_vf2_u16mf4_tum(vm, vd, vs2, vl); } -vuint16mf2_t test_vzext_vf2_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t
test_vzext_vf2_u16mf2_tum(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vzext_vf2_u16mf2_tum(vm, vd, vs2, vl); } -vuint16m1_t test_vzext_vf2_u16m1_tum(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vzext_vf2_u16m1_tum(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vzext_vf2_u16m1_tum(vm, vd, vs2, vl); } -vuint16m2_t test_vzext_vf2_u16m2_tum(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vzext_vf2_u16m2_tum(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t vl) { return __riscv_vzext_vf2_u16m2_tum(vm, vd, vs2, vl); } -vuint16m4_t test_vzext_vf2_u16m4_tum(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vzext_vf2_u16m4_tum(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t vl) { return __riscv_vzext_vf2_u16m4_tum(vm, vd, vs2, vl); } -vuint16m8_t test_vzext_vf2_u16m8_tum(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vzext_vf2_u16m8_tum(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t vl) { return __riscv_vzext_vf2_u16m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vzext_vf2_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vzext_vf2_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vzext_vf2_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vzext_vf2_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vzext_vf2_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vzext_vf2_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vzext_vf2_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vzext_vf2_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vzext_vf2_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vzext_vf2_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vzext_vf2_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vzext_vf2_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vzext_vf2_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vzext_vf2_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vzext_vf2_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vzext_vf2_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vzext_vf2_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vzext_vf2_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vzext_vf2_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m8_tum(vm, vd, vs2, vl); } -vuint16mf4_t test_vzext_vf2_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t 
test_vzext_vf2_u16mf4_tumu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vzext_vf2_u16mf4_tumu(vm, vd, vs2, vl); } -vuint16mf2_t test_vzext_vf2_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vzext_vf2_u16mf2_tumu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vzext_vf2_u16mf2_tumu(vm, vd, vs2, vl); } -vuint16m1_t test_vzext_vf2_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vzext_vf2_u16m1_tumu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vzext_vf2_u16m1_tumu(vm, vd, vs2, vl); } -vuint16m2_t test_vzext_vf2_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vzext_vf2_u16m2_tumu(vbool8_t vm, vuint16m2_t vd, + vuint8m1_t vs2, size_t vl) { return __riscv_vzext_vf2_u16m2_tumu(vm, vd, vs2, vl); } -vuint16m4_t test_vzext_vf2_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vzext_vf2_u16m4_tumu(vbool4_t vm, vuint16m4_t vd, + vuint8m2_t vs2, size_t vl) { return __riscv_vzext_vf2_u16m4_tumu(vm, vd, vs2, vl); } -vuint16m8_t test_vzext_vf2_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vzext_vf2_u16m8_tumu(vbool2_t vm, vuint16m8_t vd, + vuint8m4_t vs2, size_t vl) { return __riscv_vzext_vf2_u16m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vzext_vf2_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vzext_vf2_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vzext_vf2_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vzext_vf2_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vzext_vf2_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vzext_vf2_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vzext_vf2_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vzext_vf2_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vzext_vf2_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vzext_vf2_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vzext_vf2_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vzext_vf2_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vzext_vf2_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vzext_vf2_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vzext_vf2_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vzext_vf2_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t vl) { +vuint64m4_t test_vzext_vf2_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vzext_vf2_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, 
vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vzext_vf2_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m8_tumu(vm, vd, vs2, vl); } -vuint16mf4_t test_vzext_vf2_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, vuint8mf8_t vs2, size_t vl) { +vuint16mf4_t test_vzext_vf2_u16mf4_mu(vbool64_t vm, vuint16mf4_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vzext_vf2_u16mf4_mu(vm, vd, vs2, vl); } -vuint16mf2_t test_vzext_vf2_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, vuint8mf4_t vs2, size_t vl) { +vuint16mf2_t test_vzext_vf2_u16mf2_mu(vbool32_t vm, vuint16mf2_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vzext_vf2_u16mf2_mu(vm, vd, vs2, vl); } -vuint16m1_t test_vzext_vf2_u16m1_mu(vbool16_t vm, vuint16m1_t vd, vuint8mf2_t vs2, size_t vl) { +vuint16m1_t test_vzext_vf2_u16m1_mu(vbool16_t vm, vuint16m1_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vzext_vf2_u16m1_mu(vm, vd, vs2, vl); } -vuint16m2_t test_vzext_vf2_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, size_t vl) { +vuint16m2_t test_vzext_vf2_u16m2_mu(vbool8_t vm, vuint16m2_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vzext_vf2_u16m2_mu(vm, vd, vs2, vl); } -vuint16m4_t test_vzext_vf2_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, size_t vl) { +vuint16m4_t test_vzext_vf2_u16m4_mu(vbool4_t vm, vuint16m4_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vzext_vf2_u16m4_mu(vm, vd, vs2, vl); } -vuint16m8_t test_vzext_vf2_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, size_t vl) { +vuint16m8_t test_vzext_vf2_u16m8_mu(vbool2_t vm, vuint16m8_t vd, vuint8m4_t vs2, + size_t vl) { return __riscv_vzext_vf2_u16m8_mu(vm, vd, vs2, vl); } -vuint32mf2_t test_vzext_vf2_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint16mf4_t vs2, size_t vl) { +vuint32mf2_t test_vzext_vf2_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vzext_vf2_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vzext_vf2_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint16mf2_t vs2, size_t vl) { +vuint32m1_t test_vzext_vf2_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vzext_vf2_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint16m1_t vs2, size_t vl) { +vuint32m2_t test_vzext_vf2_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vzext_vf2_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint16m2_t vs2, size_t vl) { +vuint32m4_t test_vzext_vf2_u32m4_mu(vbool8_t vm, vuint32m4_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vzext_vf2_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint16m4_t vs2, size_t vl) { +vuint32m8_t test_vzext_vf2_u32m8_mu(vbool4_t vm, vuint32m8_t vd, + vuint16m4_t vs2, size_t vl) { return __riscv_vzext_vf2_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vzext_vf2_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint32mf2_t vs2, size_t vl) { +vuint64m1_t test_vzext_vf2_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint32mf2_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vzext_vf2_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint32m1_t vs2, size_t vl) { +vuint64m2_t test_vzext_vf2_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint32m1_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vzext_vf2_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint32m2_t vs2, size_t vl) { 
+vuint64m4_t test_vzext_vf2_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint32m2_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vzext_vf2_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint32m4_t vs2, size_t vl) { +vuint64m8_t test_vzext_vf2_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint32m4_t vs2, size_t vl) { return __riscv_vzext_vf2_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vzext_vf4.c b/auto-generated/policy_funcs/llvm-api-tests/vzext_vf4.c index e70957a4f..98774a713 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vzext_vf4.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vzext_vf4.c @@ -5,15 +5,18 @@ #include <riscv_vector.h> -vuint32mf2_t test_vzext_vf4_u32mf2_tu(vuint32mf2_t vd, vuint8mf8_t vs2, size_t vl) { +vuint32mf2_t test_vzext_vf4_u32mf2_tu(vuint32mf2_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vzext_vf4_u32mf2_tu(vd, vs2, vl); } -vuint32m1_t test_vzext_vf4_u32m1_tu(vuint32m1_t vd, vuint8mf4_t vs2, size_t vl) { +vuint32m1_t test_vzext_vf4_u32m1_tu(vuint32m1_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vzext_vf4_u32m1_tu(vd, vs2, vl); } -vuint32m2_t test_vzext_vf4_u32m2_tu(vuint32m2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint32m2_t test_vzext_vf4_u32m2_tu(vuint32m2_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vzext_vf4_u32m2_tu(vd, vs2, vl); } @@ -25,126 +28,157 @@ vuint32m8_t test_vzext_vf4_u32m8_tu(vuint32m8_t vd, vuint8m2_t vs2, size_t vl) { return __riscv_vzext_vf4_u32m8_tu(vd, vs2, vl); } -vuint64m1_t test_vzext_vf4_u64m1_tu(vuint64m1_t vd, vuint16mf4_t vs2, size_t vl) { +vuint64m1_t test_vzext_vf4_u64m1_tu(vuint64m1_t vd, vuint16mf4_t vs2, + size_t vl) { return __riscv_vzext_vf4_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vzext_vf4_u64m2_tu(vuint64m2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint64m2_t test_vzext_vf4_u64m2_tu(vuint64m2_t vd, vuint16mf2_t vs2, + size_t vl) { return __riscv_vzext_vf4_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vzext_vf4_u64m4_tu(vuint64m4_t vd, vuint16m1_t vs2, size_t vl) { +vuint64m4_t test_vzext_vf4_u64m4_tu(vuint64m4_t vd, vuint16m1_t vs2, + size_t vl) { return __riscv_vzext_vf4_u64m4_tu(vd, vs2, vl); } -vuint64m8_t test_vzext_vf4_u64m8_tu(vuint64m8_t vd, vuint16m2_t vs2, size_t vl) { +vuint64m8_t test_vzext_vf4_u64m8_tu(vuint64m8_t vd, vuint16m2_t vs2, + size_t vl) { return __riscv_vzext_vf4_u64m8_tu(vd, vs2, vl); } -vuint32mf2_t test_vzext_vf4_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, vuint8mf8_t vs2, size_t vl) { +vuint32mf2_t test_vzext_vf4_u32mf2_tum(vbool64_t vm, vuint32mf2_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vzext_vf4_u32mf2_tum(vm, vd, vs2, vl); } -vuint32m1_t test_vzext_vf4_u32m1_tum(vbool32_t vm, vuint32m1_t vd, vuint8mf4_t vs2, size_t vl) { +vuint32m1_t test_vzext_vf4_u32m1_tum(vbool32_t vm, vuint32m1_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vzext_vf4_u32m1_tum(vm, vd, vs2, vl); } -vuint32m2_t test_vzext_vf4_u32m2_tum(vbool16_t vm, vuint32m2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint32m2_t test_vzext_vf4_u32m2_tum(vbool16_t vm, vuint32m2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vzext_vf4_u32m2_tum(vm, vd, vs2, vl); } -vuint32m4_t test_vzext_vf4_u32m4_tum(vbool8_t vm, vuint32m4_t vd, vuint8m1_t vs2, size_t vl) { +vuint32m4_t test_vzext_vf4_u32m4_tum(vbool8_t vm, vuint32m4_t vd, + vuint8m1_t vs2, size_t vl) { return __riscv_vzext_vf4_u32m4_tum(vm, vd, vs2, vl); } -vuint32m8_t test_vzext_vf4_u32m8_tum(vbool4_t vm, vuint32m8_t vd, vuint8m2_t vs2, size_t vl) { +vuint32m8_t
test_vzext_vf4_u32m8_tum(vbool4_t vm, vuint32m8_t vd, + vuint8m2_t vs2, size_t vl) { return __riscv_vzext_vf4_u32m8_tum(vm, vd, vs2, vl); } -vuint64m1_t test_vzext_vf4_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint16mf4_t vs2, size_t vl) { +vuint64m1_t test_vzext_vf4_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t test_vzext_vf4_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint64m2_t test_vzext_vf4_u64m2_tum(vbool32_t vm, vuint64m2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m2_tum(vm, vd, vs2, vl); } -vuint64m4_t test_vzext_vf4_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint16m1_t vs2, size_t vl) { +vuint64m4_t test_vzext_vf4_u64m4_tum(vbool16_t vm, vuint64m4_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m4_tum(vm, vd, vs2, vl); } -vuint64m8_t test_vzext_vf4_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint16m2_t vs2, size_t vl) { +vuint64m8_t test_vzext_vf4_u64m8_tum(vbool8_t vm, vuint64m8_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m8_tum(vm, vd, vs2, vl); } -vuint32mf2_t test_vzext_vf4_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, vuint8mf8_t vs2, size_t vl) { +vuint32mf2_t test_vzext_vf4_u32mf2_tumu(vbool64_t vm, vuint32mf2_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vzext_vf4_u32mf2_tumu(vm, vd, vs2, vl); } -vuint32m1_t test_vzext_vf4_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, vuint8mf4_t vs2, size_t vl) { +vuint32m1_t test_vzext_vf4_u32m1_tumu(vbool32_t vm, vuint32m1_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vzext_vf4_u32m1_tumu(vm, vd, vs2, vl); } -vuint32m2_t test_vzext_vf4_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint32m2_t test_vzext_vf4_u32m2_tumu(vbool16_t vm, vuint32m2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vzext_vf4_u32m2_tumu(vm, vd, vs2, vl); } -vuint32m4_t test_vzext_vf4_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, vuint8m1_t vs2, size_t vl) { +vuint32m4_t test_vzext_vf4_u32m4_tumu(vbool8_t vm, vuint32m4_t vd, + vuint8m1_t vs2, size_t vl) { return __riscv_vzext_vf4_u32m4_tumu(vm, vd, vs2, vl); } -vuint32m8_t test_vzext_vf4_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, vuint8m2_t vs2, size_t vl) { +vuint32m8_t test_vzext_vf4_u32m8_tumu(vbool4_t vm, vuint32m8_t vd, + vuint8m2_t vs2, size_t vl) { return __riscv_vzext_vf4_u32m8_tumu(vm, vd, vs2, vl); } -vuint64m1_t test_vzext_vf4_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint16mf4_t vs2, size_t vl) { +vuint64m1_t test_vzext_vf4_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m1_tumu(vm, vd, vs2, vl); } -vuint64m2_t test_vzext_vf4_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint64m2_t test_vzext_vf4_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m2_tumu(vm, vd, vs2, vl); } -vuint64m4_t test_vzext_vf4_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint16m1_t vs2, size_t vl) { +vuint64m4_t test_vzext_vf4_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m4_tumu(vm, vd, vs2, vl); } -vuint64m8_t test_vzext_vf4_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint16m2_t vs2, size_t vl) { +vuint64m8_t test_vzext_vf4_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m8_tumu(vm, vd, vs2, vl); } -vuint32mf2_t test_vzext_vf4_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, vuint8mf8_t vs2, size_t 
vl) { +vuint32mf2_t test_vzext_vf4_u32mf2_mu(vbool64_t vm, vuint32mf2_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vzext_vf4_u32mf2_mu(vm, vd, vs2, vl); } -vuint32m1_t test_vzext_vf4_u32m1_mu(vbool32_t vm, vuint32m1_t vd, vuint8mf4_t vs2, size_t vl) { +vuint32m1_t test_vzext_vf4_u32m1_mu(vbool32_t vm, vuint32m1_t vd, + vuint8mf4_t vs2, size_t vl) { return __riscv_vzext_vf4_u32m1_mu(vm, vd, vs2, vl); } -vuint32m2_t test_vzext_vf4_u32m2_mu(vbool16_t vm, vuint32m2_t vd, vuint8mf2_t vs2, size_t vl) { +vuint32m2_t test_vzext_vf4_u32m2_mu(vbool16_t vm, vuint32m2_t vd, + vuint8mf2_t vs2, size_t vl) { return __riscv_vzext_vf4_u32m2_mu(vm, vd, vs2, vl); } -vuint32m4_t test_vzext_vf4_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint8m1_t vs2, size_t vl) { +vuint32m4_t test_vzext_vf4_u32m4_mu(vbool8_t vm, vuint32m4_t vd, vuint8m1_t vs2, + size_t vl) { return __riscv_vzext_vf4_u32m4_mu(vm, vd, vs2, vl); } -vuint32m8_t test_vzext_vf4_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint8m2_t vs2, size_t vl) { +vuint32m8_t test_vzext_vf4_u32m8_mu(vbool4_t vm, vuint32m8_t vd, vuint8m2_t vs2, + size_t vl) { return __riscv_vzext_vf4_u32m8_mu(vm, vd, vs2, vl); } -vuint64m1_t test_vzext_vf4_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint16mf4_t vs2, size_t vl) { +vuint64m1_t test_vzext_vf4_u64m1_mu(vbool64_t vm, vuint64m1_t vd, + vuint16mf4_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m1_mu(vm, vd, vs2, vl); } -vuint64m2_t test_vzext_vf4_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint16mf2_t vs2, size_t vl) { +vuint64m2_t test_vzext_vf4_u64m2_mu(vbool32_t vm, vuint64m2_t vd, + vuint16mf2_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m2_mu(vm, vd, vs2, vl); } -vuint64m4_t test_vzext_vf4_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint16m1_t vs2, size_t vl) { +vuint64m4_t test_vzext_vf4_u64m4_mu(vbool16_t vm, vuint64m4_t vd, + vuint16m1_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m4_mu(vm, vd, vs2, vl); } -vuint64m8_t test_vzext_vf4_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint16m2_t vs2, size_t vl) { +vuint64m8_t test_vzext_vf4_u64m8_mu(vbool8_t vm, vuint64m8_t vd, + vuint16m2_t vs2, size_t vl) { return __riscv_vzext_vf4_u64m8_mu(vm, vd, vs2, vl); } diff --git a/auto-generated/policy_funcs/llvm-api-tests/vzext_vf8.c b/auto-generated/policy_funcs/llvm-api-tests/vzext_vf8.c index ad06de532..39fdbb2ac 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vzext_vf8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vzext_vf8.c @@ -5,15 +5,18 @@ #include -vuint64m1_t test_vzext_vf8_u64m1_tu(vuint64m1_t vd, vuint8mf8_t vs2, size_t vl) { +vuint64m1_t test_vzext_vf8_u64m1_tu(vuint64m1_t vd, vuint8mf8_t vs2, + size_t vl) { return __riscv_vzext_vf8_u64m1_tu(vd, vs2, vl); } -vuint64m2_t test_vzext_vf8_u64m2_tu(vuint64m2_t vd, vuint8mf4_t vs2, size_t vl) { +vuint64m2_t test_vzext_vf8_u64m2_tu(vuint64m2_t vd, vuint8mf4_t vs2, + size_t vl) { return __riscv_vzext_vf8_u64m2_tu(vd, vs2, vl); } -vuint64m4_t test_vzext_vf8_u64m4_tu(vuint64m4_t vd, vuint8mf2_t vs2, size_t vl) { +vuint64m4_t test_vzext_vf8_u64m4_tu(vuint64m4_t vd, vuint8mf2_t vs2, + size_t vl) { return __riscv_vzext_vf8_u64m4_tu(vd, vs2, vl); } @@ -21,50 +24,62 @@ vuint64m8_t test_vzext_vf8_u64m8_tu(vuint64m8_t vd, vuint8m1_t vs2, size_t vl) { return __riscv_vzext_vf8_u64m8_tu(vd, vs2, vl); } -vuint64m1_t test_vzext_vf8_u64m1_tum(vbool64_t vm, vuint64m1_t vd, vuint8mf8_t vs2, size_t vl) { +vuint64m1_t test_vzext_vf8_u64m1_tum(vbool64_t vm, vuint64m1_t vd, + vuint8mf8_t vs2, size_t vl) { return __riscv_vzext_vf8_u64m1_tum(vm, vd, vs2, vl); } -vuint64m2_t 
test_vzext_vf8_u64m2_tum(vbool32_t vm, vuint64m2_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint64m2_t test_vzext_vf8_u64m2_tum(vbool32_t vm, vuint64m2_t vd,
+                                     vuint8mf4_t vs2, size_t vl) {
   return __riscv_vzext_vf8_u64m2_tum(vm, vd, vs2, vl);
 }
 
-vuint64m4_t test_vzext_vf8_u64m4_tum(vbool16_t vm, vuint64m4_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint64m4_t test_vzext_vf8_u64m4_tum(vbool16_t vm, vuint64m4_t vd,
+                                     vuint8mf2_t vs2, size_t vl) {
   return __riscv_vzext_vf8_u64m4_tum(vm, vd, vs2, vl);
 }
 
-vuint64m8_t test_vzext_vf8_u64m8_tum(vbool8_t vm, vuint64m8_t vd, vuint8m1_t vs2, size_t vl) {
+vuint64m8_t test_vzext_vf8_u64m8_tum(vbool8_t vm, vuint64m8_t vd,
+                                     vuint8m1_t vs2, size_t vl) {
   return __riscv_vzext_vf8_u64m8_tum(vm, vd, vs2, vl);
 }
 
-vuint64m1_t test_vzext_vf8_u64m1_tumu(vbool64_t vm, vuint64m1_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint64m1_t test_vzext_vf8_u64m1_tumu(vbool64_t vm, vuint64m1_t vd,
+                                      vuint8mf8_t vs2, size_t vl) {
   return __riscv_vzext_vf8_u64m1_tumu(vm, vd, vs2, vl);
 }
 
-vuint64m2_t test_vzext_vf8_u64m2_tumu(vbool32_t vm, vuint64m2_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint64m2_t test_vzext_vf8_u64m2_tumu(vbool32_t vm, vuint64m2_t vd,
+                                      vuint8mf4_t vs2, size_t vl) {
   return __riscv_vzext_vf8_u64m2_tumu(vm, vd, vs2, vl);
 }
 
-vuint64m4_t test_vzext_vf8_u64m4_tumu(vbool16_t vm, vuint64m4_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint64m4_t test_vzext_vf8_u64m4_tumu(vbool16_t vm, vuint64m4_t vd,
+                                      vuint8mf2_t vs2, size_t vl) {
   return __riscv_vzext_vf8_u64m4_tumu(vm, vd, vs2, vl);
 }
 
-vuint64m8_t test_vzext_vf8_u64m8_tumu(vbool8_t vm, vuint64m8_t vd, vuint8m1_t vs2, size_t vl) {
+vuint64m8_t test_vzext_vf8_u64m8_tumu(vbool8_t vm, vuint64m8_t vd,
+                                      vuint8m1_t vs2, size_t vl) {
   return __riscv_vzext_vf8_u64m8_tumu(vm, vd, vs2, vl);
 }
 
-vuint64m1_t test_vzext_vf8_u64m1_mu(vbool64_t vm, vuint64m1_t vd, vuint8mf8_t vs2, size_t vl) {
+vuint64m1_t test_vzext_vf8_u64m1_mu(vbool64_t vm, vuint64m1_t vd,
+                                    vuint8mf8_t vs2, size_t vl) {
   return __riscv_vzext_vf8_u64m1_mu(vm, vd, vs2, vl);
 }
 
-vuint64m2_t test_vzext_vf8_u64m2_mu(vbool32_t vm, vuint64m2_t vd, vuint8mf4_t vs2, size_t vl) {
+vuint64m2_t test_vzext_vf8_u64m2_mu(vbool32_t vm, vuint64m2_t vd,
+                                    vuint8mf4_t vs2, size_t vl) {
   return __riscv_vzext_vf8_u64m2_mu(vm, vd, vs2, vl);
 }
 
-vuint64m4_t test_vzext_vf8_u64m4_mu(vbool16_t vm, vuint64m4_t vd, vuint8mf2_t vs2, size_t vl) {
+vuint64m4_t test_vzext_vf8_u64m4_mu(vbool16_t vm, vuint64m4_t vd,
+                                    vuint8mf2_t vs2, size_t vl) {
   return __riscv_vzext_vf8_u64m4_mu(vm, vd, vs2, vl);
 }
 
-vuint64m8_t test_vzext_vf8_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint8m1_t vs2, size_t vl) {
+vuint64m8_t test_vzext_vf8_u64m8_mu(vbool8_t vm, vuint64m8_t vd, vuint8m1_t vs2,
+                                    size_t vl) {
   return __riscv_vzext_vf8_u64m8_mu(vm, vd, vs2, vl);
 }

From 207ac85b7d07df8bf268ee78aa2bb26a350f743f Mon Sep 17 00:00:00 2001
From: Brandon Wu
Date: Fri, 2 Aug 2024 06:08:17 -0700
Subject: [PATCH 103/151] [Auto-gen] Update bfloat16 tests under
 ../auto-generated. (make git-commit-autogen-bf16-test)

---
 .../policy_funcs/llvm-api-tests/vfncvtbf16.c  | 191 ++++++---
 .../policy_funcs/llvm-api-tests/vfwcvtbf16.c  |  60 ++-
 .../policy_funcs/llvm-api-tests/vfwmaccbf16.c | 394 +++++++++++++-----
 .../policy_funcs/llvm-api-tests/vle16.c       |  72 ++--
 .../policy_funcs/llvm-api-tests/vle16ff.c     |  90 ++--
 .../policy_funcs/llvm-api-tests/vloxei16.c    |  90 ++--
 .../llvm-api-tests/vloxseg2ei16.c             |  97 ++++-
 .../llvm-api-tests/vloxseg3ei16.c             |  79 +++-
 .../llvm-api-tests/vloxseg4ei16.c             |  79 +++-
 .../llvm-api-tests/vloxseg5ei16.c             |  61 ++-
 .../llvm-api-tests/vloxseg6ei16.c             |  61 ++-
 .../llvm-api-tests/vloxseg7ei16.c             |  61 ++-
 .../llvm-api-tests/vloxseg8ei16.c             |  61 ++-
 .../policy_funcs/llvm-api-tests/vlse16.c      |  90 ++--
 .../policy_funcs/llvm-api-tests/vlseg2e16.c   |  66 ++-
 .../policy_funcs/llvm-api-tests/vlseg2e16ff.c |  90 +++-
 .../policy_funcs/llvm-api-tests/vlseg3e16.c   |  54 ++-
 .../policy_funcs/llvm-api-tests/vlseg3e16ff.c |  73 +++-
 .../policy_funcs/llvm-api-tests/vlseg4e16.c   |  54 ++-
 .../policy_funcs/llvm-api-tests/vlseg4e16ff.c |  73 +++-
 .../policy_funcs/llvm-api-tests/vlseg5e16.c   |  42 +-
 .../policy_funcs/llvm-api-tests/vlseg5e16ff.c |  56 ++-
 .../policy_funcs/llvm-api-tests/vlseg6e16.c   |  42 +-
 .../policy_funcs/llvm-api-tests/vlseg6e16ff.c |  56 ++-
 .../policy_funcs/llvm-api-tests/vlseg7e16.c   |  42 +-
 .../policy_funcs/llvm-api-tests/vlseg7e16ff.c |  56 ++-
 .../policy_funcs/llvm-api-tests/vlseg8e16.c   |  42 +-
 .../policy_funcs/llvm-api-tests/vlseg8e16ff.c |  56 ++-
 .../policy_funcs/llvm-api-tests/vlsseg2e16.c  |  87 +++-
 .../policy_funcs/llvm-api-tests/vlsseg3e16.c  |  71 +++-
 .../policy_funcs/llvm-api-tests/vlsseg4e16.c  |  71 +++-
 .../policy_funcs/llvm-api-tests/vlsseg5e16.c  |  55 ++-
 .../policy_funcs/llvm-api-tests/vlsseg6e16.c  |  55 ++-
 .../policy_funcs/llvm-api-tests/vlsseg7e16.c  |  55 ++-
 .../policy_funcs/llvm-api-tests/vlsseg8e16.c  |  55 ++-
 .../policy_funcs/llvm-api-tests/vluxei16.c    |  90 ++--
 .../llvm-api-tests/vluxseg2ei16.c             |  97 ++++-
 .../llvm-api-tests/vluxseg3ei16.c             |  79 +++-
 .../llvm-api-tests/vluxseg4ei16.c             |  79 +++-
 .../llvm-api-tests/vluxseg5ei16.c             |  61 ++-
 .../llvm-api-tests/vluxseg6ei16.c             |  61 ++-
 .../llvm-api-tests/vluxseg7ei16.c             |  61 ++-
 .../llvm-api-tests/vluxseg8ei16.c             |  61 ++-
 43 files changed, 2521 insertions(+), 805 deletions(-)

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c
index 333494a48..ea509f839 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c
@@ -7,162 +7,243 @@
 #include <riscv_vector.h>
 
-vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tu(vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) {
+vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tu(vbfloat16mf4_t vd,
+                                                vfloat32mf2_t vs2, size_t vl) {
   return __riscv_vfncvtbf16_f_f_w_bf16mf4_tu(vd, vs2, vl);
 }
 
-vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tu(vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) {
+vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tu(vbfloat16mf2_t vd,
+                                                vfloat32m1_t vs2, size_t vl) {
   return __riscv_vfncvtbf16_f_f_w_bf16mf2_tu(vd, vs2, vl);
 }
 
-vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tu(vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) {
+vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tu(vbfloat16m1_t vd,
+                                              vfloat32m2_t vs2, size_t vl) {
   return __riscv_vfncvtbf16_f_f_w_bf16m1_tu(vd, vs2, vl);
 }
 
-vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tu(vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) {
+vbfloat16m2_t
test_vfncvtbf16_f_f_w_bf16m2_tu(vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m2_tu(vd, vs2, vl); } -vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tu(vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tu(vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m4_tu(vd, vs2, vl); } -vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tum(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16mf4_tum(vm, vd, vs2, vl); } -vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tum(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16mf2_tum(vm, vd, vs2, vl); } -vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m1_tum(vm, vd, vs2, vl); } -vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m2_tum(vm, vd, vs2, vl); } -vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m4_tum(vm, vd, vs2, vl); } -vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_tumu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16mf4_tumu(vm, vd, vs2, vl); } -vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_tumu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16mf2_tumu(vm, vd, vs2, vl); } -vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m1_tumu(vm, vd, vs2, vl); } -vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m2_tumu(vm, vd, vs2, vl); } -vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m4_tumu(vm, vd, vs2, vl); } -vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + vfloat32mf2_t vs2, size_t vl) { return 
__riscv_vfncvtbf16_f_f_w_bf16mf4_mu(vm, vd, vs2, vl); } -vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + vfloat32m1_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16mf2_mu(vm, vd, vs2, vl); } -vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m1_mu(vm, vd, vs2, vl); } -vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m2_mu(vm, vd, vs2, vl); } -vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m4_mu(vm, vd, vs2, vl); } -vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tu(vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tu(vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tu(vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tu(vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tu(vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tu(vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tu(vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tu(vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tu(vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tu(vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tu(vd, vs2, __RISCV_FRM_RNE, vl); } -vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tum(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { - return 
__riscv_vfncvtbf16_f_f_w_bf16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tum(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tum(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tum(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tum(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tum(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tum(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vbool16_t vm, + vbfloat16m1_t vd, + vfloat32m2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vbool8_t vm, + vbfloat16m2_t vd, + vfloat32m4_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vbool4_t vm, + vbfloat16m4_t vd, + vfloat32m8_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_tumu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vbool64_t vm, vbfloat16mf4_t vd, vfloat32mf2_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16mf4_t test_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vbool64_t vm, + vbfloat16mf4_t vd, + vfloat32mf2_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16mf2_t 
test_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vbool32_t vm, vbfloat16mf2_t vd, vfloat32m1_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16mf2_t test_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vbool32_t vm, + vbfloat16mf2_t vd, + vfloat32m1_t vs2, + size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16mf2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_mu(vbool16_t vm, vbfloat16m1_t vd, vfloat32m2_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16m1_t test_vfncvtbf16_f_f_w_bf16m1_rm_mu(vbool16_t vm, vbfloat16m1_t vd, + vfloat32m2_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m1_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_mu(vbool8_t vm, vbfloat16m2_t vd, vfloat32m4_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16m2_t test_vfncvtbf16_f_f_w_bf16m2_rm_mu(vbool8_t vm, vbfloat16m2_t vd, + vfloat32m4_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m2_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } -vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_mu(vbool4_t vm, vbfloat16m4_t vd, vfloat32m8_t vs2, size_t vl) { - return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, vl); +vbfloat16m4_t test_vfncvtbf16_f_f_w_bf16m4_rm_mu(vbool4_t vm, vbfloat16m4_t vd, + vfloat32m8_t vs2, size_t vl) { + return __riscv_vfncvtbf16_f_f_w_bf16m4_rm_mu(vm, vd, vs2, __RISCV_FRM_RNE, + vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c index 0e78ae270..31f4e80c7 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c @@ -7,82 +7,102 @@ #include -vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tu(vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32mf2_tu(vd, vs2, vl); } -vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tu(vfloat32m1_t vd, vbfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tu(vfloat32m1_t vd, vbfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m1_tu(vd, vs2, vl); } -vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs2, + size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m2_tu(vd, vs2, vl); } -vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs2, + size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m4_tu(vd, vs2, vl); } -vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs2, + size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m8_tu(vd, vs2, vl); } -vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32mf2_tum(vm, vd, vs2, vl); } -vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t 
vs2, size_t vl) { +vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m1_tum(vm, vd, vs2, vl); } -vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m2_tum(vm, vd, vs2, vl); } -vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m4_tum(vm, vd, vs2, vl); } -vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m8_tum(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32mf2_tumu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m1_tumu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m2_tumu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m4_tumu(vm, vd, vs2, vl); } -vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m8_tumu(vm, vd, vs2, vl); } -vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwcvtbf16_f_f_v_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32mf2_mu(vm, vd, vs2, vl); } -vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwcvtbf16_f_f_v_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m1_mu(vm, vd, vs2, vl); } -vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwcvtbf16_f_f_v_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs2, size_t vl) { return __riscv_vfwcvtbf16_f_f_v_f32m2_mu(vm, vd, vs2, vl); } -vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwcvtbf16_f_f_v_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs2, size_t vl) { return 
__riscv_vfwcvtbf16_f_f_v_f32m4_mu(vm, vd, vs2, vl);
 }
 
-vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs2, size_t vl) {
+vfloat32m8_t test_vfwcvtbf16_f_f_v_f32m8_mu(vbool4_t vm, vfloat32m8_t vd,
+                                            vbfloat16m4_t vs2, size_t vl) {
   return __riscv_vfwcvtbf16_f_f_v_f32m8_mu(vm, vd, vs2, vl);
 }
diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c
index 22817f534..c031bb190 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c
@@ -7,322 +7,496 @@
 #include <riscv_vector.h>
 
-vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) {
+vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tu(vfloat32mf2_t vd,
+                                            vbfloat16mf4_t vs1,
+                                            vbfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfwmaccbf16_vv_f32mf2_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tu(vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) {
+vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tu(vfloat32mf2_t vd, __bf16 vs1,
+                                            vbfloat16mf4_t vs2, size_t vl) {
   return __riscv_vfwmaccbf16_vf_f32mf2_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) {
+vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1,
+                                          vbfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfwmaccbf16_vv_f32m1_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tu(vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) {
+vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tu(vfloat32m1_t vd, __bf16 vs1,
+                                          vbfloat16mf2_t vs2, size_t vl) {
   return __riscv_vfwmaccbf16_vf_f32m1_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) {
+vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tu(vfloat32m2_t vd, vbfloat16m1_t vs1,
+                                          vbfloat16m1_t vs2, size_t vl) {
   return __riscv_vfwmaccbf16_vv_f32m2_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tu(vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) {
+vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tu(vfloat32m2_t vd, __bf16 vs1,
+                                          vbfloat16m1_t vs2, size_t vl) {
   return __riscv_vfwmaccbf16_vf_f32m2_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) {
+vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tu(vfloat32m4_t vd, vbfloat16m2_t vs1,
+                                          vbfloat16m2_t vs2, size_t vl) {
   return __riscv_vfwmaccbf16_vv_f32m4_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tu(vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) {
+vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tu(vfloat32m4_t vd, __bf16 vs1,
+                                          vbfloat16m2_t vs2, size_t vl) {
   return __riscv_vfwmaccbf16_vf_f32m4_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) {
+vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tu(vfloat32m8_t vd, vbfloat16m4_t vs1,
+                                          vbfloat16m4_t vs2, size_t vl) {
   return __riscv_vfwmaccbf16_vv_f32m8_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tu(vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) {
+vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tu(vfloat32m8_t vd, __bf16 vs1,
+                                          vbfloat16m4_t vs2, size_t vl) {
   return __riscv_vfwmaccbf16_vf_f32m8_tu(vd, vs1, vs2, vl);
 }
 
-vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tum(vbool64_t vm,
vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32mf2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m1_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vv_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m2_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vv_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m4_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vv_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m8_tum(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) 
{ +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32mf2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m1_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m2_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m4_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m8_tumu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_mu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32mf2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmaccbf16_vv_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_mu(vbool32_t 
vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m1_t test_vfwmaccbf16_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m1_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmaccbf16_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, vbfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vv_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m2_t test_vfwmaccbf16_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_mu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m2_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmaccbf16_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, vbfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vv_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m4_t test_vfwmaccbf16_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_mu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m4_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmaccbf16_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, vbfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vv_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32m8_t test_vfwmaccbf16_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_mu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { return __riscv_vfwmaccbf16_vf_f32m8_mu(vm, vd, vs1, vs2, vl); } -vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tu(vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tu(vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tu(vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) { +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tu(vfloat32mf2_t vd, __bf16 vs1, + vbfloat16mf4_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vf_f32mf2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tu(vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tu(vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tu(vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tu(vfloat32m1_t vd, __bf16 vs1, + vbfloat16mf2_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vf_f32m1_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } 
-vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tu(vfloat32m2_t vd, vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tu(vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tu(vfloat32m2_t vd, __bf16 vs1, + vbfloat16m1_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vf_f32m2_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tu(vfloat32m4_t vd, vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tu(vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tu(vfloat32m4_t vd, __bf16 vs1, + vbfloat16m2_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vf_f32m4_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tu(vfloat32m8_t vd, vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vv_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tu(vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tu(vfloat32m8_t vd, __bf16 vs1, + vbfloat16m4_t vs2, size_t vl) { return __riscv_vfwmaccbf16_vf_f32m8_rm_tu(vd, vs1, vs2, __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tum(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tum(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + 
vl); } -vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tum(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tum(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tum(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_rm_tum(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm_tumu(vm, vd, vs1, vs2, + __RISCV_FRM_RNE, vl); } -vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_tumu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_tumu(vm, vd, vs1, vs2, + __RISCV_FRM_RNE, vl); } -vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, 
vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_tumu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_tumu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_tumu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_tumu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_rm_tumu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, vbfloat16mf4_t vs1, vbfloat16mf4_t vs2, size_t vl) { - return 
__riscv_vfwmaccbf16_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmaccbf16_vv_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + vbfloat16mf4_t vs1, + vbfloat16mf4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, __bf16 vs1, vbfloat16mf4_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32mf2_t test_vfwmaccbf16_vf_f32mf2_rm_mu(vbool64_t vm, vfloat32mf2_t vd, + __bf16 vs1, vbfloat16mf4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32mf2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, vbfloat16mf2_t vs1, vbfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwmaccbf16_vv_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + vbfloat16mf2_t vs1, + vbfloat16mf2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, __bf16 vs1, vbfloat16mf2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m1_t test_vfwmaccbf16_vf_f32m1_rm_mu(vbool32_t vm, vfloat32m1_t vd, + __bf16 vs1, vbfloat16mf2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m1_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, vbfloat16m1_t vs1, vbfloat16m1_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwmaccbf16_vv_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + vbfloat16m1_t vs1, + vbfloat16m1_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, __bf16 vs1, vbfloat16m1_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m2_t test_vfwmaccbf16_vf_f32m2_rm_mu(vbool16_t vm, vfloat32m2_t vd, + __bf16 vs1, vbfloat16m1_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m2_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, vbfloat16m2_t vs1, vbfloat16m2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwmaccbf16_vv_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + vbfloat16m2_t vs1, + vbfloat16m2_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, __bf16 vs1, vbfloat16m2_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m4_t test_vfwmaccbf16_vf_f32m4_rm_mu(vbool8_t vm, vfloat32m4_t vd, + __bf16 vs1, vbfloat16m2_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m4_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, vbfloat16m4_t vs1, vbfloat16m4_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwmaccbf16_vv_f32m8_rm_mu(vbool4_t vm, 
vfloat32m8_t vd, + vbfloat16m4_t vs1, + vbfloat16m4_t vs2, size_t vl) { + return __riscv_vfwmaccbf16_vv_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } -vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, size_t vl) { - return __riscv_vfwmaccbf16_vf_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, vl); +vfloat32m8_t test_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, + __bf16 vs1, vbfloat16m4_t vs2, + size_t vl) { + return __riscv_vfwmaccbf16_vf_f32m8_rm_mu(vm, vd, vs1, vs2, __RISCV_FRM_RNE, + vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c index 274053f2f..9ad0dd194 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c @@ -7,98 +7,122 @@ #include -vbfloat16mf4_t test_vle16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4_t test_vle16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t vl) { return __riscv_vle16_v_bf16mf4_tu(vd, rs1, vl); } -vbfloat16mf2_t test_vle16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2_t test_vle16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t vl) { return __riscv_vle16_v_bf16mf2_tu(vd, rs1, vl); } -vbfloat16m1_t test_vle16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1_t test_vle16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t vl) { return __riscv_vle16_v_bf16m1_tu(vd, rs1, vl); } -vbfloat16m2_t test_vle16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2_t test_vle16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t vl) { return __riscv_vle16_v_bf16m2_tu(vd, rs1, vl); } -vbfloat16m4_t test_vle16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m4_t test_vle16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + size_t vl) { return __riscv_vle16_v_bf16m4_tu(vd, rs1, vl); } -vbfloat16m8_t test_vle16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m8_t test_vle16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t vl) { return __riscv_vle16_v_bf16m8_tu(vd, rs1, vl); } -vbfloat16mf4_t test_vle16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4_t test_vle16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16mf4_tum(vm, vd, rs1, vl); } -vbfloat16mf2_t test_vle16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2_t test_vle16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16mf2_tum(vm, vd, rs1, vl); } -vbfloat16m1_t test_vle16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1_t test_vle16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16m1_tum(vm, vd, rs1, vl); } -vbfloat16m2_t test_vle16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2_t test_vle16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16m2_tum(vm, vd, rs1, vl); } -vbfloat16m4_t test_vle16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m4_t test_vle16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl) { return 
__riscv_vle16_v_bf16m4_tum(vm, vd, rs1, vl); } -vbfloat16m8_t test_vle16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m8_t test_vle16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16m8_tum(vm, vd, rs1, vl); } -vbfloat16mf4_t test_vle16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4_t test_vle16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16mf4_tumu(vm, vd, rs1, vl); } -vbfloat16mf2_t test_vle16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2_t test_vle16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16mf2_tumu(vm, vd, rs1, vl); } -vbfloat16m1_t test_vle16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1_t test_vle16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16m1_tumu(vm, vd, rs1, vl); } -vbfloat16m2_t test_vle16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2_t test_vle16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16m2_tumu(vm, vd, rs1, vl); } -vbfloat16m4_t test_vle16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m4_t test_vle16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16m4_tumu(vm, vd, rs1, vl); } -vbfloat16m8_t test_vle16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m8_t test_vle16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16m8_tumu(vm, vd, rs1, vl); } -vbfloat16mf4_t test_vle16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4_t test_vle16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16mf4_mu(vm, vd, rs1, vl); } -vbfloat16mf2_t test_vle16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2_t test_vle16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16mf2_mu(vm, vd, rs1, vl); } -vbfloat16m1_t test_vle16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1_t test_vle16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16m1_mu(vm, vd, rs1, vl); } -vbfloat16m2_t test_vle16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2_t test_vle16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16m2_mu(vm, vd, rs1, vl); } -vbfloat16m4_t test_vle16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m4_t test_vle16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16m4_mu(vm, vd, rs1, vl); } -vbfloat16m8_t test_vle16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m8_t test_vle16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vle16_v_bf16m8_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c 
b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c index ab26e32cb..f1f7aa8ca 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c @@ -7,98 +7,140 @@ #include -vbfloat16mf4_t test_vle16ff_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_bf16mf4_tu(vd, rs1, new_vl, vl); } -vbfloat16mf2_t test_vle16ff_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_bf16mf2_tu(vd, rs1, new_vl, vl); } -vbfloat16m1_t test_vle16ff_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1_t test_vle16ff_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_bf16m1_tu(vd, rs1, new_vl, vl); } -vbfloat16m2_t test_vle16ff_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2_t test_vle16ff_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_bf16m2_tu(vd, rs1, new_vl, vl); } -vbfloat16m4_t test_vle16ff_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m4_t test_vle16ff_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_bf16m4_tu(vd, rs1, new_vl, vl); } -vbfloat16m8_t test_vle16ff_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m8_t test_vle16ff_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vle16ff_v_bf16m8_tu(vd, rs1, new_vl, vl); } -vbfloat16mf4_t test_vle16ff_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16mf4_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2_t test_vle16ff_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16mf2_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m1_t test_vle16ff_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1_t test_vle16ff_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m1_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m2_t test_vle16ff_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2_t test_vle16ff_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m2_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m4_t test_vle16ff_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m4_t test_vle16ff_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m4_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m8_t test_vle16ff_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, const 
__bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m8_t test_vle16ff_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m8_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4_t test_vle16ff_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4_t test_vle16ff_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16mf4_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2_t test_vle16ff_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2_t test_vle16ff_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16mf2_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1_t test_vle16ff_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1_t test_vle16ff_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m1_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m2_t test_vle16ff_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2_t test_vle16ff_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m2_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m4_t test_vle16ff_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m4_t test_vle16ff_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m4_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m8_t test_vle16ff_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m8_t test_vle16ff_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m8_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4_t test_vle16ff_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4_t test_vle16ff_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16mf4_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2_t test_vle16ff_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2_t test_vle16ff_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16mf2_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1_t test_vle16ff_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1_t test_vle16ff_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m1_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m2_t test_vle16ff_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2_t test_vle16ff_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m2_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m4_t test_vle16ff_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m4_t test_vle16ff_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const 
__bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m4_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m8_t test_vle16ff_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m8_t test_vle16ff_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, size_t *new_vl, + size_t vl) { return __riscv_vle16ff_v_bf16m8_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c index e25fba5e2..9562f56ed 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c @@ -7,98 +7,140 @@ #include -vbfloat16mf4_t test_vloxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxei16_v_bf16mf4_tu(vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vloxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxei16_v_bf16mf2_tu(vd, rs1, rs2, vl); } -vbfloat16m1_t test_vloxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1_t test_vloxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxei16_v_bf16m1_tu(vd, rs1, rs2, vl); } -vbfloat16m2_t test_vloxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2_t test_vloxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxei16_v_bf16m2_tu(vd, rs1, rs2, vl); } -vbfloat16m4_t test_vloxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4_t test_vloxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxei16_v_bf16m4_tu(vd, rs1, rs2, vl); } -vbfloat16m8_t test_vloxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { +vbfloat16m8_t test_vloxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vloxei16_v_bf16m8_tu(vd, rs1, rs2, vl); } -vbfloat16mf4_t test_vloxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16mf4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vloxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16mf2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1_t test_vloxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1_t test_vloxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16m1_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2_t test_vloxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2_t test_vloxei16_v_bf16m2_tum(vbool8_t vm, 
vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16m2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m4_t test_vloxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4_t test_vloxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16m4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m8_t test_vloxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { +vbfloat16m8_t test_vloxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16m8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4_t test_vloxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4_t test_vloxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16mf4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vloxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2_t test_vloxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16mf2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1_t test_vloxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1_t test_vloxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16m1_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m2_t test_vloxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2_t test_vloxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16m2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m4_t test_vloxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4_t test_vloxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16m4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m8_t test_vloxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { +vbfloat16m8_t test_vloxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16m8_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4_t test_vloxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4_t test_vloxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16mf4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vloxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2_t test_vloxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16mf2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1_t test_vloxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1_t test_vloxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { return 
__riscv_vloxei16_v_bf16m1_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2_t test_vloxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2_t test_vloxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16m2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m4_t test_vloxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4_t test_vloxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16m4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m8_t test_vloxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { +vbfloat16m8_t test_vloxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vloxei16_v_bf16m8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c index 0d70f67fa..f101015d0 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c @@ -7,82 +7,139 @@ #include -vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16mf4x2_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16mf2x2_tu(vd, rs1, rs2, vl); } -vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m1x2_tu(vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m2x2_tu(vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m4x2_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_bf16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_bf16mf2x2_tum(vm, vd, rs1, 
rs2, vl); } -vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m1x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m2x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m4x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_bf16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg2ei16_v_bf16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vloxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vloxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, 
vuint16m1_t rs2, size_t vl) { +vbfloat16m1x2_t test_vloxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m1x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x2_t test_vloxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m2x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4x2_t test_vloxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vloxseg2ei16_v_bf16m4x2_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c index d91d6f43c..ede85340d 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c @@ -7,66 +7,113 @@ #include -vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16mf4x3_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16mf2x3_tu(vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16m1x3_tu(vd, rs1, rs2, vl); } -vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16m2x3_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_bf16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_bf16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16m1x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2x3_t 
test_vloxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16m2x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_bf16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg3ei16_v_bf16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vloxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vloxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x3_t test_vloxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16m1x3_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x3_t test_vloxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg3ei16_v_bf16m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c index 6cf74ee4a..269e0443c 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c @@ -7,66 +7,113 @@ #include -vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { 
+vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16mf4x4_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16mf2x4_tu(vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16m1x4_tu(vd, rs1, rs2, vl); } -vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16m2x4_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_bf16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_bf16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16m1x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16m2x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_bf16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg4ei16_v_bf16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16m1x4_tumu(vm, vd, rs1, rs2, vl); } 
-vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x4_t test_vloxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x4_t test_vloxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x4_t test_vloxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16m1x4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x4_t test_vloxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vloxseg4ei16_v_bf16m2x4_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c index 01eadde37..779ec3de4 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c @@ -7,50 +7,87 @@ #include -vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_bf16mf4x5_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_bf16mf2x5_tu(vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_bf16m1x5_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_bf16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, + 
vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_bf16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_bf16m1x5_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_bf16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg5ei16_v_bf16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_bf16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vloxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_bf16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x5_t test_vloxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_bf16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x5_t test_vloxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg5ei16_v_bf16m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c index 320ae3aef..a07a088ed 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c @@ -7,50 +7,87 @@ #include -vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_bf16mf4x6_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_bf16mf2x6_tu(vd, rs1, rs2, vl); } 
-vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_bf16m1x6_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_bf16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_bf16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_bf16m1x6_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_bf16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg6ei16_v_bf16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_bf16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vloxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_bf16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vloxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_bf16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x6_t test_vloxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg6ei16_v_bf16m1x6_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c 
b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c index 385d8e2ce..7580f6f81 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c @@ -7,50 +7,87 @@ #include -vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_bf16mf4x7_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_bf16mf2x7_tu(vd, rs1, rs2, vl); } -vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_bf16m1x7_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_bf16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_bf16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_bf16m1x7_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_bf16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg7ei16_v_bf16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_bf16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x7_t test_vloxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + 
vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_bf16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vloxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_bf16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x7_t test_vloxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg7ei16_v_bf16m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c index 6e6d31469..d018c0492 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c @@ -7,50 +7,87 @@ #include -vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_bf16mf4x8_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_bf16mf2x8_tu(vd, rs1, rs2, vl); } -vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_bf16m1x8_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_bf16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_bf16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_bf16m1x8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_bf16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, const 
__bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vloxseg8ei16_v_bf16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_bf16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x8_t test_vloxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_bf16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x8_t test_vloxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_bf16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x8_t test_vloxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vloxseg8ei16_v_bf16m1x8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c index 07c9b1b7f..2a80ce137 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c @@ -7,98 +7,140 @@ #include -vbfloat16mf4_t test_vlse16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4_t test_vlse16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_bf16mf4_tu(vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vlse16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2_t test_vlse16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_bf16mf2_tu(vd, rs1, rs2, vl); } -vbfloat16m1_t test_vlse16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1_t test_vlse16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_bf16m1_tu(vd, rs1, rs2, vl); } -vbfloat16m2_t test_vlse16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2_t test_vlse16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_bf16m2_tu(vd, rs1, rs2, vl); } -vbfloat16m4_t test_vlse16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m4_t test_vlse16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_bf16m4_tu(vd, rs1, rs2, vl); } -vbfloat16m8_t test_vlse16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m8_t test_vlse16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlse16_v_bf16m8_tu(vd, 
rs1, rs2, vl); } -vbfloat16mf4_t test_vlse16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4_t test_vlse16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16mf4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vlse16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2_t test_vlse16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16mf2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1_t test_vlse16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1_t test_vlse16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m1_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2_t test_vlse16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2_t test_vlse16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m4_t test_vlse16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m4_t test_vlse16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m8_t test_vlse16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m8_t test_vlse16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4_t test_vlse16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4_t test_vlse16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16mf4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vlse16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2_t test_vlse16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16mf2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1_t test_vlse16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1_t test_vlse16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m1_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m2_t test_vlse16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2_t test_vlse16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m4_t test_vlse16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m4_t test_vlse16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m8_t test_vlse16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m8_t test_vlse16_v_bf16m8_tumu(vbool2_t vm, 
vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m8_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4_t test_vlse16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4_t test_vlse16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16mf4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vlse16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2_t test_vlse16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16mf2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1_t test_vlse16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1_t test_vlse16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m1_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2_t test_vlse16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2_t test_vlse16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m4_t test_vlse16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m4_t test_vlse16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m8_t test_vlse16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m8_t test_vlse16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlse16_v_bf16m8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c index 283ae4fa4..53c98a351 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c @@ -7,82 +7,108 @@ #include -vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16mf4x2_tu(vd, rs1, vl); } -vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16mf2x2_tu(vd, rs1, vl); } -vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m1x2_tu(vd, rs1, vl); } -vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m2x2_tu(vd, rs1, vl); } -vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m4x2_tu(vd, rs1, vl); 
} -vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tum(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16mf4x2_tum(vm, vd, rs1, vl); } -vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tum(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16mf2x2_tum(vm, vd, rs1, vl); } -vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m1x2_tum(vm, vd, rs1, vl); } -vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m2x2_tum(vm, vd, rs1, vl); } -vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m4x2_tum(vm, vd, rs1, vl); } -vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16mf4x2_tumu(vm, vd, rs1, vl); } -vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16mf2x2_tumu(vm, vd, rs1, vl); } -vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m1x2_tumu(vm, vd, rs1, vl); } -vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m2x2_tumu(vm, vd, rs1, vl); } -vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m4x2_tumu(vm, vd, rs1, vl); } -vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_mu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x2_t test_vlseg2e16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16mf4x2_mu(vm, vd, rs1, vl); } -vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_mu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x2_t test_vlseg2e16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16mf2x2_mu(vm, vd, rs1, vl); } -vbfloat16m1x2_t 
test_vlseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x2_t test_vlseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m1x2_mu(vm, vd, rs1, vl); } -vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x2_t test_vlseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m2x2_mu(vm, vd, rs1, vl); } -vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m4x2_t test_vlseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg2e16_v_bf16m4x2_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c index 5a1b7eb43..691547fb2 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c @@ -7,82 +7,132 @@ #include -vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16mf4x2_tu(vd, rs1, new_vl, vl); } -vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16mf2x2_tu(vd, rs1, new_vl, vl); } -vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16m1x2_tu(vd, rs1, new_vl, vl); } -vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16m2x2_tu(vd, rs1, new_vl, vl); } -vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16m4x2_tu(vd, rs1, new_vl, vl); } -vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tum(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16mf4x2_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tum(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16mf2x2_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x2_t 
test_vlseg2e16ff_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16m1x2_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16m2x2_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16m4x2_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16mf4x2_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16mf2x2_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16m1x2_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16m2x2_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16m4x2_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_mu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x2_t test_vlseg2e16ff_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16mf4x2_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_mu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x2_t test_vlseg2e16ff_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16mf2x2_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x2_t test_vlseg2e16ff_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return 
__riscv_vlseg2e16ff_v_bf16m1x2_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x2_t test_vlseg2e16ff_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16m2x2_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m4x2_t test_vlseg2e16ff_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg2e16ff_v_bf16m4x2_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c index 9d1466934..319bb5951 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c @@ -7,66 +7,88 @@ #include -vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16mf4x3_tu(vd, rs1, vl); } -vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16mf2x3_tu(vd, rs1, vl); } -vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16m1x3_tu(vd, rs1, vl); } -vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16m2x3_tu(vd, rs1, vl); } -vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tum(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16mf4x3_tum(vm, vd, rs1, vl); } -vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tum(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16mf2x3_tum(vm, vd, rs1, vl); } -vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16m1x3_tum(vm, vd, rs1, vl); } -vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16m2x3_tum(vm, vd, rs1, vl); } -vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16mf4x3_tumu(vm, vd, rs1, 
vl); } -vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16mf2x3_tumu(vm, vd, rs1, vl); } -vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16m1x3_tumu(vm, vd, rs1, vl); } -vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16m2x3_tumu(vm, vd, rs1, vl); } -vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_mu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x3_t test_vlseg3e16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16mf4x3_mu(vm, vd, rs1, vl); } -vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_mu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x3_t test_vlseg3e16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16mf2x3_mu(vm, vd, rs1, vl); } -vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x3_t test_vlseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16m1x3_mu(vm, vd, rs1, vl); } -vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x3_t test_vlseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg3e16_v_bf16m2x3_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c index e8700e9c3..a204d3cc3 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c @@ -7,66 +7,107 @@ #include -vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16mf4x3_tu(vd, rs1, new_vl, vl); } -vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16mf2x3_tu(vd, rs1, new_vl, vl); } -vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16m1x3_tu(vd, rs1, new_vl, vl); } -vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { 
return __riscv_vlseg3e16ff_v_bf16m2x3_tu(vd, rs1, new_vl, vl); } -vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tum(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16mf4x3_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tum(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16mf2x3_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16m1x3_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16m2x3_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16mf4x3_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16mf2x3_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16m1x3_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16m2x3_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_mu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x3_t test_vlseg3e16ff_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16mf4x3_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_mu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x3_t test_vlseg3e16ff_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16mf2x3_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_mu(vbool16_t vm, 
vbfloat16m1x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x3_t test_vlseg3e16ff_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16m1x3_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x3_t test_vlseg3e16ff_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg3e16ff_v_bf16m2x3_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c index b2e34cce5..d0e04a9b1 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c @@ -7,66 +7,88 @@ #include -vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16mf4x4_tu(vd, rs1, vl); } -vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16mf2x4_tu(vd, rs1, vl); } -vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16m1x4_tu(vd, rs1, vl); } -vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16m2x4_tu(vd, rs1, vl); } -vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tum(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16mf4x4_tum(vm, vd, rs1, vl); } -vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tum(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16mf2x4_tum(vm, vd, rs1, vl); } -vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16m1x4_tum(vm, vd, rs1, vl); } -vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16m2x4_tum(vm, vd, rs1, vl); } -vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16mf4x4_tumu(vm, vd, rs1, vl); } -vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t vl) 
{ +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16mf2x4_tumu(vm, vd, rs1, vl); } -vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16m1x4_tumu(vm, vd, rs1, vl); } -vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16m2x4_tumu(vm, vd, rs1, vl); } -vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_mu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x4_t test_vlseg4e16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16mf4x4_mu(vm, vd, rs1, vl); } -vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_mu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x4_t test_vlseg4e16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16mf2x4_mu(vm, vd, rs1, vl); } -vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x4_t test_vlseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16m1x4_mu(vm, vd, rs1, vl); } -vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m2x4_t test_vlseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg4e16_v_bf16m2x4_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c index 0dedf996f..b33286ebf 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c @@ -7,66 +7,107 @@ #include -vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16mf4x4_tu(vd, rs1, new_vl, vl); } -vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16mf2x4_tu(vd, rs1, new_vl, vl); } -vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16m1x4_tu(vd, rs1, new_vl, vl); } -vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16m2x4_tu(vd, rs1, new_vl, vl); } -vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tum(vbool64_t 
vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16mf4x4_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tum(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16mf2x4_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16m1x4_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16m2x4_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16mf4x4_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16mf2x4_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16m1x4_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16m2x4_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_mu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x4_t test_vlseg4e16ff_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16mf4x4_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_mu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x4_t test_vlseg4e16ff_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16mf2x4_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x4_t test_vlseg4e16ff_v_bf16m1x4_mu(vbool16_t vm, 
vbfloat16m1x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16m1x4_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m2x4_t test_vlseg4e16ff_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg4e16ff_v_bf16m2x4_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c index 1db782076..38d02f7df 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c @@ -7,50 +7,68 @@ #include -vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16mf4x5_tu(vd, rs1, vl); } -vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16mf2x5_tu(vd, rs1, vl); } -vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16m1x5_tu(vd, rs1, vl); } -vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tum(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16mf4x5_tum(vm, vd, rs1, vl); } -vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tum(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16mf2x5_tum(vm, vd, rs1, vl); } -vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16m1x5_tum(vm, vd, rs1, vl); } -vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16mf4x5_tumu(vm, vd, rs1, vl); } -vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16mf2x5_tumu(vm, vd, rs1, vl); } -vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16m1x5_tumu(vm, vd, rs1, vl); } -vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_mu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x5_t test_vlseg5e16_v_bf16mf4x5_mu(vbool64_t vm, + 
vbfloat16mf4x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16mf4x5_mu(vm, vd, rs1, vl); } -vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_mu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x5_t test_vlseg5e16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16mf2x5_mu(vm, vd, rs1, vl); } -vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x5_t test_vlseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg5e16_v_bf16m1x5_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c index 23adbd9ad..c8d063a68 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c @@ -7,50 +7,82 @@ #include -vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16mf4x5_tu(vd, rs1, new_vl, vl); } -vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16mf2x5_tu(vd, rs1, new_vl, vl); } -vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16m1x5_tu(vd, rs1, new_vl, vl); } -vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tum(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16mf4x5_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tum(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16mf2x5_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16m1x5_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16mf4x5_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_tumu(vbool32_t vm, + 
vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16mf2x5_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16m1x5_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_mu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x5_t test_vlseg5e16ff_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16mf4x5_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_mu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x5_t test_vlseg5e16ff_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16mf2x5_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x5_t test_vlseg5e16ff_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg5e16ff_v_bf16m1x5_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c index 9e3ea7100..1c251e404 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c @@ -7,50 +7,68 @@ #include -vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16mf4x6_tu(vd, rs1, vl); } -vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16mf2x6_tu(vd, rs1, vl); } -vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16m1x6_tu(vd, rs1, vl); } -vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tum(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16mf4x6_tum(vm, vd, rs1, vl); } -vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tum(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16mf2x6_tum(vm, vd, rs1, vl); } -vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16m1x6_tum(vm, vd, rs1, vl); } -vbfloat16mf4x6_t 
test_vlseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16mf4x6_tumu(vm, vd, rs1, vl); } -vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16mf2x6_tumu(vm, vd, rs1, vl); } -vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16m1x6_tumu(vm, vd, rs1, vl); } -vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_mu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x6_t test_vlseg6e16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16mf4x6_mu(vm, vd, rs1, vl); } -vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_mu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x6_t test_vlseg6e16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16mf2x6_mu(vm, vd, rs1, vl); } -vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x6_t test_vlseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg6e16_v_bf16m1x6_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c index 61863d533..be16241b2 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c @@ -7,50 +7,82 @@ #include -vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16mf4x6_tu(vd, rs1, new_vl, vl); } -vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16mf2x6_tu(vd, rs1, new_vl, vl); } -vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16m1x6_tu(vd, rs1, new_vl, vl); } -vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tum(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16mf4x6_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tum(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tum(vbool32_t 
vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16mf2x6_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16m1x6_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16mf4x6_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16mf2x6_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16m1x6_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_mu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x6_t test_vlseg6e16ff_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16mf4x6_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_mu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x6_t test_vlseg6e16ff_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16mf2x6_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x6_t test_vlseg6e16ff_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg6e16ff_v_bf16m1x6_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c index ad74111ed..900b6b734 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c @@ -7,50 +7,68 @@ #include -vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16mf4x7_tu(vd, rs1, vl); } -vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16mf2x7_tu(vd, rs1, vl); } -vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x7_t 
test_vlseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16m1x7_tu(vd, rs1, vl); } -vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tum(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16mf4x7_tum(vm, vd, rs1, vl); } -vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tum(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16mf2x7_tum(vm, vd, rs1, vl); } -vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16m1x7_tum(vm, vd, rs1, vl); } -vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16mf4x7_tumu(vm, vd, rs1, vl); } -vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16mf2x7_tumu(vm, vd, rs1, vl); } -vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16m1x7_tumu(vm, vd, rs1, vl); } -vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_mu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x7_t test_vlseg7e16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16mf4x7_mu(vm, vd, rs1, vl); } -vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_mu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x7_t test_vlseg7e16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16mf2x7_mu(vm, vd, rs1, vl); } -vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x7_t test_vlseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg7e16_v_bf16m1x7_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c index dad750088..c478a9b09 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c @@ -7,50 +7,82 @@ #include -vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_bf16mf4x7_tu(vd, rs1, new_vl, vl); } -vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t *new_vl, 
size_t vl) { +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_bf16mf2x7_tu(vd, rs1, new_vl, vl); } -vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_bf16m1x7_tu(vd, rs1, new_vl, vl); } -vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tum(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_bf16mf4x7_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tum(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_bf16mf2x7_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_bf16m1x7_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_bf16mf4x7_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_bf16mf2x7_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_bf16m1x7_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_mu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x7_t test_vlseg7e16ff_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_bf16mf4x7_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_mu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x7_t test_vlseg7e16ff_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg7e16ff_v_bf16mf2x7_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x7_t test_vlseg7e16ff_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return 
__riscv_vlseg7e16ff_v_bf16m1x7_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c index b79d35e72..d2d7bc638 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c @@ -7,50 +7,68 @@ #include -vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16mf4x8_tu(vd, rs1, vl); } -vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16mf2x8_tu(vd, rs1, vl); } -vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16m1x8_tu(vd, rs1, vl); } -vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tum(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16mf4x8_tum(vm, vd, rs1, vl); } -vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tum(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16mf2x8_tum(vm, vd, rs1, vl); } -vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16m1x8_tum(vm, vd, rs1, vl); } -vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16mf4x8_tumu(vm, vd, rs1, vl); } -vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16mf2x8_tumu(vm, vd, rs1, vl); } -vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16m1x8_tumu(vm, vd, rs1, vl); } -vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_mu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf4x8_t test_vlseg8e16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16mf4x8_mu(vm, vd, rs1, vl); } -vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_mu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16mf2x8_t test_vlseg8e16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16mf2x8_mu(vm, vd, rs1, vl); } 
-vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t vl) { +vbfloat16m1x8_t test_vlseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, size_t vl) { return __riscv_vlseg8e16_v_bf16m1x8_mu(vm, vd, rs1, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c index 3843bae0e..d03a08b80 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c @@ -7,50 +7,82 @@ #include -vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_bf16mf4x8_tu(vd, rs1, new_vl, vl); } -vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_bf16mf2x8_tu(vd, rs1, new_vl, vl); } -vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_bf16m1x8_tu(vd, rs1, new_vl, vl); } -vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tum(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_bf16mf4x8_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tum(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_bf16mf2x8_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_bf16m1x8_tum(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_bf16mf4x8_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_bf16mf2x8_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return 
__riscv_vlseg8e16ff_v_bf16m1x8_tumu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_mu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf4x8_t test_vlseg8e16ff_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_bf16mf4x8_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_mu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16mf2x8_t test_vlseg8e16ff_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_bf16mf2x8_mu(vm, vd, rs1, new_vl, vl); } -vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, size_t *new_vl, size_t vl) { +vbfloat16m1x8_t test_vlseg8e16ff_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, + size_t *new_vl, size_t vl) { return __riscv_vlseg8e16ff_v_bf16m1x8_mu(vm, vd, rs1, new_vl, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c index 5ab99d3a8..e0a2ecf37 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c @@ -7,82 +7,129 @@ #include -vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_bf16mf4x2_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_bf16mf2x2_tu(vd, rs1, rs2, vl); } -vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_bf16m1x2_tu(vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_bf16m2x2_tu(vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_bf16m4x2_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tum(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_bf16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tum(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return 
__riscv_vlsseg2e16_v_bf16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_bf16m1x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_bf16m2x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_bf16m4x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_bf16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_bf16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_bf16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_bf16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_bf16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_mu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vlsseg2e16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_bf16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_mu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vlsseg2e16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg2e16_v_bf16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x2_t test_vlsseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x2_t 
test_vlsseg2e16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_bf16m1x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x2_t test_vlsseg2e16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_bf16m2x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m4x2_t test_vlsseg2e16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg2e16_v_bf16m4x2_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c index dc2616626..16db3c084 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c @@ -7,66 +7,105 @@ #include -vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_bf16mf4x3_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_bf16mf2x3_tu(vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_bf16m1x3_tu(vd, rs1, rs2, vl); } -vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_bf16m2x3_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tum(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_bf16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tum(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_bf16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_bf16m1x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tum(vbool8_t vm, 
vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_bf16m2x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_bf16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_bf16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_bf16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_bf16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_mu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vlsseg3e16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_bf16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_mu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vlsseg3e16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg3e16_v_bf16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x3_t test_vlsseg3e16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_bf16m1x3_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x3_t test_vlsseg3e16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg3e16_v_bf16m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c index 0cfb4ccc4..f6bea1a7c 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c @@ -7,66 +7,105 @@ #include -vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_bf16mf4x4_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t 
vl) { +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_bf16mf2x4_tu(vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_bf16m1x4_tu(vd, rs1, rs2, vl); } -vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_bf16m2x4_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tum(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_bf16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tum(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_bf16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_bf16m1x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_bf16m2x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_bf16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_bf16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_bf16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_bf16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_mu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, 
ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x4_t test_vlsseg4e16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_bf16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_mu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x4_t test_vlsseg4e16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg4e16_v_bf16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x4_t test_vlsseg4e16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_bf16m1x4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m2x4_t test_vlsseg4e16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg4e16_v_bf16m2x4_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c index 0f8127fc6..0a5c27341 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c @@ -7,50 +7,81 @@ #include -vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_bf16mf4x5_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_bf16mf2x5_tu(vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_bf16m1x5_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tum(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_bf16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tum(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_bf16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_bf16m1x5_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 
*rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_bf16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_bf16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_bf16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_mu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vlsseg5e16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_bf16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_mu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x5_t test_vlsseg5e16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg5e16_v_bf16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x5_t test_vlsseg5e16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg5e16_v_bf16m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c index 90cbf219e..d6c2c7dfe 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c @@ -7,50 +7,81 @@ #include -vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_bf16mf4x6_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_bf16mf2x6_tu(vd, rs1, rs2, vl); } -vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_bf16m1x6_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tum(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_bf16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tum(vbool32_t vm, 
vbfloat16mf2x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_bf16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_bf16m1x6_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_bf16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_bf16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_bf16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_mu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vlsseg6e16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_bf16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_mu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vlsseg6e16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg6e16_v_bf16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x6_t test_vlsseg6e16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg6e16_v_bf16m1x6_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c index de961527d..de18ed2a5 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c @@ -7,50 +7,81 @@ #include -vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_bf16mf4x7_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_bf16mf2x7_tu(vd, rs1, rs2, vl); } 
-vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_bf16m1x7_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tum(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_bf16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tum(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_bf16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_bf16m1x7_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_bf16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_bf16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_bf16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_mu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x7_t test_vlsseg7e16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_bf16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_mu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vlsseg7e16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg7e16_v_bf16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x7_t test_vlsseg7e16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg7e16_v_bf16m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c index 85aa5df54..fb6f0c128 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c +++ 
b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c @@ -7,50 +7,81 @@ #include -vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_bf16mf4x8_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_bf16mf2x8_tu(vd, rs1, rs2, vl); } -vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_bf16m1x8_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tum(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_bf16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tum(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_bf16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_bf16m1x8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_bf16mf4x8_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_tumu(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_bf16mf2x8_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_tumu(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_bf16m1x8_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_mu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf4x8_t test_vlsseg8e16_v_bf16mf4x8_mu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_bf16mf4x8_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_mu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16mf2x8_t test_vlsseg8e16_v_bf16mf2x8_mu(vbool32_t vm, + vbfloat16mf2x8_t vd, + 
const __bf16 *rs1, + ptrdiff_t rs2, size_t vl) { return __riscv_vlsseg8e16_v_bf16mf2x8_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, ptrdiff_t rs2, size_t vl) { +vbfloat16m1x8_t test_vlsseg8e16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, + const __bf16 *rs1, ptrdiff_t rs2, + size_t vl) { return __riscv_vlsseg8e16_v_bf16m1x8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c index 53f04185c..d51db1c09 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c @@ -7,98 +7,140 @@ #include -vbfloat16mf4_t test_vluxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tu(vbfloat16mf4_t vd, const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxei16_v_bf16mf4_tu(vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vluxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tu(vbfloat16mf2_t vd, const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxei16_v_bf16mf2_tu(vd, rs1, rs2, vl); } -vbfloat16m1_t test_vluxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1_t test_vluxei16_v_bf16m1_tu(vbfloat16m1_t vd, const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxei16_v_bf16m1_tu(vd, rs1, rs2, vl); } -vbfloat16m2_t test_vluxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2_t test_vluxei16_v_bf16m2_tu(vbfloat16m2_t vd, const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxei16_v_bf16m2_tu(vd, rs1, rs2, vl); } -vbfloat16m4_t test_vluxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4_t test_vluxei16_v_bf16m4_tu(vbfloat16m4_t vd, const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxei16_v_bf16m4_tu(vd, rs1, rs2, vl); } -vbfloat16m8_t test_vluxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { +vbfloat16m8_t test_vluxei16_v_bf16m8_tu(vbfloat16m8_t vd, const __bf16 *rs1, + vuint16m8_t rs2, size_t vl) { return __riscv_vluxei16_v_bf16m8_tu(vd, rs1, rs2, vl); } -vbfloat16mf4_t test_vluxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tum(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16mf4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vluxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tum(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16mf2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1_t test_vluxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1_t test_vluxei16_v_bf16m1_tum(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m1_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2_t test_vluxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2_t 
test_vluxei16_v_bf16m2_tum(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m4_t test_vluxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4_t test_vluxei16_v_bf16m4_tum(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m8_t test_vluxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { +vbfloat16m8_t test_vluxei16_v_bf16m8_tum(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4_t test_vluxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4_t test_vluxei16_v_bf16mf4_tumu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16mf4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vluxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2_t test_vluxei16_v_bf16mf2_tumu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16mf2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1_t test_vluxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1_t test_vluxei16_v_bf16m1_tumu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m1_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m2_t test_vluxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2_t test_vluxei16_v_bf16m2_tumu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m4_t test_vluxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4_t test_vluxei16_v_bf16m4_tumu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m8_t test_vluxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { +vbfloat16m8_t test_vluxei16_v_bf16m8_tumu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m8_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4_t test_vluxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4_t test_vluxei16_v_bf16mf4_mu(vbool64_t vm, vbfloat16mf4_t vd, + const __bf16 *rs1, vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16mf4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2_t test_vluxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2_t test_vluxei16_v_bf16mf2_mu(vbool32_t vm, vbfloat16mf2_t vd, + const __bf16 *rs1, vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16mf2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1_t test_vluxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1_t test_vluxei16_v_bf16m1_mu(vbool16_t vm, vbfloat16m1_t vd, + const __bf16 *rs1, 
vuint16m1_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m1_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2_t test_vluxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2_t test_vluxei16_v_bf16m2_mu(vbool8_t vm, vbfloat16m2_t vd, + const __bf16 *rs1, vuint16m2_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m4_t test_vluxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4_t test_vluxei16_v_bf16m4_mu(vbool4_t vm, vbfloat16m4_t vd, + const __bf16 *rs1, vuint16m4_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m8_t test_vluxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, const __bf16 *rs1, vuint16m8_t rs2, size_t vl) { +vbfloat16m8_t test_vluxei16_v_bf16m8_mu(vbool2_t vm, vbfloat16m8_t vd, + const __bf16 *rs1, vuint16m8_t rs2, + size_t vl) { return __riscv_vluxei16_v_bf16m8_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c index 93cfa2358..f8d25ee01 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c @@ -7,82 +7,139 @@ #include -vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tu(vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16mf4x2_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tu(vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16mf2x2_tu(vd, rs1, rs2, vl); } -vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tu(vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m1x2_tu(vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tu(vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m2x2_tu(vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tu(vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m4x2_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tum(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_bf16mf4x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tum(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return 
__riscv_vluxseg2ei16_v_bf16mf2x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tum(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m1x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tum(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m2x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tum(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m4x2_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_tumu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_bf16mf4x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_tumu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg2ei16_v_bf16mf2x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_tumu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m1x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_tumu(vbool8_t vm, + vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m2x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_tumu(vbool4_t vm, + vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m4x2_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, vbfloat16mf4x2_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x2_t test_vluxseg2ei16_v_bf16mf4x2_mu(vbool64_t vm, + vbfloat16mf4x2_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16mf4x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, vbfloat16mf2x2_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x2_t test_vluxseg2ei16_v_bf16mf2x2_mu(vbool32_t vm, + vbfloat16mf2x2_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16mf2x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x2_t 
test_vluxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, vbfloat16m1x2_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x2_t test_vluxseg2ei16_v_bf16m1x2_mu(vbool16_t vm, + vbfloat16m1x2_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m1x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x2_t test_vluxseg2ei16_v_bf16m2x2_mu(vbool8_t vm, vbfloat16m2x2_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m2x2_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, const __bf16 *rs1, vuint16m4_t rs2, size_t vl) { +vbfloat16m4x2_t test_vluxseg2ei16_v_bf16m4x2_mu(vbool4_t vm, vbfloat16m4x2_t vd, + const __bf16 *rs1, + vuint16m4_t rs2, size_t vl) { return __riscv_vluxseg2ei16_v_bf16m4x2_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c index 2214b6480..4d5b83c5d 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c @@ -7,66 +7,113 @@ #include -vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tu(vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_bf16mf4x3_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tu(vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_bf16mf2x3_tu(vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tu(vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_bf16m1x3_tu(vd, rs1, rs2, vl); } -vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tu(vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_bf16m2x3_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tum(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_bf16mf4x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tum(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_bf16mf2x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tum(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return 
__riscv_vluxseg3ei16_v_bf16m1x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tum(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_bf16m2x3_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_tumu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_bf16mf4x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_tumu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg3ei16_v_bf16mf2x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_tumu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_bf16m1x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_tumu(vbool8_t vm, + vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_bf16m2x3_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, vbfloat16mf4x3_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x3_t test_vluxseg3ei16_v_bf16mf4x3_mu(vbool64_t vm, + vbfloat16mf4x3_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_bf16mf4x3_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, vbfloat16mf2x3_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x3_t test_vluxseg3ei16_v_bf16mf2x3_mu(vbool32_t vm, + vbfloat16mf2x3_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_bf16mf2x3_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, vbfloat16m1x3_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x3_t test_vluxseg3ei16_v_bf16m1x3_mu(vbool16_t vm, + vbfloat16m1x3_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_bf16m1x3_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x3_t test_vluxseg3ei16_v_bf16m2x3_mu(vbool8_t vm, vbfloat16m2x3_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg3ei16_v_bf16m2x3_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c index 0cd291e45..a6fd8bb87 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c @@ -7,66 +7,113 @@ #include -vbfloat16mf4x4_t 
test_vluxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tu(vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16mf4x4_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tu(vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16mf2x4_tu(vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tu(vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16m1x4_tu(vd, rs1, rs2, vl); } -vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tu(vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16m2x4_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tum(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei16_v_bf16mf4x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tum(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei16_v_bf16mf2x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tum(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16m1x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tum(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16m2x4_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_tumu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg4ei16_v_bf16mf4x4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_tumu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg4ei16_v_bf16mf2x4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_tumu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + 
vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16m1x4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_tumu(vbool8_t vm, + vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16m2x4_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, vbfloat16mf4x4_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x4_t test_vluxseg4ei16_v_bf16mf4x4_mu(vbool64_t vm, + vbfloat16mf4x4_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16mf4x4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, vbfloat16mf2x4_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x4_t test_vluxseg4ei16_v_bf16mf2x4_mu(vbool32_t vm, + vbfloat16mf2x4_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16mf2x4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, vbfloat16m1x4_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x4_t test_vluxseg4ei16_v_bf16m1x4_mu(vbool16_t vm, + vbfloat16m1x4_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16m1x4_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, const __bf16 *rs1, vuint16m2_t rs2, size_t vl) { +vbfloat16m2x4_t test_vluxseg4ei16_v_bf16m2x4_mu(vbool8_t vm, vbfloat16m2x4_t vd, + const __bf16 *rs1, + vuint16m2_t rs2, size_t vl) { return __riscv_vluxseg4ei16_v_bf16m2x4_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c index bf222f0ca..7daa96e44 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c @@ -7,50 +7,87 @@ #include -vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tu(vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_bf16mf4x5_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tu(vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_bf16mf2x5_tu(vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tu(vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_bf16m1x5_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tum(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_bf16mf4x5_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t 
rs2, size_t vl) { +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tum(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_bf16mf2x5_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tum(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_bf16m1x5_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_tumu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_bf16mf4x5_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_tumu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg5ei16_v_bf16mf2x5_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_tumu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_bf16m1x5_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, vbfloat16mf4x5_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x5_t test_vluxseg5ei16_v_bf16mf4x5_mu(vbool64_t vm, + vbfloat16mf4x5_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_bf16mf4x5_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, vbfloat16mf2x5_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x5_t test_vluxseg5ei16_v_bf16mf2x5_mu(vbool32_t vm, + vbfloat16mf2x5_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_bf16mf2x5_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, vbfloat16m1x5_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x5_t test_vluxseg5ei16_v_bf16m1x5_mu(vbool16_t vm, + vbfloat16m1x5_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg5ei16_v_bf16m1x5_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c index 1cf082fad..15b02a519 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c @@ -7,50 +7,87 @@ #include -vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tu(vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei16_v_bf16mf4x6_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tu(vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, 
size_t vl) { return __riscv_vluxseg6ei16_v_bf16mf2x6_tu(vd, rs1, rs2, vl); } -vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tu(vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg6ei16_v_bf16m1x6_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tum(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg6ei16_v_bf16mf4x6_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tum(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg6ei16_v_bf16mf2x6_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tum(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg6ei16_v_bf16m1x6_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_tumu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg6ei16_v_bf16mf4x6_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_tumu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg6ei16_v_bf16mf2x6_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_tumu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg6ei16_v_bf16m1x6_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, vbfloat16mf4x6_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x6_t test_vluxseg6ei16_v_bf16mf4x6_mu(vbool64_t vm, + vbfloat16mf4x6_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg6ei16_v_bf16mf4x6_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, vbfloat16mf2x6_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x6_t test_vluxseg6ei16_v_bf16mf2x6_mu(vbool32_t vm, + vbfloat16mf2x6_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg6ei16_v_bf16mf2x6_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, vbfloat16m1x6_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x6_t test_vluxseg6ei16_v_bf16m1x6_mu(vbool16_t vm, + vbfloat16m1x6_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg6ei16_v_bf16m1x6_mu(vm, vd, rs1, rs2, vl); } diff --git 
a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c index 9c4c4aec8..dbabacce4 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c @@ -7,50 +7,87 @@ #include -vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tu(vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_bf16mf4x7_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tu(vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_bf16mf2x7_tu(vd, rs1, rs2, vl); } -vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tu(vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_bf16m1x7_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tum(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_bf16mf4x7_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tum(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_bf16mf2x7_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tum(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_bf16m1x7_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_tumu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_bf16mf4x7_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_tumu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg7ei16_v_bf16mf2x7_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_tumu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_bf16m1x7_tumu(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x7_t test_vluxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, vbfloat16mf4x7_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x7_t 
test_vluxseg7ei16_v_bf16mf4x7_mu(vbool64_t vm, + vbfloat16mf4x7_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_bf16mf4x7_mu(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, vbfloat16mf2x7_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x7_t test_vluxseg7ei16_v_bf16mf2x7_mu(vbool32_t vm, + vbfloat16mf2x7_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_bf16mf2x7_mu(vm, vd, rs1, rs2, vl); } -vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, vbfloat16m1x7_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x7_t test_vluxseg7ei16_v_bf16m1x7_mu(vbool16_t vm, + vbfloat16m1x7_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg7ei16_v_bf16m1x7_mu(vm, vd, rs1, rs2, vl); } diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c index ecee70c35..c11093d1a 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c @@ -7,50 +7,87 @@ #include -vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tu(vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_bf16mf4x8_tu(vd, rs1, rs2, vl); } -vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tu(vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_bf16mf2x8_tu(vd, rs1, rs2, vl); } -vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tu(vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_bf16m1x8_tu(vd, rs1, rs2, vl); } -vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tum(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_bf16mf4x8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) { +vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tum(vbool32_t vm, + vbfloat16mf2x8_t vd, + const __bf16 *rs1, + vuint16mf2_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_bf16mf2x8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) { +vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tum(vbool16_t vm, + vbfloat16m1x8_t vd, + const __bf16 *rs1, + vuint16m1_t rs2, size_t vl) { return __riscv_vluxseg8ei16_v_bf16m1x8_tum(vm, vd, rs1, rs2, vl); } -vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) { +vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_tumu(vbool64_t vm, + vbfloat16mf4x8_t vd, + const __bf16 *rs1, + vuint16mf4_t rs2, + size_t vl) { return __riscv_vluxseg8ei16_v_bf16mf4x8_tumu(vm, vd, rs1, rs2, vl); } 
-vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_tumu(vbool32_t vm,
+                                                    vbfloat16mf2x8_t vd,
+                                                    const __bf16 *rs1,
+                                                    vuint16mf2_t rs2,
+                                                    size_t vl) {
   return __riscv_vluxseg8ei16_v_bf16mf2x8_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) {
+vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_tumu(vbool16_t vm,
+                                                  vbfloat16m1x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg8ei16_v_bf16m1x8_tumu(vm, vd, rs1, rs2, vl);
 }
 
-vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm, vbfloat16mf4x8_t vd, const __bf16 *rs1, vuint16mf4_t rs2, size_t vl) {
+vbfloat16mf4x8_t test_vluxseg8ei16_v_bf16mf4x8_mu(vbool64_t vm,
+                                                  vbfloat16mf4x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf4_t rs2, size_t vl) {
   return __riscv_vluxseg8ei16_v_bf16mf4x8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm, vbfloat16mf2x8_t vd, const __bf16 *rs1, vuint16mf2_t rs2, size_t vl) {
+vbfloat16mf2x8_t test_vluxseg8ei16_v_bf16mf2x8_mu(vbool32_t vm,
+                                                  vbfloat16mf2x8_t vd,
+                                                  const __bf16 *rs1,
+                                                  vuint16mf2_t rs2, size_t vl) {
   return __riscv_vluxseg8ei16_v_bf16mf2x8_mu(vm, vd, rs1, rs2, vl);
 }
 
-vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_mu(vbool16_t vm, vbfloat16m1x8_t vd, const __bf16 *rs1, vuint16m1_t rs2, size_t vl) {
+vbfloat16m1x8_t test_vluxseg8ei16_v_bf16m1x8_mu(vbool16_t vm,
+                                                vbfloat16m1x8_t vd,
+                                                const __bf16 *rs1,
+                                                vuint16m1_t rs2, size_t vl) {
   return __riscv_vluxseg8ei16_v_bf16m1x8_mu(vm, vd, rs1, rs2, vl);
 }

From efb76bf81af16e203d7375b4503bcde7016cdaee Mon Sep 17 00:00:00 2001
From: Kito Cheng
Date: Tue, 30 Jul 2024 21:56:24 +0800
Subject: [PATCH 104/151] Update description for vxsat

---
 doc/rvv-intrinsic-spec.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/rvv-intrinsic-spec.adoc b/doc/rvv-intrinsic-spec.adoc
index 4685f1a53..7c9c5570e 100644
--- a/doc/rvv-intrinsic-spec.adoc
+++ b/doc/rvv-intrinsic-spec.adoc
@@ -109,9 +109,9 @@ NOTE: The RISC-V psABI cite:[riscv-cc-vector] states that `vxrm` is not preserve
 
 [NOTE]
 ====
-This version of the specification of does not cover the control of the vector fixed-point saturation flag (`vxsat`). Support for this feature is planned for a later version of the specification in a way that is compatible with existing fixed-point intrinsics. No mechanism to set or retrieve the value of `vxsat` is specified either.
+This specification does not provide support for manipulating the `vxsat` CSR. Since `vxsat` is not needed by a large majority of fixed-point code, we believe this specification is broadly useful as-is. Nevertheless, we expect that a future extension will define an additional set of fixed-point intrinsics that update `vxsat` in a specified manner, along with intrinsics to explicitly read and write `vxsat`. These new intrinsics would be interoperable with the intrinsics in this specification.
 
-The value of the `vxsat` after a fixed-point intrinsic is UNSPECIFIED. This includes the order in which the flag `vxsat` is updated in a program that executes a sequence of fixed-point intrinsics.
+The value of `vxsat` after a fixed-point intrinsic is UNSPECIFIED.
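+
+As an illustration of this rule, a minimal sketch follows (the wrapper function name is hypothetical; only the intrinsic itself is defined by this specification). Saturation must be detected from the result values, never by inspecting `vxsat` after the call:
+
+[,c]
+----
+#include <riscv_vector.h>
+
+// Saturating signed addition. vxsat is UNSPECIFIED once the intrinsic
+// returns, so callers must not branch on the vxsat CSR here; compare
+// result values against the saturation bounds instead.
+vint32m1_t saturating_add_i32m1(vint32m1_t vs2, vint32m1_t vs1, size_t vl) {
+  return __riscv_vsadd_vv_i32m1(vs2, vs1, vl);
+}
+----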
==== [[control-of-frm]] From d6ef4a9a0eba04434b17721a83d1736810fbe404 Mon Sep 17 00:00:00 2001 From: Kito Cheng Date: Tue, 30 Jul 2024 21:56:48 +0800 Subject: [PATCH 105/151] Add new parameter to function_group for adding description --- .../rvv_intrinsic_gen/generator.py | 32 ++++++++++++------- .../templates/binary_intcarry_template.py | 3 +- .../templates/binary_nop_template.py | 3 +- .../templates/binary_op_template.py | 3 +- .../templates/binary_wop_template.py | 3 +- .../templates/cmp_template.py | 3 +- .../templates/cvt_op_template.py | 3 +- .../get_set_diff_lmul_op_template.py | 3 +- .../templates/load_template.py | 3 +- .../templates/mac_template.py | 3 +- .../templates/mask_load_store_template.py | 3 +- .../templates/mask_template.py | 3 +- .../templates/misc_op_template.py | 3 +- .../templates/permute_template.py | 3 +- .../templates/reduction_template.py | 3 +- .../templates/reint_op_template.py | 3 +- .../templates/seg_load_template.py | 3 +- .../templates/seg_store_template.py | 3 +- .../templates/setvl_template.py | 3 +- .../templates/store_template.py | 3 +- .../templates/unary_op_template.py | 3 +- .../templates/vector_crypto_template.py | 3 +- 22 files changed, 62 insertions(+), 33 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index e3ac88487..bb905fb79 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -56,7 +56,7 @@ def func(self, inst_info, name, return_type, **kwargs): return NotImplemented def function_group(self, template, title, link, op_list, type_list, sew_list, - lmul_list, decorator_list): + lmul_list, decorator_list, description=None): # pylint: disable=unused-argument # NOTE: 'title' and 'link' are only used in DocGenerator and # OverloadedDocGenerator. Probably need some decoupling here. 
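A minimal sketch of the data flow this patch introduces (class bodies simplified; the real methods take the full argument lists shown in the hunks): `function_group` forwards `description` to the template's `render()`, which hands it back through `emit_function_group_description()`, a no-op everywhere except in the documentation generators.

[,python]
----
# Simplified, illustrative skeleton; not the real class definitions.
class Generator:
  def function_group(self, template, description=None, **kwargs):
    # Each template's render() now receives the description and calls
    # emit_function_group_description() before emitting intrinsics.
    template.render(G=self, description=description, **kwargs)

  def emit_function_group_description(self, description):
    pass  # Non-documentation generators ignore the description.

class DocGenerator(Generator):
  def write(self, text):
    print(text, end="")

  def emit_function_group_description(self, description):
    if description:
      self.write(f"{description}\n")
----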
@@ -66,7 +66,8 @@ def function_group(self, template, title, link, op_list, type_list, sew_list,
         type_list=type_list,
         sew_list=sew_list,
         lmul_list=lmul_list,
-        decorator_list=decorator_list)
+        decorator_list=decorator_list,
+        description=description)
 
   def start_group(self, group_name):
     raise NotImplementedError
@@ -294,6 +295,9 @@ def report_summary(self):
   def post_gen(self):
     raise NotImplementedError
 
+  def emit_function_group_description(self, description):
+    pass
+
 
 class DocGenerator(Generator):
   """
@@ -339,7 +343,7 @@ def inst_group_epilogue(self):
     return s
 
   def function_group(self, template, title, link, op_list, type_list, sew_list,
-                     lmul_list, decorator_list):
+                     lmul_list, decorator_list, description=None):
     self.write_title(title, link)
     if self.has_tail_policy and len(decorator_list) == 0:
       s = "Intrinsics here don't have a policy variant.\n"
@@ -350,7 +354,7 @@ def function_group(self, template, title, link, op_list, type_list, sew_list,
       return
 
     super().function_group(template, title, link, op_list, type_list, sew_list,
-                           lmul_list, decorator_list)
+                           lmul_list, decorator_list, description=description)
 
   def func(self, inst_info, name, return_type, **kwargs):
     name = Generator.func_name(name)
@@ -383,6 +387,9 @@ def start_group(self, group_name):
         os.path.join(self.folder, file_name), "w", encoding="utf-8")
     self.write(f"\n=== {group_name}\n")
 
+  def emit_function_group_description(self, description):
+    if description:
+      self.write(f"{description}\n")
+
 
 class OverloadedDocGenerator(DocGenerator):
   """
@@ -397,13 +404,13 @@ def write_title(self, text, link):
     self.fd.write("\n[[overloaded-" + link + "]]\n==== " + text + "\n")
 
   def function_group(self, template, title, link, op_list, type_list, sew_list,
-                     lmul_list, decorator_list):
+                     lmul_list, decorator_list, description=None):
     self.do_not_have_overloaded_variant = True
     for op in op_list:
       if Generator.is_support_overloaded(op):
        self.do_not_have_overloaded_variant = False
     super().function_group(template, title, link, op_list, type_list, sew_list,
-                           lmul_list, decorator_list)
+                           lmul_list, decorator_list, description=description)
 
   def func(self, inst_info, name, return_type, **kwargs):
     func_name = Generator.func_name(name)
@@ -658,7 +665,7 @@ def post_gen(self):
     self.fd.close()
 
   def function_group(self, template, title, link, op_list, type_list, sew_list,
-                     lmul_list,
decorator_list): + lmul_list, decorator_list, description=None): if self.has_tail_policy and len(decorator_list) == 0: return super().function_group(template, title, link, op_list, type_list, sew_list, - lmul_list, decorator_list) + lmul_list, decorator_list, description=description) @staticmethod def is_policy_func(inst_info): diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py index af00f7700..5e4881af0 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py @@ -26,9 +26,10 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py index a83f3e1eb..905666f74 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py @@ -32,9 +32,10 @@ def must_int_type(**kargs): # narrowing op template -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py index 3410a7d53..d126537ac 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py @@ -28,9 +28,10 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py index f6bb93f87..4b840d02d 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py @@ -26,9 +26,10 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
+ G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py index 7ad320038..410a709fb 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py @@ -26,9 +26,10 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py index 12ac356ab..ab3da1d23 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py @@ -28,11 +28,12 @@ from constants import ITYPES -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. # FIXME: Argument 'type_list' is unused but required for interface # consistency. We can prune it in the future. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py index 2f4f10638..bef8f07f6 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py @@ -50,9 +50,10 @@ def vset_constraint(**kargs): and int(kargs["LMUL"]) > int(kargs["SRC_LMUL"]) -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py index ee73dc9c0..683ae16e1 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py @@ -29,9 +29,10 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
+ G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py index 36b274e1a..e258480e1 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py @@ -27,9 +27,10 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py index e222b4635..d426732c7 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py @@ -26,11 +26,12 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. # FIXME: Argument 'lmul_list' is unused but required for interface # consistency. We can prune it in the future. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py index 9e10bbbb1..20997c508 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py @@ -25,9 +25,10 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py index 123f1bf11..66153e9c1 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py @@ -30,9 +30,10 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
+ G.emit_function_group_description(description) G.inst_group_prologue() # vundefine for non-tuple for decorator in decorator_list: diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py index ad79af2da..4f8a00be6 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py @@ -26,9 +26,10 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py index 8d66fe4a9..20fbf44cc 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py @@ -27,9 +27,10 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py index e10c5395a..8c78e5528 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py @@ -27,9 +27,10 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py index 5741b680d..93de5585d 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py @@ -32,9 +32,10 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
+ G.emit_function_group_description(description) G.inst_group_prologue() nf_list = range(2, 9) for decorator in decorator_list: diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py index a95f99fbd..3cb32c427 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py @@ -32,9 +32,10 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() nf_list = range(2, 9) for decorator in decorator_list: diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py index f7f7ad9ac..64c9286aa 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py @@ -25,11 +25,12 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. # FIXME: Argument 'type_list', 'decorator_list' is unused but required for # interface consistency. We can prune it in the future. + G.emit_function_group_description(description) G.inst_group_prologue() for args in prod(OP=op_list, SEW=sew_list, LMUL=lmul_list): type_helper = TypeHelper(**args) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py index 12d3136d2..6f299e299 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py @@ -28,9 +28,10 @@ from enums import MemType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. + G.emit_function_group_description(description) G.inst_group_prologue() for decorator in decorator_list: decorator.write_text_header(G) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py index 1253dbcc2..41023c94e 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py @@ -28,9 +28,10 @@ import copy -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
+  G.emit_function_group_description(description)
   G.inst_group_prologue()
 
   for decorator in decorator_list:
diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
index 02644ca97..60c866805 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py
@@ -74,9 +74,10 @@ def has_rs1_input(name):
   return name in has_rs1_input_inst_set
 
 
-def render(G, op_list, type_list, sew_list, lmul_list, decorator_list):
+def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description):
   #pylint: disable=invalid-name
   # FIXME: Renaming 'G' to 'g' all in once later.
+  G.emit_function_group_description(description)
   G.inst_group_prologue()
 
   for decorator in decorator_list:

From 2393cbcf427e3e2a2665eb12e66431d4bfd9128c Mon Sep 17 00:00:00 2001
From: Kito Cheng
Date: Tue, 30 Jul 2024 22:00:46 +0800
Subject: [PATCH 106/151] Add description to those intrinsic functions which
 may interact with vxsat

---
 rvv-intrinsic-generator/rvv_intrinsic_gen/inst.py | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/inst.py
index fe2b1b07f..cb969ef28 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/inst.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/inst.py
@@ -236,12 +236,15 @@ def gen(g):
   ####################################################################
   g.start_group("Vector Fixed-Point Arithmetic Intrinsics")
 
+  vxsat_description = "After executing an intrinsic in this section, " + \
+    "the `vxsat` CSR assumes an UNSPECIFIED value."
   g.function_group(
       binary_op_template,
       "Vector Single-Width Saturating Add and Subtract Intrinsics",
       "vector-single-width-saturating-add-and-subtract", ["sadd", "ssub"],
-      ITYPES, SEWS, LMULS, decorators.has_masking_maskedoff_policy)
+      ITYPES, SEWS, LMULS, decorators.has_masking_maskedoff_policy,
+      description=vxsat_description)
 
   g.function_group(binary_op_template,
                    "Vector Single-Width Averaging Add and Subtract Intrinsics",
@@ -255,7 +258,8 @@ def gen(g):
                    "Intrinsics",
                    "vector-single-width-fractional-multiply-with-rounding-and-" +
                    "saturation", ["smul"], ["int"], SEWS, LMULS,
-                   decorators.has_masking_maskedoff_policy_vxrm)
+                   decorators.has_masking_maskedoff_policy_vxrm,
+                   description=vxsat_description)
 
   g.function_group(binary_op_template,
                    "Vector Single-Width Scaling Shift Intrinsics",
@@ -266,7 +270,8 @@ def gen(g):
   g.function_group(binary_nop_template,
                    "Vector Narrowing Fixed-Point Clip Intrinsics",
                    "vector-narrowing-fixed-point-clip", ["nclip"], ITYPES,
-                   WSEWS, WLMULS, decorators.has_masking_maskedoff_policy_vxrm)
+                   WSEWS, WLMULS, decorators.has_masking_maskedoff_policy_vxrm,
+                   description=vxsat_description)
 
   ####################################################################
   g.start_group("Vector Floating-Point Intrinsics")

From a119021bbc25e0571f5bbe7aa2ad110c4ec5a054 Mon Sep 17 00:00:00 2001
From: Kito Cheng
Date: Fri, 2 Aug 2024 14:33:58 +0800
Subject: [PATCH 107/151] [Auto-gen] Update documents under ../auto-generated.
(make git-commit-autogen-doc) --- auto-generated/intrinsic_funcs.adoc | 3 +++ .../03_vector_fixed-point_arithmetic_intrinsics.adoc | 3 +++ auto-generated/overloaded_intrinsic_funcs.adoc | 3 +++ .../03_vector_fixed-point_arithmetic_intrinsics.adoc | 3 +++ auto-generated/policy_funcs/intrinsic_funcs.adoc | 3 +++ .../03_vector_fixed-point_arithmetic_intrinsics.adoc | 3 +++ auto-generated/policy_funcs/overloaded_intrinsic_funcs.adoc | 3 +++ .../03_vector_fixed-point_arithmetic_intrinsics.adoc | 3 +++ 8 files changed, 24 insertions(+) diff --git a/auto-generated/intrinsic_funcs.adoc b/auto-generated/intrinsic_funcs.adoc index 40ce7be66..4538c7f44 100644 --- a/auto-generated/intrinsic_funcs.adoc +++ b/auto-generated/intrinsic_funcs.adoc @@ -35506,6 +35506,7 @@ vuint64m8_t __riscv_vmv_v_x_u64m8(uint64_t rs1, size_t vl); [[vector-single-width-saturating-add-and-subtract]] ==== Vector Single-Width Saturating Add and Subtract Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -36904,6 +36905,7 @@ vuint64m8_t __riscv_vasubu_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, [[vector-single-width-fractional-multiply-with-rounding-and-saturation]] ==== Vector Single-Width Fractional Multiply with Rounding and SaturationIntrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -37502,6 +37504,7 @@ vuint64m8_t __riscv_vssrl_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, [[vector-narrowing-fixed-point-clip]] ==== Vector Narrowing Fixed-Point Clip Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- diff --git a/auto-generated/intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc b/auto-generated/intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc index 13bf79e2e..4be2c6e5c 100644 --- a/auto-generated/intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc +++ b/auto-generated/intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc @@ -3,6 +3,7 @@ [[vector-single-width-saturating-add-and-subtract]] ==== Vector Single-Width Saturating Add and Subtract Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -1401,6 +1402,7 @@ vuint64m8_t __riscv_vasubu_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, [[vector-single-width-fractional-multiply-with-rounding-and-saturation]] ==== Vector Single-Width Fractional Multiply with Rounding and SaturationIntrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -1999,6 +2001,7 @@ vuint64m8_t __riscv_vssrl_vx_u64m8_m(vbool8_t vm, vuint64m8_t vs2, size_t rs1, [[vector-narrowing-fixed-point-clip]] ==== Vector Narrowing Fixed-Point Clip Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- diff --git a/auto-generated/overloaded_intrinsic_funcs.adoc b/auto-generated/overloaded_intrinsic_funcs.adoc index 5e71ee363..0a04e29f9 100644 --- a/auto-generated/overloaded_intrinsic_funcs.adoc +++ b/auto-generated/overloaded_intrinsic_funcs.adoc @@ -28761,6 +28761,7 @@ vuint64m8_t __riscv_vmv_v(vuint64m8_t vs1, size_t vl); [[overloaded-vector-single-width-saturating-add-and-subtract]] ==== Vector Single-Width Saturating Add and Subtract Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. 
[,c] ---- @@ -29955,6 +29956,7 @@ vuint64m8_t __riscv_vasubu(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, [[overloaded-vector-single-width-fractional-multiply-with-rounding-and-saturation]] ==== Vector Single-Width Fractional Multiply with Rounding and SaturationIntrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -30499,6 +30501,7 @@ vuint64m8_t __riscv_vssrl(vbool8_t vm, vuint64m8_t vs2, size_t rs1, [[overloaded-vector-narrowing-fixed-point-clip]] ==== Vector Narrowing Fixed-Point Clip Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- diff --git a/auto-generated/overloaded_intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc b/auto-generated/overloaded_intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc index 59bcf0473..f79388153 100644 --- a/auto-generated/overloaded_intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc +++ b/auto-generated/overloaded_intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc @@ -3,6 +3,7 @@ [[overloaded-vector-single-width-saturating-add-and-subtract]] ==== Vector Single-Width Saturating Add and Subtract Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -1197,6 +1198,7 @@ vuint64m8_t __riscv_vasubu(vbool8_t vm, vuint64m8_t vs2, uint64_t rs1, [[overloaded-vector-single-width-fractional-multiply-with-rounding-and-saturation]] ==== Vector Single-Width Fractional Multiply with Rounding and SaturationIntrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -1741,6 +1743,7 @@ vuint64m8_t __riscv_vssrl(vbool8_t vm, vuint64m8_t vs2, size_t rs1, [[overloaded-vector-narrowing-fixed-point-clip]] ==== Vector Narrowing Fixed-Point Clip Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- diff --git a/auto-generated/policy_funcs/intrinsic_funcs.adoc b/auto-generated/policy_funcs/intrinsic_funcs.adoc index 4140c7dd5..4856fb540 100644 --- a/auto-generated/policy_funcs/intrinsic_funcs.adoc +++ b/auto-generated/policy_funcs/intrinsic_funcs.adoc @@ -61366,6 +61366,7 @@ vuint64m8_t __riscv_vmv_v_x_u64m8_tu(vuint64m8_t vd, uint64_t rs1, size_t vl); [[policy-variant-vector-single-width-saturating-add-and-subtract]] ==== Vector Single-Width Saturating Add and Subtract Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -65116,6 +65117,7 @@ vuint64m8_t __riscv_vasubu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, [[policy-variant-vector-single-width-fractional-multiply-with-rounding-and-saturation]] ==== Vector Single-Width Fractional Multiply with Rounding and SaturationIntrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -66607,6 +66609,7 @@ vuint64m8_t __riscv_vssrl_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, [[policy-variant-vector-narrowing-fixed-point-clip]] ==== Vector Narrowing Fixed-Point Clip Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. 
[,c] ---- diff --git a/auto-generated/policy_funcs/intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc b/auto-generated/policy_funcs/intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc index 5e8183297..7b1985e13 100644 --- a/auto-generated/policy_funcs/intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc +++ b/auto-generated/policy_funcs/intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc @@ -3,6 +3,7 @@ [[policy-variant-vector-single-width-saturating-add-and-subtract]] ==== Vector Single-Width Saturating Add and Subtract Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -3753,6 +3754,7 @@ vuint64m8_t __riscv_vasubu_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, [[policy-variant-vector-single-width-fractional-multiply-with-rounding-and-saturation]] ==== Vector Single-Width Fractional Multiply with Rounding and SaturationIntrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -5244,6 +5246,7 @@ vuint64m8_t __riscv_vssrl_vx_u64m8_mu(vbool8_t vm, vuint64m8_t vd, [[policy-variant-vector-narrowing-fixed-point-clip]] ==== Vector Narrowing Fixed-Point Clip Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- diff --git a/auto-generated/policy_funcs/overloaded_intrinsic_funcs.adoc b/auto-generated/policy_funcs/overloaded_intrinsic_funcs.adoc index 33d43db18..aa9efac4b 100644 --- a/auto-generated/policy_funcs/overloaded_intrinsic_funcs.adoc +++ b/auto-generated/policy_funcs/overloaded_intrinsic_funcs.adoc @@ -51495,6 +51495,7 @@ vuint64m8_t __riscv_vmv_v_tu(vuint64m8_t vd, uint64_t rs1, size_t vl); [[policy-variant-overloadedvector-single-width-saturating-add-and-subtract]] ==== Vector Single-Width Saturating Add and Subtract Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -54335,6 +54336,7 @@ vuint64m8_t __riscv_vasubu_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, [[policy-variant-overloadedvector-single-width-fractional-multiply-with-rounding-and-saturation]] ==== Vector Single-Width Fractional Multiply with Rounding and SaturationIntrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -55411,6 +55413,7 @@ vuint64m8_t __riscv_vssrl_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, [[policy-variant-overloadedvector-narrowing-fixed-point-clip]] ==== Vector Narrowing Fixed-Point Clip Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- diff --git a/auto-generated/policy_funcs/overloaded_intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc b/auto-generated/policy_funcs/overloaded_intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc index db2c21119..f1570468b 100644 --- a/auto-generated/policy_funcs/overloaded_intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc +++ b/auto-generated/policy_funcs/overloaded_intrinsic_funcs/03_vector_fixed-point_arithmetic_intrinsics.adoc @@ -3,6 +3,7 @@ [[policy-variant-overloadedvector-single-width-saturating-add-and-subtract]] ==== Vector Single-Width Saturating Add and Subtract Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. 
[,c] ---- @@ -2843,6 +2844,7 @@ vuint64m8_t __riscv_vasubu_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, [[policy-variant-overloadedvector-single-width-fractional-multiply-with-rounding-and-saturation]] ==== Vector Single-Width Fractional Multiply with Rounding and SaturationIntrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- @@ -3919,6 +3921,7 @@ vuint64m8_t __riscv_vssrl_mu(vbool8_t vm, vuint64m8_t vd, vuint64m8_t vs2, [[policy-variant-overloadedvector-narrowing-fixed-point-clip]] ==== Vector Narrowing Fixed-Point Clip Intrinsics +After executing an intrinsic in this section, the `vxsat` CSR assumes an UNSPECIFIED value. [,c] ---- From 639f9a61ca461388bb7b9fb4caacedd29d393a30 Mon Sep 17 00:00:00 2001 From: Kito Cheng Date: Fri, 2 Aug 2024 17:15:15 +0800 Subject: [PATCH 108/151] Apply yapf --- .../rvv_intrinsic_gen/generator.py | 114 +++++++++++++++--- .../rvv_intrinsic_gen/inst.py | 25 ++-- .../templates/binary_intcarry_template.py | 3 +- .../templates/binary_nop_template.py | 3 +- .../templates/binary_op_template.py | 3 +- .../templates/binary_wop_template.py | 3 +- .../templates/cmp_template.py | 3 +- .../templates/cvt_op_template.py | 3 +- .../get_set_diff_lmul_op_template.py | 3 +- .../templates/load_template.py | 3 +- .../templates/mac_template.py | 3 +- .../templates/mask_load_store_template.py | 3 +- .../templates/mask_template.py | 3 +- .../templates/misc_op_template.py | 3 +- .../templates/permute_template.py | 3 +- .../templates/reduction_template.py | 3 +- .../templates/reint_op_template.py | 3 +- .../templates/seg_load_template.py | 3 +- .../templates/seg_store_template.py | 3 +- .../templates/setvl_template.py | 3 +- .../templates/store_template.py | 3 +- .../templates/unary_op_template.py | 3 +- .../templates/vector_crypto_template.py | 3 +- 23 files changed, 153 insertions(+), 49 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index bb905fb79..8fbadc8e3 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -55,8 +55,16 @@ def inst_group_epilogue(self): def func(self, inst_info, name, return_type, **kwargs): return NotImplemented - def function_group(self, template, title, link, op_list, type_list, sew_list, - lmul_list, decorator_list, description=None): + def function_group(self, + template, + title, + link, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description=None): # pylint: disable=unused-argument # NOTE: 'title' and 'link' are only used in DocGenerator and # OverloadedDocGenerator. Probably need some decoupling here. 
@@ -342,8 +350,16 @@ def inst_group_epilogue(self): self.write(s) return s - def function_group(self, template, title, link, op_list, type_list, sew_list, - lmul_list, decorator_list, description=None): + def function_group(self, + template, + title, + link, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description=None): self.write_title(title, link) if self.has_tail_policy and len(decorator_list) == 0: s = "Intrinsics here don't have a policy variant.\n" @@ -353,8 +369,16 @@ def function_group(self, template, title, link, op_list, type_list, sew_list, self.write("Intrinsics here don't have an overloaded variant.\n") return - super().function_group(template, title, link, op_list, type_list, sew_list, - lmul_list, decorator_list, description=description) + super().function_group( + template, + title, + link, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description=description) def func(self, inst_info, name, return_type, **kwargs): name = Generator.func_name(name) @@ -389,7 +413,8 @@ def start_group(self, group_name): def emit_function_group_description(self, description): if description: - self.write(f"{description}\n"); + self.write(f"{description}\n") + class OverloadedDocGenerator(DocGenerator): """ @@ -403,14 +428,30 @@ def write_title(self, text, link): else: self.fd.write("\n[[overloaded-" + link + "]]\n==== " + text + "\n") - def function_group(self, template, title, link, op_list, type_list, sew_list, - lmul_list, decorator_list, description=None): + def function_group(self, + template, + title, + link, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description=None): self.do_not_have_overloaded_variant = True for op in op_list: if Generator.is_support_overloaded(op): self.do_not_have_overloaded_variant = False - super().function_group(template, title, link, op_list, type_list, sew_list, - lmul_list, decorator_list, description=description) + super().function_group( + template, + title, + link, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description=description) def func(self, inst_info, name, return_type, **kwargs): func_name = Generator.func_name(name) @@ -664,8 +705,16 @@ def post_gen(self): self.fd.write(dg_pattern_str) self.fd.close() - def function_group(self, template, title, link, op_list, type_list, sew_list, - lmul_list, decorator_list, description=None): + def function_group(self, + template, + title, + link, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description=None): self.test_file_names = op_list template.render( G=self, @@ -673,7 +722,8 @@ def function_group(self, template, title, link, op_list, type_list, sew_list, type_list=type_list, sew_list=sew_list, lmul_list=lmul_list, - decorator_list=decorator_list, description=description) + decorator_list=decorator_list, + description=description) class Grouper(Generator): @@ -719,8 +769,16 @@ def func(self, inst_info, name, return_type, **kwargs): def query_group_desc(self, func_name): return self.func_group[func_name] - def function_group(self, template, title, link, op_list, type_list, sew_list, - lmul_list, decorator_list, description=None): + def function_group(self, + template, + title, + link, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description=None): self.op_list = op_list self.groups[self.current_group].append(title) self.current_sub_group = title @@ -869,12 +927,28 @@ def inst_group_prologue(self): def inst_group_epilogue(self): return "" - def function_group(self, 
template, title, link, op_list, type_list, sew_list, - lmul_list, decorator_list, description=None): + def function_group(self, + template, + title, + link, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description=None): if self.has_tail_policy and len(decorator_list) == 0: return - super().function_group(template, title, link, op_list, type_list, sew_list, - lmul_list, decorator_list, description=description) + super().function_group( + template, + title, + link, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description=description) @staticmethod def is_policy_func(inst_info): diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/inst.py index cb969ef28..4acb6701f 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/inst.py @@ -237,13 +237,16 @@ def gen(g): #################################################################### g.start_group("Vector Fixed-Point Arithmetic Intrinsics") vxsat_description = "After executing an intrinsic in this section, " + \ - "the `vxsat` CSR assumes an UNSPECIFIED value."; + "the `vxsat` CSR assumes an UNSPECIFIED value." g.function_group( binary_op_template, "Vector Single-Width Saturating Add and Subtract Intrinsics", "vector-single-width-saturating-add-and-subtract", ["sadd", "ssub"], - ITYPES, SEWS, LMULS, decorators.has_masking_maskedoff_policy, + ITYPES, + SEWS, + LMULS, + decorators.has_masking_maskedoff_policy, description=vxsat_description) g.function_group(binary_op_template, @@ -257,7 +260,9 @@ def gen(g): "Vector Single-Width Fractional Multiply with Rounding and Saturation" + "Intrinsics", "vector-single-width-fractional-multiply-with-rounding-and-" + - "saturation", ["smul"], ["int"], SEWS, LMULS, + "saturation", ["smul"], ["int"], + SEWS, + LMULS, decorators.has_masking_maskedoff_policy_vxrm, description=vxsat_description) @@ -267,11 +272,15 @@ def gen(g): ITYPES, SEWS, LMULS, decorators.has_masking_maskedoff_policy_vxrm) - g.function_group(binary_nop_template, - "Vector Narrowing Fixed-Point Clip Intrinsics", - "vector-narrowing-fixed-point-clip", ["nclip"], ITYPES, - WSEWS, WLMULS, decorators.has_masking_maskedoff_policy_vxrm, - description=vxsat_description) + g.function_group( + binary_nop_template, + "Vector Narrowing Fixed-Point Clip Intrinsics", + "vector-narrowing-fixed-point-clip", ["nclip"], + ITYPES, + WSEWS, + WLMULS, + decorators.has_masking_maskedoff_policy_vxrm, + description=vxsat_description) #################################################################### g.start_group("Vector Floating-Point Intrinsics") diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py index 5e4881af0..12143face 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py @@ -26,7 +26,8 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
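  # Emit the optional group description (e.g. the vxsat note) before the
  # generated prototypes; this is a no-op when description is None.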
G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py index 905666f74..1ca61715e 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py @@ -32,7 +32,8 @@ def must_int_type(**kargs): # narrowing op template -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py index d126537ac..0f356f962 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py @@ -28,7 +28,8 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py index 4b840d02d..3e68d3c8c 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py @@ -26,7 +26,8 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py index 410a709fb..61361648f 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py @@ -26,7 +26,8 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py index ab3da1d23..eee862dd6 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py @@ -28,7 +28,8 @@ from constants import ITYPES -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. 
# FIXME: Argument 'type_list' is unused but required for interface diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py index bef8f07f6..06e980d0d 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py @@ -50,7 +50,8 @@ def vset_constraint(**kargs): and int(kargs["LMUL"]) > int(kargs["SRC_LMUL"]) -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py index 683ae16e1..574e0d882 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py @@ -29,7 +29,8 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py index e258480e1..490695b67 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py @@ -27,7 +27,8 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py index d426732c7..7981a07e3 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py @@ -26,7 +26,8 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. 
# FIXME: Argument 'lmul_list' is unused but required for interface diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py index 20997c508..8baed1b5a 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py @@ -25,7 +25,8 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py index 66153e9c1..fd36bc3d3 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py @@ -30,7 +30,8 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py index 4f8a00be6..ca0de2f30 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py @@ -26,7 +26,8 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py index 20fbf44cc..3f61bf497 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py @@ -27,7 +27,8 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py index 8c78e5528..a2e653880 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py @@ -27,7 +27,8 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. 
G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py index 93de5585d..9e52fd0f8 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py @@ -32,7 +32,8 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py index 3cb32c427..4ea46f031 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py @@ -32,7 +32,8 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py index 64c9286aa..43d2f3691 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py @@ -25,7 +25,8 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. # FIXME: Argument 'type_list', 'decorator_list' is unused but required for diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py index 6f299e299..524cf1134 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py @@ -28,7 +28,8 @@ from enums import MemType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py index 41023c94e..e69752b5c 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py @@ -28,7 +28,8 @@ import copy -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
G.emit_function_group_description(description) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py index 60c866805..429269e29 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py @@ -74,7 +74,8 @@ def has_rs1_input(name): return name in has_rs1_input_inst_set -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, description): +def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, + description): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) From 339bdb89b364ad0c5eb9ce8dbe928c1262938ed3 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 28 Aug 2024 22:30:53 -0700 Subject: [PATCH 109/151] makefile-api: add Zvk and BF16 extensions into march Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/Makefile.api | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/rvv-intrinsic-generator/Makefile.api b/rvv-intrinsic-generator/Makefile.api index 79ed230f9..94affa72c 100644 --- a/rvv-intrinsic-generator/Makefile.api +++ b/rvv-intrinsic-generator/Makefile.api @@ -14,8 +14,8 @@ # limitations under the License. ############################################################################### -CFLAGS?=-O -Werror=implicit-function-declaration -ARCH_FLAG?=-march=rv64gcv_zfh_zvfh +CFLAGS?=-O -Werror=implicit-function-declaration -menable-experimental-extensions +ARCH_FLAG?=-march=rv64gcv_zfh_zvbb_zvbc_zvfbfmin_zvfbfwma_zvfh_zvkng_zvksg_zvl512b EXTRA_CFLAGS?= TEST_MULTILIB:=rv32gcv-ilp32d,rv64gcv-lp64d From 67beb772e326862eab3ee657e9d9cdf609599050 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 28 Aug 2024 22:32:42 -0700 Subject: [PATCH 110/151] report: enable Zvk and BF16 grouping Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/rvv_intrinsic_gen/testing-report | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/testing-report b/rvv-intrinsic-generator/rvv_intrinsic_gen/testing-report index 14d314f92..68024cd3f 100755 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/testing-report +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/testing-report @@ -27,8 +27,10 @@ from junitparser import JUnitXml, TestSuite, TestCase, Skipped, Error, Failure sys.path = [os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))] + sys.path +import bfloat16_inst import generator import inst +import vector_crypto_inst class bcolors: HEADER = '\033[95m' @@ -287,6 +289,8 @@ def parse_args(args): if __name__ == "__main__": g = generator.Grouper() inst.gen(g) + bfloat16_inst.gen(g) + vector_crypto_inst.gen(g) stats = dict() for grp, subgrps in g.groups.items(): From 7dc47341dc9035698c84cca96790e6cbc731ff77 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 28 Aug 2024 22:33:11 -0700 Subject: [PATCH 111/151] makefile: add Zvk and BF16 API test targets Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/Makefile | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/rvv-intrinsic-generator/Makefile b/rvv-intrinsic-generator/Makefile index 45f165870..dbd9b1f82 100644 --- a/rvv-intrinsic-generator/Makefile +++ b/rvv-intrinsic-generator/Makefile @@ -498,6 +498,38 @@ run-policy-overloaded-compatible-api-testing: 
$(LEGACY_API_TESTS_DIR)/policy-overloaded-api-testing $(call run_tests,$(LEGACY_API_TESTS_DIR)/policy-overloaded-api-testing,${COMPILER}) +run-bfloat16-api-testing: + $(call check_defined, COMPILER, compiler (clang/gcc)) + $(call run_tests,${DIR}/bfloat16/api-testing,${COMPILER}) + +run-bfloat16-overloaded-api-testing: + $(call check_defined, COMPILER, compiler (clang/gcc)) + $(call run_tests,${DIR}/bfloat16/overloaded-api-testing,${COMPILER}) + +run-bfloat16-policy-api-testing: + $(call check_defined, COMPILER, compiler (clang/gcc)) + $(call run_tests,${DIR}/bfloat16/policy_funcs/api-testing,${COMPILER}) + +run-bfloat16-policy-overloaded-api-testing: + $(call check_defined, COMPILER, compiler (clang/gcc)) + $(call run_tests,${DIR}/bfloat16/policy_funcs/overloaded-api-testing,${COMPILER}) + +run-vector-crypto-api-testing: + $(call check_defined, COMPILER, compiler (clang/gcc)) + $(call run_tests,${DIR}/vector-crypto/api-testing,${COMPILER}) + +run-vector-crypto-overloaded-api-testing: + $(call check_defined, COMPILER, compiler (clang/gcc)) + $(call run_tests,${DIR}/vector-crypto/overloaded-api-testing,${COMPILER}) + +run-vector-crypto-policy-api-testing: + $(call check_defined, COMPILER, compiler (clang/gcc)) + $(call run_tests,${DIR}/vector-crypto/policy_funcs/api-testing,${COMPILER}) + +run-vector-crypto-policy-overloaded-api-testing: + $(call check_defined, COMPILER, compiler (clang/gcc)) + $(call run_tests,${DIR}/vector-crypto/policy_funcs/overloaded-api-testing,${COMPILER}) + # A parameterized target to run testing through testing-report. # Makes target 'test' of ${API_MAKEFILE} with ${TESTING_REPORT_SCRIPT} under # ${API_DIR}. Requires ${API_DIR}, ${API_MAKEFILE}, ${TESTING_REPORT_SCRIPT} From e52a10df898bd2a3974fa2d3e4a88e62573ce823 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 28 Aug 2024 22:52:18 -0700 Subject: [PATCH 112/151] github: enable clang BF16 and Zvk API tests in CI Signed-off-by: Jerry Zhang Jian --- .github/workflows/clang-compilation.yml | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/.github/workflows/clang-compilation.yml b/.github/workflows/clang-compilation.yml index ad6b0efa6..decd40140 100644 --- a/.github/workflows/clang-compilation.yml +++ b/.github/workflows/clang-compilation.yml @@ -37,13 +37,13 @@ jobs: ls bin - name: Run compilation test, non-overloaded intrinsics (default (TAMA) policy) run: | - make -C rvv-intrinsic-generator run-api-testing COMPILER=$(pwd)/../llvm-project/build/bin/clang EXTRA_CFLAGS="-target riscv64" + make -C rvv-intrinsic-generator run-api-testing run-bfloat16-api-testing run-vector-crypto-api-testing COMPILER=$(pwd)/../llvm-project/build/bin/clang EXTRA_CFLAGS="-target riscv64" - name: Run compilation test, overloaded intrinsics (default (TAMA) policy) run: | - make -C rvv-intrinsic-generator run-overloaded-api-testing COMPILER=$(pwd)/../llvm-project/build/bin/clang EXTRA_CFLAGS="-target riscv64" + make -C rvv-intrinsic-generator run-overloaded-api-testing run-bfloat16-overloaded-api-testing run-vector-crypto-overloaded-api-testing COMPILER=$(pwd)/../llvm-project/build/bin/clang EXTRA_CFLAGS="-target riscv64" - name: Run compilation test, non-overloaded intrinsics (non-default policy) run: | - make -C rvv-intrinsic-generator run-policy-api-testing COMPILER=$(pwd)/../llvm-project/build/bin/clang EXTRA_CFLAGS="-target riscv64" + make -C rvv-intrinsic-generator run-policy-api-testing run-bfloat16-policy-api-testing run-vector-crypto-policy-api-testing 
COMPILER=$(pwd)/../llvm-project/build/bin/clang EXTRA_CFLAGS="-target riscv64"
       - name: Run compilation test, overloaded intrinsics (non-default policy)
         run: |
-          make -C rvv-intrinsic-generator run-policy-overloaded-api-testing COMPILER=$(pwd)/../llvm-project/build/bin/clang EXTRA_CFLAGS="-target riscv64"
+          make -C rvv-intrinsic-generator run-policy-overloaded-api-testing run-bfloat16-policy-overloaded-api-testing run-vector-crypto-policy-overloaded-api-testing COMPILER=$(pwd)/../llvm-project/build/bin/clang EXTRA_CFLAGS="-target riscv64"

From 21d7e5bc58597a702aa478fcac0c3dc7f7879d1d Mon Sep 17 00:00:00 2001
From: Kito Cheng
Date: Wed, 4 Sep 2024 09:13:58 +0800
Subject: [PATCH 113/151] Import CONTRIBUTING.md from docs-spec-template

Sync with the docs-spec-template; this is requested by the ratification
flow, which checks the document structure.
---
 CONTRIBUTING.md | 58 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)
 create mode 100644 CONTRIBUTING.md

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 000000000..1d98c72b6
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,58 @@
+# Contribution Guidelines
+
+As an open-source project, we appreciate and encourage community members to submit patches directly to the project. To maintain a well-organized development environment, we have established standards and methods for submitting changes. This document outlines the process for submitting patches to the project, ensuring that your contribution is swiftly incorporated into the codebase.
+
+# Licensing
+
+Licensing is crucial for open-source projects, as it guarantees that the software remains available under the conditions specified by the author.
+
+This project employs the Creative Commons Attribution 4.0 International license, which can be found in the LICENSE file within the project's repository.
+
+Licensing defines the rights granted to you as an author by the copyright holder. It is essential for contributors to fully understand and accept these licensing rights. In some cases, the copyright holder may not be the contributor, such as when the contributor is working on behalf of a company.
+
+# Developer Certificate of Origin (DCO)
+To uphold licensing criteria and demonstrate good faith, this project mandates adherence to the Developer Certificate of Origin (DCO) process.
+
+The DCO is an attestation appended to every contribution from each author. In the commit message of the contribution (explained in greater detail later in this document), the author adds a Signed-off-by statement, thereby accepting the DCO.
+
+When an author submits a patch, they affirm that they possess the right to submit the patch under the designated license. The DCO agreement is displayed below and at https://developercertificate.org.
+ + +Developer's Certificate of Origin 1.1 + +By making a contribution to this project, I certify that: + +(a) The contribution was created in whole or in part by me and I + have the right to submit it under the open source license + indicated in the file; or + +(b) The contribution is based upon previous work that, to the best + of my knowledge, is covered under an appropriate open source + license and I have the right under that license to submit that + work with modifications, whether created in whole or in part + by me, under the same open source license (unless I am + permitted to submit under a different license), as indicated + in the file; or + +(c) The contribution was provided directly to me by some other + person who certified (a), (b), or (c), and I have not modified + it. + +(d) I understand and agree that this project and the contribution + are public and that a record of the contribution (including all + personal information I submit with it, including my sign-off) is + maintained indefinitely and may be redistributed consistent with + this project or the open source license(s) involved. + +# DCO Sign-Off Methods +The DCO necessitates the inclusion of a sign-off message in the following format for each commit within the pull request: + +Signed-off-by: Stephano Cetola + +Please use your real name in the sign-off message. + +You can manually add the DCO text to your commit body or include either -s or --signoff in your standard Git commit commands. If you forget to incorporate the sign-off, you can also amend a previous commit with the sign-off by executing git commit --amend -s. If you have already pushed your changes to GitHub, you will need to force push your branch afterward using git push -f. + +Note: + +Ensure that the name and email address associated with your GitHub account match the name and email address in the Signed-off-by line of your commit message. From e648d05ab2534005708f8ccb2d87c4d3b15370eb Mon Sep 17 00:00:00 2001 From: xuezheng Date: Wed, 14 Aug 2024 08:34:08 +0800 Subject: [PATCH 114/151] Fix matmul example in the docs. --- doc/rvv-intrinsic-examples.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/rvv-intrinsic-examples.adoc b/doc/rvv-intrinsic-examples.adoc index 6de62ca69..0d16c4f0c 100644 --- a/doc/rvv-intrinsic-examples.adoc +++ b/doc/rvv-intrinsic-examples.adoc @@ -103,7 +103,7 @@ void matmul_rvv(double *a, double *b, double *c, int n, int m, int p) { // Set accumulator to zero. 
vfloat64m1_t vec_s = __riscv_vfmv_v_f_f64m1(0.0, vlmax); vfloat64m1_t vec_zero = __riscv_vfmv_v_f_f64m1(0.0, vlmax); - for (size_t vl; k > 0; k -= vl) { + for (size_t vl; k > 0; k -= vl, ptr_a += vl, ptr_b += vl * m) { vl = __riscv_vsetvl_e64m1(k); // Load row a[i][k..k+vl) From 6cfdf5f72f3fc0d67027f43ba9f90827139564fe Mon Sep 17 00:00:00 2001 From: Kito Cheng Date: Mon, 9 Sep 2024 14:57:16 +0800 Subject: [PATCH 115/151] Bump actions/upload-artifact to v4 --- .github/workflows/build-pdf.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/build-pdf.yml b/.github/workflows/build-pdf.yml index bde60768f..117b6f35e 100644 --- a/.github/workflows/build-pdf.yml +++ b/.github/workflows/build-pdf.yml @@ -12,7 +12,7 @@ jobs: build: runs-on: ubuntu-latest steps: - - uses: actions/checkout@v2 + - uses: actions/checkout@v4 with: submodules: recursive - name: Install packages From 0686039fe3075cdd7750d571c5c7fa851b390658 Mon Sep 17 00:00:00 2001 From: Kito Cheng Date: Mon, 9 Sep 2024 14:57:16 +0800 Subject: [PATCH 116/151] Bump actions/upload-artifact to v4 --- .github/workflows/build-pdf.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/build-pdf.yml b/.github/workflows/build-pdf.yml index 117b6f35e..b0a75da6f 100644 --- a/.github/workflows/build-pdf.yml +++ b/.github/workflows/build-pdf.yml @@ -36,7 +36,7 @@ jobs: runs-on: ubuntu-latest steps: - name: Download artifact - uses: actions/download-artifact@v2 + uses: actions/download-artifact@v4 with: name: v-intrinsic-spec.pdf path: ./doc/ From 9e4fae9c12570eebae86fd0739fd9f774cd1b5cd Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Fri, 23 Aug 2024 00:59:03 -0700 Subject: [PATCH 117/151] enum: store required extension info - Add a new field to store required extensions into a list - Also added API functions to access/add/remove required extensions Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/enums.py | 17 ++++++++++++++++- 1 file changed, 16 insertions(+), 1 deletion(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/enums.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/enums.py index d0ade6014..9eab7be36 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/enums.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/enums.py @@ -145,7 +145,8 @@ def __init__(self, inst_type=InstType.UNKNOWN, mem_type=MemType.NO_MEM, extra_attr=ExtraAttr.NO_ATTR, - NF=1): + NF=1, + required_ext=None): #pylint: disable=invalid-name self.SEW = SEW self.LMUL = LMUL @@ -154,6 +155,9 @@ def __init__(self, self.mem_type = mem_type self.extra_attr = extra_attr self.NF = NF + if required_ext is None: + required_ext = [] + self.required_ext = sorted(required_ext) def load_p(self): return self.mem_type == MemType.LOAD @@ -185,3 +189,14 @@ def get(args, # For mask operation return InstInfo(0, 0, args["OP"], inst_type, mem_type, extra_attr | decorator.flags) + + def get_required_exts(self) -> list: + return sorted(self.required_ext) + + def add_required_ext(self, ext: str) -> None: + if ext not in self.required_ext: + self.required_ext.append(ext) + + def remove_required_ext(self, ext: str) -> None: + if ext in self.required_ext: + self.required_ext.remove(ext) From ec75ba4140e4394710bd91f29b8cce6cb63c8f4d Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Fri, 30 Aug 2024 01:59:57 -0700 Subject: [PATCH 118/151] enum: construct InstInfo with required extensions Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/enums.py | 44 +++++++++++++++---- 1 file changed, 35 insertions(+), 
9 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/enums.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/enums.py index 9eab7be36..bedc390f4 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/enums.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/enums.py @@ -173,22 +173,48 @@ def get(args, decorator, inst_type, mem_type=MemType.NO_MEM, - extra_attr=ExtraAttr.NO_ATTR): + extra_attr=ExtraAttr.NO_ATTR, + required_ext=None): if decorator is None: # vsetvl and vsetvlmax - return InstInfo(args["SEW"], args["LMUL"], args["OP"], inst_type, - mem_type, extra_attr) + return InstInfo( + args["SEW"], + args["LMUL"], + args["OP"], + inst_type, + mem_type, + extra_attr, + required_ext=required_ext) elif "SEW" in args: if "NF" in args: - return InstInfo(args["SEW"], args["LMUL"], args["OP"], inst_type, - mem_type, extra_attr | decorator.flags, args["NF"]) + return InstInfo( + args["SEW"], + args["LMUL"], + args["OP"], + inst_type, + mem_type, + extra_attr | decorator.flags, + args["NF"], + required_ext=required_ext) else: - return InstInfo(args["SEW"], args["LMUL"], args["OP"], inst_type, - mem_type, extra_attr | decorator.flags) + return InstInfo( + args["SEW"], + args["LMUL"], + args["OP"], + inst_type, + mem_type, + extra_attr | decorator.flags, + required_ext=required_ext) else: # For mask operation - return InstInfo(0, 0, args["OP"], inst_type, mem_type, - extra_attr | decorator.flags) + return InstInfo( + 0, + 0, + args["OP"], + inst_type, + mem_type, + extra_attr | decorator.flags, + required_ext=required_ext) def get_required_exts(self) -> list: return sorted(self.required_ext) From 8ff77279764d3f6ea5852b71355a55edb3531e4a Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 28 Aug 2024 00:11:45 -0700 Subject: [PATCH 119/151] generator: add support to store required extension list Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/generator.py | 36 ++++++++++++------- 1 file changed, 24 insertions(+), 12 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 8fbadc8e3..7335a7d88 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -64,7 +64,8 @@ def function_group(self, sew_list, lmul_list, decorator_list, - description=None): + description=None, + required_ext_list=None): # pylint: disable=unused-argument # NOTE: 'title' and 'link' are only used in DocGenerator and # OverloadedDocGenerator. Probably need some decoupling here. 
@@ -75,7 +76,8 @@ def function_group(self, sew_list=sew_list, lmul_list=lmul_list, decorator_list=decorator_list, - description=description) + description=description, + required_ext_list=required_ext_list) def start_group(self, group_name): raise NotImplementedError @@ -359,7 +361,8 @@ def function_group(self, sew_list, lmul_list, decorator_list, - description=None): + description=None, + required_ext_list=None): self.write_title(title, link) if self.has_tail_policy and len(decorator_list) == 0: s = "Intrinsics here don't have a policy variant.\n" @@ -378,7 +381,8 @@ def function_group(self, sew_list, lmul_list, decorator_list, - description=description) + description=description, + required_ext_list=required_ext_list) def func(self, inst_info, name, return_type, **kwargs): name = Generator.func_name(name) @@ -437,7 +441,8 @@ def function_group(self, sew_list, lmul_list, decorator_list, - description=None): + description=None, + required_ext_list=None): self.do_not_have_overloaded_variant = True for op in op_list: if Generator.is_support_overloaded(op): @@ -451,7 +456,8 @@ def function_group(self, sew_list, lmul_list, decorator_list, - description=description) + description=description, + required_ext_list=required_ext_list) def func(self, inst_info, name, return_type, **kwargs): func_name = Generator.func_name(name) @@ -714,7 +720,8 @@ def function_group(self, sew_list, lmul_list, decorator_list, - description=None): + description=None, + required_ext_list=None): self.test_file_names = op_list template.render( G=self, @@ -723,7 +730,8 @@ def function_group(self, sew_list=sew_list, lmul_list=lmul_list, decorator_list=decorator_list, - description=description) + description=description, + required_ext_list=required_ext_list) class Grouper(Generator): @@ -778,7 +786,8 @@ def function_group(self, sew_list, lmul_list, decorator_list, - description=None): + description=None, + required_ext_list=None): self.op_list = op_list self.groups[self.current_group].append(title) self.current_sub_group = title @@ -789,7 +798,8 @@ def function_group(self, sew_list=sew_list, lmul_list=lmul_list, decorator_list=decorator_list, - description=description) + description=description, + required_ext_list=required_ext_list) class CompatibleHeaderGenerator(Generator): @@ -936,7 +946,8 @@ def function_group(self, sew_list, lmul_list, decorator_list, - description=None): + description=None, + required_ext_list=None): if self.has_tail_policy and len(decorator_list) == 0: return super().function_group( @@ -948,7 +959,8 @@ def function_group(self, sew_list, lmul_list, decorator_list, - description=description) + description=description, + required_ext_list=required_ext_list) @staticmethod def is_policy_func(inst_info): From 4df1b974dc07086caff05c1468ce1daaa27c95e7 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Fri, 23 Aug 2024 01:01:02 -0700 Subject: [PATCH 120/151] vector-crypto: add required extension information Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/vector_crypto_inst.py | 56 +++++++++++++------ 1 file changed, 38 insertions(+), 18 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py index 7635912e1..338bc0dbf 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/vector_crypto_inst.py @@ -20,7 +20,8 @@ def gen(g): UITYPE, SEWS, LMULS, - decorators.has_masking_maskedoff_policy) + decorators.has_masking_maskedoff_policy, 
+ required_ext_list=["zvbb"]) g.function_group( vector_crypto_template, @@ -30,7 +31,8 @@ def gen(g): UITYPE, SEWS, LMULS, - decorators.has_masking_maskedoff_policy) + decorators.has_masking_maskedoff_policy, + required_ext_list=["zvbb"]) g.function_group( vector_crypto_template, @@ -40,7 +42,8 @@ def gen(g): UITYPE, SEWS, LMULS, - decorators.has_masking_maskedoff_policy) + decorators.has_masking_maskedoff_policy, + required_ext_list=["zvbb"]) g.function_group( vector_crypto_template, @@ -50,7 +53,8 @@ def gen(g): UITYPE, SEWS, LMULS, - decorators.has_masking_maskedoff_policy) + decorators.has_masking_maskedoff_policy, + required_ext_list=["zvbb"]) g.function_group( vector_crypto_template, @@ -60,7 +64,8 @@ def gen(g): UITYPE, SEWS, LMULS, - decorators.has_masking_maskedoff_policy) + decorators.has_masking_maskedoff_policy, + required_ext_list=["zvkb"]) g.function_group( vector_crypto_template, @@ -70,7 +75,8 @@ def gen(g): UITYPE, WSEWS, WLMULS, - decorators.has_masking_maskedoff_policy) + decorators.has_masking_maskedoff_policy, + required_ext_list=["zvbb"]) #################################################################### @@ -84,7 +90,8 @@ def gen(g): UITYPE, [64], LMULS, - decorators.has_masking_maskedoff_policy) + decorators.has_masking_maskedoff_policy, + required_ext_list=["zvbc"]) #################################################################### @@ -98,7 +105,8 @@ def gen(g): UITYPE, [32], LMULS, - decorators.has_no_masking_policy) + decorators.has_no_masking_policy, + required_ext_list=["zvkg"]) #################################################################### @@ -112,7 +120,8 @@ def gen(g): UITYPE, [32], LMULS, - decorators.has_no_masking_policy) + decorators.has_no_masking_policy, + required_ext_list=["zvkned"]) g.function_group( vector_crypto_template, @@ -122,7 +131,8 @@ def gen(g): UITYPE, [32], LMULS, - decorators.has_no_masking_policy) + decorators.has_no_masking_policy, + required_ext_list=["zvkned"]) g.function_group( vector_crypto_template, @@ -132,7 +142,8 @@ def gen(g): UITYPE, [32], LMULS, - decorators.has_no_masking_policy) + decorators.has_no_masking_policy, + required_ext_list=["zvkned"]) g.function_group( vector_crypto_template, @@ -142,12 +153,15 @@ def gen(g): UITYPE, [32], LMULS, - decorators.has_no_masking_policy) + decorators.has_no_masking_policy, + required_ext_list=["zvkned"]) #################################################################### g.start_group("Zvknh - NIST Suite: Vector SHA-2 Secure Hash") + # We need extra condition to check if zvknhb is required + # If SEW=64, then zvknhb is required g.function_group( vector_crypto_template, "Vector SHA-2 message schedule", @@ -156,7 +170,8 @@ def gen(g): UITYPE, [32, 64], LMULS, - decorators.has_no_masking_policy) + decorators.has_no_masking_policy, + required_ext_list=["zvknha"]) g.function_group( vector_crypto_template, @@ -166,7 +181,8 @@ def gen(g): UITYPE, [32, 64], LMULS, - decorators.has_no_masking_policy) + decorators.has_no_masking_policy, + required_ext_list=["zvknha"]) #################################################################### @@ -180,7 +196,8 @@ def gen(g): UITYPE, [32], LMULS, - decorators.has_no_masking_policy) + decorators.has_no_masking_policy, + required_ext_list=["zvksed"]) g.function_group( vector_crypto_template, @@ -190,7 +207,8 @@ def gen(g): UITYPE, [32], LMULS, - decorators.has_no_masking_policy) + decorators.has_no_masking_policy, + required_ext_list=["zvksed"]) #################################################################### @@ -204,7 +222,8 @@ def gen(g): 
UITYPE, [32], LMULS, - decorators.has_no_masking_policy) + decorators.has_no_masking_policy, + required_ext_list=["zvksh"]) g.function_group( vector_crypto_template, @@ -214,7 +233,8 @@ def gen(g): UITYPE, [32], LMULS, - decorators.has_no_masking_policy) + decorators.has_no_masking_policy, + required_ext_list=["zvksh"]) #################################################################### From 97237c98ad6ab03d6618a7739f042461b412d453 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 28 Aug 2024 00:12:55 -0700 Subject: [PATCH 121/151] vector-crypto-template: store required extension list info and handle Zvknhb Signed-off-by: Jerry Zhang Jian --- .../templates/vector_crypto_template.py | 85 ++++++++++++++++--- 1 file changed, 71 insertions(+), 14 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py index 429269e29..a655096ee 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/vector_crypto_template.py @@ -74,8 +74,14 @@ def has_rs1_input(name): return name in has_rs1_input_inst_set -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) @@ -89,24 +95,48 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, for operand_mnemonic in operand_mnemonic_dict[op]: if operand_mnemonic in ("vv", "vs"): if op == "vwsll": - inst_info = InstInfo.get(args, decorator, InstType.WVV, - ExtraAttr.NO_ATTR) + inst_info = InstInfo.get( + args, + decorator, + InstType.WVV, + ExtraAttr.NO_ATTR, + required_ext=required_ext_list) else: - inst_info = InstInfo.get(args, decorator, InstType.VV, - ExtraAttr.NO_ATTR) + inst_info = InstInfo.get( + args, + decorator, + InstType.VV, + ExtraAttr.NO_ATTR, + required_ext=required_ext_list) elif operand_mnemonic == "vx": if op == "vwsll": - inst_info = InstInfo.get(args, decorator, InstType.WVX, - ExtraAttr.NO_ATTR) + inst_info = InstInfo.get( + args, + decorator, + InstType.WVX, + ExtraAttr.NO_ATTR, + required_ext=required_ext_list) else: - inst_info = InstInfo.get(args, decorator, InstType.VX, - ExtraAttr.NO_ATTR) + inst_info = InstInfo.get( + args, + decorator, + InstType.VX, + ExtraAttr.NO_ATTR, + required_ext=required_ext_list) elif operand_mnemonic == "vi": - inst_info = InstInfo.get(args, decorator, InstType.VI, - ExtraAttr.NO_ATTR) + inst_info = InstInfo.get( + args, + decorator, + InstType.VI, + ExtraAttr.NO_ATTR, + required_ext=required_ext_list) elif operand_mnemonic == "v": - inst_info = InstInfo.get(args, decorator, InstType.V, - ExtraAttr.NO_ATTR) + inst_info = InstInfo.get( + args, + decorator, + InstType.V, + ExtraAttr.NO_ATTR, + required_ext=required_ext_list) else: assert False, "Unreachable, unrecognized mnemonic" @@ -151,6 +181,33 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, kwargs["vl"] = type_helper.size_t + lmul_num = 2**(lmul_list.index(args["LMUL"]) - 3) + if int(args["SEW"] / lmul_num) == 64: + inst_info.add_required_ext("zve64x") + else: + inst_info.add_required_ext("zve32x") + # Add Zvl constraint + # If zvkg, zvkned, zvknha, zvknhb, zvksed, zvksh in required_ext_list, + # then add Zvl 
constraint by checking if LMUL * VLEN >= EGW + if any(ext in inst_info.get_required_exts() for ext in + ["zvkg", "zvkned", "zvknha", "zvknhb", "zvksed", "zvksh"]): + # EGW = EGS * EEW(SEW) + # For SM3 instruction group (Zvksh), EGS = 8, otherwise EGS = 4 + if op in ["vsm3me", "vsm3c"]: + EGW = int(8 * args["SEW"]) + else: + EGW = int(4 * args["SEW"]) + required_VLEN = int(EGW / lmul_num) + if required_VLEN >= 32: + inst_info.add_required_ext(f"zvl{int(EGW / lmul_num)}b") + # If SEW == 64, zvknhb is required. + # Zvknhb also requires zve64x + # Note that zvknhb is mutually exclusive with zvknha + if op in ["vsha2ms", "vsha2ch", "vsha2cl"] and args["SEW"] == 64: + inst_info.remove_required_ext("zvknha") + inst_info.add_required_ext("zvknhb") + inst_info.add_required_ext("zve64x") + if operand_mnemonic == "vs": starting_from_lmul_index = lmul_list.index(args["LMUL"]) # print(starting_from_lmul_index) From df19cc096d81b5a2bcdcb3b1439198143fde0860 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 28 Aug 2024 00:16:14 -0700 Subject: [PATCH 122/151] generator: dynamically generate LLVM API test header Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/generator.py | 26 ++++++++++++++++--- 1 file changed, 23 insertions(+), 3 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 7335a7d88..835eb8a7b 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -510,8 +510,18 @@ def inst_group_prologue(self): def inst_group_epilogue(self): return "" - def write_file_header(self, has_float_type, has_bfloat16_type, name): + def write_file_header(self, has_float_type, has_bfloat16_type, requires_exts): #pylint: disable=line-too-long + dynamic_llvm_header_prologue = r"""// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +""" + + dynamic_llvm_header_epilogue = r"""// RUN: -target-feature +experimental \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +""" + int_llvm_header = r"""// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ @@ -540,6 +550,13 @@ def write_file_header(self, has_float_type, has_bfloat16_type, name): """) + # Dynamic header is used when the requires_exts is not empty. 
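+    # Each entry in requires_exts is emitted as its own
+    # '// RUN: -target-feature +<ext>' line between the prologue and epilogue.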
+ if requires_exts: + dynamic_llvm_header = dynamic_llvm_header_prologue + for ext in requires_exts: + dynamic_llvm_header += f"// RUN: -target-feature +{ext} \\\n" + dynamic_llvm_header += dynamic_llvm_header_epilogue + vector_crypto_llvm_header = r"""// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ // RUN: -target-feature +zvbb \ @@ -567,7 +584,9 @@ def is_vector_crypto_inst(name): return False if self.toolchain_type == ToolChainType.LLVM: - if has_bfloat16_type: + if requires_exts: + self.fd.write(dynamic_llvm_header) + elif has_bfloat16_type: self.fd.write(bfloat16_llvm_header) elif is_vector_crypto_inst(name): self.fd.write(vector_crypto_llvm_header) @@ -651,7 +670,8 @@ def func(self, inst_info, name, return_type, **kwargs): has_float_type = True if header: - self.write_file_header(has_float_type, has_bfloat16_type, name) + self.write_file_header(has_float_type, has_bfloat16_type, + inst_info.get_required_exts()) def output_call_arg(arg_name, type_name): if ((name.startswith("vget") or name.startswith("vset")) \ From f62239f7b8eca16f4c66f08dcfa641098e545623 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 28 Aug 2024 00:42:05 -0700 Subject: [PATCH 123/151] generator: use dynamic_llvm_header for vector crypto intrinsics Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/generator.py | 28 ------------------- 1 file changed, 28 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 835eb8a7b..2aea549df 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -557,39 +557,11 @@ def write_file_header(self, has_float_type, has_bfloat16_type, requires_exts): dynamic_llvm_header += f"// RUN: -target-feature +{ext} \\\n" dynamic_llvm_header += dynamic_llvm_header_epilogue - vector_crypto_llvm_header = r"""// REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ -// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ -// RUN: FileCheck --check-prefix=CHECK-RV64 %s - -""" - - def is_vector_crypto_inst(name): - vector_crypto_inst = [ - "vandn", "vbrev", "vbrev8", "vrev8", "vclz", "vctz", "vrol", "vror", - "vwsll", "vclmul", "vclmulh", "vghsh", "vgmul", "vaesef", "vaesem", - "vaesdf", "vaesdm", "vaeskf1", "vaeskf2", "vaesz", "vsha2ms", - "vsha2ch", "vsha2cl", "vsm4k", "vsm4r", "vsm3me", "vsm3c" - ] - for inst in vector_crypto_inst: - if inst in name: - return True - return False - if self.toolchain_type == ToolChainType.LLVM: if requires_exts: self.fd.write(dynamic_llvm_header) elif has_bfloat16_type: self.fd.write(bfloat16_llvm_header) - elif is_vector_crypto_inst(name): - self.fd.write(vector_crypto_llvm_header) elif has_float_type: self.fd.write(float_llvm_header) else: From 822458d2f48dfff0fca9a991a60edab5cf8cb310 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Wed, 28 Aug 2024 01:33:15 -0700 Subject: [PATCH 124/151] template: add required_ext_list into render API Signed-off-by: Jerry Zhang Jian --- .../templates/binary_intcarry_template.py | 28 ++++++++---- .../templates/binary_nop_template.py | 13 ++++-- 
.../templates/binary_op_template.py | 30 +++++++++---- .../templates/binary_wop_template.py | 28 ++++++++---- .../templates/cmp_template.py | 13 ++++-- .../templates/cvt_op_template.py | 25 ++++++++--- .../get_set_diff_lmul_op_template.py | 28 +++++++++--- .../templates/load_template.py | 13 ++++-- .../templates/mac_template.py | 28 +++++++++--- .../templates/mask_load_store_template.py | 13 ++++-- .../templates/mask_template.py | 22 +++++++--- .../templates/misc_op_template.py | 26 ++++++++--- .../templates/permute_template.py | 29 ++++++++---- .../templates/reduction_template.py | 16 +++++-- .../templates/reint_op_template.py | 22 +++++++--- .../templates/seg_load_template.py | 17 +++++-- .../templates/seg_store_template.py | 17 +++++-- .../templates/setvl_template.py | 16 +++++-- .../templates/store_template.py | 17 +++++-- .../templates/unary_op_template.py | 44 +++++++++++++++---- 20 files changed, 342 insertions(+), 103 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py index 12143face..5bef39625 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_intcarry_template.py @@ -26,8 +26,14 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) @@ -43,8 +49,10 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, assert args["OP"] is not None args["OP"] = "v" + args["OP"] - inst_info_vvm = InstInfo.get(args, decorator, InstType.VVVM) - inst_info_vxm = InstInfo.get(args, decorator, InstType.VVXM) + inst_info_vvm = InstInfo.get( + args, decorator, InstType.VVVM, required_ext=required_ext_list) + inst_info_vxm = InstInfo.get( + args, decorator, InstType.VVXM, required_ext=required_ext_list) if not "m" in args["OP"]: G.func( @@ -77,11 +85,15 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, assert args["OP"] is not None args["OP"] = "v" + args["OP"] - inst_info_vvm = InstInfo.get(args, None, InstType.VVVM) - inst_info_vxm = InstInfo.get(args, None, InstType.VVXM) + inst_info_vvm = InstInfo.get( + args, None, InstType.VVVM, required_ext=required_ext_list) + inst_info_vxm = InstInfo.get( + args, None, InstType.VVXM, required_ext=required_ext_list) - inst_info_vv = InstInfo.get(args, None, InstType.VVV) - inst_info_vx = InstInfo.get(args, None, InstType.VVX) + inst_info_vv = InstInfo.get( + args, None, InstType.VVV, required_ext=required_ext_list) + inst_info_vx = InstInfo.get( + args, None, InstType.VVX, required_ext=required_ext_list) # madc or msbc if "m" in args["OP"]: diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py index 1ca61715e..c8162a00c 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_nop_template.py @@ -32,8 +32,14 @@ def must_int_type(**kargs): # narrowing op template -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + 
type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) @@ -74,7 +80,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, continue type_helper = TypeHelper(**args) - inst_info = InstInfo.get(args, decorator, inst_type) + inst_info = InstInfo.get( + args, decorator, inst_type, required_ext=required_ext_list) if op in ["nsrl", "nsra", "nclip"]: if op2 == "v": diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py index 0f356f962..a72322536 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_op_template.py @@ -28,8 +28,14 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) @@ -95,10 +101,14 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, args["OP"] = "v" + args["OP"] - inst_info_vv = InstInfo.get(args, decorator, InstType.VVV) - inst_info_vx = InstInfo.get(args, decorator, InstType.VVX) - inst_info_vf = InstInfo.get(args, decorator, InstType.VVF) - inst_info_v = InstInfo.get(args, decorator, InstType.VV) + inst_info_vv = InstInfo.get( + args, decorator, InstType.VVV, required_ext=required_ext_list) + inst_info_vx = InstInfo.get( + args, decorator, InstType.VVX, required_ext=required_ext_list) + inst_info_vf = InstInfo.get( + args, decorator, InstType.VVF, required_ext=required_ext_list) + inst_info_v = InstInfo.get( + args, decorator, InstType.VV, required_ext=required_ext_list) if args["OP2"] == "v": inst_info = inst_info_vv elif args["OP2"] == "x": @@ -147,7 +157,9 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, elif "rgather" == op: if op2 == "v": G.func( - InstInfo.get(args, decorator, InstType.VVV), + InstInfo.get( + args, decorator, InstType.VVV, + required_ext=required_ext_list), name="{OP}_v{OP2}_{TYPE}{SEW}m{LMUL}".format_map(args) + decorator.func_suffix, return_type=type_helper.v, @@ -158,7 +170,9 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, vl=type_helper.size_t) else: # vx G.func( - InstInfo.get(args, decorator, InstType.VVV), + InstInfo.get( + args, decorator, InstType.VVV, + required_ext=required_ext_list), name="{OP}_v{OP2}_{TYPE}{SEW}m{LMUL}".format_map(args) + decorator.func_suffix, return_type=type_helper.v, diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py index 3e68d3c8c..9bd5036ee 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/binary_wop_template.py @@ -26,8 +26,14 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
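# (Illustrative note.) The binary templates above all gain the same trailing
# keyword parameter; a group that needs it opts in at the function_group
# call site, in the same shape as the vector-crypto caller shown earlier in
# this series:
#
#   g.function_group(
#       vector_crypto_template,
#       ...,  # description/op-list arguments elided, as in the original
#       UITYPE, [32], LMULS,
#       decorators.has_no_masking_policy,
#       required_ext_list=["zvksh"])
#
# Inside render(), the list is then simply forwarded to every InstInfo.get
# call via required_ext=required_ext_list.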
G.emit_function_group_description(description) @@ -49,12 +55,18 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, args["OP"] = "v" + args["OP"] - inst_info_wvv = InstInfo.get(args, decorator, InstType.WVV) - inst_info_wvx = InstInfo.get(args, decorator, InstType.WVX) - inst_info_wvf = InstInfo.get(args, decorator, InstType.WVF) - inst_info_wwv = InstInfo.get(args, decorator, InstType.WWV) - inst_info_wwx = InstInfo.get(args, decorator, InstType.WWX) - inst_info_wwf = InstInfo.get(args, decorator, InstType.WWF) + inst_info_wvv = InstInfo.get( + args, decorator, InstType.WVV, required_ext=required_ext_list) + inst_info_wvx = InstInfo.get( + args, decorator, InstType.WVX, required_ext=required_ext_list) + inst_info_wvf = InstInfo.get( + args, decorator, InstType.WVF, required_ext=required_ext_list) + inst_info_wwv = InstInfo.get( + args, decorator, InstType.WWV, required_ext=required_ext_list) + inst_info_wwx = InstInfo.get( + args, decorator, InstType.WWX, required_ext=required_ext_list) + inst_info_wwf = InstInfo.get( + args, decorator, InstType.WWF, required_ext=required_ext_list) args["LMUL"] = args["WLMUL"] args["SEW"] = args["WSEW"] diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py index 61361648f..0ad6a483b 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cmp_template.py @@ -26,8 +26,14 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) @@ -67,7 +73,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, op = op + "u" args["OP"] = "v" + op - inst_info = InstInfo.get(args, decorator, inst_type) + inst_info = InstInfo.get( + args, decorator, inst_type, required_ext=required_ext_list) if op2 == "v": G.func( inst_info, diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py index eee862dd6..50356d520 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/cvt_op_template.py @@ -28,8 +28,14 @@ from constants import ITYPES -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. 
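# (Illustrative note.) Because required_ext_list defaults to None in every
# render() touched by this patch, a template invoked the old way, e.g.
#
#   render(g, ["cvt"], ITYPES, SEWS, LMULS, decorator_list, description)
#
# (hypothetical argument values), behaves exactly as before; InstInfo.get is
# assumed here to treat required_ext=None as "no additional requirement",
# so generated output for non-crypto intrinsics is unchanged.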
# FIXME: Argument 'type_list' is unused but required for interface @@ -112,7 +118,11 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, extra_attr = ExtraAttr.CONVERT inst_info = InstInfo.get( - args, decorator, InstType.VV, extra_attr=extra_attr) + args, + decorator, + InstType.VV, + extra_attr=extra_attr, + required_ext=required_ext_list) args["TYPE"] = args["TYPES2"] src_type_helper = TypeHelper(**args) @@ -159,7 +169,11 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, if args["TYPES1"] != args["TYPES3"] and args["TYPES3"] == "f": args["OP"] = args["OP"] + "_rtz" inst_info = InstInfo.get( - args, decorator, InstType.VV, extra_attr=extra_attr) + args, + decorator, + InstType.VV, + extra_attr=extra_attr, + required_ext=required_ext_list) func_name =\ "{OP}_{TYPES1}_{TYPES3}_{MIDDLE}_{D_TYPE}{LSEW}m{LLMUL}".format_map\ (args) @@ -175,7 +189,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, if op == "ncvt" and args["TYPES1"] == "f" and args["TYPES3"] == "f": args["OP"] = args["OP"] + "_rod" inst_info = \ - InstInfo.get(args, decorator, InstType.VV, extra_attr=extra_attr) + InstInfo.get(args, decorator, InstType.VV, extra_attr=extra_attr, + required_ext = required_ext_list) func_name = \ "{OP}_{TYPES1}_{TYPES3}_{MIDDLE}_{D_TYPE}{LSEW}m{LLMUL}".format_map\ (args) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py index 06e980d0d..9eda7a796 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/get_set_diff_lmul_op_template.py @@ -50,8 +50,14 @@ def vset_constraint(**kargs): and int(kargs["LMUL"]) > int(kargs["SRC_LMUL"]) -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
G.emit_function_group_description(description) @@ -84,14 +90,16 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, args) if vget: G.func( - InstInfo.get(args, decorator, InstType.VGET), + InstInfo.get( + args, decorator, InstType.VGET, required_ext=required_ext_list), name=func_name, return_type=type_helper.v, src=src_type, index=type_helper.size_t) else: G.func( - InstInfo.get(args, decorator, InstType.VSET), + InstInfo.get( + args, decorator, InstType.VSET, required_ext=required_ext_list), name=func_name, return_type=type_helper.v, dest=type_helper.v, @@ -115,7 +123,11 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, func_name = "{OP}_v_{TYPE}{SEW}m{LMUL}x{NF}_{TYPE}{SEW}m{LMUL}".\ format_map(args) G.func( - InstInfo.get(args, decorator, InstType.VGET), + InstInfo.get( + args, + decorator, + InstType.VGET, + required_ext=required_ext_list), name=func_name, return_type=vector_type, src=tuple_type, @@ -124,7 +136,11 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, func_name = "{OP}_v_{TYPE}{SEW}m{LMUL}_{TYPE}{SEW}m{LMUL}x{NF}".\ format_map(args) G.func( - InstInfo.get(args, decorator, InstType.VSET), + InstInfo.get( + args, + decorator, + InstType.VSET, + required_ext=required_ext_list), name=func_name, return_type=tuple_type, dest=tuple_type, diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py index 574e0d882..f8573cf26 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/load_template.py @@ -29,8 +29,14 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) @@ -70,7 +76,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, if op not in ["vloxei", "vluxei"] and sew != eew: continue inst_info =\ - InstInfo.get(args, decorator, inst_type, MemType.LOAD, extra_attr) + InstInfo.get(args, decorator, inst_type, MemType.LOAD, extra_attr, + required_ext = required_ext_list) G.func( inst_info, name=\ diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py index 490695b67..68888a47e 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mac_template.py @@ -27,8 +27,14 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
G.emit_function_group_description(description) @@ -58,11 +64,23 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, args["OP"] = "v" + args["OP"] inst_info_vs = InstInfo.get( - args, decorator, inst_type, extra_attr=ExtraAttr.MAC) + args, + decorator, + inst_type, + extra_attr=ExtraAttr.MAC, + required_ext=required_ext_list) inst_info_vv = InstInfo.get( - args, decorator, InstType.VVV, extra_attr=ExtraAttr.MAC) + args, + decorator, + InstType.VVV, + extra_attr=ExtraAttr.MAC, + required_ext=required_ext_list) inst_info_vx = InstInfo.get( - args, decorator, InstType.VVX, extra_attr=ExtraAttr.MAC) + args, + decorator, + InstType.VVX, + extra_attr=ExtraAttr.MAC, + required_ext=required_ext_list) type_helper = TypeHelper(**args) if (("maccsu" in op) or ("maccus" in op)) and data_type == "uint": diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py index 7981a07e3..9383305ef 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_load_store_template.py @@ -26,8 +26,14 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. # FIXME: Argument 'lmul_list' is unused but required for interface @@ -43,7 +49,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, load_p = op == "vlm" - inst_info = InstInfo.get(args, decorator, InstType.V) + inst_info = InstInfo.get( + args, decorator, InstType.V, required_ext=required_ext_list) if load_p: base_type = "const uint8_t *" diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py index 8baed1b5a..743e0fd4a 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/mask_template.py @@ -25,8 +25,14 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
G.emit_function_group_description(description) @@ -46,8 +52,10 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, args["OP"] = "v" + args["OP"] - inst_info_mm = InstInfo.get(args, decorator, InstType.MMM) - inst_info_m = InstInfo.get(args, decorator, InstType.MM) + inst_info_mm = InstInfo.get( + args, decorator, InstType.MMM, required_ext=required_ext_list) + inst_info_m = InstInfo.get( + args, decorator, InstType.MM, required_ext=required_ext_list) if op in ["mv", "not"]: # unary operator G.func( @@ -105,7 +113,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, if op == "iota": G.func( - InstInfo.get(args, decorator, InstType.MM), + InstInfo.get(args, decorator, InstType.MM, + required_ext = required_ext_list), name=\ "viota_m_u{SEW}m{LMUL}".format_map(args) + decorator.func_suffix, return_type=type_helper.uiv, @@ -115,7 +124,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, vl=type_helper.size_t) if op == "id": G.func( - InstInfo.get(args, decorator, InstType.VM), + InstInfo.get( + args, decorator, InstType.VM, required_ext=required_ext_list), name="vid_v_u{SEW}m{LMUL}".format_map(args) + decorator.func_suffix, return_type=type_helper.uiv, **decorator.mask_args(type_helper.m, type_helper.uiv), diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py index fd36bc3d3..c8540467b 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/misc_op_template.py @@ -30,8 +30,14 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
G.emit_function_group_description(description) @@ -53,7 +59,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, if args["OP"] == "vundefined": inst_type = InstType.VUNDEF G.func( - InstInfo.get(args, decorator, inst_type), + InstInfo.get( + args, decorator, inst_type, required_ext=required_ext_list), name="{OP}_{TYPE}{SEW}m{LMUL}".format_map(args) + decorator.func_suffix, return_type=type_helper.v) @@ -84,7 +91,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, if args["OP"] == "vundefined": inst_type = InstType.VUNDEF G.func( - InstInfo.get(args, decorator, inst_type), + InstInfo.get( + args, decorator, inst_type, required_ext=required_ext_list), name="{OP}_{TYPE}{SEW}m{LMUL}x{NF}".format_map(args) + decorator.func_suffix, return_type=type_helper.tuple_v) @@ -111,7 +119,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, if get_float_lmul(src_lmul) >= get_float_lmul(dst_lmul): continue type_helper = TypeHelper(**args) - inst_info = InstInfo.get(args, decorator, inst_type) + inst_info = InstInfo.get( + args, decorator, inst_type, required_ext=required_ext_list) if args["TYPE"] == "bfloat": args["TYPE1"] = args["TYPE"][0:2] else: @@ -143,7 +152,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, DST_LMUL=lmul_list): type_helper = TypeHelper(**args) - inst_info = InstInfo.get(args, decorator, InstType.VCREATE) + inst_info = InstInfo.get( + args, decorator, InstType.VCREATE, required_ext=required_ext_list) func_name = "{OP}_v_{TYPE}{SEW}m{LMUL}_{TYPE}{SEW}m{DST_LMUL}".format_map( args) @@ -192,7 +202,9 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, args_for_vcreate[arg_name] = type_helper.v G.func( - InstInfo.get(args, decorator, InstType.VCREATE), + InstInfo.get( + args, decorator, InstType.VCREATE, + required_ext=required_ext_list), name="{OP}_v_{TYPE}{SEW}m{LMUL}x{NF}".format_map(args), return_type=type_helper.tuple_v, **args_for_vcreate) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py index ca0de2f30..4effaa4f3 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/permute_template.py @@ -26,8 +26,14 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
G.emit_function_group_description(description) @@ -59,13 +65,16 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, if op == "mv": if decorator.func_suffix == "": G.func( - InstInfo.get(args, decorator, sv_inst_type), + InstInfo.get( + args, decorator, sv_inst_type, + required_ext=required_ext_list), name="{OP}_{S_TYPE}_s_{TYPE}{SEW}m{LMUL}_{TYPE}{SEW}".format_map( args), return_type=type_helper.s, vs1=type_helper.v) G.func( - InstInfo.get(args, decorator, vs_inst_type), + InstInfo.get( + args, decorator, vs_inst_type, required_ext=required_ext_list), name="{OP}_s_{S_TYPE}_{TYPE}{SEW}m{LMUL}".format_map(args) + decorator.func_suffix, return_type=type_helper.v, @@ -74,7 +83,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, vl=type_helper.size_t) elif op in ["slide1up", "slide1down"]: G.func( - InstInfo.get(args, decorator, vvs_inst_type), + InstInfo.get( + args, decorator, vvs_inst_type, required_ext=required_ext_list), name="{OP}_v{S_TYPE}_{TYPE}{SEW}m{LMUL}".format_map(args) + decorator.func_suffix, return_type=type_helper.v, @@ -85,7 +95,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, vl=type_helper.size_t) elif op == "slideup": G.func( - InstInfo.get(args, decorator, InstType.VVX), + InstInfo.get( + args, decorator, InstType.VVX, required_ext=required_ext_list), name="{OP}_v{S_TYPE}_{TYPE}{SEW}m{LMUL}".format_map(args) + decorator.func_suffix, return_type=type_helper.v, @@ -96,7 +107,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, vl=type_helper.size_t) elif op == "slidedown": G.func( - InstInfo.get(args, decorator, InstType.VVX), + InstInfo.get( + args, decorator, InstType.VVX, required_ext=required_ext_list), name="{OP}_v{S_TYPE}_{TYPE}{SEW}m{LMUL}".format_map(args) + decorator.func_suffix, return_type=type_helper.v, @@ -107,7 +119,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, vl=type_helper.size_t) elif op == "compress": G.func( - InstInfo.get(args, decorator, InstType.VVV), + InstInfo.get( + args, decorator, InstType.VVV, required_ext=required_ext_list), name="{OP}_vm_{TYPE}{SEW}m{LMUL}".format_map(args) + decorator.func_suffix, return_type=type_helper.v, diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py index 3f61bf497..ed79bb35f 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reduction_template.py @@ -27,8 +27,14 @@ from enums import ExtraAttr -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
G.emit_function_group_description(description) @@ -66,7 +72,11 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, args["OP"] = "v" + args["OP"] inst_info = InstInfo.get( - args, decorator, inst_type, extra_attr=ExtraAttr.REDUCE) + args, + decorator, + inst_type, + extra_attr=ExtraAttr.REDUCE, + required_ext=required_ext_list) if (data_type == "float" and op in ["redosum","redusum","redmax","redmin","wredosum","wredusum"])\ or ("int" in data_type): diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py index a2e653880..739102255 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/reint_op_template.py @@ -27,8 +27,14 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) @@ -74,7 +80,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, "{OP}_v_{TYPES3}{SEW}m{LMUL}_{TYPES1}{SEW}m{LMUL}".format_map(args) src_type = "v{TYPES2}{SEW}m{LMUL}_t".format_map(args) G.func( - InstInfo.get(args, decorator, InstType.REINT), + InstInfo.get( + args, decorator, InstType.REINT, required_ext=required_ext_list), name=func_name + decorator.func_suffix, return_type=rt, **decorator.mask_args(type_helper.m, rt), @@ -114,7 +121,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, "{OP}_v_{TYPES3}{SEW}m{LMUL}_{TYPES1}{DST_SEW}m{LMUL}".format_map(args) src_type = "v{TYPES2}{SEW}m{LMUL}_t".format_map(args) G.func( - InstInfo.get(args, decorator, InstType.REINT), + InstInfo.get( + args, decorator, InstType.REINT, required_ext=required_ext_list), name=func_name + decorator.func_suffix, return_type=rt, **decorator.mask_args(type_helper.m, rt), @@ -145,7 +153,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, func_name =\ "{OP}_v_{TYPES1}{SEW}m1_b{MLEN}".format_map(args) G.func( - InstInfo.get(args, decorator, InstType.REINT), + InstInfo.get( + args, decorator, InstType.REINT, required_ext=required_ext_list), name=func_name + decorator.func_suffix, return_type=mask_type, src=int_type) @@ -153,7 +162,8 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, func_name =\ "{OP}_v_b{MLEN}_{TYPES1}{SEW}m1".format_map(args) G.func( - InstInfo.get(args, decorator, InstType.REINT), + InstInfo.get( + args, decorator, InstType.REINT, required_ext=required_ext_list), name=func_name + decorator.func_suffix, return_type=int_type, src=mask_type) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py index 9e52fd0f8..3cfbcee20 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_load_template.py @@ -32,8 +32,14 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: 
disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) @@ -80,7 +86,12 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, else: args["OP"] = op + nf + "e" + str(eew) - inst_info = InstInfo.get(args, decorator, inst_type, MemType.LOAD) + inst_info = InstInfo.get( + args, + decorator, + inst_type, + MemType.LOAD, + required_ext=required_ext_list) # Legacy non-tuple-type variant for the compatible header if isinstance(G, CompatibleHeaderGenerator): G.func( diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py index 4ea46f031..d52bc49a6 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/seg_store_template.py @@ -32,8 +32,14 @@ from generator import CompatibleHeaderGenerator -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) @@ -77,7 +83,12 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, else: args["OP"] = op + nf + "e" + str(eew) - inst_info = InstInfo.get(args, decorator, inst_type, MemType.STORE) + inst_info = InstInfo.get( + args, + decorator, + inst_type, + MemType.STORE, + required_ext=required_ext_list) # Legacy non-tuple-type variant for the compatible header if isinstance(G, CompatibleHeaderGenerator): G.func( diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py index 43d2f3691..9a397d3ca 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/setvl_template.py @@ -25,8 +25,14 @@ from enums import InstType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name, unused-argument # FIXME: Renaming 'G' to 'g' all in once later. 
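# (Illustrative note.) Even the configuration-setting template threads the
# parameter through, keeping the render() signature uniform across all 20
# templates in this patch; vsetvl/vsetvlmax themselves are expected to keep
# the default required_ext_list=None in practice.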
# FIXME: Argument 'type_list', 'decorator_list' is unused but required for @@ -38,12 +44,14 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, if args["OP"] == "vsetvlmax": G.func( - InstInfo.get(args, None, InstType.SETVLMAX), + InstInfo.get( + args, None, InstType.SETVLMAX, required_ext=required_ext_list), name="{OP}_e{SEW}m{LMUL}".format_map(args), return_type=type_helper.size_t) else: #vsetvl G.func( - InstInfo.get(args, None, InstType.SETVL), + InstInfo.get( + args, None, InstType.SETVL, required_ext=required_ext_list), name="{OP}_e{SEW}m{LMUL}".format_map(args), return_type=type_helper.size_t, avl=type_helper.size_t) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py index 524cf1134..6e62e52b0 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/store_template.py @@ -28,8 +28,14 @@ from enums import MemType -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. G.emit_function_group_description(description) @@ -62,7 +68,12 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, if op not in ["vsoxei", "vsuxei"] and sew != eew: continue - inst_info = InstInfo.get(args, decorator, inst_type, MemType.STORE) + inst_info = InstInfo.get( + args, + decorator, + inst_type, + MemType.STORE, + required_ext=required_ext_list) G.func( inst_info, name="{OP}_v_{TYPE}{SEW}m{LMUL}".format_map(args) + diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py index e69752b5c..c0eef1f0f 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py @@ -28,8 +28,14 @@ import copy -def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, - description): +def render(G, + op_list, + type_list, + sew_list, + lmul_list, + decorator_list, + description, + required_ext_list=None): #pylint: disable=invalid-name # FIXME: Renaming 'G' to 'g' all in once later. 
G.emit_function_group_description(description) @@ -63,11 +69,23 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, extra_attr = ExtraAttr.NO_ATTR inst_info_vv = InstInfo.get( - args, decorator, InstType.VV, extra_attr=extra_attr) + args, + decorator, + InstType.VV, + extra_attr=extra_attr, + required_ext=required_ext_list) inst_info_vs = InstInfo.get( - args, decorator, inst_type_vs, extra_attr=extra_attr) + args, + decorator, + inst_type_vs, + extra_attr=extra_attr, + required_ext=required_ext_list) inst_info_vvsm = InstInfo.get( - args, decorator, inst_type_vvsm, extra_attr=extra_attr) + args, + decorator, + inst_type_vvsm, + extra_attr=extra_attr, + required_ext=required_ext_list) # Special rule for vfmv_v_v, we don"t have vfmv.v.v but vmv.v.v can used # for float type, accrdoing current naming scheming it @@ -80,7 +98,11 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, if op == "merge": G.func( InstInfo.get( - vv_args, decorator, InstType.VVVM, extra_attr=extra_attr), + vv_args, + decorator, + InstType.VVVM, + extra_attr=extra_attr, + required_ext=required_ext_list), name="{OP}_vvm_{TYPE}{SEW}m{LMUL}".format_map(vv_args) + decorator.func_suffix, return_type=type_helper.v, @@ -101,7 +123,9 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, vl=type_helper.size_t) elif op == "mv": G.func( - InstInfo.get(vv_args, decorator, InstType.VV), + InstInfo.get( + vv_args, decorator, InstType.VV, + required_ext=required_ext_list), name="{OP}_v_v_{TYPE}{SEW}m{LMUL}".format_map(vv_args) + decorator.func_suffix, return_type=type_helper.v, @@ -192,7 +216,11 @@ def render(G, op_list, type_list, sew_list, lmul_list, decorator_list, continue inst_info_v = InstInfo.get( - args, decorator, inst_type, extra_attr=ExtraAttr.INT_EXTENSION) + args, + decorator, + inst_type, + extra_attr=ExtraAttr.INT_EXTENSION, + required_ext=required_ext_list) G.func( inst_info_v, From 7bb410fd5668815607a96daa5ece6d2f63da8e3e Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Fri, 30 Aug 2024 02:08:48 -0700 Subject: [PATCH 125/151] generator/api-test: always set Zvknhb instead of Zvknha Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index 2aea549df..cc26d23d5 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -554,6 +554,12 @@ def write_file_header(self, has_float_type, has_bfloat16_type, requires_exts): if requires_exts: dynamic_llvm_header = dynamic_llvm_header_prologue for ext in requires_exts: + # Due to requirements of SEW==32 intrinsics will be used + # in the LLVM test header, the extension "zvknha" + # should be replaced with "zvknhb" for the following + # SEW==64 intrinsics. + if ext == "zvknha": + ext = "zvknhb" dynamic_llvm_header += f"// RUN: -target-feature +{ext} \\\n" dynamic_llvm_header += dynamic_llvm_header_epilogue From 90f99571c10d8bc3390f6017a05315c3c4d29d84 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Mon, 9 Sep 2024 22:24:42 -0700 Subject: [PATCH 126/151] [Auto-gen] Update vector crypto tests under ../auto-generated. 
(make git-commit-autogen-vector-crypto-test) --- .../vector-crypto/llvm-api-tests/vaesdf.c | 11 ++++------- .../vector-crypto/llvm-api-tests/vaesdm.c | 11 ++++------- .../vector-crypto/llvm-api-tests/vaesef.c | 11 ++++------- .../vector-crypto/llvm-api-tests/vaesem.c | 11 ++++------- .../vector-crypto/llvm-api-tests/vaeskf1.c | 11 ++++------- .../vector-crypto/llvm-api-tests/vaeskf2.c | 11 ++++------- auto-generated/vector-crypto/llvm-api-tests/vaesz.c | 11 ++++------- auto-generated/vector-crypto/llvm-api-tests/vandn.c | 10 +++------- auto-generated/vector-crypto/llvm-api-tests/vbrev.c | 10 +++------- .../vector-crypto/llvm-api-tests/vbrev8.c | 10 +++------- .../vector-crypto/llvm-api-tests/vclmul.c | 10 +++------- .../vector-crypto/llvm-api-tests/vclmulh.c | 10 +++------- auto-generated/vector-crypto/llvm-api-tests/vclz.c | 10 +++------- auto-generated/vector-crypto/llvm-api-tests/vcpop.c | 5 ++++- auto-generated/vector-crypto/llvm-api-tests/vctz.c | 10 +++------- auto-generated/vector-crypto/llvm-api-tests/vghsh.c | 11 ++++------- auto-generated/vector-crypto/llvm-api-tests/vgmul.c | 11 ++++------- auto-generated/vector-crypto/llvm-api-tests/vrev8.c | 10 +++------- auto-generated/vector-crypto/llvm-api-tests/vrol.c | 12 ++++-------- auto-generated/vector-crypto/llvm-api-tests/vror.c | 12 ++++-------- .../vector-crypto/llvm-api-tests/vsha2ch.c | 11 ++++------- .../vector-crypto/llvm-api-tests/vsha2cl.c | 11 ++++------- .../vector-crypto/llvm-api-tests/vsha2ms.c | 11 ++++------- auto-generated/vector-crypto/llvm-api-tests/vsm3c.c | 13 +++++-------- .../vector-crypto/llvm-api-tests/vsm3me.c | 13 +++++-------- auto-generated/vector-crypto/llvm-api-tests/vsm4k.c | 11 ++++------- auto-generated/vector-crypto/llvm-api-tests/vsm4r.c | 11 ++++------- auto-generated/vector-crypto/llvm-api-tests/vwsll.c | 10 +++------- .../vector-crypto/llvm-overloaded-tests/vaesdf.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vaesdm.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vaesef.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vaesem.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vaeskf1.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vaeskf2.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vaesz.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vandn.c | 10 +++------- .../vector-crypto/llvm-overloaded-tests/vbrev.c | 10 +++------- .../vector-crypto/llvm-overloaded-tests/vbrev8.c | 10 +++------- .../vector-crypto/llvm-overloaded-tests/vclmul.c | 10 +++------- .../vector-crypto/llvm-overloaded-tests/vclmulh.c | 10 +++------- .../vector-crypto/llvm-overloaded-tests/vclz.c | 10 +++------- .../vector-crypto/llvm-overloaded-tests/vcpop.c | 5 ++++- .../vector-crypto/llvm-overloaded-tests/vctz.c | 10 +++------- .../vector-crypto/llvm-overloaded-tests/vghsh.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vgmul.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vrev8.c | 10 +++------- .../vector-crypto/llvm-overloaded-tests/vrol.c | 12 ++++-------- .../vector-crypto/llvm-overloaded-tests/vror.c | 12 ++++-------- .../vector-crypto/llvm-overloaded-tests/vsha2ch.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vsha2cl.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vsha2ms.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vsm3c.c | 13 +++++-------- .../vector-crypto/llvm-overloaded-tests/vsm3me.c | 13 +++++-------- .../vector-crypto/llvm-overloaded-tests/vsm4k.c | 11 ++++------- 
.../vector-crypto/llvm-overloaded-tests/vsm4r.c | 11 ++++------- .../vector-crypto/llvm-overloaded-tests/vwsll.c | 10 +++------- .../policy_funcs/llvm-api-tests/vaesdf.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vaesdm.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vaesef.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vaesem.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vaeskf1.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vaeskf2.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vaesz.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vandn.c | 10 +++------- .../policy_funcs/llvm-api-tests/vbrev.c | 10 +++------- .../policy_funcs/llvm-api-tests/vbrev8.c | 10 +++------- .../policy_funcs/llvm-api-tests/vclmul.c | 10 +++------- .../policy_funcs/llvm-api-tests/vclmulh.c | 10 +++------- .../policy_funcs/llvm-api-tests/vclz.c | 10 +++------- .../policy_funcs/llvm-api-tests/vcpop.c | 5 ++++- .../policy_funcs/llvm-api-tests/vctz.c | 10 +++------- .../policy_funcs/llvm-api-tests/vghsh.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vgmul.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vrev8.c | 10 +++------- .../policy_funcs/llvm-api-tests/vrol.c | 12 ++++-------- .../policy_funcs/llvm-api-tests/vror.c | 12 ++++-------- .../policy_funcs/llvm-api-tests/vsha2ch.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vsha2cl.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vsha2ms.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vsm3c.c | 13 +++++-------- .../policy_funcs/llvm-api-tests/vsm3me.c | 13 +++++-------- .../policy_funcs/llvm-api-tests/vsm4k.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vsm4r.c | 11 ++++------- .../policy_funcs/llvm-api-tests/vwsll.c | 10 +++------- .../policy_funcs/llvm-overloaded-tests/vaesdf.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vaesdm.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vaesef.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vaesem.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vaeskf1.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vaeskf2.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vaesz.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vandn.c | 10 +++------- .../policy_funcs/llvm-overloaded-tests/vbrev.c | 10 +++------- .../policy_funcs/llvm-overloaded-tests/vbrev8.c | 10 +++------- .../policy_funcs/llvm-overloaded-tests/vclmul.c | 10 +++------- .../policy_funcs/llvm-overloaded-tests/vclmulh.c | 10 +++------- .../policy_funcs/llvm-overloaded-tests/vclz.c | 10 +++------- .../policy_funcs/llvm-overloaded-tests/vcpop.c | 5 ++++- .../policy_funcs/llvm-overloaded-tests/vctz.c | 10 +++------- .../policy_funcs/llvm-overloaded-tests/vghsh.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vgmul.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vrev8.c | 10 +++------- .../policy_funcs/llvm-overloaded-tests/vrol.c | 12 ++++-------- .../policy_funcs/llvm-overloaded-tests/vror.c | 12 ++++-------- .../policy_funcs/llvm-overloaded-tests/vsha2ch.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vsha2cl.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vsha2ms.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vsm3c.c | 13 +++++-------- .../policy_funcs/llvm-overloaded-tests/vsm3me.c | 13 +++++-------- .../policy_funcs/llvm-overloaded-tests/vsm4k.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vsm4r.c | 11 ++++------- .../policy_funcs/llvm-overloaded-tests/vwsll.c | 10 +++------- 112 files changed, 420 
insertions(+), 776 deletions(-) diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c index 715c7881c..21e8e315d 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesdf.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c index c35b87b37..eca20ba2a 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesdm.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c index 081cfe140..ba3bcc789 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesef.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesef.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c index cf43774f1..73b616b05 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesem.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesem.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ 
-// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c b/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c index b92fbdead..85a704734 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaeskf1.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c index aa796c5b2..f40d4c10e 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaeskf2.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c index bdb19ece1..fd0f962f1 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vaesz.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vaesz.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/llvm-api-tests/vandn.c b/auto-generated/vector-crypto/llvm-api-tests/vandn.c index 3f8f4c0a5..9ffd256d1 100644 --- a/auto-generated/vector-crypto/llvm-api-tests/vandn.c +++ b/auto-generated/vector-crypto/llvm-api-tests/vandn.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ // RUN: -target-feature +zvbb \ -// 
RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vbrev.c b/auto-generated/vector-crypto/llvm-api-tests/vbrev.c
index 602551b22..40172b4b2 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vbrev.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vbrev.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c b/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c
index dbb64b45e..27e4b77fc 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vbrev8.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclmul.c b/auto-generated/vector-crypto/llvm-api-tests/vclmul.c
index d6697a372..e1b7a953f 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vclmul.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vclmul.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c b/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c
index 94fbc51e7..96dc0cf1f 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vclmulh.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vclz.c b/auto-generated/vector-crypto/llvm-api-tests/vclz.c
index 6320cf1a7..287966d43 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vclz.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vclz.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vcpop.c b/auto-generated/vector-crypto/llvm-api-tests/vcpop.c
index 1061c2222..c9402938f 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vcpop.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vcpop.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vctz.c b/auto-generated/vector-crypto/llvm-api-tests/vctz.c
index 926741260..4f78b7c06 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vctz.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vctz.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vghsh.c b/auto-generated/vector-crypto/llvm-api-tests/vghsh.c
index 6b2db98f6..8b5c8bd9e 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vghsh.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vghsh.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vgmul.c b/auto-generated/vector-crypto/llvm-api-tests/vgmul.c
index 1abf16248..b7fe95b7e 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vgmul.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vgmul.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vrev8.c b/auto-generated/vector-crypto/llvm-api-tests/vrev8.c
index 717dfd27d..f446fbba0 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vrev8.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vrev8.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vrol.c b/auto-generated/vector-crypto/llvm-api-tests/vrol.c
index 1bddb1516..75ac4fb6b 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vrol.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vrol.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvkb \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vror.c b/auto-generated/vector-crypto/llvm-api-tests/vror.c
index 073c1fe05..a3e2aeb24 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vror.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vror.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvkb \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c
index 78924df94..1f6588d4d 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2ch.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c
index 739a9da5e..203c7c95e 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2cl.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c b/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c
index 72201942a..76756a0df 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vsha2ms.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c b/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c
index 06ae64701..e17e45eef 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vsm3c.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvksh \
+// RUN: -target-feature +zvl512b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c b/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c
index 9aefcd323..41d046bd4 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vsm3me.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvksh \
+// RUN: -target-feature +zvl512b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c b/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c
index e5f6bd386..da0dfdbed 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vsm4k.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c
index 8119a4331..44bda79e3 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vsm4r.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-api-tests/vwsll.c b/auto-generated/vector-crypto/llvm-api-tests/vwsll.c
index eda2e00d3..a37286747 100644
--- a/auto-generated/vector-crypto/llvm-api-tests/vwsll.c
+++ b/auto-generated/vector-crypto/llvm-api-tests/vwsll.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c
index 83837f66d..8d2efcc47 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdf.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c
index 6bc6faa5b..1daf37f50 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesdm.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c
index a42aac84e..9d38a49c8 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesef.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c
index 2cb5113a7..a91b0075c 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesem.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c
index 393a2329f..62938b00f 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf1.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c
index e1d85453a..2d13d1171 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaeskf2.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c
index b98fe52ba..80782358f 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vaesz.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c
index 302997b03..3f4f7fcad 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vandn.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c
index 9654a13b5..0482ec227 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c
index 68503540f..46bf35822 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vbrev8.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c
index 994a54025..c9abd238c 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmul.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c
index fbfa406f6..a5016cf2f 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclmulh.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c
index c6a727dfc..92340eb2c 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vclz.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vcpop.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vcpop.c
index aee4aff80..7de4d28b5 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vcpop.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vcpop.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c
index 10223ef94..df3e30371 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vctz.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c
index bd18a2c0e..05675405b 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vghsh.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c
index ed81badec..331cfa03f 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vgmul.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c
index 6f491581c..5026de6d0 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vrev8.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c
index 2e24afa14..0eb55a7c1 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vrol.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvkb \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c
index 6fdd3e527..58a524bd5 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vror.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvkb \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c
index 2924cdc47..77fe9d052 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ch.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c
index b2078e33d..e276391b4 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2cl.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c
index e1afaede7..86ff2dad4 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsha2ms.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c
index 3d23e0142..6bd7e91ee 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3c.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvksh \
+// RUN: -target-feature +zvl512b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c
index 86f271de7..71c9ffd48 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm3me.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvksh \
+// RUN: -target-feature +zvl512b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c
index 248207cfc..3392f6b31 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4k.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
index 6cb46317c..9bd6eb604 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vsm4r.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c b/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c
index 029180986..f0c50a31b 100644
--- a/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c
+++ b/auto-generated/vector-crypto/llvm-overloaded-tests/vwsll.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c
index a7b8c5908..095eecc39 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdf.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c
index fa584ff88..a05299453 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesdm.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c
index 5c86c8a5f..a83cb7537 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesef.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c
index 2b3953414..694bdd477 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesem.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c
index dd6db77aa..ef48dab61 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf1.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c
index 3dcac9ffe..8fb8dc865 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaeskf2.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c
index 2c4925eb3..dc8a1ae53 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vaesz.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c
index 2de24fc21..99552ff27 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vandn.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c
index 764297558..f83e035af 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c
index abbd91ffa..0a168be50 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vbrev8.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c
index a7793a209..33bebbec7 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmul.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c
index 3962a5f4a..414bb847b 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclmulh.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c
index 44a5e7fce..dd89af321 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vclz.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c
index 757deb078..e152d352b 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vcpop.c
@@ -1,5 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zvbb \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c
index 3c13ebff5..27ee5ed9f 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vctz.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c
index 7c773896d..b2f1ea776 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vghsh.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c
index 35f8f63da..962268b6c 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vgmul.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c
index 45db2ce1c..6c9e84219 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrev8.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c
index f87baacb1..7c3c2336e 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vrol.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvkb \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c
index 5fd654a54..20a976d46 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vror.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvkb \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c
index 25b773014..8c3e787d9 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ch.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c
index e12c2dacd..62a1b3541 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2cl.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c
index f438925f4..0995653a9 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsha2ms.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c
index 8e783a3d3..22a8847ec 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3c.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvksh \
+// RUN: -target-feature +zvl512b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c
index c651f5ee9..40c72778e 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm3me.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +zvksh \
+// RUN: -target-feature +zvl512b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c
index bc4ce8981..2666c99b4 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4k.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
 // RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zvl256b \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c
index 7f9a4a749..297482bb4 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vsm4r.c
@@ -1,12 +1,9 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
-// RUN:
-target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c index 07784ec85..81c7d7ff0 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-api-tests/vwsll.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ // RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c index 6d2504f27..b6d6c71d9 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdf.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c index 1fa488b00..0b62e82bc 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesdm.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git 
a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c index 5635721bb..1e38c9d0c 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesef.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c index a7f05f2b8..0016f2c53 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesem.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c index c3f94c976..f238e28a9 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf1.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c index 2df41ac05..17143962f 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaeskf2.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: 
-target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c index 877402aee..261636582 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vaesz.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c index af084405c..b033e4854 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vandn.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ // RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c index a9c542556..3f387f699 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ // RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git 
a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c index a986b5ece..5b5370b16 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vbrev8.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ // RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c index 11f24f0b9..d4d4fefef 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmul.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ // RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c index f9a4a8af7..ae564fff3 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclmulh.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ // RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclz.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclz.c index e93b008a3..c22c4a4d1 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vclz.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ // RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: 
-target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vcpop.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vcpop.c index 4eb8efa2b..43f0b9a1b 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vcpop.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vcpop.c @@ -1,5 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zvbb \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vctz.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vctz.c index 8cecc11d2..b0e5600ff 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vctz.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vctz.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ // RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c index 5a1670759..cb4cacaee 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vghsh.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c index 995625243..650dba88f 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vgmul.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 
-target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c index 62c1e3e1e..2167a18c3 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrev8.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ // RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c index 6617d9830..bf4c23a9b 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vrol.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +zvkb \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c index 0fb6a2d3f..cf90f3590 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vror.c @@ -1,12 +1,8 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +zvkb \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck 
--check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c index e61e23e6d..7aec85acb 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ch.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c index 5ca7969f5..e9931822b 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2cl.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c index ef3478429..1538fd2c4 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsha2ms.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c index 3bc96a360..e2eeac8fa 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3c.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature 
+v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +zvksh \ +// RUN: -target-feature +zvl512b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c index 2fd5ab2ed..d8c4eb158 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm3me.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ -// RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ +// RUN: -target-feature +zvksh \ +// RUN: -target-feature +zvl512b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c index acf15ab27..d4e9267d8 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4k.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: -target-feature +experimental \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c index e8ba1fd59..2403a6f60 100644 --- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c +++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vsm4r.c @@ -1,12 +1,9 @@ // REQUIRES: riscv-registered-target -// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \ -// RUN: -target-feature +zvbb \ -// RUN: -target-feature +zvbc \ -// RUN: -target-feature +zvkg \ -// RUN: -target-feature +zvkned \ -// RUN: -target-feature +zvknhb \ +// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \ +// RUN: -target-feature +zve64x \ // RUN: -target-feature +zvksed \ -// RUN: -target-feature +zvksh -disable-O0-optnone \ +// RUN: -target-feature +zvl256b \ +// RUN: 
-target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c
index 49b8ee5a0..04949b9ca 100644
--- a/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c
+++ b/auto-generated/vector-crypto/policy_funcs/llvm-overloaded-tests/vwsll.c
@@ -1,12 +1,8 @@
 // REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zvl512b \
+// RUN: %clang_cc1 -triple riscv64 -disable-O0-optnone \
 // RUN: -target-feature +zvbb \
-// RUN: -target-feature +zvbc \
-// RUN: -target-feature +zvkg \
-// RUN: -target-feature +zvkned \
-// RUN: -target-feature +zvknhb \
-// RUN: -target-feature +zvksed \
-// RUN: -target-feature +zvksh -disable-O0-optnone \
+// RUN: -target-feature +zve64x \
+// RUN: -target-feature +experimental \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
From 0bd936331077a6ac4644951969d80e6a1bfb6adf Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Sun, 1 Sep 2024 19:06:13 -0700
Subject: [PATCH 127/151] makefile: update llvm tests for Zvk and BF16 intrinsics

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/Makefile | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/rvv-intrinsic-generator/Makefile b/rvv-intrinsic-generator/Makefile
index dbd9b1f82..f52ca80f0 100644
--- a/rvv-intrinsic-generator/Makefile
+++ b/rvv-intrinsic-generator/Makefile
@@ -639,3 +639,13 @@ update-clang-test:
 	cp $(OUTPUT_DIR)/llvm-overloaded-tests/*.c $(CLANG_TEST_DIR)/CodeGen/RISCV/rvv-intrinsics-autogenerated/non-policy/overloaded/
 	cp $(OUTPUT_DIR)/policy_funcs/llvm-api-tests/*.c $(CLANG_TEST_DIR)/CodeGen/RISCV/rvv-intrinsics-autogenerated/policy/non-overloaded/
 	cp $(OUTPUT_DIR)/policy_funcs/llvm-overloaded-tests/*.c $(CLANG_TEST_DIR)/CodeGen/RISCV/rvv-intrinsics-autogenerated/policy/overloaded/
+
+	cp $(OUTPUT_DIR)/bfloat16/llvm-api-tests/*.c $(CLANG_TEST_DIR)/CodeGen/RISCV/rvv-intrinsics-autogenerated/non-policy/non-overloaded/bfloat16/
+	cp $(OUTPUT_DIR)/bfloat16/llvm-overloaded-tests/*.c $(CLANG_TEST_DIR)/CodeGen/RISCV/rvv-intrinsics-autogenerated/non-policy/overloaded/bfloat16/
+	cp $(OUTPUT_DIR)/bfloat16/policy_funcs/llvm-api-tests/*.c $(CLANG_TEST_DIR)/CodeGen/RISCV/rvv-intrinsics-autogenerated/policy/non-overloaded/bfloat16/
+	cp $(OUTPUT_DIR)/bfloat16/policy_funcs/llvm-overloaded-tests/*.c $(CLANG_TEST_DIR)/CodeGen/RISCV/rvv-intrinsics-autogenerated/policy/overloaded/bfloat16/
+
+	cp $(OUTPUT_DIR)/vector-crypto/llvm-api-tests/*.c $(CLANG_TEST_DIR)/CodeGen/RISCV/rvv-intrinsics-autogenerated/non-policy/non-overloaded/
+	cp $(OUTPUT_DIR)/vector-crypto/llvm-overloaded-tests/*.c $(CLANG_TEST_DIR)/CodeGen/RISCV/rvv-intrinsics-autogenerated/non-policy/overloaded/
+	cp $(OUTPUT_DIR)/vector-crypto/policy_funcs/llvm-api-tests/*.c $(CLANG_TEST_DIR)/CodeGen/RISCV/rvv-intrinsics-autogenerated/policy/non-overloaded/
+	cp $(OUTPUT_DIR)/vector-crypto/policy_funcs/llvm-overloaded-tests/*.c $(CLANG_TEST_DIR)/CodeGen/RISCV/rvv-intrinsics-autogenerated/policy/overloaded/

From 2ee93f99cbdb188228a29edaa8ec46abc2480ffe Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Sun, 1 Sep 2024 19:07:04 -0700
Subject: [PATCH 128/151] generator: remove experimental prefix for llvm headers

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py
| 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py index cc26d23d5..689de8a97 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/generator.py @@ -530,15 +530,15 @@ def write_file_header(self, has_float_type, has_bfloat16_type, requires_exts): """ float_llvm_header = r"""// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s """ bfloat16_llvm_header = r"""// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s From b3d08dc371bf59bef881952a67f4e8f124d0f53e Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Sun, 1 Sep 2024 19:11:34 -0700 Subject: [PATCH 129/151] [Auto-gen] Update tests under ../auto-generated. (make git-commit-autogen-test) --- auto-generated/bfloat16/llvm-api-tests/vcreate.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vget.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vle16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vle16ff.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vloxei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlse16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c | 4 
++-- auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vluxei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vreinterpret.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vse16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vset.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsoxei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsse16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsuxei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c | 4 ++-- auto-generated/bfloat16/llvm-api-tests/vundefined.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vget.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vle16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c | 4 ++-- 
auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vse16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vset.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c | 4 ++-- 
auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c | 4 ++-- auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c | 4 ++-- auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c | 4 ++-- auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vloxei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c | 4 ++-- auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c | 4 ++-- 
.../bfloat16/policy_funcs/llvm-api-tests/vluxei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c | 4 ++-- .../bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c | 4 ++-- .../policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c | 4 ++-- auto-generated/llvm-api-tests/vcompress.c | 2 +- auto-generated/llvm-api-tests/vcpop.c | 2 +- auto-generated/llvm-api-tests/vcreate.c | 2 +- 
auto-generated/llvm-api-tests/vfabs.c | 2 +- auto-generated/llvm-api-tests/vfadd.c | 2 +- auto-generated/llvm-api-tests/vfclass.c | 2 +- auto-generated/llvm-api-tests/vfcvt.c | 2 +- auto-generated/llvm-api-tests/vfcvt_rtz.c | 2 +- auto-generated/llvm-api-tests/vfdiv.c | 2 +- auto-generated/llvm-api-tests/vfmacc.c | 2 +- auto-generated/llvm-api-tests/vfmadd.c | 2 +- auto-generated/llvm-api-tests/vfmax.c | 2 +- auto-generated/llvm-api-tests/vfmerge.c | 2 +- auto-generated/llvm-api-tests/vfmin.c | 2 +- auto-generated/llvm-api-tests/vfmsac.c | 2 +- auto-generated/llvm-api-tests/vfmsub.c | 2 +- auto-generated/llvm-api-tests/vfmul.c | 2 +- auto-generated/llvm-api-tests/vfmv.c | 2 +- auto-generated/llvm-api-tests/vfncvt.c | 2 +- auto-generated/llvm-api-tests/vfncvt_rod.c | 2 +- auto-generated/llvm-api-tests/vfncvt_rtz.c | 2 +- auto-generated/llvm-api-tests/vfneg.c | 2 +- auto-generated/llvm-api-tests/vfnmacc.c | 2 +- auto-generated/llvm-api-tests/vfnmadd.c | 2 +- auto-generated/llvm-api-tests/vfnmsac.c | 2 +- auto-generated/llvm-api-tests/vfnmsub.c | 2 +- auto-generated/llvm-api-tests/vfrdiv.c | 2 +- auto-generated/llvm-api-tests/vfrec7.c | 2 +- auto-generated/llvm-api-tests/vfredmax.c | 2 +- auto-generated/llvm-api-tests/vfredmin.c | 2 +- auto-generated/llvm-api-tests/vfredosum.c | 2 +- auto-generated/llvm-api-tests/vfredusum.c | 2 +- auto-generated/llvm-api-tests/vfrsqrt7.c | 2 +- auto-generated/llvm-api-tests/vfrsub.c | 2 +- auto-generated/llvm-api-tests/vfsgnj.c | 2 +- auto-generated/llvm-api-tests/vfsgnjn.c | 2 +- auto-generated/llvm-api-tests/vfsgnjx.c | 2 +- auto-generated/llvm-api-tests/vfslide1down.c | 2 +- auto-generated/llvm-api-tests/vfslide1up.c | 2 +- auto-generated/llvm-api-tests/vfsqrt.c | 2 +- auto-generated/llvm-api-tests/vfsub.c | 2 +- auto-generated/llvm-api-tests/vfwadd.c | 2 +- auto-generated/llvm-api-tests/vfwcvt.c | 2 +- auto-generated/llvm-api-tests/vfwcvt_rtz.c | 2 +- auto-generated/llvm-api-tests/vfwmacc.c | 2 +- auto-generated/llvm-api-tests/vfwmsac.c | 2 +- auto-generated/llvm-api-tests/vfwmul.c | 2 +- auto-generated/llvm-api-tests/vfwnmacc.c | 2 +- auto-generated/llvm-api-tests/vfwnmsac.c | 2 +- auto-generated/llvm-api-tests/vfwredosum.c | 2 +- auto-generated/llvm-api-tests/vfwredusum.c | 2 +- auto-generated/llvm-api-tests/vfwsub.c | 2 +- auto-generated/llvm-api-tests/vget.c | 2 +- auto-generated/llvm-api-tests/vle16.c | 2 +- auto-generated/llvm-api-tests/vle16ff.c | 2 +- auto-generated/llvm-api-tests/vle32.c | 2 +- auto-generated/llvm-api-tests/vle32ff.c | 2 +- auto-generated/llvm-api-tests/vle64.c | 2 +- auto-generated/llvm-api-tests/vle64ff.c | 2 +- auto-generated/llvm-api-tests/vle8.c | 2 +- auto-generated/llvm-api-tests/vle8ff.c | 2 +- auto-generated/llvm-api-tests/vlmul_ext_v.c | 2 +- auto-generated/llvm-api-tests/vlmul_trunc_v.c | 2 +- auto-generated/llvm-api-tests/vloxei16.c | 2 +- auto-generated/llvm-api-tests/vloxei32.c | 2 +- auto-generated/llvm-api-tests/vloxei64.c | 2 +- auto-generated/llvm-api-tests/vloxei8.c | 2 +- auto-generated/llvm-api-tests/vloxseg2ei16.c | 2 +- auto-generated/llvm-api-tests/vloxseg2ei32.c | 2 +- auto-generated/llvm-api-tests/vloxseg2ei64.c | 2 +- auto-generated/llvm-api-tests/vloxseg2ei8.c | 2 +- auto-generated/llvm-api-tests/vloxseg3ei16.c | 2 +- auto-generated/llvm-api-tests/vloxseg3ei32.c | 2 +- auto-generated/llvm-api-tests/vloxseg3ei64.c | 2 +- auto-generated/llvm-api-tests/vloxseg3ei8.c | 2 +- auto-generated/llvm-api-tests/vloxseg4ei16.c | 2 +- auto-generated/llvm-api-tests/vloxseg4ei32.c | 2 +- 
auto-generated/llvm-api-tests/vloxseg4ei64.c | 2 +- auto-generated/llvm-api-tests/vloxseg4ei8.c | 2 +- auto-generated/llvm-api-tests/vloxseg5ei16.c | 2 +- auto-generated/llvm-api-tests/vloxseg5ei32.c | 2 +- auto-generated/llvm-api-tests/vloxseg5ei64.c | 2 +- auto-generated/llvm-api-tests/vloxseg5ei8.c | 2 +- auto-generated/llvm-api-tests/vloxseg6ei16.c | 2 +- auto-generated/llvm-api-tests/vloxseg6ei32.c | 2 +- auto-generated/llvm-api-tests/vloxseg6ei64.c | 2 +- auto-generated/llvm-api-tests/vloxseg6ei8.c | 2 +- auto-generated/llvm-api-tests/vloxseg7ei16.c | 2 +- auto-generated/llvm-api-tests/vloxseg7ei32.c | 2 +- auto-generated/llvm-api-tests/vloxseg7ei64.c | 2 +- auto-generated/llvm-api-tests/vloxseg7ei8.c | 2 +- auto-generated/llvm-api-tests/vloxseg8ei16.c | 2 +- auto-generated/llvm-api-tests/vloxseg8ei32.c | 2 +- auto-generated/llvm-api-tests/vloxseg8ei64.c | 2 +- auto-generated/llvm-api-tests/vloxseg8ei8.c | 2 +- auto-generated/llvm-api-tests/vlse16.c | 2 +- auto-generated/llvm-api-tests/vlse32.c | 2 +- auto-generated/llvm-api-tests/vlse64.c | 2 +- auto-generated/llvm-api-tests/vlseg2e16.c | 2 +- auto-generated/llvm-api-tests/vlseg2e16ff.c | 2 +- auto-generated/llvm-api-tests/vlseg2e32.c | 2 +- auto-generated/llvm-api-tests/vlseg2e32ff.c | 2 +- auto-generated/llvm-api-tests/vlseg2e64.c | 2 +- auto-generated/llvm-api-tests/vlseg2e64ff.c | 2 +- auto-generated/llvm-api-tests/vlseg2e8ff.c | 2 +- auto-generated/llvm-api-tests/vlseg3e16.c | 2 +- auto-generated/llvm-api-tests/vlseg3e16ff.c | 2 +- auto-generated/llvm-api-tests/vlseg3e32.c | 2 +- auto-generated/llvm-api-tests/vlseg3e32ff.c | 2 +- auto-generated/llvm-api-tests/vlseg3e64.c | 2 +- auto-generated/llvm-api-tests/vlseg3e64ff.c | 2 +- auto-generated/llvm-api-tests/vlseg3e8ff.c | 2 +- auto-generated/llvm-api-tests/vlseg4e16.c | 2 +- auto-generated/llvm-api-tests/vlseg4e16ff.c | 2 +- auto-generated/llvm-api-tests/vlseg4e32.c | 2 +- auto-generated/llvm-api-tests/vlseg4e32ff.c | 2 +- auto-generated/llvm-api-tests/vlseg4e64.c | 2 +- auto-generated/llvm-api-tests/vlseg4e64ff.c | 2 +- auto-generated/llvm-api-tests/vlseg4e8ff.c | 2 +- auto-generated/llvm-api-tests/vlseg5e16.c | 2 +- auto-generated/llvm-api-tests/vlseg5e16ff.c | 2 +- auto-generated/llvm-api-tests/vlseg5e32.c | 2 +- auto-generated/llvm-api-tests/vlseg5e32ff.c | 2 +- auto-generated/llvm-api-tests/vlseg5e64.c | 2 +- auto-generated/llvm-api-tests/vlseg5e64ff.c | 2 +- auto-generated/llvm-api-tests/vlseg5e8ff.c | 2 +- auto-generated/llvm-api-tests/vlseg6e16.c | 2 +- auto-generated/llvm-api-tests/vlseg6e16ff.c | 2 +- auto-generated/llvm-api-tests/vlseg6e32.c | 2 +- auto-generated/llvm-api-tests/vlseg6e32ff.c | 2 +- auto-generated/llvm-api-tests/vlseg6e64.c | 2 +- auto-generated/llvm-api-tests/vlseg6e64ff.c | 2 +- auto-generated/llvm-api-tests/vlseg6e8ff.c | 2 +- auto-generated/llvm-api-tests/vlseg7e16.c | 2 +- auto-generated/llvm-api-tests/vlseg7e16ff.c | 2 +- auto-generated/llvm-api-tests/vlseg7e32.c | 2 +- auto-generated/llvm-api-tests/vlseg7e32ff.c | 2 +- auto-generated/llvm-api-tests/vlseg7e64.c | 2 +- auto-generated/llvm-api-tests/vlseg7e64ff.c | 2 +- auto-generated/llvm-api-tests/vlseg7e8ff.c | 2 +- auto-generated/llvm-api-tests/vlseg8e16.c | 2 +- auto-generated/llvm-api-tests/vlseg8e16ff.c | 2 +- auto-generated/llvm-api-tests/vlseg8e32.c | 2 +- auto-generated/llvm-api-tests/vlseg8e32ff.c | 2 +- auto-generated/llvm-api-tests/vlseg8e64.c | 2 +- auto-generated/llvm-api-tests/vlseg8e64ff.c | 2 +- auto-generated/llvm-api-tests/vlseg8e8ff.c | 2 +- 
auto-generated/llvm-api-tests/vlsseg2e16.c | 2 +- auto-generated/llvm-api-tests/vlsseg2e32.c | 2 +- auto-generated/llvm-api-tests/vlsseg2e64.c | 2 +- auto-generated/llvm-api-tests/vlsseg3e16.c | 2 +- auto-generated/llvm-api-tests/vlsseg3e32.c | 2 +- auto-generated/llvm-api-tests/vlsseg3e64.c | 2 +- auto-generated/llvm-api-tests/vlsseg4e16.c | 2 +- auto-generated/llvm-api-tests/vlsseg4e32.c | 2 +- auto-generated/llvm-api-tests/vlsseg4e64.c | 2 +- auto-generated/llvm-api-tests/vlsseg5e16.c | 2 +- auto-generated/llvm-api-tests/vlsseg5e32.c | 2 +- auto-generated/llvm-api-tests/vlsseg5e64.c | 2 +- auto-generated/llvm-api-tests/vlsseg6e16.c | 2 +- auto-generated/llvm-api-tests/vlsseg6e32.c | 2 +- auto-generated/llvm-api-tests/vlsseg6e64.c | 2 +- auto-generated/llvm-api-tests/vlsseg7e16.c | 2 +- auto-generated/llvm-api-tests/vlsseg7e32.c | 2 +- auto-generated/llvm-api-tests/vlsseg7e64.c | 2 +- auto-generated/llvm-api-tests/vlsseg8e16.c | 2 +- auto-generated/llvm-api-tests/vlsseg8e32.c | 2 +- auto-generated/llvm-api-tests/vlsseg8e64.c | 2 +- auto-generated/llvm-api-tests/vluxei16.c | 2 +- auto-generated/llvm-api-tests/vluxei32.c | 2 +- auto-generated/llvm-api-tests/vluxei64.c | 2 +- auto-generated/llvm-api-tests/vluxei8.c | 2 +- auto-generated/llvm-api-tests/vluxseg2ei16.c | 2 +- auto-generated/llvm-api-tests/vluxseg2ei32.c | 2 +- auto-generated/llvm-api-tests/vluxseg2ei64.c | 2 +- auto-generated/llvm-api-tests/vluxseg2ei8.c | 2 +- auto-generated/llvm-api-tests/vluxseg3ei16.c | 2 +- auto-generated/llvm-api-tests/vluxseg3ei32.c | 2 +- auto-generated/llvm-api-tests/vluxseg3ei64.c | 2 +- auto-generated/llvm-api-tests/vluxseg3ei8.c | 2 +- auto-generated/llvm-api-tests/vluxseg4ei16.c | 2 +- auto-generated/llvm-api-tests/vluxseg4ei32.c | 2 +- auto-generated/llvm-api-tests/vluxseg4ei64.c | 2 +- auto-generated/llvm-api-tests/vluxseg4ei8.c | 2 +- auto-generated/llvm-api-tests/vluxseg5ei16.c | 2 +- auto-generated/llvm-api-tests/vluxseg5ei32.c | 2 +- auto-generated/llvm-api-tests/vluxseg5ei64.c | 2 +- auto-generated/llvm-api-tests/vluxseg5ei8.c | 2 +- auto-generated/llvm-api-tests/vluxseg6ei16.c | 2 +- auto-generated/llvm-api-tests/vluxseg6ei32.c | 2 +- auto-generated/llvm-api-tests/vluxseg6ei64.c | 2 +- auto-generated/llvm-api-tests/vluxseg6ei8.c | 2 +- auto-generated/llvm-api-tests/vluxseg7ei16.c | 2 +- auto-generated/llvm-api-tests/vluxseg7ei32.c | 2 +- auto-generated/llvm-api-tests/vluxseg7ei64.c | 2 +- auto-generated/llvm-api-tests/vluxseg7ei8.c | 2 +- auto-generated/llvm-api-tests/vluxseg8ei16.c | 2 +- auto-generated/llvm-api-tests/vluxseg8ei32.c | 2 +- auto-generated/llvm-api-tests/vluxseg8ei64.c | 2 +- auto-generated/llvm-api-tests/vluxseg8ei8.c | 2 +- auto-generated/llvm-api-tests/vmacc.c | 2 +- auto-generated/llvm-api-tests/vmadd.c | 2 +- auto-generated/llvm-api-tests/vmerge.c | 2 +- auto-generated/llvm-api-tests/vmfeq.c | 2 +- auto-generated/llvm-api-tests/vmfge.c | 2 +- auto-generated/llvm-api-tests/vmfgt.c | 2 +- auto-generated/llvm-api-tests/vmfle.c | 2 +- auto-generated/llvm-api-tests/vmflt.c | 2 +- auto-generated/llvm-api-tests/vmfne.c | 2 +- auto-generated/llvm-api-tests/vmmv.c | 2 +- auto-generated/llvm-api-tests/vmseq.c | 2 +- auto-generated/llvm-api-tests/vmsge.c | 2 +- auto-generated/llvm-api-tests/vmsgeu.c | 2 +- auto-generated/llvm-api-tests/vmsgt.c | 2 +- auto-generated/llvm-api-tests/vmsgtu.c | 2 +- auto-generated/llvm-api-tests/vmsle.c | 2 +- auto-generated/llvm-api-tests/vmsleu.c | 2 +- auto-generated/llvm-api-tests/vmslt.c | 2 +- auto-generated/llvm-api-tests/vmsltu.c | 2 +- 
auto-generated/llvm-api-tests/vmsne.c | 2 +- auto-generated/llvm-api-tests/vmv.c | 2 +- auto-generated/llvm-api-tests/vneg.c | 2 +- auto-generated/llvm-api-tests/vnmsac.c | 2 +- auto-generated/llvm-api-tests/vnmsub.c | 2 +- auto-generated/llvm-api-tests/vreinterpret.c | 2 +- auto-generated/llvm-api-tests/vrgather.c | 2 +- auto-generated/llvm-api-tests/vrgatherei16.c | 2 +- auto-generated/llvm-api-tests/vse16.c | 2 +- auto-generated/llvm-api-tests/vse32.c | 2 +- auto-generated/llvm-api-tests/vse64.c | 2 +- auto-generated/llvm-api-tests/vset.c | 2 +- auto-generated/llvm-api-tests/vslidedown.c | 2 +- auto-generated/llvm-api-tests/vslideup.c | 2 +- auto-generated/llvm-api-tests/vsoxei16.c | 2 +- auto-generated/llvm-api-tests/vsoxei32.c | 2 +- auto-generated/llvm-api-tests/vsoxei64.c | 2 +- auto-generated/llvm-api-tests/vsoxei8.c | 2 +- auto-generated/llvm-api-tests/vsoxseg2ei16.c | 2 +- auto-generated/llvm-api-tests/vsoxseg2ei32.c | 2 +- auto-generated/llvm-api-tests/vsoxseg2ei64.c | 2 +- auto-generated/llvm-api-tests/vsoxseg2ei8.c | 2 +- auto-generated/llvm-api-tests/vsoxseg3ei16.c | 2 +- auto-generated/llvm-api-tests/vsoxseg3ei32.c | 2 +- auto-generated/llvm-api-tests/vsoxseg3ei64.c | 2 +- auto-generated/llvm-api-tests/vsoxseg3ei8.c | 2 +- auto-generated/llvm-api-tests/vsoxseg4ei16.c | 2 +- auto-generated/llvm-api-tests/vsoxseg4ei32.c | 2 +- auto-generated/llvm-api-tests/vsoxseg4ei64.c | 2 +- auto-generated/llvm-api-tests/vsoxseg4ei8.c | 2 +- auto-generated/llvm-api-tests/vsoxseg5ei16.c | 2 +- auto-generated/llvm-api-tests/vsoxseg5ei32.c | 2 +- auto-generated/llvm-api-tests/vsoxseg5ei64.c | 2 +- auto-generated/llvm-api-tests/vsoxseg5ei8.c | 2 +- auto-generated/llvm-api-tests/vsoxseg6ei16.c | 2 +- auto-generated/llvm-api-tests/vsoxseg6ei32.c | 2 +- auto-generated/llvm-api-tests/vsoxseg6ei64.c | 2 +- auto-generated/llvm-api-tests/vsoxseg6ei8.c | 2 +- auto-generated/llvm-api-tests/vsoxseg7ei16.c | 2 +- auto-generated/llvm-api-tests/vsoxseg7ei32.c | 2 +- auto-generated/llvm-api-tests/vsoxseg7ei64.c | 2 +- auto-generated/llvm-api-tests/vsoxseg7ei8.c | 2 +- auto-generated/llvm-api-tests/vsoxseg8ei16.c | 2 +- auto-generated/llvm-api-tests/vsoxseg8ei32.c | 2 +- auto-generated/llvm-api-tests/vsoxseg8ei64.c | 2 +- auto-generated/llvm-api-tests/vsoxseg8ei8.c | 2 +- auto-generated/llvm-api-tests/vsse16.c | 2 +- auto-generated/llvm-api-tests/vsse32.c | 2 +- auto-generated/llvm-api-tests/vsse64.c | 2 +- auto-generated/llvm-api-tests/vsseg2e16.c | 2 +- auto-generated/llvm-api-tests/vsseg2e32.c | 2 +- auto-generated/llvm-api-tests/vsseg2e64.c | 2 +- auto-generated/llvm-api-tests/vsseg3e16.c | 2 +- auto-generated/llvm-api-tests/vsseg3e32.c | 2 +- auto-generated/llvm-api-tests/vsseg3e64.c | 2 +- auto-generated/llvm-api-tests/vsseg4e16.c | 2 +- auto-generated/llvm-api-tests/vsseg4e32.c | 2 +- auto-generated/llvm-api-tests/vsseg4e64.c | 2 +- auto-generated/llvm-api-tests/vsseg5e16.c | 2 +- auto-generated/llvm-api-tests/vsseg5e32.c | 2 +- auto-generated/llvm-api-tests/vsseg5e64.c | 2 +- auto-generated/llvm-api-tests/vsseg6e16.c | 2 +- auto-generated/llvm-api-tests/vsseg6e32.c | 2 +- auto-generated/llvm-api-tests/vsseg6e64.c | 2 +- auto-generated/llvm-api-tests/vsseg7e16.c | 2 +- auto-generated/llvm-api-tests/vsseg7e32.c | 2 +- auto-generated/llvm-api-tests/vsseg7e64.c | 2 +- auto-generated/llvm-api-tests/vsseg8e16.c | 2 +- auto-generated/llvm-api-tests/vsseg8e32.c | 2 +- auto-generated/llvm-api-tests/vsseg8e64.c | 2 +- auto-generated/llvm-api-tests/vssseg2e16.c | 2 +- auto-generated/llvm-api-tests/vssseg2e32.c | 
2 +- auto-generated/llvm-api-tests/vssseg2e64.c | 2 +- auto-generated/llvm-api-tests/vssseg3e16.c | 2 +- auto-generated/llvm-api-tests/vssseg3e32.c | 2 +- auto-generated/llvm-api-tests/vssseg3e64.c | 2 +- auto-generated/llvm-api-tests/vssseg4e16.c | 2 +- auto-generated/llvm-api-tests/vssseg4e32.c | 2 +- auto-generated/llvm-api-tests/vssseg4e64.c | 2 +- auto-generated/llvm-api-tests/vssseg5e16.c | 2 +- auto-generated/llvm-api-tests/vssseg5e32.c | 2 +- auto-generated/llvm-api-tests/vssseg5e64.c | 2 +- auto-generated/llvm-api-tests/vssseg6e16.c | 2 +- auto-generated/llvm-api-tests/vssseg6e32.c | 2 +- auto-generated/llvm-api-tests/vssseg6e64.c | 2 +- auto-generated/llvm-api-tests/vssseg7e16.c | 2 +- auto-generated/llvm-api-tests/vssseg7e32.c | 2 +- auto-generated/llvm-api-tests/vssseg7e64.c | 2 +- auto-generated/llvm-api-tests/vssseg8e16.c | 2 +- auto-generated/llvm-api-tests/vssseg8e32.c | 2 +- auto-generated/llvm-api-tests/vssseg8e64.c | 2 +- auto-generated/llvm-api-tests/vsuxei16.c | 2 +- auto-generated/llvm-api-tests/vsuxei32.c | 2 +- auto-generated/llvm-api-tests/vsuxei64.c | 2 +- auto-generated/llvm-api-tests/vsuxei8.c | 2 +- auto-generated/llvm-api-tests/vsuxseg2ei16.c | 2 +- auto-generated/llvm-api-tests/vsuxseg2ei32.c | 2 +- auto-generated/llvm-api-tests/vsuxseg2ei64.c | 2 +- auto-generated/llvm-api-tests/vsuxseg2ei8.c | 2 +- auto-generated/llvm-api-tests/vsuxseg3ei16.c | 2 +- auto-generated/llvm-api-tests/vsuxseg3ei32.c | 2 +- auto-generated/llvm-api-tests/vsuxseg3ei64.c | 2 +- auto-generated/llvm-api-tests/vsuxseg3ei8.c | 2 +- auto-generated/llvm-api-tests/vsuxseg4ei16.c | 2 +- auto-generated/llvm-api-tests/vsuxseg4ei32.c | 2 +- auto-generated/llvm-api-tests/vsuxseg4ei64.c | 2 +- auto-generated/llvm-api-tests/vsuxseg4ei8.c | 2 +- auto-generated/llvm-api-tests/vsuxseg5ei16.c | 2 +- auto-generated/llvm-api-tests/vsuxseg5ei32.c | 2 +- auto-generated/llvm-api-tests/vsuxseg5ei64.c | 2 +- auto-generated/llvm-api-tests/vsuxseg5ei8.c | 2 +- auto-generated/llvm-api-tests/vsuxseg6ei16.c | 2 +- auto-generated/llvm-api-tests/vsuxseg6ei32.c | 2 +- auto-generated/llvm-api-tests/vsuxseg6ei64.c | 2 +- auto-generated/llvm-api-tests/vsuxseg6ei8.c | 2 +- auto-generated/llvm-api-tests/vsuxseg7ei16.c | 2 +- auto-generated/llvm-api-tests/vsuxseg7ei32.c | 2 +- auto-generated/llvm-api-tests/vsuxseg7ei64.c | 2 +- auto-generated/llvm-api-tests/vsuxseg7ei8.c | 2 +- auto-generated/llvm-api-tests/vsuxseg8ei16.c | 2 +- auto-generated/llvm-api-tests/vsuxseg8ei32.c | 2 +- auto-generated/llvm-api-tests/vsuxseg8ei64.c | 2 +- auto-generated/llvm-api-tests/vsuxseg8ei8.c | 2 +- auto-generated/llvm-api-tests/vundefined.c | 2 +- auto-generated/llvm-api-tests/vwmacc.c | 2 +- auto-generated/llvm-api-tests/vwmaccsu.c | 2 +- auto-generated/llvm-api-tests/vwmaccu.c | 2 +- auto-generated/llvm-api-tests/vwmaccus.c | 2 +- auto-generated/llvm-overloaded-tests/vcompress.c | 2 +- auto-generated/llvm-overloaded-tests/vcpop.c | 2 +- auto-generated/llvm-overloaded-tests/vfabs.c | 2 +- auto-generated/llvm-overloaded-tests/vfadd.c | 2 +- auto-generated/llvm-overloaded-tests/vfclass.c | 2 +- auto-generated/llvm-overloaded-tests/vfcvt.c | 2 +- auto-generated/llvm-overloaded-tests/vfcvt_rtz.c | 2 +- auto-generated/llvm-overloaded-tests/vfdiv.c | 2 +- auto-generated/llvm-overloaded-tests/vfmacc.c | 2 +- auto-generated/llvm-overloaded-tests/vfmadd.c | 2 +- auto-generated/llvm-overloaded-tests/vfmax.c | 2 +- auto-generated/llvm-overloaded-tests/vfmerge.c | 2 +- auto-generated/llvm-overloaded-tests/vfmin.c | 2 +- 
auto-generated/llvm-overloaded-tests/vfmsac.c | 2 +- auto-generated/llvm-overloaded-tests/vfmsub.c | 2 +- auto-generated/llvm-overloaded-tests/vfmul.c | 2 +- auto-generated/llvm-overloaded-tests/vfmv.c | 2 +- auto-generated/llvm-overloaded-tests/vfncvt.c | 2 +- auto-generated/llvm-overloaded-tests/vfncvt_rod.c | 2 +- auto-generated/llvm-overloaded-tests/vfncvt_rtz.c | 2 +- auto-generated/llvm-overloaded-tests/vfneg.c | 2 +- auto-generated/llvm-overloaded-tests/vfnmacc.c | 2 +- auto-generated/llvm-overloaded-tests/vfnmadd.c | 2 +- auto-generated/llvm-overloaded-tests/vfnmsac.c | 2 +- auto-generated/llvm-overloaded-tests/vfnmsub.c | 2 +- auto-generated/llvm-overloaded-tests/vfrdiv.c | 2 +- auto-generated/llvm-overloaded-tests/vfrec7.c | 2 +- auto-generated/llvm-overloaded-tests/vfredmax.c | 2 +- auto-generated/llvm-overloaded-tests/vfredmin.c | 2 +- auto-generated/llvm-overloaded-tests/vfredosum.c | 2 +- auto-generated/llvm-overloaded-tests/vfredusum.c | 2 +- auto-generated/llvm-overloaded-tests/vfrsqrt7.c | 2 +- auto-generated/llvm-overloaded-tests/vfrsub.c | 2 +- auto-generated/llvm-overloaded-tests/vfsgnj.c | 2 +- auto-generated/llvm-overloaded-tests/vfsgnjn.c | 2 +- auto-generated/llvm-overloaded-tests/vfsgnjx.c | 2 +- auto-generated/llvm-overloaded-tests/vfslide1down.c | 2 +- auto-generated/llvm-overloaded-tests/vfslide1up.c | 2 +- auto-generated/llvm-overloaded-tests/vfsqrt.c | 2 +- auto-generated/llvm-overloaded-tests/vfsub.c | 2 +- auto-generated/llvm-overloaded-tests/vfwadd.c | 2 +- auto-generated/llvm-overloaded-tests/vfwcvt.c | 2 +- auto-generated/llvm-overloaded-tests/vfwcvt_rtz.c | 2 +- auto-generated/llvm-overloaded-tests/vfwmacc.c | 2 +- auto-generated/llvm-overloaded-tests/vfwmsac.c | 2 +- auto-generated/llvm-overloaded-tests/vfwmul.c | 2 +- auto-generated/llvm-overloaded-tests/vfwnmacc.c | 2 +- auto-generated/llvm-overloaded-tests/vfwnmsac.c | 2 +- auto-generated/llvm-overloaded-tests/vfwredosum.c | 2 +- auto-generated/llvm-overloaded-tests/vfwredusum.c | 2 +- auto-generated/llvm-overloaded-tests/vfwsub.c | 2 +- auto-generated/llvm-overloaded-tests/vget.c | 2 +- auto-generated/llvm-overloaded-tests/vle16.c | 2 +- auto-generated/llvm-overloaded-tests/vle16ff.c | 2 +- auto-generated/llvm-overloaded-tests/vle32.c | 2 +- auto-generated/llvm-overloaded-tests/vle32ff.c | 2 +- auto-generated/llvm-overloaded-tests/vle64.c | 2 +- auto-generated/llvm-overloaded-tests/vle64ff.c | 2 +- auto-generated/llvm-overloaded-tests/vle8.c | 2 +- auto-generated/llvm-overloaded-tests/vle8ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlmul_ext_v.c | 2 +- auto-generated/llvm-overloaded-tests/vlmul_trunc_v.c | 2 +- auto-generated/llvm-overloaded-tests/vloxei16.c | 2 +- auto-generated/llvm-overloaded-tests/vloxei32.c | 2 +- auto-generated/llvm-overloaded-tests/vloxei64.c | 2 +- auto-generated/llvm-overloaded-tests/vloxei8.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg2ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg2ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg2ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg2ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg3ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg3ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg3ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg3ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg4ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg4ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg4ei64.c | 2 +- 
auto-generated/llvm-overloaded-tests/vloxseg4ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg5ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg5ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg5ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg5ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg6ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg6ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg6ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg6ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg7ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg7ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg7ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg7ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg8ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg8ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg8ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vloxseg8ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vlse16.c | 2 +- auto-generated/llvm-overloaded-tests/vlse32.c | 2 +- auto-generated/llvm-overloaded-tests/vlse64.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg2e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg2e16ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg2e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg2e32ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg2e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg2e64ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg2e8ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg3e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg3e16ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg3e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg3e32ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg3e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg3e64ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg3e8ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg4e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg4e16ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg4e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg4e32ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg4e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg4e64ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg4e8ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg5e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg5e16ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg5e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg5e32ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg5e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg5e64ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg5e8ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg6e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg6e16ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg6e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg6e32ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg6e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg6e64ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg6e8ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg7e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg7e16ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg7e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg7e32ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg7e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg7e64ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg7e8ff.c | 2 +- 
auto-generated/llvm-overloaded-tests/vlseg8e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg8e16ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg8e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg8e32ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg8e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg8e64ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlseg8e8ff.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg2e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg2e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg2e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg3e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg3e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg3e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg4e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg4e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg4e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg5e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg5e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg5e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg6e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg6e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg6e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg7e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg7e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg7e64.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg8e16.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg8e32.c | 2 +- auto-generated/llvm-overloaded-tests/vlsseg8e64.c | 2 +- auto-generated/llvm-overloaded-tests/vluxei16.c | 2 +- auto-generated/llvm-overloaded-tests/vluxei32.c | 2 +- auto-generated/llvm-overloaded-tests/vluxei64.c | 2 +- auto-generated/llvm-overloaded-tests/vluxei8.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg2ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg2ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg2ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg2ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg3ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg3ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg3ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg3ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg4ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg4ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg4ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg4ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg5ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg5ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg5ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg5ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg6ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg6ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg6ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg6ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg7ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg7ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg7ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg7ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg8ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg8ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg8ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vluxseg8ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vmacc.c | 2 +- 
auto-generated/llvm-overloaded-tests/vmadd.c | 2 +- auto-generated/llvm-overloaded-tests/vmerge.c | 2 +- auto-generated/llvm-overloaded-tests/vmfeq.c | 2 +- auto-generated/llvm-overloaded-tests/vmfge.c | 2 +- auto-generated/llvm-overloaded-tests/vmfgt.c | 2 +- auto-generated/llvm-overloaded-tests/vmfle.c | 2 +- auto-generated/llvm-overloaded-tests/vmflt.c | 2 +- auto-generated/llvm-overloaded-tests/vmfne.c | 2 +- auto-generated/llvm-overloaded-tests/vmmv.c | 2 +- auto-generated/llvm-overloaded-tests/vmseq.c | 2 +- auto-generated/llvm-overloaded-tests/vmsge.c | 2 +- auto-generated/llvm-overloaded-tests/vmsgeu.c | 2 +- auto-generated/llvm-overloaded-tests/vmsgt.c | 2 +- auto-generated/llvm-overloaded-tests/vmsgtu.c | 2 +- auto-generated/llvm-overloaded-tests/vmsle.c | 2 +- auto-generated/llvm-overloaded-tests/vmsleu.c | 2 +- auto-generated/llvm-overloaded-tests/vmslt.c | 2 +- auto-generated/llvm-overloaded-tests/vmsltu.c | 2 +- auto-generated/llvm-overloaded-tests/vmsne.c | 2 +- auto-generated/llvm-overloaded-tests/vmv.c | 2 +- auto-generated/llvm-overloaded-tests/vneg.c | 2 +- auto-generated/llvm-overloaded-tests/vnmsac.c | 2 +- auto-generated/llvm-overloaded-tests/vnmsub.c | 2 +- auto-generated/llvm-overloaded-tests/vreinterpret.c | 2 +- auto-generated/llvm-overloaded-tests/vrgather.c | 2 +- auto-generated/llvm-overloaded-tests/vrgatherei16.c | 2 +- auto-generated/llvm-overloaded-tests/vse16.c | 2 +- auto-generated/llvm-overloaded-tests/vse32.c | 2 +- auto-generated/llvm-overloaded-tests/vse64.c | 2 +- auto-generated/llvm-overloaded-tests/vset.c | 2 +- auto-generated/llvm-overloaded-tests/vslidedown.c | 2 +- auto-generated/llvm-overloaded-tests/vslideup.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg2ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg2ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg2ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg2ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg3ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg3ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg3ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg3ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg4ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg4ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg4ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg4ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg5ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg5ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg5ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg5ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg6ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg6ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg6ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg6ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg7ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg7ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg7ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg7ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg8ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg8ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg8ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsoxseg8ei8.c | 2 +- 
auto-generated/llvm-overloaded-tests/vsse16.c | 2 +- auto-generated/llvm-overloaded-tests/vsse32.c | 2 +- auto-generated/llvm-overloaded-tests/vsse64.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg2e16.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg2e32.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg2e64.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg3e16.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg3e32.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg3e64.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg4e16.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg4e32.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg4e64.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg5e16.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg5e32.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg5e64.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg6e16.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg6e32.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg6e64.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg7e16.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg7e32.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg7e64.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg8e16.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg8e32.c | 2 +- auto-generated/llvm-overloaded-tests/vsseg8e64.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg2e16.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg2e32.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg2e64.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg3e16.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg3e32.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg3e64.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg4e16.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg4e32.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg4e64.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg5e16.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg5e32.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg5e64.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg6e16.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg6e32.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg6e64.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg7e16.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg7e32.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg7e64.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg8e16.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg8e32.c | 2 +- auto-generated/llvm-overloaded-tests/vssseg8e64.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg2ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg2ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg2ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg2ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg3ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg3ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg3ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg3ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg4ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg4ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg4ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg4ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg5ei16.c | 2 +- 
auto-generated/llvm-overloaded-tests/vsuxseg5ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg5ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg5ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg6ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg6ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg6ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg6ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg7ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg7ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg7ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg7ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg8ei16.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg8ei32.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg8ei64.c | 2 +- auto-generated/llvm-overloaded-tests/vsuxseg8ei8.c | 2 +- auto-generated/llvm-overloaded-tests/vwmacc.c | 2 +- auto-generated/llvm-overloaded-tests/vwmaccsu.c | 2 +- auto-generated/llvm-overloaded-tests/vwmaccu.c | 2 +- auto-generated/llvm-overloaded-tests/vwmaccus.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vcompress.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfabs.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfadd.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfclass.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfcvt.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfcvt_rtz.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfdiv.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfmacc.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfmadd.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfmax.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfmerge.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfmin.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfmsac.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfmsub.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfmul.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfmv.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfncvt.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfncvt_rod.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfncvt_rtz.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfneg.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfnmacc.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfnmadd.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfnmsac.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfnmsub.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfrdiv.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfrec7.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfredmax.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfredmin.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfredosum.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfredusum.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfrsqrt7.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfrsub.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfsgnj.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfsgnjn.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfsgnjx.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfslide1down.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfslide1up.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfsqrt.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfsub.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfwadd.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfwcvt.c | 2 +- 
auto-generated/policy_funcs/llvm-api-tests/vfwcvt_rtz.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfwmacc.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfwmsac.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfwmul.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfwnmacc.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfwnmsac.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfwredosum.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfwredusum.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vfwsub.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vle16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vle16ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vle32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vle32ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vle64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vle64ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vle8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vle8ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlse16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlse32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlse64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg2e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg2e16ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg2e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg2e32ff.c | 2 +- 
auto-generated/policy_funcs/llvm-api-tests/vlseg2e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg2e64ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg2e8ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg3e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg3e16ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg3e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg3e32ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg3e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg3e64ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg3e8ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg4e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg4e16ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg4e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg4e32ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg4e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg4e64ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg4e8ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg5e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg5e16ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg5e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg5e32ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg5e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg5e64ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg5e8ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg6e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg6e16ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg6e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg6e32ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg6e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg6e64ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg6e8ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg7e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg7e16ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg7e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg7e32ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg7e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg7e64ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg7e8ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg8e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg8e16ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg8e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg8e32ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg8e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg8e64ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlseg8e8ff.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg2e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg2e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg2e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg3e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg3e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg3e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg4e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg4e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg4e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg5e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg5e32.c | 2 +- 
auto-generated/policy_funcs/llvm-api-tests/vlsseg5e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg6e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg6e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg6e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg7e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg7e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg7e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg8e16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg8e32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vlsseg8e64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei32.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei64.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei8.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmacc.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmadd.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmerge.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmfeq.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmfge.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmfgt.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmfle.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmflt.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmfne.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmseq.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmsge.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmsgeu.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmsgt.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmsgtu.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmsle.c | 
2 +- auto-generated/policy_funcs/llvm-api-tests/vmsleu.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmslt.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmsltu.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmsne.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vmv.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vneg.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vnmsac.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vnmsub.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vrgather.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vrgatherei16.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vslidedown.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vslideup.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vwmacc.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vwmaccsu.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vwmaccu.c | 2 +- auto-generated/policy_funcs/llvm-api-tests/vwmaccus.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vcompress.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfabs.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfadd.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfclass.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfcvt.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfcvt_rtz.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfdiv.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfmacc.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfmadd.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfmax.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfmerge.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfmin.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfmsac.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfmsub.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfmul.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfmv.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vfncvt_rod.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vfncvt_rtz.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfneg.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfnmacc.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfnmadd.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfnmsac.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfnmsub.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfrdiv.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfrec7.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfredmax.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfredmin.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfredosum.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfredusum.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfrsqrt7.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfrsub.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnj.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnjn.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnjx.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vfslide1down.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vfslide1up.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfsqrt.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfsub.c | 2 +- 
auto-generated/policy_funcs/llvm-overloaded-tests/vfwadd.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfwcvt.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vfwcvt_rtz.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfwmacc.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfwmsac.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfwmul.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfwnmacc.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfwnmsac.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vfwredosum.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vfwredusum.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vfwsub.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vle16.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vle16ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vle32.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vle32ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vle64.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vle64ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vle8.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vle8ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vloxei16.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vloxei32.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vloxei64.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vloxei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg2ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg2ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg2ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg3ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg3ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg3ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg4ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg4ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg4ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg5ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg5ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg5ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg6ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg6ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg6ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg7ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg7ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg7ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg8ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg8ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vloxseg8ei8.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlse16.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlse32.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlse64.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c | 2 +- 
auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg2e32ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg2e64ff.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg2e8ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg3e32ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg3e64ff.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg3e8ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg4e32ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg4e64ff.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg4e8ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg5e32ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg5e64ff.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg5e8ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg6e32ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg6e64ff.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg6e8ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg7e32ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg7e64ff.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg7e8ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg8e32ff.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg8e64ff.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlseg8e8ff.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg2e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg2e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg2e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg3e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg3e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg3e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg4e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg4e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg4e64.c | 2 +- 
.../policy_funcs/llvm-overloaded-tests/vlsseg5e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg5e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg5e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg6e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg6e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg6e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg7e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg7e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg7e64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg8e16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg8e32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vlsseg8e64.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vluxei16.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vluxei32.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vluxei64.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vluxei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg2ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg2ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg2ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg3ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg3ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg3ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg4ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg4ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg4ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg5ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg5ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg5ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg6ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg6ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg6ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg7ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg7ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg7ei8.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg8ei32.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg8ei64.c | 2 +- .../policy_funcs/llvm-overloaded-tests/vluxseg8ei8.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmacc.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmadd.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmerge.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmfeq.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmfge.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmfgt.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmfle.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmflt.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmfne.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmseq.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmsge.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmsgeu.c | 2 +- auto-generated/policy_funcs/llvm-overloaded-tests/vmsgt.c | 2 +- 
 auto-generated/policy_funcs/llvm-overloaded-tests/vmsgtu.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vmsle.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vmsleu.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vmslt.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vmsltu.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vmsne.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vmv.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vneg.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vnmsac.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vnmsub.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vrgather.c | 2 +-
 .../policy_funcs/llvm-overloaded-tests/vrgatherei16.c | 2 +-
 .../policy_funcs/llvm-overloaded-tests/vslidedown.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vslideup.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vwmacc.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccsu.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccu.c | 2 +-
 auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccus.c | 2 +-
 1392 files changed, 1640 insertions(+), 1640 deletions(-)

diff --git a/auto-generated/bfloat16/llvm-api-tests/vcreate.c b/auto-generated/bfloat16/llvm-api-tests/vcreate.c
index 5bee817b5..1817fe2fa 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vcreate.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vcreate.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c
index 1b63183e2..8aaabc00a 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vfncvtbf16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c
index 14fed192d..eac92fe54 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vfwcvtbf16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c b/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c
index e012e6146..d1a8f0987 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vfwmaccbf16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vget.c b/auto-generated/bfloat16/llvm-api-tests/vget.c
index 8c7096ec2..ec87eaaa9 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vget.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vget.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vle16.c b/auto-generated/bfloat16/llvm-api-tests/vle16.c
index 875873a93..eccbebef2 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vle16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vle16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vle16ff.c b/auto-generated/bfloat16/llvm-api-tests/vle16ff.c
index 7b9ca9702..d65364df8 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vle16ff.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vle16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c b/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c
index 016db30c0..dfefe862d 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlmul_ext_v.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c b/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c
index d0a0519a0..3442c3407 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlmul_trunc_v.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxei16.c
index d2e354548..ee46c9333 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vloxei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vloxei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c
index 82390eb8e..6711e88a3 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg2ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c
index 24955edb0..befe30698 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg3ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c
index b12fb2fbb..4df618f2c 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg4ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c
index 285f7f3be..eab69bdd3 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg5ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c
index f21a83835..599be467f 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg6ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c
index f255edc8b..022e7c445 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg7ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c
index f7ee5d636..081e6baea 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vloxseg8ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlse16.c b/auto-generated/bfloat16/llvm-api-tests/vlse16.c
index 8fc01c254..8d46a5b17 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlse16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlse16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c
index 23b147817..13d67c05d 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c
index 54a4edf98..1c1fda35f 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg2e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c
index 5a736ae11..a5c119c76 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c
index 9f99544e7..40fc816b4 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg3e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c
index 9286edcae..cd9ec8571 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c
index ecbea325e..42c289730 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg4e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c
index 5640889e3..4bb145cff 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c
index 5991ba812..a7e0b5bb9 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg5e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c
index 70ff93569..fda3bb83e 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c
index 9703905a7..39b0d80e4 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg6e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c
index 414cefbb7..c19b11c0d 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c
index 972a04ba6..e49d868e5 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg7e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c
index d47c9997f..64efba08e 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c
index a21065727..94e2e8ab8 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlseg8e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c
index c751979eb..18feb74f4 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg2e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c
index 3079f01a4..7f72369a7 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg3e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c
index 521a65015..39843fda2 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg4e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c
index d54014f0e..08455a058 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg5e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c
index 5a392a834..6ac7d3d7c 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg6e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c
index bbde68805..cb76097ae 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg7e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c
index 2b071f52e..b74c31575 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vlsseg8e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxei16.c
index 100df4d39..453ed6312 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c
index db172b75e..ada9b0f4a 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg2ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c
index fc0a1357c..8e8fa7459 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg3ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c
index a2c52c77f..e8721c4dc 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg4ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c
index 1ea269f4c..7251bd57d 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg5ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c
index 6036ec2fe..d1bc20059 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg6ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c
index f12742e52..8976004be 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg7ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c
index 9f83601f1..035146ca5 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vluxseg8ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c b/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c
index df0b40f8b..1ea35ebe4 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vreinterpret.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vse16.c b/auto-generated/bfloat16/llvm-api-tests/vse16.c
index 322d8ad07..ef8439b13 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vse16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vse16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vset.c b/auto-generated/bfloat16/llvm-api-tests/vset.c
index b8ca7d76d..34eb99083 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vset.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vset.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c
index 451c88117..f68526908 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c
index 1c25a7306..0fc04e2f9 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg2ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c
index 540e4e68c..fc7f7d9a0 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg3ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c
index b57b09633..f1afd3b0b 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg4ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c
index 8f25b6940..01a77f055 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg5ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c
index 5e01fdfea..c49cd7df9 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg6ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c
index ab7ffcf52..b48bf2b3b 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg7ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c
index 0cc2ca54c..b8157f46e 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsoxseg8ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsse16.c b/auto-generated/bfloat16/llvm-api-tests/vsse16.c
index 3d9833b87..a202494b7 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsse16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsse16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c
index 3aadce048..ed102d645 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg2e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c
index 33bc2410c..b6e0c0949 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg3e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c
index cf651d433..027169c27 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg4e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c
index 92f7b4b43..84153c5de 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg5e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c
index efdb7d290..42e14cbc3 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg6e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c
index 633461d2a..994898a13 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg7e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c
index 0c4a8110c..a1e32450e 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsseg8e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c
index c4949ac8d..e2f83e96a 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg2e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c
index ca7542e17..c787bae52 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg3e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c
index 6e5fda871..a8cd816a0 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg4e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c
index 1baa6d6ff..8f1c52beb 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg5e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c
index d358d0067..0ba0d33a7 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg6e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c
index b63482059..4da8d8681 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg7e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c b/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c
index 4fc42633e..ed3086975 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vssseg8e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c
index b756de688..fef398b4d 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c
index 6bff93d68..5e0b0c230 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg2ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c
index 266ba9ab2..d85ff55de 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg3ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c
index d284218bc..24c1dd215 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg4ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c
index 567c43a02..826f4408c 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg5ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c
index dff8843a5..c0fbb6b1b 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg6ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c
index e4e9e86f7..e57536fb8 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg7ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c b/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c
index f23db8ab0..39b3182fd 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vsuxseg8ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-api-tests/vundefined.c b/auto-generated/bfloat16/llvm-api-tests/vundefined.c
index a5bf8ce56..05d5adb95 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vundefined.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vundefined.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
index 0dce388cd..91ddb1751 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfncvtbf16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
index fe7ecb30d..cf1c7f9de 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfwcvtbf16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
index 1177ef063..1cc4b64ba 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vfwmaccbf16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vget.c b/auto-generated/bfloat16/llvm-overloaded-tests/vget.c
index 338c8c6d5..f208bb712 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vget.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vget.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
index 8571e4566..6d43b56a7 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vle16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
index d7267eab2..65456faad 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vle16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c
index 2d8cb4387..3ec3b27f3 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_ext_v.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c
index 4efaaf438..cac4f8cc3 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlmul_trunc_v.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature
+zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c index a42f9894d..69ce719fa 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c index bdb4e561c..44411bcae 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg2ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c index cb767ad64..3ec919166 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg3ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c index bcd5508d9..00ecbd5b7 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg4ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c index 4612825fb..955414f1a 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg5ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: 
-target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c index ce78b6255..e6efc771e 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg6ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c index e491872f7..74198b68b 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg7ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c index ef8b13a94..d3e550ad2 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vloxseg8ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c index 80023ae79..00a13d537 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlse16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c index 612b250cc..b4dc477e0 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature 
+v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c index 0fd4bae02..39e46bcff 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg2e16ff.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c index d215785e7..bd881378c 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c index 5b750a20a..cdb1e223a 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg3e16ff.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c index 2863e915b..d61b79ca9 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c index f4d7235d2..ec96a4047 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg4e16ff.c @@ -1,7 +1,7 @@ // REQUIRES: 
riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c index 5eea8bba4..725dd8643 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c index b73e3c519..d8c7f5ced 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg5e16ff.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c index 76cda4f5f..d0c0be670 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c index f063eb6cc..7e34d9b5e 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg6e16ff.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c index a09537ff6..232af118b 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c +++ 
b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c index 7455290f8..849b63b6f 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg7e16ff.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c index 3b41fd8fb..45b44469a 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c index 6d31e52db..a54b6e0df 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlseg8e16ff.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c index 4eb1854a1..9c6158897 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg2e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c index e16c9112e..f592f50a7 100644 --- 
a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg3e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c index 43958d69e..92d5e2b5a 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg4e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c index 39163ff7d..3433919b2 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg5e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c index c99545243..ef3150157 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg6e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c index 78db84c06..a2be41a5a 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg7e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c 
b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c index 31c5ad044..f4f02c887 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vlsseg8e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c index fe4609d5d..dbbb21925 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c index fceca59f1..66ff13901 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg2ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c index 6f3335875..4ad199b43 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg3ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c index 8193c2ec2..ff373110a 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg4ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck 
--check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c index cf208dcb0..26d30bbcd 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg5ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c index 72fcdf884..7ff324c96 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg6ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c index fdb447eb7..8567e4eb0 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg7ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c index f73adf8a3..c58ee1ee3 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vluxseg8ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c b/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c index d72de04e2..3ed1b2791 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vreinterpret.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature 
+zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c index e96601610..7458b0fbf 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vse16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vset.c b/auto-generated/bfloat16/llvm-overloaded-tests/vset.c index 64c570d71..e5de51a6d 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vset.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vset.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c index 53c2a50a2..6c3065418 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c index 1a50e3145..bf48ad80d 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg2ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c index d9f420ea8..0c8a722fa 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg3ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ 
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c index 5f6138034..7ce48db47 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg4ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c index 1d7a8e6e5..b2873fc30 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg5ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c index bfec2ad54..adacd0f6d 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg6ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c index 9748e3033..70c95852e 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg7ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c index b542d6c5b..43dfb6b48 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsoxseg8ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// 
RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c index e2f381ece..4fee05466 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsse16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c index 0134205f2..4687d6301 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg2e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c index 384f7bbe9..4e191bdfa 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg3e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c index 9c228977a..82d027fa7 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg4e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c index d902e0fd8..b1bad6227 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg5e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature 
+experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c index c2b484a82..1cbb02f40 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg6e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c index 58684aa5a..d1bba335b 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg7e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c index 73df0a904..bda6e7050 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsseg8e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c index ab743e95c..d81dc61f1 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg2e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c index a590c8c22..b3ac44719 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg3e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple 
riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c index 08851d114..25960d3de 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg4e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c index 2b971cad2..b4104cb7d 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg5e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c index 6094a05fd..06f50a29b 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg6e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c index 547a4b544..4078be92a 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg7e16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c index 29bdfb2da..6e3d91f21 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vssseg8e16.c @@ -1,7 +1,7 @@ // 
REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c index cb48003e1..49414ce68 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c index f53fdd0a9..f069eee67 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg2ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c index 45ba0a1e5..8e2ff49bf 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg3ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c index 90b589259..92fb63fce 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg4ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c index 34d92b164..2080f8517 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c 
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg5ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c index e2110fcf8..a47d53ef1 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg6ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c index 0c5fd93fb..d1ea14ab4 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg7ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c index f8e7c6613..c0a23fde3 100644 --- a/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vsuxseg8ei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c index ea509f839..a5a06f358 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfncvtbf16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c 
b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c index 31f4e80c7..561d3233a 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwcvtbf16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c index c031bb190..5375cf6de 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfwmaccbf16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c index 9ad0dd194..47104da04 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c index f1f7aa8ca..210e224f7 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vle16ff.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c index 9562f56ed..8609a6866 100644 --- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxei16.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S 
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c
index f101015d0..76a1f0dd3 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg2ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c
index ede85340d..0e7deb619 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg3ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c
index 269e0443c..0a4ff1b91 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg4ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c
index 779ec3de4..22051b5bb 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg5ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c
index a07a088ed..464db0e1c 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg6ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c
index 7580f6f81..b2ed153d6 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg7ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c
index d018c0492..9834b9540 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vloxseg8ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c
index 2a80ce137..03440c818 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlse16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c
index 53c98a351..c06a4e3e9 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c
index 691547fb2..5d6ba8a67 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg2e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c
index 319bb5951..d34cdd6fb 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c
index a204d3cc3..a9ee87858 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg3e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c
index d0e04a9b1..b98db339b 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c
index b33286ebf..4f5c81117 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg4e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c
index 38d02f7df..10fc525e8 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c
index c8d063a68..152bbf79a 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg5e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c
index 1c251e404..30022989d 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c
index be16241b2..8162270f6 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg6e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c
index 900b6b734..4dc02df09 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c
index c478a9b09..9be0d7480 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg7e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c
index d2d7bc638..56f4d3c4b 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c
index d03a08b80..20ad03e7d 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlseg8e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c
index e0a2ecf37..8d6192e4d 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg2e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c
index 16db3c084..68b3102ff 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg3e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c
index f6bea1a7c..b9597ec05 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg4e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c
index 0a5c27341..35de70c99 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg5e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c
index d6c2c7dfe..4250d7c05 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg6e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c
index de18ed2a5..c355f95e3 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg7e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c
index fb6f0c128..7c450c4f5 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vlsseg8e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c
index d51db1c09..c75a51e4f 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c
index f8d25ee01..dc7426b69 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg2ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c
index 4d5b83c5d..339f4bcd8 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg3ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c
index a6fd8bb87..42fcd4725 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg4ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c
index 7daa96e44..e82ca2299 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg5ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c
index 15b02a519..09c61672a 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg6ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c
index dbabacce4..4c0ad1e09 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg7ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c
index c11093d1a..cbe595cae 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vluxseg8ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c
index f7c8df74a..6d0610085 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfncvtbf16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c
index 6af9f4bd2..8cd98cc9b 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaaded-tests/vfwcvtbf16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwcvtbf16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c
index 1a2b5d98c..962e7237c 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfwmaccbf16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c
index 1678cf93b..02fccbe25 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c
index 54cf2a926..73fc4c812 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vle16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c
index 9acdddb89..93488af5a 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c
index 48f32828e..f685dec38 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c
index 1bb001134..ba9e4c78e 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c
index 9bf27d95c..612d1efc9 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c
index 949463275..c35e9c440 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c
index 218c4a634..24fa36ca6 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c
index eafe8b250..8bc7c4719 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c
index 03c183a14..f9a208c88 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c
index d34157204..105964343 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlse16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c
index 6f5d9528a..1eae1e523 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c
index 3615486da..13eacb312 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c
index 4b8a5f935..aa822781f 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c
index e79a0e026..8734d698a 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c
index 1414f0d6d..47d8eb1af 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c
index 894ff6b77..0591f7fb9 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c
index 997e027e7..acccc223a 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c
index e36634022..717f8ca98 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c
index fdeff17bd..ee452fbb9 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c
index 795ad0c5b..20426758b 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c
index 2e2cbb8a3..4a9a0a998 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c
index 4ece05100..9d38a68f9 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c
index 84c0e8569..a0f4dfcda 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c
index 9f4a07daa..b6672e992 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c
index 5d8f0db54..202d8bb91 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c
index 4d274ff7e..afd8f84a4 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c
index c2f9820f8..7a4f05726 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c
index fb02a1ece..37a1b3943 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c
index c1809f985..0a9e457ac 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c
index 114222500..1b5d521aa 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c
index 9da589eb9..976543d10 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c
index 9c9bc8053..8170d1bc3 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c
index 8d9943c0e..c76a3fcac 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c
index 72336389a..f15d79531 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c
index 0c80732fb..0ace96fd0 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c
index 715e33139..da9c77b0e 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c
index f41308f31..6cf4e665c 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c
index 9d444118e..c4233c947 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c
index 3a419dbea..d38a2b17c 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN: -target-feature +experimental-zvfbfmin \
-// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN: -target-feature +zvfbfmin \
+// RUN: -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-api-tests/vcompress.c b/auto-generated/llvm-api-tests/vcompress.c
index c1b853636..66ba7f945 100644
--- a/auto-generated/llvm-api-tests/vcompress.c
+++ b/auto-generated/llvm-api-tests/vcompress.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-api-tests/vcpop.c b/auto-generated/llvm-api-tests/vcpop.c
index 1c0ff63ca..f9eab4847 100644
--- a/auto-generated/llvm-api-tests/vcpop.c
+++ b/auto-generated/llvm-api-tests/vcpop.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-api-tests/vcreate.c b/auto-generated/llvm-api-tests/vcreate.c
index eb210bb82..b4e93b138 100644
--- a/auto-generated/llvm-api-tests/vcreate.c
+++ b/auto-generated/llvm-api-tests/vcreate.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-api-tests/vfabs.c b/auto-generated/llvm-api-tests/vfabs.c
index 32c56c5d6..adced534c 100644
--- a/auto-generated/llvm-api-tests/vfabs.c
+++ b/auto-generated/llvm-api-tests/vfabs.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-api-tests/vfadd.c b/auto-generated/llvm-api-tests/vfadd.c
index 1e7c1f7bb..1b2b16922 100644
--- a/auto-generated/llvm-api-tests/vfadd.c
+++ b/auto-generated/llvm-api-tests/vfadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-api-tests/vfclass.c b/auto-generated/llvm-api-tests/vfclass.c
index b08fe000d..541c9417c 100644
--- a/auto-generated/llvm-api-tests/vfclass.c
+++ b/auto-generated/llvm-api-tests/vfclass.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-api-tests/vfcvt.c b/auto-generated/llvm-api-tests/vfcvt.c
index bd4ba3848..cd9786880 100644
--- a/auto-generated/llvm-api-tests/vfcvt.c
+++ b/auto-generated/llvm-api-tests/vfcvt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-api-tests/vfcvt_rtz.c b/auto-generated/llvm-api-tests/vfcvt_rtz.c
index 161cd61ed..bc24465e3 100644
--- a/auto-generated/llvm-api-tests/vfcvt_rtz.c
+++ b/auto-generated/llvm-api-tests/vfcvt_rtz.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-api-tests/vfdiv.c b/auto-generated/llvm-api-tests/vfdiv.c
index 22e15199a..6f8e8f0d8 100644
--- a/auto-generated/llvm-api-tests/vfdiv.c
+++ b/auto-generated/llvm-api-tests/vfdiv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-api-tests/vfmacc.c b/auto-generated/llvm-api-tests/vfmacc.c
index d62f00134..a20dbb446 100644
--- a/auto-generated/llvm-api-tests/vfmacc.c
+++ b/auto-generated/llvm-api-tests/vfmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-api-tests/vfmadd.c b/auto-generated/llvm-api-tests/vfmadd.c
index a15b7b458..171fdd960 100644
--- a/auto-generated/llvm-api-tests/vfmadd.c
+++ b/auto-generated/llvm-api-tests/vfmadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh
-disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfmax.c b/auto-generated/llvm-api-tests/vfmax.c index 4ab927dce..a8417714a 100644 --- a/auto-generated/llvm-api-tests/vfmax.c +++ b/auto-generated/llvm-api-tests/vfmax.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfmerge.c b/auto-generated/llvm-api-tests/vfmerge.c index b5c61ce53..84e03c8d9 100644 --- a/auto-generated/llvm-api-tests/vfmerge.c +++ b/auto-generated/llvm-api-tests/vfmerge.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfmin.c b/auto-generated/llvm-api-tests/vfmin.c index 99824aa6d..b71a75355 100644 --- a/auto-generated/llvm-api-tests/vfmin.c +++ b/auto-generated/llvm-api-tests/vfmin.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfmsac.c b/auto-generated/llvm-api-tests/vfmsac.c index c98410bfd..8a8184bb7 100644 --- a/auto-generated/llvm-api-tests/vfmsac.c +++ b/auto-generated/llvm-api-tests/vfmsac.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfmsub.c b/auto-generated/llvm-api-tests/vfmsub.c index fbf8eb409..abdd2b4ba 100644 --- a/auto-generated/llvm-api-tests/vfmsub.c +++ b/auto-generated/llvm-api-tests/vfmsub.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfmul.c b/auto-generated/llvm-api-tests/vfmul.c index 7a8f62b2f..9ced5cb86 100644 --- a/auto-generated/llvm-api-tests/vfmul.c +++ b/auto-generated/llvm-api-tests/vfmul.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git 
a/auto-generated/llvm-api-tests/vfmv.c b/auto-generated/llvm-api-tests/vfmv.c index 00bbac070..eaebfe803 100644 --- a/auto-generated/llvm-api-tests/vfmv.c +++ b/auto-generated/llvm-api-tests/vfmv.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfncvt.c b/auto-generated/llvm-api-tests/vfncvt.c index 13ddea146..34cf539ba 100644 --- a/auto-generated/llvm-api-tests/vfncvt.c +++ b/auto-generated/llvm-api-tests/vfncvt.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfncvt_rod.c b/auto-generated/llvm-api-tests/vfncvt_rod.c index c9d4d43a2..37d976494 100644 --- a/auto-generated/llvm-api-tests/vfncvt_rod.c +++ b/auto-generated/llvm-api-tests/vfncvt_rod.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfncvt_rtz.c b/auto-generated/llvm-api-tests/vfncvt_rtz.c index e16577975..4365aa631 100644 --- a/auto-generated/llvm-api-tests/vfncvt_rtz.c +++ b/auto-generated/llvm-api-tests/vfncvt_rtz.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfneg.c b/auto-generated/llvm-api-tests/vfneg.c index 675185ffb..00f2738b5 100644 --- a/auto-generated/llvm-api-tests/vfneg.c +++ b/auto-generated/llvm-api-tests/vfneg.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfnmacc.c b/auto-generated/llvm-api-tests/vfnmacc.c index 362ffa61f..1604e59e9 100644 --- a/auto-generated/llvm-api-tests/vfnmacc.c +++ b/auto-generated/llvm-api-tests/vfnmacc.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfnmadd.c b/auto-generated/llvm-api-tests/vfnmadd.c index b3a6e6968..f3c611920 
100644 --- a/auto-generated/llvm-api-tests/vfnmadd.c +++ b/auto-generated/llvm-api-tests/vfnmadd.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfnmsac.c b/auto-generated/llvm-api-tests/vfnmsac.c index ab69d02d2..ef15e33e6 100644 --- a/auto-generated/llvm-api-tests/vfnmsac.c +++ b/auto-generated/llvm-api-tests/vfnmsac.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfnmsub.c b/auto-generated/llvm-api-tests/vfnmsub.c index 3db7b157b..06d555eea 100644 --- a/auto-generated/llvm-api-tests/vfnmsub.c +++ b/auto-generated/llvm-api-tests/vfnmsub.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfrdiv.c b/auto-generated/llvm-api-tests/vfrdiv.c index 148f0aab3..83d568a8c 100644 --- a/auto-generated/llvm-api-tests/vfrdiv.c +++ b/auto-generated/llvm-api-tests/vfrdiv.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfrec7.c b/auto-generated/llvm-api-tests/vfrec7.c index 1eee5ca98..63457e869 100644 --- a/auto-generated/llvm-api-tests/vfrec7.c +++ b/auto-generated/llvm-api-tests/vfrec7.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfredmax.c b/auto-generated/llvm-api-tests/vfredmax.c index 6c589e7f8..99a66b6a6 100644 --- a/auto-generated/llvm-api-tests/vfredmax.c +++ b/auto-generated/llvm-api-tests/vfredmax.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfredmin.c b/auto-generated/llvm-api-tests/vfredmin.c index 452e21300..2b86c411b 100644 --- a/auto-generated/llvm-api-tests/vfredmin.c +++ b/auto-generated/llvm-api-tests/vfredmin.c @@ -1,6 +1,6 @@ 
// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfredosum.c b/auto-generated/llvm-api-tests/vfredosum.c index 4c814c631..629b5a700 100644 --- a/auto-generated/llvm-api-tests/vfredosum.c +++ b/auto-generated/llvm-api-tests/vfredosum.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfredusum.c b/auto-generated/llvm-api-tests/vfredusum.c index c9a14d21c..8e508fa30 100644 --- a/auto-generated/llvm-api-tests/vfredusum.c +++ b/auto-generated/llvm-api-tests/vfredusum.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfrsqrt7.c b/auto-generated/llvm-api-tests/vfrsqrt7.c index a8f842630..97ff9d763 100644 --- a/auto-generated/llvm-api-tests/vfrsqrt7.c +++ b/auto-generated/llvm-api-tests/vfrsqrt7.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfrsub.c b/auto-generated/llvm-api-tests/vfrsub.c index 45b5b9be0..130152a86 100644 --- a/auto-generated/llvm-api-tests/vfrsub.c +++ b/auto-generated/llvm-api-tests/vfrsub.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfsgnj.c b/auto-generated/llvm-api-tests/vfsgnj.c index bf86ab1f7..c7c14646d 100644 --- a/auto-generated/llvm-api-tests/vfsgnj.c +++ b/auto-generated/llvm-api-tests/vfsgnj.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfsgnjn.c b/auto-generated/llvm-api-tests/vfsgnjn.c index ea4687731..7d452ebfa 100644 --- a/auto-generated/llvm-api-tests/vfsgnjn.c +++ b/auto-generated/llvm-api-tests/vfsgnjn.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v 
-target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfsgnjx.c b/auto-generated/llvm-api-tests/vfsgnjx.c index d46b43734..1e116f4c0 100644 --- a/auto-generated/llvm-api-tests/vfsgnjx.c +++ b/auto-generated/llvm-api-tests/vfsgnjx.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfslide1down.c b/auto-generated/llvm-api-tests/vfslide1down.c index 114953471..fbb42b5f4 100644 --- a/auto-generated/llvm-api-tests/vfslide1down.c +++ b/auto-generated/llvm-api-tests/vfslide1down.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfslide1up.c b/auto-generated/llvm-api-tests/vfslide1up.c index e2cc32def..7a38df8e4 100644 --- a/auto-generated/llvm-api-tests/vfslide1up.c +++ b/auto-generated/llvm-api-tests/vfslide1up.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfsqrt.c b/auto-generated/llvm-api-tests/vfsqrt.c index 78ed79818..cbe3068c2 100644 --- a/auto-generated/llvm-api-tests/vfsqrt.c +++ b/auto-generated/llvm-api-tests/vfsqrt.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfsub.c b/auto-generated/llvm-api-tests/vfsub.c index fa17a9e17..6bd3ae2b1 100644 --- a/auto-generated/llvm-api-tests/vfsub.c +++ b/auto-generated/llvm-api-tests/vfsub.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfwadd.c b/auto-generated/llvm-api-tests/vfwadd.c index 41c9259bd..d12d482bb 100644 --- a/auto-generated/llvm-api-tests/vfwadd.c +++ b/auto-generated/llvm-api-tests/vfwadd.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: 
-target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfwcvt.c b/auto-generated/llvm-api-tests/vfwcvt.c index e890f795c..eab0aec1c 100644 --- a/auto-generated/llvm-api-tests/vfwcvt.c +++ b/auto-generated/llvm-api-tests/vfwcvt.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfwcvt_rtz.c b/auto-generated/llvm-api-tests/vfwcvt_rtz.c index 0989e272e..c906df136 100644 --- a/auto-generated/llvm-api-tests/vfwcvt_rtz.c +++ b/auto-generated/llvm-api-tests/vfwcvt_rtz.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfwmacc.c b/auto-generated/llvm-api-tests/vfwmacc.c index 6c033ed24..452a5d5ab 100644 --- a/auto-generated/llvm-api-tests/vfwmacc.c +++ b/auto-generated/llvm-api-tests/vfwmacc.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfwmsac.c b/auto-generated/llvm-api-tests/vfwmsac.c index 13dac6399..9e56700a0 100644 --- a/auto-generated/llvm-api-tests/vfwmsac.c +++ b/auto-generated/llvm-api-tests/vfwmsac.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfwmul.c b/auto-generated/llvm-api-tests/vfwmul.c index e9356f26e..411561c07 100644 --- a/auto-generated/llvm-api-tests/vfwmul.c +++ b/auto-generated/llvm-api-tests/vfwmul.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfwnmacc.c b/auto-generated/llvm-api-tests/vfwnmacc.c index 1dc428733..36bcae2a8 100644 --- a/auto-generated/llvm-api-tests/vfwnmacc.c +++ b/auto-generated/llvm-api-tests/vfwnmacc.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: 
FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfwnmsac.c b/auto-generated/llvm-api-tests/vfwnmsac.c index 558bc3f02..565516d0e 100644 --- a/auto-generated/llvm-api-tests/vfwnmsac.c +++ b/auto-generated/llvm-api-tests/vfwnmsac.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfwredosum.c b/auto-generated/llvm-api-tests/vfwredosum.c index e0435cb6f..15558ff50 100644 --- a/auto-generated/llvm-api-tests/vfwredosum.c +++ b/auto-generated/llvm-api-tests/vfwredosum.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfwredusum.c b/auto-generated/llvm-api-tests/vfwredusum.c index 805f803e7..030a39d1b 100644 --- a/auto-generated/llvm-api-tests/vfwredusum.c +++ b/auto-generated/llvm-api-tests/vfwredusum.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vfwsub.c b/auto-generated/llvm-api-tests/vfwsub.c index 4ea5afc1e..25d5cd882 100644 --- a/auto-generated/llvm-api-tests/vfwsub.c +++ b/auto-generated/llvm-api-tests/vfwsub.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vget.c b/auto-generated/llvm-api-tests/vget.c index 0a1ef78b9..4cdc31aee 100644 --- a/auto-generated/llvm-api-tests/vget.c +++ b/auto-generated/llvm-api-tests/vget.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vle16.c b/auto-generated/llvm-api-tests/vle16.c index 1edb8b2e5..ca80781cd 100644 --- a/auto-generated/llvm-api-tests/vle16.c +++ b/auto-generated/llvm-api-tests/vle16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vle16ff.c 
b/auto-generated/llvm-api-tests/vle16ff.c index 3ada7b0b7..7fba41965 100644 --- a/auto-generated/llvm-api-tests/vle16ff.c +++ b/auto-generated/llvm-api-tests/vle16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vle32.c b/auto-generated/llvm-api-tests/vle32.c index 23abaac35..96b4f1adb 100644 --- a/auto-generated/llvm-api-tests/vle32.c +++ b/auto-generated/llvm-api-tests/vle32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vle32ff.c b/auto-generated/llvm-api-tests/vle32ff.c index 13d16aba7..168d3b626 100644 --- a/auto-generated/llvm-api-tests/vle32ff.c +++ b/auto-generated/llvm-api-tests/vle32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vle64.c b/auto-generated/llvm-api-tests/vle64.c index 89b222c0f..129e9136b 100644 --- a/auto-generated/llvm-api-tests/vle64.c +++ b/auto-generated/llvm-api-tests/vle64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vle64ff.c b/auto-generated/llvm-api-tests/vle64ff.c index e0e57702c..d675688d1 100644 --- a/auto-generated/llvm-api-tests/vle64ff.c +++ b/auto-generated/llvm-api-tests/vle64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vle8.c b/auto-generated/llvm-api-tests/vle8.c index cf7ad5a70..f3b7ebdcb 100644 --- a/auto-generated/llvm-api-tests/vle8.c +++ b/auto-generated/llvm-api-tests/vle8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vle8ff.c b/auto-generated/llvm-api-tests/vle8ff.c index 89e9fa727..40bcb637b 100644 --- a/auto-generated/llvm-api-tests/vle8ff.c +++ 
b/auto-generated/llvm-api-tests/vle8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vlmul_ext_v.c b/auto-generated/llvm-api-tests/vlmul_ext_v.c index 4ba4b622d..f5efa7d37 100644 --- a/auto-generated/llvm-api-tests/vlmul_ext_v.c +++ b/auto-generated/llvm-api-tests/vlmul_ext_v.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vlmul_trunc_v.c b/auto-generated/llvm-api-tests/vlmul_trunc_v.c index e818f02b8..5593e0c02 100644 --- a/auto-generated/llvm-api-tests/vlmul_trunc_v.c +++ b/auto-generated/llvm-api-tests/vlmul_trunc_v.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxei16.c b/auto-generated/llvm-api-tests/vloxei16.c index 4a3d841ac..6aac7d028 100644 --- a/auto-generated/llvm-api-tests/vloxei16.c +++ b/auto-generated/llvm-api-tests/vloxei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxei32.c b/auto-generated/llvm-api-tests/vloxei32.c index 4e11ae983..ddb683a7e 100644 --- a/auto-generated/llvm-api-tests/vloxei32.c +++ b/auto-generated/llvm-api-tests/vloxei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxei64.c b/auto-generated/llvm-api-tests/vloxei64.c index 83c8b97ea..92fc3f303 100644 --- a/auto-generated/llvm-api-tests/vloxei64.c +++ b/auto-generated/llvm-api-tests/vloxei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxei8.c b/auto-generated/llvm-api-tests/vloxei8.c index 853c7222c..13f1fb3c2 100644 --- a/auto-generated/llvm-api-tests/vloxei8.c +++ b/auto-generated/llvm-api-tests/vloxei8.c @@ -1,6 +1,6 @@ // 
REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg2ei16.c b/auto-generated/llvm-api-tests/vloxseg2ei16.c index 02df39e7d..71129dee7 100644 --- a/auto-generated/llvm-api-tests/vloxseg2ei16.c +++ b/auto-generated/llvm-api-tests/vloxseg2ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg2ei32.c b/auto-generated/llvm-api-tests/vloxseg2ei32.c index bcaf9bda8..58e353881 100644 --- a/auto-generated/llvm-api-tests/vloxseg2ei32.c +++ b/auto-generated/llvm-api-tests/vloxseg2ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg2ei64.c b/auto-generated/llvm-api-tests/vloxseg2ei64.c index 15c0eb168..03cd5d99b 100644 --- a/auto-generated/llvm-api-tests/vloxseg2ei64.c +++ b/auto-generated/llvm-api-tests/vloxseg2ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg2ei8.c b/auto-generated/llvm-api-tests/vloxseg2ei8.c index daa5391f1..bf9b24527 100644 --- a/auto-generated/llvm-api-tests/vloxseg2ei8.c +++ b/auto-generated/llvm-api-tests/vloxseg2ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg3ei16.c b/auto-generated/llvm-api-tests/vloxseg3ei16.c index eff43cb00..eec681cbf 100644 --- a/auto-generated/llvm-api-tests/vloxseg3ei16.c +++ b/auto-generated/llvm-api-tests/vloxseg3ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg3ei32.c b/auto-generated/llvm-api-tests/vloxseg3ei32.c index 9c22904f6..a70173a1d 100644 --- a/auto-generated/llvm-api-tests/vloxseg3ei32.c +++ b/auto-generated/llvm-api-tests/vloxseg3ei32.c @@ -1,6 +1,6 @@ // 
REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg3ei64.c b/auto-generated/llvm-api-tests/vloxseg3ei64.c index 3ed198662..d036adb1c 100644 --- a/auto-generated/llvm-api-tests/vloxseg3ei64.c +++ b/auto-generated/llvm-api-tests/vloxseg3ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg3ei8.c b/auto-generated/llvm-api-tests/vloxseg3ei8.c index 8f2aa2c20..9eed40296 100644 --- a/auto-generated/llvm-api-tests/vloxseg3ei8.c +++ b/auto-generated/llvm-api-tests/vloxseg3ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg4ei16.c b/auto-generated/llvm-api-tests/vloxseg4ei16.c index d7265f89a..7b1c86cd3 100644 --- a/auto-generated/llvm-api-tests/vloxseg4ei16.c +++ b/auto-generated/llvm-api-tests/vloxseg4ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg4ei32.c b/auto-generated/llvm-api-tests/vloxseg4ei32.c index 0486fb77d..24ffa725b 100644 --- a/auto-generated/llvm-api-tests/vloxseg4ei32.c +++ b/auto-generated/llvm-api-tests/vloxseg4ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg4ei64.c b/auto-generated/llvm-api-tests/vloxseg4ei64.c index de6f1a886..7f49b027e 100644 --- a/auto-generated/llvm-api-tests/vloxseg4ei64.c +++ b/auto-generated/llvm-api-tests/vloxseg4ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg4ei8.c b/auto-generated/llvm-api-tests/vloxseg4ei8.c index d0defec82..6b04e997b 100644 --- a/auto-generated/llvm-api-tests/vloxseg4ei8.c +++ b/auto-generated/llvm-api-tests/vloxseg4ei8.c @@ -1,6 +1,6 @@ // 
REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg5ei16.c b/auto-generated/llvm-api-tests/vloxseg5ei16.c index 0d2b44bea..1a1a1b2ed 100644 --- a/auto-generated/llvm-api-tests/vloxseg5ei16.c +++ b/auto-generated/llvm-api-tests/vloxseg5ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg5ei32.c b/auto-generated/llvm-api-tests/vloxseg5ei32.c index a80622c93..836f47939 100644 --- a/auto-generated/llvm-api-tests/vloxseg5ei32.c +++ b/auto-generated/llvm-api-tests/vloxseg5ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg5ei64.c b/auto-generated/llvm-api-tests/vloxseg5ei64.c index 9b13b99fc..ba914fd2f 100644 --- a/auto-generated/llvm-api-tests/vloxseg5ei64.c +++ b/auto-generated/llvm-api-tests/vloxseg5ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg5ei8.c b/auto-generated/llvm-api-tests/vloxseg5ei8.c index db6007934..6bce373aa 100644 --- a/auto-generated/llvm-api-tests/vloxseg5ei8.c +++ b/auto-generated/llvm-api-tests/vloxseg5ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg6ei16.c b/auto-generated/llvm-api-tests/vloxseg6ei16.c index 4415a7ab7..c27226625 100644 --- a/auto-generated/llvm-api-tests/vloxseg6ei16.c +++ b/auto-generated/llvm-api-tests/vloxseg6ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg6ei32.c b/auto-generated/llvm-api-tests/vloxseg6ei32.c index 637fd8d87..61413cd94 100644 --- a/auto-generated/llvm-api-tests/vloxseg6ei32.c +++ b/auto-generated/llvm-api-tests/vloxseg6ei32.c @@ -1,6 +1,6 @@ // 
REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg6ei64.c b/auto-generated/llvm-api-tests/vloxseg6ei64.c index 8d9b8736e..ff68337f5 100644 --- a/auto-generated/llvm-api-tests/vloxseg6ei64.c +++ b/auto-generated/llvm-api-tests/vloxseg6ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg6ei8.c b/auto-generated/llvm-api-tests/vloxseg6ei8.c index 5d4eb2a2a..1592801ff 100644 --- a/auto-generated/llvm-api-tests/vloxseg6ei8.c +++ b/auto-generated/llvm-api-tests/vloxseg6ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg7ei16.c b/auto-generated/llvm-api-tests/vloxseg7ei16.c index 6080aa71f..c2f3fac8c 100644 --- a/auto-generated/llvm-api-tests/vloxseg7ei16.c +++ b/auto-generated/llvm-api-tests/vloxseg7ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg7ei32.c b/auto-generated/llvm-api-tests/vloxseg7ei32.c index 2d5dcbceb..7d8bc264b 100644 --- a/auto-generated/llvm-api-tests/vloxseg7ei32.c +++ b/auto-generated/llvm-api-tests/vloxseg7ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg7ei64.c b/auto-generated/llvm-api-tests/vloxseg7ei64.c index 02241410c..398e51c31 100644 --- a/auto-generated/llvm-api-tests/vloxseg7ei64.c +++ b/auto-generated/llvm-api-tests/vloxseg7ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vloxseg7ei8.c b/auto-generated/llvm-api-tests/vloxseg7ei8.c index 76401de47..206b328f2 100644 --- a/auto-generated/llvm-api-tests/vloxseg7ei8.c +++ b/auto-generated/llvm-api-tests/vloxseg7ei8.c @@ -1,6 +1,6 @@ // 
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vloxseg8ei16.c b/auto-generated/llvm-api-tests/vloxseg8ei16.c
index 938a60c4b..3966b114a 100644
--- a/auto-generated/llvm-api-tests/vloxseg8ei16.c
+++ b/auto-generated/llvm-api-tests/vloxseg8ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vloxseg8ei32.c b/auto-generated/llvm-api-tests/vloxseg8ei32.c
index ce285812a..128142a26 100644
--- a/auto-generated/llvm-api-tests/vloxseg8ei32.c
+++ b/auto-generated/llvm-api-tests/vloxseg8ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vloxseg8ei64.c b/auto-generated/llvm-api-tests/vloxseg8ei64.c
index b1edf81ed..1403d63e3 100644
--- a/auto-generated/llvm-api-tests/vloxseg8ei64.c
+++ b/auto-generated/llvm-api-tests/vloxseg8ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vloxseg8ei8.c b/auto-generated/llvm-api-tests/vloxseg8ei8.c
index fe0929cc4..e5dd5e467 100644
--- a/auto-generated/llvm-api-tests/vloxseg8ei8.c
+++ b/auto-generated/llvm-api-tests/vloxseg8ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlse16.c b/auto-generated/llvm-api-tests/vlse16.c
index 90e15466e..eab608682 100644
--- a/auto-generated/llvm-api-tests/vlse16.c
+++ b/auto-generated/llvm-api-tests/vlse16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlse32.c b/auto-generated/llvm-api-tests/vlse32.c
index 251f358b1..1cfb98b97 100644
--- a/auto-generated/llvm-api-tests/vlse32.c
+++ b/auto-generated/llvm-api-tests/vlse32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlse64.c b/auto-generated/llvm-api-tests/vlse64.c
index 68d9b5b2a..061ade920 100644
--- a/auto-generated/llvm-api-tests/vlse64.c
+++ b/auto-generated/llvm-api-tests/vlse64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg2e16.c b/auto-generated/llvm-api-tests/vlseg2e16.c
index bc8b9bad9..71f25cf95 100644
--- a/auto-generated/llvm-api-tests/vlseg2e16.c
+++ b/auto-generated/llvm-api-tests/vlseg2e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg2e16ff.c b/auto-generated/llvm-api-tests/vlseg2e16ff.c
index b97992fd8..99268dc34 100644
--- a/auto-generated/llvm-api-tests/vlseg2e16ff.c
+++ b/auto-generated/llvm-api-tests/vlseg2e16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg2e32.c b/auto-generated/llvm-api-tests/vlseg2e32.c
index ef79ea766..ae76f5585 100644
--- a/auto-generated/llvm-api-tests/vlseg2e32.c
+++ b/auto-generated/llvm-api-tests/vlseg2e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg2e32ff.c b/auto-generated/llvm-api-tests/vlseg2e32ff.c
index 8facb295d..bc3c05f5b 100644
--- a/auto-generated/llvm-api-tests/vlseg2e32ff.c
+++ b/auto-generated/llvm-api-tests/vlseg2e32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg2e64.c b/auto-generated/llvm-api-tests/vlseg2e64.c
index 48ceb013d..34c0d58b8 100644
--- a/auto-generated/llvm-api-tests/vlseg2e64.c
+++ b/auto-generated/llvm-api-tests/vlseg2e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg2e64ff.c b/auto-generated/llvm-api-tests/vlseg2e64ff.c
index 361262628..7699a25e6 100644
--- a/auto-generated/llvm-api-tests/vlseg2e64ff.c
+++ b/auto-generated/llvm-api-tests/vlseg2e64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg2e8ff.c b/auto-generated/llvm-api-tests/vlseg2e8ff.c
index 3a4cd942e..72b14cbbd 100644
--- a/auto-generated/llvm-api-tests/vlseg2e8ff.c
+++ b/auto-generated/llvm-api-tests/vlseg2e8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg3e16.c b/auto-generated/llvm-api-tests/vlseg3e16.c
index 711d3a29a..98e3744bf 100644
--- a/auto-generated/llvm-api-tests/vlseg3e16.c
+++ b/auto-generated/llvm-api-tests/vlseg3e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg3e16ff.c b/auto-generated/llvm-api-tests/vlseg3e16ff.c
index 231804341..0c496428c 100644
--- a/auto-generated/llvm-api-tests/vlseg3e16ff.c
+++ b/auto-generated/llvm-api-tests/vlseg3e16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg3e32.c b/auto-generated/llvm-api-tests/vlseg3e32.c
index 51f9a55e4..84e99d77a 100644
--- a/auto-generated/llvm-api-tests/vlseg3e32.c
+++ b/auto-generated/llvm-api-tests/vlseg3e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg3e32ff.c b/auto-generated/llvm-api-tests/vlseg3e32ff.c
index 5b93ced09..5e8c66e0f 100644
--- a/auto-generated/llvm-api-tests/vlseg3e32ff.c
+++ b/auto-generated/llvm-api-tests/vlseg3e32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg3e64.c b/auto-generated/llvm-api-tests/vlseg3e64.c
index c156869fe..397d61da9 100644
--- a/auto-generated/llvm-api-tests/vlseg3e64.c
+++ b/auto-generated/llvm-api-tests/vlseg3e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg3e64ff.c b/auto-generated/llvm-api-tests/vlseg3e64ff.c
index f1f2a19c9..9d820e598 100644
--- a/auto-generated/llvm-api-tests/vlseg3e64ff.c
+++ b/auto-generated/llvm-api-tests/vlseg3e64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg3e8ff.c b/auto-generated/llvm-api-tests/vlseg3e8ff.c
index 53131921d..7676dbddc 100644
--- a/auto-generated/llvm-api-tests/vlseg3e8ff.c
+++ b/auto-generated/llvm-api-tests/vlseg3e8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg4e16.c b/auto-generated/llvm-api-tests/vlseg4e16.c
index 32d655d0f..9c3c85978 100644
--- a/auto-generated/llvm-api-tests/vlseg4e16.c
+++ b/auto-generated/llvm-api-tests/vlseg4e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg4e16ff.c b/auto-generated/llvm-api-tests/vlseg4e16ff.c
index e2a731656..a3424dac0 100644
--- a/auto-generated/llvm-api-tests/vlseg4e16ff.c
+++ b/auto-generated/llvm-api-tests/vlseg4e16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg4e32.c b/auto-generated/llvm-api-tests/vlseg4e32.c
index fe3f00ce6..cc48e8770 100644
--- a/auto-generated/llvm-api-tests/vlseg4e32.c
+++ b/auto-generated/llvm-api-tests/vlseg4e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg4e32ff.c b/auto-generated/llvm-api-tests/vlseg4e32ff.c
index 9f444d9e5..7e1712f5b 100644
--- a/auto-generated/llvm-api-tests/vlseg4e32ff.c
+++ b/auto-generated/llvm-api-tests/vlseg4e32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg4e64.c b/auto-generated/llvm-api-tests/vlseg4e64.c
index 2c722cea4..068480511 100644
--- a/auto-generated/llvm-api-tests/vlseg4e64.c
+++ b/auto-generated/llvm-api-tests/vlseg4e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg4e64ff.c b/auto-generated/llvm-api-tests/vlseg4e64ff.c
index 742d58c7f..1e6b01a8d 100644
--- a/auto-generated/llvm-api-tests/vlseg4e64ff.c
+++ b/auto-generated/llvm-api-tests/vlseg4e64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg4e8ff.c b/auto-generated/llvm-api-tests/vlseg4e8ff.c
index 0bbc04d02..cca3754b1 100644
--- a/auto-generated/llvm-api-tests/vlseg4e8ff.c
+++ b/auto-generated/llvm-api-tests/vlseg4e8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg5e16.c b/auto-generated/llvm-api-tests/vlseg5e16.c
index ea0ed124b..e92cfbabe 100644
--- a/auto-generated/llvm-api-tests/vlseg5e16.c
+++ b/auto-generated/llvm-api-tests/vlseg5e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg5e16ff.c b/auto-generated/llvm-api-tests/vlseg5e16ff.c
index c3b8d8eba..f5dc0b4b2 100644
--- a/auto-generated/llvm-api-tests/vlseg5e16ff.c
+++ b/auto-generated/llvm-api-tests/vlseg5e16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg5e32.c b/auto-generated/llvm-api-tests/vlseg5e32.c
index 21da0e4b7..66811f1fd 100644
--- a/auto-generated/llvm-api-tests/vlseg5e32.c
+++ b/auto-generated/llvm-api-tests/vlseg5e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg5e32ff.c b/auto-generated/llvm-api-tests/vlseg5e32ff.c
index 21a95ce70..bcb66c87e 100644
--- a/auto-generated/llvm-api-tests/vlseg5e32ff.c
+++ b/auto-generated/llvm-api-tests/vlseg5e32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg5e64.c b/auto-generated/llvm-api-tests/vlseg5e64.c
index f6647642d..d10cbeb5d 100644
--- a/auto-generated/llvm-api-tests/vlseg5e64.c
+++ b/auto-generated/llvm-api-tests/vlseg5e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg5e64ff.c b/auto-generated/llvm-api-tests/vlseg5e64ff.c
index 8fadaf499..b0069d3b7 100644
--- a/auto-generated/llvm-api-tests/vlseg5e64ff.c
+++ b/auto-generated/llvm-api-tests/vlseg5e64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg5e8ff.c b/auto-generated/llvm-api-tests/vlseg5e8ff.c
index 558d621b0..0a0d83444 100644
--- a/auto-generated/llvm-api-tests/vlseg5e8ff.c
+++ b/auto-generated/llvm-api-tests/vlseg5e8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg6e16.c b/auto-generated/llvm-api-tests/vlseg6e16.c
index b7ce52ee2..6bb7496a6 100644
--- a/auto-generated/llvm-api-tests/vlseg6e16.c
+++ b/auto-generated/llvm-api-tests/vlseg6e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg6e16ff.c b/auto-generated/llvm-api-tests/vlseg6e16ff.c
index a63f555d7..639729e02 100644
--- a/auto-generated/llvm-api-tests/vlseg6e16ff.c
+++ b/auto-generated/llvm-api-tests/vlseg6e16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg6e32.c b/auto-generated/llvm-api-tests/vlseg6e32.c
index acb7c1d44..92463b7fe 100644
--- a/auto-generated/llvm-api-tests/vlseg6e32.c
+++ b/auto-generated/llvm-api-tests/vlseg6e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg6e32ff.c b/auto-generated/llvm-api-tests/vlseg6e32ff.c
index 363e14621..034614b1f 100644
--- a/auto-generated/llvm-api-tests/vlseg6e32ff.c
+++ b/auto-generated/llvm-api-tests/vlseg6e32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg6e64.c b/auto-generated/llvm-api-tests/vlseg6e64.c
index 31453fcd8..dc86357ce 100644
--- a/auto-generated/llvm-api-tests/vlseg6e64.c
+++ b/auto-generated/llvm-api-tests/vlseg6e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg6e64ff.c b/auto-generated/llvm-api-tests/vlseg6e64ff.c
index e4ea835c2..6cce1d8d5 100644
--- a/auto-generated/llvm-api-tests/vlseg6e64ff.c
+++ b/auto-generated/llvm-api-tests/vlseg6e64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg6e8ff.c b/auto-generated/llvm-api-tests/vlseg6e8ff.c
index cd0310064..30df3e7ab 100644
--- a/auto-generated/llvm-api-tests/vlseg6e8ff.c
+++ b/auto-generated/llvm-api-tests/vlseg6e8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg7e16.c b/auto-generated/llvm-api-tests/vlseg7e16.c
index 77ff3e6d1..843b4b90f 100644
--- a/auto-generated/llvm-api-tests/vlseg7e16.c
+++ b/auto-generated/llvm-api-tests/vlseg7e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg7e16ff.c b/auto-generated/llvm-api-tests/vlseg7e16ff.c
index a396ff1cf..36c0c3fde 100644
--- a/auto-generated/llvm-api-tests/vlseg7e16ff.c
+++ b/auto-generated/llvm-api-tests/vlseg7e16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg7e32.c b/auto-generated/llvm-api-tests/vlseg7e32.c
index 4854831c9..f2be18bfa 100644
--- a/auto-generated/llvm-api-tests/vlseg7e32.c
+++ b/auto-generated/llvm-api-tests/vlseg7e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg7e32ff.c b/auto-generated/llvm-api-tests/vlseg7e32ff.c
index ff216e035..ded22607d 100644
--- a/auto-generated/llvm-api-tests/vlseg7e32ff.c
+++ b/auto-generated/llvm-api-tests/vlseg7e32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg7e64.c b/auto-generated/llvm-api-tests/vlseg7e64.c
index e9e86c28d..fd21f691d 100644
--- a/auto-generated/llvm-api-tests/vlseg7e64.c
+++ b/auto-generated/llvm-api-tests/vlseg7e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg7e64ff.c b/auto-generated/llvm-api-tests/vlseg7e64ff.c
index 8db61b1d7..d26691383 100644
--- a/auto-generated/llvm-api-tests/vlseg7e64ff.c
+++ b/auto-generated/llvm-api-tests/vlseg7e64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg7e8ff.c b/auto-generated/llvm-api-tests/vlseg7e8ff.c
index 8553f8dd7..c61054187 100644
--- a/auto-generated/llvm-api-tests/vlseg7e8ff.c
+++ b/auto-generated/llvm-api-tests/vlseg7e8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg8e16.c b/auto-generated/llvm-api-tests/vlseg8e16.c
index 3f4871f13..25a8ba021 100644
--- a/auto-generated/llvm-api-tests/vlseg8e16.c
+++ b/auto-generated/llvm-api-tests/vlseg8e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg8e16ff.c b/auto-generated/llvm-api-tests/vlseg8e16ff.c
index cb89b04e0..a97949ea8 100644
--- a/auto-generated/llvm-api-tests/vlseg8e16ff.c
+++ b/auto-generated/llvm-api-tests/vlseg8e16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg8e32.c b/auto-generated/llvm-api-tests/vlseg8e32.c
index 517199ff5..f6e57b2b3 100644
--- a/auto-generated/llvm-api-tests/vlseg8e32.c
+++ b/auto-generated/llvm-api-tests/vlseg8e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg8e32ff.c b/auto-generated/llvm-api-tests/vlseg8e32ff.c
index 9d2cead21..5628e7350 100644
--- a/auto-generated/llvm-api-tests/vlseg8e32ff.c
+++ b/auto-generated/llvm-api-tests/vlseg8e32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg8e64.c b/auto-generated/llvm-api-tests/vlseg8e64.c
index 527a8ae90..f27101799 100644
--- a/auto-generated/llvm-api-tests/vlseg8e64.c
+++ b/auto-generated/llvm-api-tests/vlseg8e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg8e64ff.c b/auto-generated/llvm-api-tests/vlseg8e64ff.c
index c75296ca4..ab17e31e1 100644
--- a/auto-generated/llvm-api-tests/vlseg8e64ff.c
+++ b/auto-generated/llvm-api-tests/vlseg8e64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlseg8e8ff.c b/auto-generated/llvm-api-tests/vlseg8e8ff.c
index 9d34f4b36..73f4a5256 100644
--- a/auto-generated/llvm-api-tests/vlseg8e8ff.c
+++ b/auto-generated/llvm-api-tests/vlseg8e8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg2e16.c b/auto-generated/llvm-api-tests/vlsseg2e16.c
index 874697cec..977ce06a9 100644
--- a/auto-generated/llvm-api-tests/vlsseg2e16.c
+++ b/auto-generated/llvm-api-tests/vlsseg2e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg2e32.c b/auto-generated/llvm-api-tests/vlsseg2e32.c
index 4c1b9d906..0dad46d52 100644
--- a/auto-generated/llvm-api-tests/vlsseg2e32.c
+++ b/auto-generated/llvm-api-tests/vlsseg2e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg2e64.c b/auto-generated/llvm-api-tests/vlsseg2e64.c
index 6fcce87ae..2a0d9bdc8 100644
--- a/auto-generated/llvm-api-tests/vlsseg2e64.c
+++ b/auto-generated/llvm-api-tests/vlsseg2e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg3e16.c b/auto-generated/llvm-api-tests/vlsseg3e16.c
index 6f350d383..0367c073e 100644
--- a/auto-generated/llvm-api-tests/vlsseg3e16.c
+++ b/auto-generated/llvm-api-tests/vlsseg3e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg3e32.c b/auto-generated/llvm-api-tests/vlsseg3e32.c
index 4bff8b886..adcc52ab0 100644
--- a/auto-generated/llvm-api-tests/vlsseg3e32.c
+++ b/auto-generated/llvm-api-tests/vlsseg3e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg3e64.c b/auto-generated/llvm-api-tests/vlsseg3e64.c
index 5f2283c60..88feb12c0 100644
--- a/auto-generated/llvm-api-tests/vlsseg3e64.c
+++ b/auto-generated/llvm-api-tests/vlsseg3e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg4e16.c b/auto-generated/llvm-api-tests/vlsseg4e16.c
index e2d8a4ab4..09b4c4b4e 100644
--- a/auto-generated/llvm-api-tests/vlsseg4e16.c
+++ b/auto-generated/llvm-api-tests/vlsseg4e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg4e32.c b/auto-generated/llvm-api-tests/vlsseg4e32.c
index 472380cb7..a37c602b4 100644
--- a/auto-generated/llvm-api-tests/vlsseg4e32.c
+++ b/auto-generated/llvm-api-tests/vlsseg4e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg4e64.c b/auto-generated/llvm-api-tests/vlsseg4e64.c
index 12cffbdb4..8e628bf23 100644
--- a/auto-generated/llvm-api-tests/vlsseg4e64.c
+++ b/auto-generated/llvm-api-tests/vlsseg4e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg5e16.c b/auto-generated/llvm-api-tests/vlsseg5e16.c
index 70c7565c9..d09b1115c 100644
--- a/auto-generated/llvm-api-tests/vlsseg5e16.c
+++ b/auto-generated/llvm-api-tests/vlsseg5e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg5e32.c b/auto-generated/llvm-api-tests/vlsseg5e32.c
index 41ecab1ac..291dd473e 100644
--- a/auto-generated/llvm-api-tests/vlsseg5e32.c
+++ b/auto-generated/llvm-api-tests/vlsseg5e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg5e64.c b/auto-generated/llvm-api-tests/vlsseg5e64.c
index 8d798bb84..9b3bf3900 100644
--- a/auto-generated/llvm-api-tests/vlsseg5e64.c
+++ b/auto-generated/llvm-api-tests/vlsseg5e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg6e16.c b/auto-generated/llvm-api-tests/vlsseg6e16.c
index e8723bbe7..a9c7a1ee8 100644
--- a/auto-generated/llvm-api-tests/vlsseg6e16.c
+++ b/auto-generated/llvm-api-tests/vlsseg6e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg6e32.c b/auto-generated/llvm-api-tests/vlsseg6e32.c
index ef4581d9a..921f006fb 100644
--- a/auto-generated/llvm-api-tests/vlsseg6e32.c
+++ b/auto-generated/llvm-api-tests/vlsseg6e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg6e64.c b/auto-generated/llvm-api-tests/vlsseg6e64.c
index 37362e168..83da13a50 100644
--- a/auto-generated/llvm-api-tests/vlsseg6e64.c
+++ b/auto-generated/llvm-api-tests/vlsseg6e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg7e16.c b/auto-generated/llvm-api-tests/vlsseg7e16.c
index 6f750f528..f8c3d9c9a 100644
--- a/auto-generated/llvm-api-tests/vlsseg7e16.c
+++ b/auto-generated/llvm-api-tests/vlsseg7e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg7e32.c b/auto-generated/llvm-api-tests/vlsseg7e32.c
index 71571b5df..59304da74 100644
--- a/auto-generated/llvm-api-tests/vlsseg7e32.c
+++ b/auto-generated/llvm-api-tests/vlsseg7e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg7e64.c b/auto-generated/llvm-api-tests/vlsseg7e64.c
index d135a3ae2..505d986d4 100644
--- a/auto-generated/llvm-api-tests/vlsseg7e64.c
+++ b/auto-generated/llvm-api-tests/vlsseg7e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg8e16.c b/auto-generated/llvm-api-tests/vlsseg8e16.c
index 94f212485..d5cc4e324 100644
--- a/auto-generated/llvm-api-tests/vlsseg8e16.c
+++ b/auto-generated/llvm-api-tests/vlsseg8e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg8e32.c b/auto-generated/llvm-api-tests/vlsseg8e32.c
index e28628ddd..fe9045a17 100644
--- a/auto-generated/llvm-api-tests/vlsseg8e32.c
+++ b/auto-generated/llvm-api-tests/vlsseg8e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vlsseg8e64.c b/auto-generated/llvm-api-tests/vlsseg8e64.c
index c683ce2ae..30d5cc318 100644
--- a/auto-generated/llvm-api-tests/vlsseg8e64.c
+++ b/auto-generated/llvm-api-tests/vlsseg8e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxei16.c b/auto-generated/llvm-api-tests/vluxei16.c
index b4d820ff5..6b281a470 100644
--- a/auto-generated/llvm-api-tests/vluxei16.c
+++ b/auto-generated/llvm-api-tests/vluxei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxei32.c b/auto-generated/llvm-api-tests/vluxei32.c
index 12ef078aa..bde2bc58a 100644
--- a/auto-generated/llvm-api-tests/vluxei32.c
+++ b/auto-generated/llvm-api-tests/vluxei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxei64.c b/auto-generated/llvm-api-tests/vluxei64.c
index b4ab9a9af..5a41f809e 100644
--- a/auto-generated/llvm-api-tests/vluxei64.c
+++ b/auto-generated/llvm-api-tests/vluxei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxei8.c b/auto-generated/llvm-api-tests/vluxei8.c
index 8e96e51d0..9b50256f8 100644
--- a/auto-generated/llvm-api-tests/vluxei8.c
+++ b/auto-generated/llvm-api-tests/vluxei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg2ei16.c b/auto-generated/llvm-api-tests/vluxseg2ei16.c
index 31ba6d213..b5ae65212 100644
--- a/auto-generated/llvm-api-tests/vluxseg2ei16.c
+++ b/auto-generated/llvm-api-tests/vluxseg2ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg2ei32.c b/auto-generated/llvm-api-tests/vluxseg2ei32.c
index be01c09c0..af09beb50 100644
--- a/auto-generated/llvm-api-tests/vluxseg2ei32.c
+++ b/auto-generated/llvm-api-tests/vluxseg2ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg2ei64.c b/auto-generated/llvm-api-tests/vluxseg2ei64.c
index 429a7a6d8..d8713eadb 100644
--- a/auto-generated/llvm-api-tests/vluxseg2ei64.c
+++ b/auto-generated/llvm-api-tests/vluxseg2ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg2ei8.c b/auto-generated/llvm-api-tests/vluxseg2ei8.c
index ea611470e..7d961c616 100644
--- a/auto-generated/llvm-api-tests/vluxseg2ei8.c
+++ b/auto-generated/llvm-api-tests/vluxseg2ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg3ei16.c b/auto-generated/llvm-api-tests/vluxseg3ei16.c
index a7c14b660..dc9562f47 100644
--- a/auto-generated/llvm-api-tests/vluxseg3ei16.c
+++ b/auto-generated/llvm-api-tests/vluxseg3ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg3ei32.c b/auto-generated/llvm-api-tests/vluxseg3ei32.c
index bfe9cb7c2..65d033a2c 100644
--- a/auto-generated/llvm-api-tests/vluxseg3ei32.c
+++ b/auto-generated/llvm-api-tests/vluxseg3ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg3ei64.c b/auto-generated/llvm-api-tests/vluxseg3ei64.c
index 3b6170f4e..70c6be6c2 100644
--- a/auto-generated/llvm-api-tests/vluxseg3ei64.c
+++ b/auto-generated/llvm-api-tests/vluxseg3ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg3ei8.c b/auto-generated/llvm-api-tests/vluxseg3ei8.c
index 7bb1b24c2..4cf9e703f 100644
--- a/auto-generated/llvm-api-tests/vluxseg3ei8.c
+++ b/auto-generated/llvm-api-tests/vluxseg3ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg4ei16.c b/auto-generated/llvm-api-tests/vluxseg4ei16.c
index 91d9b36c3..d65918fe7 100644
--- a/auto-generated/llvm-api-tests/vluxseg4ei16.c
+++ b/auto-generated/llvm-api-tests/vluxseg4ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg4ei32.c b/auto-generated/llvm-api-tests/vluxseg4ei32.c
index a5bd87d38..3d14e39be 100644
--- a/auto-generated/llvm-api-tests/vluxseg4ei32.c
+++ b/auto-generated/llvm-api-tests/vluxseg4ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg4ei64.c b/auto-generated/llvm-api-tests/vluxseg4ei64.c
index 6fbe922cf..e55084b40 100644
--- a/auto-generated/llvm-api-tests/vluxseg4ei64.c
+++ b/auto-generated/llvm-api-tests/vluxseg4ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg4ei8.c b/auto-generated/llvm-api-tests/vluxseg4ei8.c
index db4ed1b5a..a7f3dadce 100644
--- a/auto-generated/llvm-api-tests/vluxseg4ei8.c
+++ b/auto-generated/llvm-api-tests/vluxseg4ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg5ei16.c b/auto-generated/llvm-api-tests/vluxseg5ei16.c
index 9058c4d9c..7edfef24a 100644
--- a/auto-generated/llvm-api-tests/vluxseg5ei16.c
+++ b/auto-generated/llvm-api-tests/vluxseg5ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg5ei32.c b/auto-generated/llvm-api-tests/vluxseg5ei32.c
index cd6040fb4..b0d47ac20 100644
--- a/auto-generated/llvm-api-tests/vluxseg5ei32.c
+++ b/auto-generated/llvm-api-tests/vluxseg5ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg5ei64.c b/auto-generated/llvm-api-tests/vluxseg5ei64.c
index 49e878f72..2344d95c1 100644
--- a/auto-generated/llvm-api-tests/vluxseg5ei64.c
+++ b/auto-generated/llvm-api-tests/vluxseg5ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg5ei8.c b/auto-generated/llvm-api-tests/vluxseg5ei8.c
index bb646f12c..b2c831c78 100644
--- a/auto-generated/llvm-api-tests/vluxseg5ei8.c
+++ b/auto-generated/llvm-api-tests/vluxseg5ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg6ei16.c b/auto-generated/llvm-api-tests/vluxseg6ei16.c
index d5b9289da..6a0dedfac 100644
--- a/auto-generated/llvm-api-tests/vluxseg6ei16.c
+++ b/auto-generated/llvm-api-tests/vluxseg6ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg6ei32.c b/auto-generated/llvm-api-tests/vluxseg6ei32.c
index 3d83184a6..f4c89dba4 100644
--- a/auto-generated/llvm-api-tests/vluxseg6ei32.c
+++ b/auto-generated/llvm-api-tests/vluxseg6ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg6ei64.c b/auto-generated/llvm-api-tests/vluxseg6ei64.c
index b51c09de4..086bd24fb 100644
--- a/auto-generated/llvm-api-tests/vluxseg6ei64.c
+++ b/auto-generated/llvm-api-tests/vluxseg6ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg6ei8.c b/auto-generated/llvm-api-tests/vluxseg6ei8.c
index 658cbd0bf..fc111a312 100644
--- a/auto-generated/llvm-api-tests/vluxseg6ei8.c
+++ b/auto-generated/llvm-api-tests/vluxseg6ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg7ei16.c b/auto-generated/llvm-api-tests/vluxseg7ei16.c
index 468bce389..41e131ad0 100644
--- a/auto-generated/llvm-api-tests/vluxseg7ei16.c
+++ b/auto-generated/llvm-api-tests/vluxseg7ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg7ei32.c b/auto-generated/llvm-api-tests/vluxseg7ei32.c
index b43d48ecf..f4b52a4f8 100644
--- a/auto-generated/llvm-api-tests/vluxseg7ei32.c
+++ b/auto-generated/llvm-api-tests/vluxseg7ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg7ei64.c b/auto-generated/llvm-api-tests/vluxseg7ei64.c
index c92945c99..ad8d20a76 100644
--- a/auto-generated/llvm-api-tests/vluxseg7ei64.c
+++ b/auto-generated/llvm-api-tests/vluxseg7ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg7ei8.c b/auto-generated/llvm-api-tests/vluxseg7ei8.c
index ba0d86f3e..9769d1f41 100644
--- a/auto-generated/llvm-api-tests/vluxseg7ei8.c
+++ b/auto-generated/llvm-api-tests/vluxseg7ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg8ei16.c b/auto-generated/llvm-api-tests/vluxseg8ei16.c
index 44908b9a1..582ec4146 100644
--- a/auto-generated/llvm-api-tests/vluxseg8ei16.c
+++ b/auto-generated/llvm-api-tests/vluxseg8ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg8ei32.c b/auto-generated/llvm-api-tests/vluxseg8ei32.c
index a4b1219be..e088d1b62 100644
--- a/auto-generated/llvm-api-tests/vluxseg8ei32.c
+++ b/auto-generated/llvm-api-tests/vluxseg8ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg8ei64.c b/auto-generated/llvm-api-tests/vluxseg8ei64.c
index 89d51431f..47aa7ae28 100644
--- a/auto-generated/llvm-api-tests/vluxseg8ei64.c
+++ b/auto-generated/llvm-api-tests/vluxseg8ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vluxseg8ei8.c b/auto-generated/llvm-api-tests/vluxseg8ei8.c
index b15309835..690158f00 100644
--- a/auto-generated/llvm-api-tests/vluxseg8ei8.c
+++ b/auto-generated/llvm-api-tests/vluxseg8ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmacc.c b/auto-generated/llvm-api-tests/vmacc.c
index 5c9b9dba9..8a1e72d22 100644
--- a/auto-generated/llvm-api-tests/vmacc.c
+++ b/auto-generated/llvm-api-tests/vmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmadd.c b/auto-generated/llvm-api-tests/vmadd.c
index dee28e0f2..9639cbdab 100644
--- a/auto-generated/llvm-api-tests/vmadd.c
+++ b/auto-generated/llvm-api-tests/vmadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmerge.c b/auto-generated/llvm-api-tests/vmerge.c
index 39511afe8..2d4ed4402 100644
--- a/auto-generated/llvm-api-tests/vmerge.c
+++ b/auto-generated/llvm-api-tests/vmerge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmfeq.c b/auto-generated/llvm-api-tests/vmfeq.c
index 326530d06..791bf1a4a 100644
--- a/auto-generated/llvm-api-tests/vmfeq.c
+++ b/auto-generated/llvm-api-tests/vmfeq.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmfge.c b/auto-generated/llvm-api-tests/vmfge.c
index bff02eedf..adb686e36 100644
--- a/auto-generated/llvm-api-tests/vmfge.c
+++ b/auto-generated/llvm-api-tests/vmfge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmfgt.c b/auto-generated/llvm-api-tests/vmfgt.c
index 86f04c353..84836951d 100644
--- a/auto-generated/llvm-api-tests/vmfgt.c
+++ b/auto-generated/llvm-api-tests/vmfgt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmfle.c b/auto-generated/llvm-api-tests/vmfle.c
index 36d68ca41..f90ae9d3d 100644
--- a/auto-generated/llvm-api-tests/vmfle.c
+++ b/auto-generated/llvm-api-tests/vmfle.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmflt.c b/auto-generated/llvm-api-tests/vmflt.c
index 0b0604928..95863ba18 100644
--- a/auto-generated/llvm-api-tests/vmflt.c
+++ b/auto-generated/llvm-api-tests/vmflt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmfne.c b/auto-generated/llvm-api-tests/vmfne.c
index 786f58720..0881e5478 100644
--- a/auto-generated/llvm-api-tests/vmfne.c
+++ b/auto-generated/llvm-api-tests/vmfne.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmmv.c b/auto-generated/llvm-api-tests/vmmv.c
index c786f72ff..7c8f669ed 100644
--- a/auto-generated/llvm-api-tests/vmmv.c
+++ b/auto-generated/llvm-api-tests/vmmv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmseq.c b/auto-generated/llvm-api-tests/vmseq.c
index bc722ffdc..3abc8879a 100644
--- a/auto-generated/llvm-api-tests/vmseq.c
+++ b/auto-generated/llvm-api-tests/vmseq.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmsge.c b/auto-generated/llvm-api-tests/vmsge.c
index 0d400280f..eeb06b8f2 100644
--- a/auto-generated/llvm-api-tests/vmsge.c
+++ b/auto-generated/llvm-api-tests/vmsge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmsgeu.c b/auto-generated/llvm-api-tests/vmsgeu.c
index 07fd78ee8..c4ccd06b9 100644
--- a/auto-generated/llvm-api-tests/vmsgeu.c
+++ b/auto-generated/llvm-api-tests/vmsgeu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmsgt.c b/auto-generated/llvm-api-tests/vmsgt.c
index 8ea07b8d5..bdba1b550 100644
--- a/auto-generated/llvm-api-tests/vmsgt.c
+++ b/auto-generated/llvm-api-tests/vmsgt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmsgtu.c b/auto-generated/llvm-api-tests/vmsgtu.c
index 816d16f7d..af7283923 100644
--- a/auto-generated/llvm-api-tests/vmsgtu.c
+++ b/auto-generated/llvm-api-tests/vmsgtu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmsle.c b/auto-generated/llvm-api-tests/vmsle.c
index 2da595dac..955a81cfa 100644
--- a/auto-generated/llvm-api-tests/vmsle.c
+++ b/auto-generated/llvm-api-tests/vmsle.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmsleu.c b/auto-generated/llvm-api-tests/vmsleu.c
index 96bac8994..e46f71abc 100644
--- a/auto-generated/llvm-api-tests/vmsleu.c
+++ b/auto-generated/llvm-api-tests/vmsleu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git a/auto-generated/llvm-api-tests/vmslt.c b/auto-generated/llvm-api-tests/vmslt.c
index 03f06d12e..9cd741d10 100644
--- a/auto-generated/llvm-api-tests/vmslt.c
+++ b/auto-generated/llvm-api-tests/vmslt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
 
diff --git
a/auto-generated/llvm-api-tests/vmsltu.c b/auto-generated/llvm-api-tests/vmsltu.c index dc45d89dc..c85d44dc5 100644 --- a/auto-generated/llvm-api-tests/vmsltu.c +++ b/auto-generated/llvm-api-tests/vmsltu.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vmsne.c b/auto-generated/llvm-api-tests/vmsne.c index e25d3da80..a9bad979c 100644 --- a/auto-generated/llvm-api-tests/vmsne.c +++ b/auto-generated/llvm-api-tests/vmsne.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vmv.c b/auto-generated/llvm-api-tests/vmv.c index bad217d99..f6e9dbf6c 100644 --- a/auto-generated/llvm-api-tests/vmv.c +++ b/auto-generated/llvm-api-tests/vmv.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vneg.c b/auto-generated/llvm-api-tests/vneg.c index 9258fda61..c7d1620ba 100644 --- a/auto-generated/llvm-api-tests/vneg.c +++ b/auto-generated/llvm-api-tests/vneg.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vnmsac.c b/auto-generated/llvm-api-tests/vnmsac.c index 341eb9c12..22d5923a4 100644 --- a/auto-generated/llvm-api-tests/vnmsac.c +++ b/auto-generated/llvm-api-tests/vnmsac.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vnmsub.c b/auto-generated/llvm-api-tests/vnmsub.c index c3cabeeab..f1c18ebd1 100644 --- a/auto-generated/llvm-api-tests/vnmsub.c +++ b/auto-generated/llvm-api-tests/vnmsub.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vreinterpret.c b/auto-generated/llvm-api-tests/vreinterpret.c index 13a9eabd3..9a79e6875 100644 --- 
a/auto-generated/llvm-api-tests/vreinterpret.c +++ b/auto-generated/llvm-api-tests/vreinterpret.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vrgather.c b/auto-generated/llvm-api-tests/vrgather.c index c2f4d4083..4aff52536 100644 --- a/auto-generated/llvm-api-tests/vrgather.c +++ b/auto-generated/llvm-api-tests/vrgather.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vrgatherei16.c b/auto-generated/llvm-api-tests/vrgatherei16.c index 4cd08c1d6..9ca522d0a 100644 --- a/auto-generated/llvm-api-tests/vrgatherei16.c +++ b/auto-generated/llvm-api-tests/vrgatherei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vse16.c b/auto-generated/llvm-api-tests/vse16.c index 7d2c51cbd..9cf764b55 100644 --- a/auto-generated/llvm-api-tests/vse16.c +++ b/auto-generated/llvm-api-tests/vse16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vse32.c b/auto-generated/llvm-api-tests/vse32.c index 2f8cb09b7..9bca430fe 100644 --- a/auto-generated/llvm-api-tests/vse32.c +++ b/auto-generated/llvm-api-tests/vse32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vse64.c b/auto-generated/llvm-api-tests/vse64.c index d3fcf5245..2e2abd920 100644 --- a/auto-generated/llvm-api-tests/vse64.c +++ b/auto-generated/llvm-api-tests/vse64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vset.c b/auto-generated/llvm-api-tests/vset.c index 395a8b4f5..153e2df40 100644 --- a/auto-generated/llvm-api-tests/vset.c +++ b/auto-generated/llvm-api-tests/vset.c @@ -1,6 +1,6 @@ // REQUIRES: 
riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vslidedown.c b/auto-generated/llvm-api-tests/vslidedown.c index 367703143..b90c8a496 100644 --- a/auto-generated/llvm-api-tests/vslidedown.c +++ b/auto-generated/llvm-api-tests/vslidedown.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vslideup.c b/auto-generated/llvm-api-tests/vslideup.c index 9b15794ac..881f40e7f 100644 --- a/auto-generated/llvm-api-tests/vslideup.c +++ b/auto-generated/llvm-api-tests/vslideup.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxei16.c b/auto-generated/llvm-api-tests/vsoxei16.c index 27800c50e..68516a6ac 100644 --- a/auto-generated/llvm-api-tests/vsoxei16.c +++ b/auto-generated/llvm-api-tests/vsoxei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxei32.c b/auto-generated/llvm-api-tests/vsoxei32.c index b703ed6e7..9278c3de8 100644 --- a/auto-generated/llvm-api-tests/vsoxei32.c +++ b/auto-generated/llvm-api-tests/vsoxei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxei64.c b/auto-generated/llvm-api-tests/vsoxei64.c index 657a1d335..02fa5dfd5 100644 --- a/auto-generated/llvm-api-tests/vsoxei64.c +++ b/auto-generated/llvm-api-tests/vsoxei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxei8.c b/auto-generated/llvm-api-tests/vsoxei8.c index 3abc1c848..9e7485b89 100644 --- a/auto-generated/llvm-api-tests/vsoxei8.c +++ b/auto-generated/llvm-api-tests/vsoxei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v 
-target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg2ei16.c b/auto-generated/llvm-api-tests/vsoxseg2ei16.c index ea3d6a207..e311e05c4 100644 --- a/auto-generated/llvm-api-tests/vsoxseg2ei16.c +++ b/auto-generated/llvm-api-tests/vsoxseg2ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg2ei32.c b/auto-generated/llvm-api-tests/vsoxseg2ei32.c index abe4f11c2..389592f12 100644 --- a/auto-generated/llvm-api-tests/vsoxseg2ei32.c +++ b/auto-generated/llvm-api-tests/vsoxseg2ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg2ei64.c b/auto-generated/llvm-api-tests/vsoxseg2ei64.c index 2e17f7b09..699bb592d 100644 --- a/auto-generated/llvm-api-tests/vsoxseg2ei64.c +++ b/auto-generated/llvm-api-tests/vsoxseg2ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg2ei8.c b/auto-generated/llvm-api-tests/vsoxseg2ei8.c index 42555f84f..c62efac46 100644 --- a/auto-generated/llvm-api-tests/vsoxseg2ei8.c +++ b/auto-generated/llvm-api-tests/vsoxseg2ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg3ei16.c b/auto-generated/llvm-api-tests/vsoxseg3ei16.c index a7822cc70..bb00635a2 100644 --- a/auto-generated/llvm-api-tests/vsoxseg3ei16.c +++ b/auto-generated/llvm-api-tests/vsoxseg3ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg3ei32.c b/auto-generated/llvm-api-tests/vsoxseg3ei32.c index 5b2284115..1e5886b31 100644 --- a/auto-generated/llvm-api-tests/vsoxseg3ei32.c +++ b/auto-generated/llvm-api-tests/vsoxseg3ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v 
-target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg3ei64.c b/auto-generated/llvm-api-tests/vsoxseg3ei64.c index 585aac02c..200bd7dd7 100644 --- a/auto-generated/llvm-api-tests/vsoxseg3ei64.c +++ b/auto-generated/llvm-api-tests/vsoxseg3ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg3ei8.c b/auto-generated/llvm-api-tests/vsoxseg3ei8.c index a3cc915ec..b661e57ed 100644 --- a/auto-generated/llvm-api-tests/vsoxseg3ei8.c +++ b/auto-generated/llvm-api-tests/vsoxseg3ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg4ei16.c b/auto-generated/llvm-api-tests/vsoxseg4ei16.c index ab59b875a..1fb523444 100644 --- a/auto-generated/llvm-api-tests/vsoxseg4ei16.c +++ b/auto-generated/llvm-api-tests/vsoxseg4ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg4ei32.c b/auto-generated/llvm-api-tests/vsoxseg4ei32.c index dc6b8b51d..1790be30e 100644 --- a/auto-generated/llvm-api-tests/vsoxseg4ei32.c +++ b/auto-generated/llvm-api-tests/vsoxseg4ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg4ei64.c b/auto-generated/llvm-api-tests/vsoxseg4ei64.c index 005f96acc..11cfc3c5c 100644 --- a/auto-generated/llvm-api-tests/vsoxseg4ei64.c +++ b/auto-generated/llvm-api-tests/vsoxseg4ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg4ei8.c b/auto-generated/llvm-api-tests/vsoxseg4ei8.c index 7de140742..382a5f493 100644 --- a/auto-generated/llvm-api-tests/vsoxseg4ei8.c +++ b/auto-generated/llvm-api-tests/vsoxseg4ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v 
-target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg5ei16.c b/auto-generated/llvm-api-tests/vsoxseg5ei16.c index 14e190942..fe11d58b0 100644 --- a/auto-generated/llvm-api-tests/vsoxseg5ei16.c +++ b/auto-generated/llvm-api-tests/vsoxseg5ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg5ei32.c b/auto-generated/llvm-api-tests/vsoxseg5ei32.c index ad7af845a..06ca422ee 100644 --- a/auto-generated/llvm-api-tests/vsoxseg5ei32.c +++ b/auto-generated/llvm-api-tests/vsoxseg5ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg5ei64.c b/auto-generated/llvm-api-tests/vsoxseg5ei64.c index f9c22a039..d23cce915 100644 --- a/auto-generated/llvm-api-tests/vsoxseg5ei64.c +++ b/auto-generated/llvm-api-tests/vsoxseg5ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg5ei8.c b/auto-generated/llvm-api-tests/vsoxseg5ei8.c index 81a77c392..72bcb4b17 100644 --- a/auto-generated/llvm-api-tests/vsoxseg5ei8.c +++ b/auto-generated/llvm-api-tests/vsoxseg5ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg6ei16.c b/auto-generated/llvm-api-tests/vsoxseg6ei16.c index 1aa73b092..0b4a88b7d 100644 --- a/auto-generated/llvm-api-tests/vsoxseg6ei16.c +++ b/auto-generated/llvm-api-tests/vsoxseg6ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg6ei32.c b/auto-generated/llvm-api-tests/vsoxseg6ei32.c index 7f4e8f47f..ce342bb33 100644 --- a/auto-generated/llvm-api-tests/vsoxseg6ei32.c +++ b/auto-generated/llvm-api-tests/vsoxseg6ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v 
-target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg6ei64.c b/auto-generated/llvm-api-tests/vsoxseg6ei64.c index 3ed2522a7..6becd3572 100644 --- a/auto-generated/llvm-api-tests/vsoxseg6ei64.c +++ b/auto-generated/llvm-api-tests/vsoxseg6ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg6ei8.c b/auto-generated/llvm-api-tests/vsoxseg6ei8.c index 0b4cd1ef4..dc6929aae 100644 --- a/auto-generated/llvm-api-tests/vsoxseg6ei8.c +++ b/auto-generated/llvm-api-tests/vsoxseg6ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg7ei16.c b/auto-generated/llvm-api-tests/vsoxseg7ei16.c index de57c3f10..07735e79e 100644 --- a/auto-generated/llvm-api-tests/vsoxseg7ei16.c +++ b/auto-generated/llvm-api-tests/vsoxseg7ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg7ei32.c b/auto-generated/llvm-api-tests/vsoxseg7ei32.c index 5873913fe..ae4cc6404 100644 --- a/auto-generated/llvm-api-tests/vsoxseg7ei32.c +++ b/auto-generated/llvm-api-tests/vsoxseg7ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg7ei64.c b/auto-generated/llvm-api-tests/vsoxseg7ei64.c index 4fb97bee8..c656eb65c 100644 --- a/auto-generated/llvm-api-tests/vsoxseg7ei64.c +++ b/auto-generated/llvm-api-tests/vsoxseg7ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg7ei8.c b/auto-generated/llvm-api-tests/vsoxseg7ei8.c index 46d3f71a8..20c1ae546 100644 --- a/auto-generated/llvm-api-tests/vsoxseg7ei8.c +++ b/auto-generated/llvm-api-tests/vsoxseg7ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v 
-target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg8ei16.c b/auto-generated/llvm-api-tests/vsoxseg8ei16.c index b27b5dc6b..b91471794 100644 --- a/auto-generated/llvm-api-tests/vsoxseg8ei16.c +++ b/auto-generated/llvm-api-tests/vsoxseg8ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg8ei32.c b/auto-generated/llvm-api-tests/vsoxseg8ei32.c index 19eb01e8e..9fe682dd8 100644 --- a/auto-generated/llvm-api-tests/vsoxseg8ei32.c +++ b/auto-generated/llvm-api-tests/vsoxseg8ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg8ei64.c b/auto-generated/llvm-api-tests/vsoxseg8ei64.c index 59baadc85..6282ff055 100644 --- a/auto-generated/llvm-api-tests/vsoxseg8ei64.c +++ b/auto-generated/llvm-api-tests/vsoxseg8ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsoxseg8ei8.c b/auto-generated/llvm-api-tests/vsoxseg8ei8.c index 12b437606..2d1812b6c 100644 --- a/auto-generated/llvm-api-tests/vsoxseg8ei8.c +++ b/auto-generated/llvm-api-tests/vsoxseg8ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsse16.c b/auto-generated/llvm-api-tests/vsse16.c index 60fa72fe0..61be15df9 100644 --- a/auto-generated/llvm-api-tests/vsse16.c +++ b/auto-generated/llvm-api-tests/vsse16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsse32.c b/auto-generated/llvm-api-tests/vsse32.c index efe3cc2dc..3ac9c14c0 100644 --- a/auto-generated/llvm-api-tests/vsse32.c +++ b/auto-generated/llvm-api-tests/vsse32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature 
+experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsse64.c b/auto-generated/llvm-api-tests/vsse64.c index 744cc61bc..fa6d29fdf 100644 --- a/auto-generated/llvm-api-tests/vsse64.c +++ b/auto-generated/llvm-api-tests/vsse64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg2e16.c b/auto-generated/llvm-api-tests/vsseg2e16.c index 944f285d9..a3f9455af 100644 --- a/auto-generated/llvm-api-tests/vsseg2e16.c +++ b/auto-generated/llvm-api-tests/vsseg2e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg2e32.c b/auto-generated/llvm-api-tests/vsseg2e32.c index 8a1b08f87..9e01865e1 100644 --- a/auto-generated/llvm-api-tests/vsseg2e32.c +++ b/auto-generated/llvm-api-tests/vsseg2e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg2e64.c b/auto-generated/llvm-api-tests/vsseg2e64.c index d873ca18c..7c08081af 100644 --- a/auto-generated/llvm-api-tests/vsseg2e64.c +++ b/auto-generated/llvm-api-tests/vsseg2e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg3e16.c b/auto-generated/llvm-api-tests/vsseg3e16.c index 51094bd6a..4fa80c344 100644 --- a/auto-generated/llvm-api-tests/vsseg3e16.c +++ b/auto-generated/llvm-api-tests/vsseg3e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg3e32.c b/auto-generated/llvm-api-tests/vsseg3e32.c index 7205e0abd..9d42f35a1 100644 --- a/auto-generated/llvm-api-tests/vsseg3e32.c +++ b/auto-generated/llvm-api-tests/vsseg3e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh 
-disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg3e64.c b/auto-generated/llvm-api-tests/vsseg3e64.c index b85c443c8..719f8e39d 100644 --- a/auto-generated/llvm-api-tests/vsseg3e64.c +++ b/auto-generated/llvm-api-tests/vsseg3e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg4e16.c b/auto-generated/llvm-api-tests/vsseg4e16.c index d8e04d908..1dffa5926 100644 --- a/auto-generated/llvm-api-tests/vsseg4e16.c +++ b/auto-generated/llvm-api-tests/vsseg4e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg4e32.c b/auto-generated/llvm-api-tests/vsseg4e32.c index 982a01027..3f736f3bc 100644 --- a/auto-generated/llvm-api-tests/vsseg4e32.c +++ b/auto-generated/llvm-api-tests/vsseg4e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg4e64.c b/auto-generated/llvm-api-tests/vsseg4e64.c index b68040ff1..2b5b81b14 100644 --- a/auto-generated/llvm-api-tests/vsseg4e64.c +++ b/auto-generated/llvm-api-tests/vsseg4e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg5e16.c b/auto-generated/llvm-api-tests/vsseg5e16.c index e0741b425..c2c625cb4 100644 --- a/auto-generated/llvm-api-tests/vsseg5e16.c +++ b/auto-generated/llvm-api-tests/vsseg5e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg5e32.c b/auto-generated/llvm-api-tests/vsseg5e32.c index 017ba99cf..31f343ce2 100644 --- a/auto-generated/llvm-api-tests/vsseg5e32.c +++ b/auto-generated/llvm-api-tests/vsseg5e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S 
-passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg5e64.c b/auto-generated/llvm-api-tests/vsseg5e64.c index a0bb07a05..b3ffe050b 100644 --- a/auto-generated/llvm-api-tests/vsseg5e64.c +++ b/auto-generated/llvm-api-tests/vsseg5e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg6e16.c b/auto-generated/llvm-api-tests/vsseg6e16.c index 0cad15508..30de76dfe 100644 --- a/auto-generated/llvm-api-tests/vsseg6e16.c +++ b/auto-generated/llvm-api-tests/vsseg6e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg6e32.c b/auto-generated/llvm-api-tests/vsseg6e32.c index 295229185..1bffcfbe6 100644 --- a/auto-generated/llvm-api-tests/vsseg6e32.c +++ b/auto-generated/llvm-api-tests/vsseg6e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg6e64.c b/auto-generated/llvm-api-tests/vsseg6e64.c index cad51d893..9216f5dd6 100644 --- a/auto-generated/llvm-api-tests/vsseg6e64.c +++ b/auto-generated/llvm-api-tests/vsseg6e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg7e16.c b/auto-generated/llvm-api-tests/vsseg7e16.c index 7c364febe..97e5a4534 100644 --- a/auto-generated/llvm-api-tests/vsseg7e16.c +++ b/auto-generated/llvm-api-tests/vsseg7e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg7e32.c b/auto-generated/llvm-api-tests/vsseg7e32.c index 0d7118392..fd02e9698 100644 --- a/auto-generated/llvm-api-tests/vsseg7e32.c +++ b/auto-generated/llvm-api-tests/vsseg7e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s 
diff --git a/auto-generated/llvm-api-tests/vsseg7e64.c b/auto-generated/llvm-api-tests/vsseg7e64.c index 8875951aa..6c4a5eb60 100644 --- a/auto-generated/llvm-api-tests/vsseg7e64.c +++ b/auto-generated/llvm-api-tests/vsseg7e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg8e16.c b/auto-generated/llvm-api-tests/vsseg8e16.c index bc1567f8f..4f0a86023 100644 --- a/auto-generated/llvm-api-tests/vsseg8e16.c +++ b/auto-generated/llvm-api-tests/vsseg8e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg8e32.c b/auto-generated/llvm-api-tests/vsseg8e32.c index 582690c4b..f07e04721 100644 --- a/auto-generated/llvm-api-tests/vsseg8e32.c +++ b/auto-generated/llvm-api-tests/vsseg8e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vsseg8e64.c b/auto-generated/llvm-api-tests/vsseg8e64.c index 16011fcde..18a8b2359 100644 --- a/auto-generated/llvm-api-tests/vsseg8e64.c +++ b/auto-generated/llvm-api-tests/vsseg8e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg2e16.c b/auto-generated/llvm-api-tests/vssseg2e16.c index 796f1abbb..3ec10f78c 100644 --- a/auto-generated/llvm-api-tests/vssseg2e16.c +++ b/auto-generated/llvm-api-tests/vssseg2e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg2e32.c b/auto-generated/llvm-api-tests/vssseg2e32.c index f31322ca8..d8ae0594e 100644 --- a/auto-generated/llvm-api-tests/vssseg2e32.c +++ b/auto-generated/llvm-api-tests/vssseg2e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg2e64.c 
b/auto-generated/llvm-api-tests/vssseg2e64.c index 7ac6fae76..5f4cd4e5d 100644 --- a/auto-generated/llvm-api-tests/vssseg2e64.c +++ b/auto-generated/llvm-api-tests/vssseg2e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg3e16.c b/auto-generated/llvm-api-tests/vssseg3e16.c index d79a571c6..2f6c5f594 100644 --- a/auto-generated/llvm-api-tests/vssseg3e16.c +++ b/auto-generated/llvm-api-tests/vssseg3e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg3e32.c b/auto-generated/llvm-api-tests/vssseg3e32.c index 97c04158e..81b22c279 100644 --- a/auto-generated/llvm-api-tests/vssseg3e32.c +++ b/auto-generated/llvm-api-tests/vssseg3e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg3e64.c b/auto-generated/llvm-api-tests/vssseg3e64.c index 0ed0a191a..28a3e75c6 100644 --- a/auto-generated/llvm-api-tests/vssseg3e64.c +++ b/auto-generated/llvm-api-tests/vssseg3e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg4e16.c b/auto-generated/llvm-api-tests/vssseg4e16.c index 4c4fbc9ab..c4a384df3 100644 --- a/auto-generated/llvm-api-tests/vssseg4e16.c +++ b/auto-generated/llvm-api-tests/vssseg4e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg4e32.c b/auto-generated/llvm-api-tests/vssseg4e32.c index 62519d6c0..a569e8f94 100644 --- a/auto-generated/llvm-api-tests/vssseg4e32.c +++ b/auto-generated/llvm-api-tests/vssseg4e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg4e64.c b/auto-generated/llvm-api-tests/vssseg4e64.c 
index 94263098f..01100f81f 100644 --- a/auto-generated/llvm-api-tests/vssseg4e64.c +++ b/auto-generated/llvm-api-tests/vssseg4e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg5e16.c b/auto-generated/llvm-api-tests/vssseg5e16.c index b545fa5ee..e95ab2fc1 100644 --- a/auto-generated/llvm-api-tests/vssseg5e16.c +++ b/auto-generated/llvm-api-tests/vssseg5e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg5e32.c b/auto-generated/llvm-api-tests/vssseg5e32.c index 2a468db9c..6e7d71dd9 100644 --- a/auto-generated/llvm-api-tests/vssseg5e32.c +++ b/auto-generated/llvm-api-tests/vssseg5e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg5e64.c b/auto-generated/llvm-api-tests/vssseg5e64.c index b0cf77e09..1e046cae4 100644 --- a/auto-generated/llvm-api-tests/vssseg5e64.c +++ b/auto-generated/llvm-api-tests/vssseg5e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg6e16.c b/auto-generated/llvm-api-tests/vssseg6e16.c index 2eebc9a96..5e33a8b2d 100644 --- a/auto-generated/llvm-api-tests/vssseg6e16.c +++ b/auto-generated/llvm-api-tests/vssseg6e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg6e32.c b/auto-generated/llvm-api-tests/vssseg6e32.c index 1eaff86c1..42c296f8c 100644 --- a/auto-generated/llvm-api-tests/vssseg6e32.c +++ b/auto-generated/llvm-api-tests/vssseg6e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-api-tests/vssseg6e64.c b/auto-generated/llvm-api-tests/vssseg6e64.c index 3500d6ba5..2fc7c5f8b 100644 --- 
a/auto-generated/llvm-api-tests/vssseg6e64.c
+++ b/auto-generated/llvm-api-tests/vssseg6e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vssseg7e16.c b/auto-generated/llvm-api-tests/vssseg7e16.c
index a81b10970..13481da41 100644
--- a/auto-generated/llvm-api-tests/vssseg7e16.c
+++ b/auto-generated/llvm-api-tests/vssseg7e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vssseg7e32.c b/auto-generated/llvm-api-tests/vssseg7e32.c
index bfd26d03a..925d39d50 100644
--- a/auto-generated/llvm-api-tests/vssseg7e32.c
+++ b/auto-generated/llvm-api-tests/vssseg7e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vssseg7e64.c b/auto-generated/llvm-api-tests/vssseg7e64.c
index 931088aa0..793d50ad7 100644
--- a/auto-generated/llvm-api-tests/vssseg7e64.c
+++ b/auto-generated/llvm-api-tests/vssseg7e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vssseg8e16.c b/auto-generated/llvm-api-tests/vssseg8e16.c
index 6cdce9867..20c5c7839 100644
--- a/auto-generated/llvm-api-tests/vssseg8e16.c
+++ b/auto-generated/llvm-api-tests/vssseg8e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vssseg8e32.c b/auto-generated/llvm-api-tests/vssseg8e32.c
index ba8b10105..d3677da68 100644
--- a/auto-generated/llvm-api-tests/vssseg8e32.c
+++ b/auto-generated/llvm-api-tests/vssseg8e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vssseg8e64.c b/auto-generated/llvm-api-tests/vssseg8e64.c
index 199b8141b..64620fe05 100644
--- a/auto-generated/llvm-api-tests/vssseg8e64.c
+++ b/auto-generated/llvm-api-tests/vssseg8e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxei16.c b/auto-generated/llvm-api-tests/vsuxei16.c
index be0c6234f..ee7cdb5f1 100644
--- a/auto-generated/llvm-api-tests/vsuxei16.c
+++ b/auto-generated/llvm-api-tests/vsuxei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxei32.c b/auto-generated/llvm-api-tests/vsuxei32.c
index 387fd9943..14c07e591 100644
--- a/auto-generated/llvm-api-tests/vsuxei32.c
+++ b/auto-generated/llvm-api-tests/vsuxei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxei64.c b/auto-generated/llvm-api-tests/vsuxei64.c
index 6bcb56a76..f9802c574 100644
--- a/auto-generated/llvm-api-tests/vsuxei64.c
+++ b/auto-generated/llvm-api-tests/vsuxei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxei8.c b/auto-generated/llvm-api-tests/vsuxei8.c
index 2defa57be..4a037d151 100644
--- a/auto-generated/llvm-api-tests/vsuxei8.c
+++ b/auto-generated/llvm-api-tests/vsuxei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg2ei16.c b/auto-generated/llvm-api-tests/vsuxseg2ei16.c
index e9b52cdc9..74f74325c 100644
--- a/auto-generated/llvm-api-tests/vsuxseg2ei16.c
+++ b/auto-generated/llvm-api-tests/vsuxseg2ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg2ei32.c b/auto-generated/llvm-api-tests/vsuxseg2ei32.c
index ac6d49136..b013264e3 100644
--- a/auto-generated/llvm-api-tests/vsuxseg2ei32.c
+++ b/auto-generated/llvm-api-tests/vsuxseg2ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg2ei64.c b/auto-generated/llvm-api-tests/vsuxseg2ei64.c
index 842fc7fca..1266d496e 100644
--- a/auto-generated/llvm-api-tests/vsuxseg2ei64.c
+++ b/auto-generated/llvm-api-tests/vsuxseg2ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg2ei8.c b/auto-generated/llvm-api-tests/vsuxseg2ei8.c
index 597b8f1fd..8e6973d4e 100644
--- a/auto-generated/llvm-api-tests/vsuxseg2ei8.c
+++ b/auto-generated/llvm-api-tests/vsuxseg2ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg3ei16.c b/auto-generated/llvm-api-tests/vsuxseg3ei16.c
index adf729080..e20f72949 100644
--- a/auto-generated/llvm-api-tests/vsuxseg3ei16.c
+++ b/auto-generated/llvm-api-tests/vsuxseg3ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg3ei32.c b/auto-generated/llvm-api-tests/vsuxseg3ei32.c
index ff88b75b1..ec59086c9 100644
--- a/auto-generated/llvm-api-tests/vsuxseg3ei32.c
+++ b/auto-generated/llvm-api-tests/vsuxseg3ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg3ei64.c b/auto-generated/llvm-api-tests/vsuxseg3ei64.c
index 524e5a2b0..bbe214e93 100644
--- a/auto-generated/llvm-api-tests/vsuxseg3ei64.c
+++ b/auto-generated/llvm-api-tests/vsuxseg3ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg3ei8.c b/auto-generated/llvm-api-tests/vsuxseg3ei8.c
index ecb43ceca..d11704c1e 100644
--- a/auto-generated/llvm-api-tests/vsuxseg3ei8.c
+++ b/auto-generated/llvm-api-tests/vsuxseg3ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg4ei16.c b/auto-generated/llvm-api-tests/vsuxseg4ei16.c
index b8ab0207b..7b154a033 100644
--- a/auto-generated/llvm-api-tests/vsuxseg4ei16.c
+++ b/auto-generated/llvm-api-tests/vsuxseg4ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg4ei32.c b/auto-generated/llvm-api-tests/vsuxseg4ei32.c
index a5ad387a0..b3478e0c0 100644
--- a/auto-generated/llvm-api-tests/vsuxseg4ei32.c
+++ b/auto-generated/llvm-api-tests/vsuxseg4ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg4ei64.c b/auto-generated/llvm-api-tests/vsuxseg4ei64.c
index 97fa6e560..a5c5c80c9 100644
--- a/auto-generated/llvm-api-tests/vsuxseg4ei64.c
+++ b/auto-generated/llvm-api-tests/vsuxseg4ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg4ei8.c b/auto-generated/llvm-api-tests/vsuxseg4ei8.c
index 0c3f9f9a8..1cbf72f08 100644
--- a/auto-generated/llvm-api-tests/vsuxseg4ei8.c
+++ b/auto-generated/llvm-api-tests/vsuxseg4ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg5ei16.c b/auto-generated/llvm-api-tests/vsuxseg5ei16.c
index c7d2c130f..37b7d69d1 100644
--- a/auto-generated/llvm-api-tests/vsuxseg5ei16.c
+++ b/auto-generated/llvm-api-tests/vsuxseg5ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg5ei32.c b/auto-generated/llvm-api-tests/vsuxseg5ei32.c
index 0757324db..d33446d6a 100644
--- a/auto-generated/llvm-api-tests/vsuxseg5ei32.c
+++ b/auto-generated/llvm-api-tests/vsuxseg5ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg5ei64.c b/auto-generated/llvm-api-tests/vsuxseg5ei64.c
index df3e43802..21f90c1f3 100644
--- a/auto-generated/llvm-api-tests/vsuxseg5ei64.c
+++ b/auto-generated/llvm-api-tests/vsuxseg5ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg5ei8.c b/auto-generated/llvm-api-tests/vsuxseg5ei8.c
index cca7d701c..6a5a0f0cb 100644
--- a/auto-generated/llvm-api-tests/vsuxseg5ei8.c
+++ b/auto-generated/llvm-api-tests/vsuxseg5ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg6ei16.c b/auto-generated/llvm-api-tests/vsuxseg6ei16.c
index af12d0fb1..c7d58f9c3 100644
--- a/auto-generated/llvm-api-tests/vsuxseg6ei16.c
+++ b/auto-generated/llvm-api-tests/vsuxseg6ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg6ei32.c b/auto-generated/llvm-api-tests/vsuxseg6ei32.c
index 521498045..c4e14b517 100644
--- a/auto-generated/llvm-api-tests/vsuxseg6ei32.c
+++ b/auto-generated/llvm-api-tests/vsuxseg6ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg6ei64.c b/auto-generated/llvm-api-tests/vsuxseg6ei64.c
index 1186d1f6f..68f194b7e 100644
--- a/auto-generated/llvm-api-tests/vsuxseg6ei64.c
+++ b/auto-generated/llvm-api-tests/vsuxseg6ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg6ei8.c b/auto-generated/llvm-api-tests/vsuxseg6ei8.c
index 2b278a6c5..c58b1adcd 100644
--- a/auto-generated/llvm-api-tests/vsuxseg6ei8.c
+++ b/auto-generated/llvm-api-tests/vsuxseg6ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg7ei16.c b/auto-generated/llvm-api-tests/vsuxseg7ei16.c
index 57301c525..696d11270 100644
--- a/auto-generated/llvm-api-tests/vsuxseg7ei16.c
+++ b/auto-generated/llvm-api-tests/vsuxseg7ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg7ei32.c b/auto-generated/llvm-api-tests/vsuxseg7ei32.c
index d7f93299a..e1016eeb9 100644
--- a/auto-generated/llvm-api-tests/vsuxseg7ei32.c
+++ b/auto-generated/llvm-api-tests/vsuxseg7ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg7ei64.c b/auto-generated/llvm-api-tests/vsuxseg7ei64.c
index 002507654..b1074ad21 100644
--- a/auto-generated/llvm-api-tests/vsuxseg7ei64.c
+++ b/auto-generated/llvm-api-tests/vsuxseg7ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg7ei8.c b/auto-generated/llvm-api-tests/vsuxseg7ei8.c
index fa2e8a9d7..b5a9c35a1 100644
--- a/auto-generated/llvm-api-tests/vsuxseg7ei8.c
+++ b/auto-generated/llvm-api-tests/vsuxseg7ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg8ei16.c b/auto-generated/llvm-api-tests/vsuxseg8ei16.c
index 304d2edb1..f085e63ef 100644
--- a/auto-generated/llvm-api-tests/vsuxseg8ei16.c
+++ b/auto-generated/llvm-api-tests/vsuxseg8ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg8ei32.c b/auto-generated/llvm-api-tests/vsuxseg8ei32.c
index fedc7ff6a..e8493ff88 100644
--- a/auto-generated/llvm-api-tests/vsuxseg8ei32.c
+++ b/auto-generated/llvm-api-tests/vsuxseg8ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg8ei64.c b/auto-generated/llvm-api-tests/vsuxseg8ei64.c
index b1fc4b44a..12a3871a3 100644
--- a/auto-generated/llvm-api-tests/vsuxseg8ei64.c
+++ b/auto-generated/llvm-api-tests/vsuxseg8ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vsuxseg8ei8.c b/auto-generated/llvm-api-tests/vsuxseg8ei8.c
index c257d7173..f891c5967 100644
--- a/auto-generated/llvm-api-tests/vsuxseg8ei8.c
+++ b/auto-generated/llvm-api-tests/vsuxseg8ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vundefined.c b/auto-generated/llvm-api-tests/vundefined.c
index f7b50440b..a3460c1fd 100644
--- a/auto-generated/llvm-api-tests/vundefined.c
+++ b/auto-generated/llvm-api-tests/vundefined.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vwmacc.c b/auto-generated/llvm-api-tests/vwmacc.c
index 5a0042ebc..6e2e9d32f 100644
--- a/auto-generated/llvm-api-tests/vwmacc.c
+++ b/auto-generated/llvm-api-tests/vwmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vwmaccsu.c b/auto-generated/llvm-api-tests/vwmaccsu.c
index c173570f9..3596e6de1 100644
--- a/auto-generated/llvm-api-tests/vwmaccsu.c
+++ b/auto-generated/llvm-api-tests/vwmaccsu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vwmaccu.c b/auto-generated/llvm-api-tests/vwmaccu.c
index 6a30fe830..63d391940 100644
--- a/auto-generated/llvm-api-tests/vwmaccu.c
+++ b/auto-generated/llvm-api-tests/vwmaccu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-api-tests/vwmaccus.c b/auto-generated/llvm-api-tests/vwmaccus.c
index 0767086af..4d8024f46 100644
--- a/auto-generated/llvm-api-tests/vwmaccus.c
+++ b/auto-generated/llvm-api-tests/vwmaccus.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vcompress.c b/auto-generated/llvm-overloaded-tests/vcompress.c
index a4512fb8d..4df3040c4 100644
--- a/auto-generated/llvm-overloaded-tests/vcompress.c
+++ b/auto-generated/llvm-overloaded-tests/vcompress.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vcpop.c b/auto-generated/llvm-overloaded-tests/vcpop.c
index 1735b9838..398d61799 100644
--- a/auto-generated/llvm-overloaded-tests/vcpop.c
+++ b/auto-generated/llvm-overloaded-tests/vcpop.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfabs.c b/auto-generated/llvm-overloaded-tests/vfabs.c
index a2c4d5442..d6393b0a4 100644
--- a/auto-generated/llvm-overloaded-tests/vfabs.c
+++ b/auto-generated/llvm-overloaded-tests/vfabs.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfadd.c b/auto-generated/llvm-overloaded-tests/vfadd.c
index bd8693ea3..7ae285433 100644
--- a/auto-generated/llvm-overloaded-tests/vfadd.c
+++ b/auto-generated/llvm-overloaded-tests/vfadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfclass.c b/auto-generated/llvm-overloaded-tests/vfclass.c
index bb71c8010..463428175 100644
--- a/auto-generated/llvm-overloaded-tests/vfclass.c
+++ b/auto-generated/llvm-overloaded-tests/vfclass.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfcvt.c b/auto-generated/llvm-overloaded-tests/vfcvt.c
index a3b886e48..1f6a77c10 100644
--- a/auto-generated/llvm-overloaded-tests/vfcvt.c
+++ b/auto-generated/llvm-overloaded-tests/vfcvt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfcvt_rtz.c b/auto-generated/llvm-overloaded-tests/vfcvt_rtz.c
index 14aa62562..3fc8c5210 100644
--- a/auto-generated/llvm-overloaded-tests/vfcvt_rtz.c
+++ b/auto-generated/llvm-overloaded-tests/vfcvt_rtz.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfdiv.c b/auto-generated/llvm-overloaded-tests/vfdiv.c
index 540a49829..47341253d 100644
--- a/auto-generated/llvm-overloaded-tests/vfdiv.c
+++ b/auto-generated/llvm-overloaded-tests/vfdiv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfmacc.c b/auto-generated/llvm-overloaded-tests/vfmacc.c
index 45a5a0595..4940a63b2 100644
--- a/auto-generated/llvm-overloaded-tests/vfmacc.c
+++ b/auto-generated/llvm-overloaded-tests/vfmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfmadd.c b/auto-generated/llvm-overloaded-tests/vfmadd.c
index 7654e9deb..5d6d7950b 100644
--- a/auto-generated/llvm-overloaded-tests/vfmadd.c
+++ b/auto-generated/llvm-overloaded-tests/vfmadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfmax.c b/auto-generated/llvm-overloaded-tests/vfmax.c
index e9640db4f..201526fd7 100644
--- a/auto-generated/llvm-overloaded-tests/vfmax.c
+++ b/auto-generated/llvm-overloaded-tests/vfmax.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfmerge.c b/auto-generated/llvm-overloaded-tests/vfmerge.c
index d56efe29e..9c3c75165 100644
--- a/auto-generated/llvm-overloaded-tests/vfmerge.c
+++ b/auto-generated/llvm-overloaded-tests/vfmerge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfmin.c b/auto-generated/llvm-overloaded-tests/vfmin.c
index 71e2667e2..60858d911 100644
--- a/auto-generated/llvm-overloaded-tests/vfmin.c
+++ b/auto-generated/llvm-overloaded-tests/vfmin.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfmsac.c b/auto-generated/llvm-overloaded-tests/vfmsac.c
index a9d3ebc37..13a86a992 100644
--- a/auto-generated/llvm-overloaded-tests/vfmsac.c
+++ b/auto-generated/llvm-overloaded-tests/vfmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfmsub.c b/auto-generated/llvm-overloaded-tests/vfmsub.c
index d72ec3437..95ca69593 100644
--- a/auto-generated/llvm-overloaded-tests/vfmsub.c
+++ b/auto-generated/llvm-overloaded-tests/vfmsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfmul.c b/auto-generated/llvm-overloaded-tests/vfmul.c
index e888bfb60..eba9a536f 100644
--- a/auto-generated/llvm-overloaded-tests/vfmul.c
+++ b/auto-generated/llvm-overloaded-tests/vfmul.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfmv.c b/auto-generated/llvm-overloaded-tests/vfmv.c
index 135004709..6d3a56d2d 100644
--- a/auto-generated/llvm-overloaded-tests/vfmv.c
+++ b/auto-generated/llvm-overloaded-tests/vfmv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfncvt.c b/auto-generated/llvm-overloaded-tests/vfncvt.c
index 35ea978f6..3dc050ce1 100644
--- a/auto-generated/llvm-overloaded-tests/vfncvt.c
+++ b/auto-generated/llvm-overloaded-tests/vfncvt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfncvt_rod.c b/auto-generated/llvm-overloaded-tests/vfncvt_rod.c
index a3711225c..2952eb3fe 100644
--- a/auto-generated/llvm-overloaded-tests/vfncvt_rod.c
+++ b/auto-generated/llvm-overloaded-tests/vfncvt_rod.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfncvt_rtz.c b/auto-generated/llvm-overloaded-tests/vfncvt_rtz.c
index 915b2adce..cf744436c 100644
--- a/auto-generated/llvm-overloaded-tests/vfncvt_rtz.c
+++ b/auto-generated/llvm-overloaded-tests/vfncvt_rtz.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfneg.c b/auto-generated/llvm-overloaded-tests/vfneg.c
index b52ab210e..de124620a 100644
--- a/auto-generated/llvm-overloaded-tests/vfneg.c
+++ b/auto-generated/llvm-overloaded-tests/vfneg.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfnmacc.c b/auto-generated/llvm-overloaded-tests/vfnmacc.c
index 7178799d2..72f022079 100644
--- a/auto-generated/llvm-overloaded-tests/vfnmacc.c
+++ b/auto-generated/llvm-overloaded-tests/vfnmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfnmadd.c b/auto-generated/llvm-overloaded-tests/vfnmadd.c
index 877783403..80a7867c4 100644
--- a/auto-generated/llvm-overloaded-tests/vfnmadd.c
+++ b/auto-generated/llvm-overloaded-tests/vfnmadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfnmsac.c b/auto-generated/llvm-overloaded-tests/vfnmsac.c
index ddefdf376..41b00f277 100644
--- a/auto-generated/llvm-overloaded-tests/vfnmsac.c
+++ b/auto-generated/llvm-overloaded-tests/vfnmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfnmsub.c b/auto-generated/llvm-overloaded-tests/vfnmsub.c
index 544253fad..dd5e84792 100644
--- a/auto-generated/llvm-overloaded-tests/vfnmsub.c
+++ b/auto-generated/llvm-overloaded-tests/vfnmsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfrdiv.c b/auto-generated/llvm-overloaded-tests/vfrdiv.c
index 5d9a182f3..1870ff88b 100644
--- a/auto-generated/llvm-overloaded-tests/vfrdiv.c
+++ b/auto-generated/llvm-overloaded-tests/vfrdiv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfrec7.c b/auto-generated/llvm-overloaded-tests/vfrec7.c
index e92a5277e..1605f6ed5 100644
--- a/auto-generated/llvm-overloaded-tests/vfrec7.c
+++ b/auto-generated/llvm-overloaded-tests/vfrec7.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfredmax.c b/auto-generated/llvm-overloaded-tests/vfredmax.c
index c735c8ebd..13142afdd 100644
--- a/auto-generated/llvm-overloaded-tests/vfredmax.c
+++ b/auto-generated/llvm-overloaded-tests/vfredmax.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfredmin.c b/auto-generated/llvm-overloaded-tests/vfredmin.c
index 66cbbb40e..34d03f78f 100644
--- a/auto-generated/llvm-overloaded-tests/vfredmin.c
+++ b/auto-generated/llvm-overloaded-tests/vfredmin.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfredosum.c b/auto-generated/llvm-overloaded-tests/vfredosum.c
index d6153aadf..255cbf849 100644
--- a/auto-generated/llvm-overloaded-tests/vfredosum.c
+++ b/auto-generated/llvm-overloaded-tests/vfredosum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfredusum.c b/auto-generated/llvm-overloaded-tests/vfredusum.c
index 5033c2df1..07470d588 100644
--- a/auto-generated/llvm-overloaded-tests/vfredusum.c
+++ b/auto-generated/llvm-overloaded-tests/vfredusum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfrsqrt7.c b/auto-generated/llvm-overloaded-tests/vfrsqrt7.c
index dd3962142..6768629df 100644
--- a/auto-generated/llvm-overloaded-tests/vfrsqrt7.c
+++ b/auto-generated/llvm-overloaded-tests/vfrsqrt7.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfrsub.c b/auto-generated/llvm-overloaded-tests/vfrsub.c
index 61765630a..800dc6450 100644
--- a/auto-generated/llvm-overloaded-tests/vfrsub.c
+++ b/auto-generated/llvm-overloaded-tests/vfrsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfsgnj.c b/auto-generated/llvm-overloaded-tests/vfsgnj.c
index 7ed26f68a..a7bbcff28 100644
--- a/auto-generated/llvm-overloaded-tests/vfsgnj.c
+++ b/auto-generated/llvm-overloaded-tests/vfsgnj.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfsgnjn.c b/auto-generated/llvm-overloaded-tests/vfsgnjn.c
index 65d68509f..7d7af786e 100644
--- a/auto-generated/llvm-overloaded-tests/vfsgnjn.c
+++ b/auto-generated/llvm-overloaded-tests/vfsgnjn.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfsgnjx.c b/auto-generated/llvm-overloaded-tests/vfsgnjx.c
index bf97feddc..83660cd22 100644
--- a/auto-generated/llvm-overloaded-tests/vfsgnjx.c
+++ b/auto-generated/llvm-overloaded-tests/vfsgnjx.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfslide1down.c b/auto-generated/llvm-overloaded-tests/vfslide1down.c
index 2f4aeae73..47d050d49 100644
--- a/auto-generated/llvm-overloaded-tests/vfslide1down.c
+++ b/auto-generated/llvm-overloaded-tests/vfslide1down.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfslide1up.c b/auto-generated/llvm-overloaded-tests/vfslide1up.c
index 909bbd433..c0f1cfacc 100644
--- a/auto-generated/llvm-overloaded-tests/vfslide1up.c
+++ b/auto-generated/llvm-overloaded-tests/vfslide1up.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfsqrt.c b/auto-generated/llvm-overloaded-tests/vfsqrt.c
index 26259bbd3..73d525ff0 100644
--- a/auto-generated/llvm-overloaded-tests/vfsqrt.c
+++ b/auto-generated/llvm-overloaded-tests/vfsqrt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfsub.c b/auto-generated/llvm-overloaded-tests/vfsub.c
index 813c0cba4..8528931f3 100644
--- a/auto-generated/llvm-overloaded-tests/vfsub.c
+++ b/auto-generated/llvm-overloaded-tests/vfsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfwadd.c b/auto-generated/llvm-overloaded-tests/vfwadd.c
index 341d12742..95f8f704a 100644
--- a/auto-generated/llvm-overloaded-tests/vfwadd.c
+++ b/auto-generated/llvm-overloaded-tests/vfwadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfwcvt.c b/auto-generated/llvm-overloaded-tests/vfwcvt.c
index 6261c8b80..8d3caf6c6 100644
--- a/auto-generated/llvm-overloaded-tests/vfwcvt.c
+++ b/auto-generated/llvm-overloaded-tests/vfwcvt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfwcvt_rtz.c b/auto-generated/llvm-overloaded-tests/vfwcvt_rtz.c
index 724772cbb..8418d8198 100644
--- a/auto-generated/llvm-overloaded-tests/vfwcvt_rtz.c
+++ b/auto-generated/llvm-overloaded-tests/vfwcvt_rtz.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfwmacc.c b/auto-generated/llvm-overloaded-tests/vfwmacc.c
index 59767265b..1a21fe6d4 100644
--- a/auto-generated/llvm-overloaded-tests/vfwmacc.c
+++ b/auto-generated/llvm-overloaded-tests/vfwmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfwmsac.c b/auto-generated/llvm-overloaded-tests/vfwmsac.c
index 22f93687c..b10e205b6 100644
--- a/auto-generated/llvm-overloaded-tests/vfwmsac.c
+++ b/auto-generated/llvm-overloaded-tests/vfwmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfwmul.c b/auto-generated/llvm-overloaded-tests/vfwmul.c
index 73a8daa59..e805d2502 100644
--- a/auto-generated/llvm-overloaded-tests/vfwmul.c
+++ b/auto-generated/llvm-overloaded-tests/vfwmul.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfwnmacc.c b/auto-generated/llvm-overloaded-tests/vfwnmacc.c
index 6bfd0daa4..e13627543 100644
--- a/auto-generated/llvm-overloaded-tests/vfwnmacc.c
+++ b/auto-generated/llvm-overloaded-tests/vfwnmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfwnmsac.c b/auto-generated/llvm-overloaded-tests/vfwnmsac.c
index 9dcc36b38..f4a4f6dc2 100644
--- a/auto-generated/llvm-overloaded-tests/vfwnmsac.c
+++ b/auto-generated/llvm-overloaded-tests/vfwnmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfwredosum.c b/auto-generated/llvm-overloaded-tests/vfwredosum.c
index a60d91752..ca145ab4f 100644
--- a/auto-generated/llvm-overloaded-tests/vfwredosum.c
+++ b/auto-generated/llvm-overloaded-tests/vfwredosum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfwredusum.c b/auto-generated/llvm-overloaded-tests/vfwredusum.c
index e240301fe..0a6a05e62 100644
--- a/auto-generated/llvm-overloaded-tests/vfwredusum.c
+++ b/auto-generated/llvm-overloaded-tests/vfwredusum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vfwsub.c b/auto-generated/llvm-overloaded-tests/vfwsub.c
index ad9c26a53..24905be84 100644
--- a/auto-generated/llvm-overloaded-tests/vfwsub.c
+++ b/auto-generated/llvm-overloaded-tests/vfwsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vget.c b/auto-generated/llvm-overloaded-tests/vget.c
index 39c7d99bc..b09d9cdb5 100644
--- a/auto-generated/llvm-overloaded-tests/vget.c
+++ b/auto-generated/llvm-overloaded-tests/vget.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vle16.c b/auto-generated/llvm-overloaded-tests/vle16.c
index 749ac3592..9d00e5441 100644
--- a/auto-generated/llvm-overloaded-tests/vle16.c
+++ b/auto-generated/llvm-overloaded-tests/vle16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vle16ff.c b/auto-generated/llvm-overloaded-tests/vle16ff.c
index 97d95ae5d..be90298f5 100644
--- a/auto-generated/llvm-overloaded-tests/vle16ff.c
+++ b/auto-generated/llvm-overloaded-tests/vle16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vle32.c b/auto-generated/llvm-overloaded-tests/vle32.c
index bcc9e979b..5d07eea5f 100644
--- a/auto-generated/llvm-overloaded-tests/vle32.c
+++ b/auto-generated/llvm-overloaded-tests/vle32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vle32ff.c b/auto-generated/llvm-overloaded-tests/vle32ff.c
index 5c27ffa5b..6bcc8618d 100644
--- a/auto-generated/llvm-overloaded-tests/vle32ff.c
+++ b/auto-generated/llvm-overloaded-tests/vle32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vle64.c b/auto-generated/llvm-overloaded-tests/vle64.c
index fb1a02d62..900898e7a 100644
--- a/auto-generated/llvm-overloaded-tests/vle64.c
+++ b/auto-generated/llvm-overloaded-tests/vle64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vle64ff.c b/auto-generated/llvm-overloaded-tests/vle64ff.c
index 695f54616..a4eaa25f0 100644
--- a/auto-generated/llvm-overloaded-tests/vle64ff.c
+++ b/auto-generated/llvm-overloaded-tests/vle64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vle8.c b/auto-generated/llvm-overloaded-tests/vle8.c
index e5f2c66b5..ea2f4f8e0 100644
--- a/auto-generated/llvm-overloaded-tests/vle8.c
+++ b/auto-generated/llvm-overloaded-tests/vle8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vle8ff.c b/auto-generated/llvm-overloaded-tests/vle8ff.c
index 86a953132..d0b7e9032 100644
--- a/auto-generated/llvm-overloaded-tests/vle8ff.c
+++ b/auto-generated/llvm-overloaded-tests/vle8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vlmul_ext_v.c b/auto-generated/llvm-overloaded-tests/vlmul_ext_v.c
index ecee0f39f..65899d290 100644
--- a/auto-generated/llvm-overloaded-tests/vlmul_ext_v.c
+++ b/auto-generated/llvm-overloaded-tests/vlmul_ext_v.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vlmul_trunc_v.c b/auto-generated/llvm-overloaded-tests/vlmul_trunc_v.c
index 519de65a8..2251e7a6d 100644
--- a/auto-generated/llvm-overloaded-tests/vlmul_trunc_v.c
+++ b/auto-generated/llvm-overloaded-tests/vlmul_trunc_v.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxei16.c b/auto-generated/llvm-overloaded-tests/vloxei16.c
index 1081d36ec..25482aaa8 100644
--- a/auto-generated/llvm-overloaded-tests/vloxei16.c
+++ b/auto-generated/llvm-overloaded-tests/vloxei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxei32.c b/auto-generated/llvm-overloaded-tests/vloxei32.c
index 39c688d8d..d3d930162 100644
--- a/auto-generated/llvm-overloaded-tests/vloxei32.c
+++ b/auto-generated/llvm-overloaded-tests/vloxei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxei64.c b/auto-generated/llvm-overloaded-tests/vloxei64.c
index 613d9e2fb..a695ebdb6 100644
--- a/auto-generated/llvm-overloaded-tests/vloxei64.c
+++ b/auto-generated/llvm-overloaded-tests/vloxei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxei8.c b/auto-generated/llvm-overloaded-tests/vloxei8.c
index e7d0e59f6..6606b8e7c 100644
--- a/auto-generated/llvm-overloaded-tests/vloxei8.c
+++ b/auto-generated/llvm-overloaded-tests/vloxei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxseg2ei16.c b/auto-generated/llvm-overloaded-tests/vloxseg2ei16.c
index 269cf114c..a3f4b6898 100644
--- a/auto-generated/llvm-overloaded-tests/vloxseg2ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vloxseg2ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxseg2ei32.c b/auto-generated/llvm-overloaded-tests/vloxseg2ei32.c
index 3f81d1e2a..0f5977236 100644
--- a/auto-generated/llvm-overloaded-tests/vloxseg2ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vloxseg2ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxseg2ei64.c b/auto-generated/llvm-overloaded-tests/vloxseg2ei64.c
index 0486dd996..fda124408 100644
--- a/auto-generated/llvm-overloaded-tests/vloxseg2ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vloxseg2ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxseg2ei8.c b/auto-generated/llvm-overloaded-tests/vloxseg2ei8.c
index 1fc95c7b4..f812e7c4d 100644
--- a/auto-generated/llvm-overloaded-tests/vloxseg2ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vloxseg2ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxseg3ei16.c b/auto-generated/llvm-overloaded-tests/vloxseg3ei16.c
index 0567fb628..e404894f4 100644
--- a/auto-generated/llvm-overloaded-tests/vloxseg3ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vloxseg3ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxseg3ei32.c b/auto-generated/llvm-overloaded-tests/vloxseg3ei32.c
index 2b2bd958c..699c6ff47 100644
--- a/auto-generated/llvm-overloaded-tests/vloxseg3ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vloxseg3ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxseg3ei64.c b/auto-generated/llvm-overloaded-tests/vloxseg3ei64.c
index 1bdd17d9f..886e2f7b5 100644
--- a/auto-generated/llvm-overloaded-tests/vloxseg3ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vloxseg3ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxseg3ei8.c b/auto-generated/llvm-overloaded-tests/vloxseg3ei8.c
index 73138f3ea..9b148013b 100644
--- a/auto-generated/llvm-overloaded-tests/vloxseg3ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vloxseg3ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s
diff --git a/auto-generated/llvm-overloaded-tests/vloxseg4ei16.c b/auto-generated/llvm-overloaded-tests/vloxseg4ei16.c
index 565458e86..cbbb01671 100644
--- a/auto-generated/llvm-overloaded-tests/vloxseg4ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vloxseg4ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck
--check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg4ei32.c b/auto-generated/llvm-overloaded-tests/vloxseg4ei32.c index 4b0c40715..0264d7fd0 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg4ei32.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg4ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg4ei64.c b/auto-generated/llvm-overloaded-tests/vloxseg4ei64.c index 9c12a3a6f..68b9cd4b0 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg4ei64.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg4ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg4ei8.c b/auto-generated/llvm-overloaded-tests/vloxseg4ei8.c index 1a5bb47db..4ff062ba4 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg4ei8.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg4ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg5ei16.c b/auto-generated/llvm-overloaded-tests/vloxseg5ei16.c index e091e5395..b8e44ad3d 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg5ei16.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg5ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg5ei32.c b/auto-generated/llvm-overloaded-tests/vloxseg5ei32.c index b50071419..e25b0332f 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg5ei32.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg5ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg5ei64.c b/auto-generated/llvm-overloaded-tests/vloxseg5ei64.c index a96044aa7..317bf4900 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg5ei64.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg5ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature 
+experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg5ei8.c b/auto-generated/llvm-overloaded-tests/vloxseg5ei8.c index b6a05a497..05e93d174 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg5ei8.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg5ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg6ei16.c b/auto-generated/llvm-overloaded-tests/vloxseg6ei16.c index 0adc9a46e..e494ea87b 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg6ei16.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg6ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg6ei32.c b/auto-generated/llvm-overloaded-tests/vloxseg6ei32.c index cf319c152..6642bb7a6 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg6ei32.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg6ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg6ei64.c b/auto-generated/llvm-overloaded-tests/vloxseg6ei64.c index d5754ba87..2e6b92e32 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg6ei64.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg6ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg6ei8.c b/auto-generated/llvm-overloaded-tests/vloxseg6ei8.c index f6220dad8..1405ffc0a 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg6ei8.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg6ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg7ei16.c b/auto-generated/llvm-overloaded-tests/vloxseg7ei16.c index 8c1812cec..bf398d50e 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg7ei16.c +++ 
b/auto-generated/llvm-overloaded-tests/vloxseg7ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg7ei32.c b/auto-generated/llvm-overloaded-tests/vloxseg7ei32.c index 2bfa93753..dcb2cd39b 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg7ei32.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg7ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg7ei64.c b/auto-generated/llvm-overloaded-tests/vloxseg7ei64.c index 30dbe99c8..406e28477 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg7ei64.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg7ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg7ei8.c b/auto-generated/llvm-overloaded-tests/vloxseg7ei8.c index dfc575450..1c5430f14 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg7ei8.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg7ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg8ei16.c b/auto-generated/llvm-overloaded-tests/vloxseg8ei16.c index daf118310..94377c5cd 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg8ei16.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg8ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg8ei32.c b/auto-generated/llvm-overloaded-tests/vloxseg8ei32.c index cb33c2dd9..19e08c9d7 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg8ei32.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg8ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git 
a/auto-generated/llvm-overloaded-tests/vloxseg8ei64.c b/auto-generated/llvm-overloaded-tests/vloxseg8ei64.c index 58710c1e7..36d2f8980 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg8ei64.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg8ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vloxseg8ei8.c b/auto-generated/llvm-overloaded-tests/vloxseg8ei8.c index a32bb0321..64a994622 100644 --- a/auto-generated/llvm-overloaded-tests/vloxseg8ei8.c +++ b/auto-generated/llvm-overloaded-tests/vloxseg8ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlse16.c b/auto-generated/llvm-overloaded-tests/vlse16.c index 5f305b78e..b7a309d50 100644 --- a/auto-generated/llvm-overloaded-tests/vlse16.c +++ b/auto-generated/llvm-overloaded-tests/vlse16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlse32.c b/auto-generated/llvm-overloaded-tests/vlse32.c index 649ac2a10..0c8559e42 100644 --- a/auto-generated/llvm-overloaded-tests/vlse32.c +++ b/auto-generated/llvm-overloaded-tests/vlse32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlse64.c b/auto-generated/llvm-overloaded-tests/vlse64.c index fa52c7617..74a3a1b5f 100644 --- a/auto-generated/llvm-overloaded-tests/vlse64.c +++ b/auto-generated/llvm-overloaded-tests/vlse64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg2e16.c b/auto-generated/llvm-overloaded-tests/vlseg2e16.c index 08d33e72b..43123b4f2 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg2e16.c +++ b/auto-generated/llvm-overloaded-tests/vlseg2e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | 
opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg2e16ff.c b/auto-generated/llvm-overloaded-tests/vlseg2e16ff.c index dc734dda3..dbcdef126 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg2e16ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg2e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg2e32.c b/auto-generated/llvm-overloaded-tests/vlseg2e32.c index 550451551..3fad21873 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg2e32.c +++ b/auto-generated/llvm-overloaded-tests/vlseg2e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg2e32ff.c b/auto-generated/llvm-overloaded-tests/vlseg2e32ff.c index 63bd16be5..51bca582d 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg2e32ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg2e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg2e64.c b/auto-generated/llvm-overloaded-tests/vlseg2e64.c index 820c36613..65ba4c615 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg2e64.c +++ b/auto-generated/llvm-overloaded-tests/vlseg2e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg2e64ff.c b/auto-generated/llvm-overloaded-tests/vlseg2e64ff.c index 4ac04e34c..0605a6c70 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg2e64ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg2e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg2e8ff.c b/auto-generated/llvm-overloaded-tests/vlseg2e8ff.c index 77bcc473b..b459a944e 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg2e8ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg2e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: 
-target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg3e16.c b/auto-generated/llvm-overloaded-tests/vlseg3e16.c index 7623eeda1..0dac533e0 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg3e16.c +++ b/auto-generated/llvm-overloaded-tests/vlseg3e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg3e16ff.c b/auto-generated/llvm-overloaded-tests/vlseg3e16ff.c index 1055cc103..ff00cca4e 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg3e16ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg3e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg3e32.c b/auto-generated/llvm-overloaded-tests/vlseg3e32.c index 7c9b9b7e8..46929a58a 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg3e32.c +++ b/auto-generated/llvm-overloaded-tests/vlseg3e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg3e32ff.c b/auto-generated/llvm-overloaded-tests/vlseg3e32ff.c index 0c810add3..521c89276 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg3e32ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg3e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg3e64.c b/auto-generated/llvm-overloaded-tests/vlseg3e64.c index ce7f3e949..b7bea9f84 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg3e64.c +++ b/auto-generated/llvm-overloaded-tests/vlseg3e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg3e64ff.c b/auto-generated/llvm-overloaded-tests/vlseg3e64ff.c index 3d1512f96..76d58e6ce 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg3e64ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg3e64ff.c @@ -1,6 +1,6 @@ 
// REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg3e8ff.c b/auto-generated/llvm-overloaded-tests/vlseg3e8ff.c index 1d55cb143..d803c144d 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg3e8ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg3e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg4e16.c b/auto-generated/llvm-overloaded-tests/vlseg4e16.c index 18cfd8f25..b56d513df 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg4e16.c +++ b/auto-generated/llvm-overloaded-tests/vlseg4e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg4e16ff.c b/auto-generated/llvm-overloaded-tests/vlseg4e16ff.c index 5245a07aa..5c5ecb284 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg4e16ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg4e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg4e32.c b/auto-generated/llvm-overloaded-tests/vlseg4e32.c index 680d9c27d..abca43619 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg4e32.c +++ b/auto-generated/llvm-overloaded-tests/vlseg4e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg4e32ff.c b/auto-generated/llvm-overloaded-tests/vlseg4e32ff.c index 59ca02f08..2d60d68e2 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg4e32ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg4e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg4e64.c b/auto-generated/llvm-overloaded-tests/vlseg4e64.c index db96f64c5..36c73f025 100644 --- 
a/auto-generated/llvm-overloaded-tests/vlseg4e64.c +++ b/auto-generated/llvm-overloaded-tests/vlseg4e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg4e64ff.c b/auto-generated/llvm-overloaded-tests/vlseg4e64ff.c index 5d4d6a334..0a3e259c4 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg4e64ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg4e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg4e8ff.c b/auto-generated/llvm-overloaded-tests/vlseg4e8ff.c index aaaf44274..afe2a86c4 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg4e8ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg4e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg5e16.c b/auto-generated/llvm-overloaded-tests/vlseg5e16.c index db59693df..cf12ab878 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg5e16.c +++ b/auto-generated/llvm-overloaded-tests/vlseg5e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg5e16ff.c b/auto-generated/llvm-overloaded-tests/vlseg5e16ff.c index eac27f077..0eb2db171 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg5e16ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg5e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg5e32.c b/auto-generated/llvm-overloaded-tests/vlseg5e32.c index 8a6edc9d9..ac6f6f7ac 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg5e32.c +++ b/auto-generated/llvm-overloaded-tests/vlseg5e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git 
a/auto-generated/llvm-overloaded-tests/vlseg5e32ff.c b/auto-generated/llvm-overloaded-tests/vlseg5e32ff.c index 6a108e153..6611bd5e5 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg5e32ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg5e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg5e64.c b/auto-generated/llvm-overloaded-tests/vlseg5e64.c index 7585dbbd4..143bace06 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg5e64.c +++ b/auto-generated/llvm-overloaded-tests/vlseg5e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg5e64ff.c b/auto-generated/llvm-overloaded-tests/vlseg5e64ff.c index b676743c8..f9d4b927a 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg5e64ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg5e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg5e8ff.c b/auto-generated/llvm-overloaded-tests/vlseg5e8ff.c index 2cb9d9558..02dd1893e 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg5e8ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg5e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg6e16.c b/auto-generated/llvm-overloaded-tests/vlseg6e16.c index c92c7156c..455974c84 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg6e16.c +++ b/auto-generated/llvm-overloaded-tests/vlseg6e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg6e16ff.c b/auto-generated/llvm-overloaded-tests/vlseg6e16ff.c index d1b5df359..4d8a1967f 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg6e16ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg6e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh 
-disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg6e32.c b/auto-generated/llvm-overloaded-tests/vlseg6e32.c index 91e480a9a..61165e5c0 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg6e32.c +++ b/auto-generated/llvm-overloaded-tests/vlseg6e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg6e32ff.c b/auto-generated/llvm-overloaded-tests/vlseg6e32ff.c index 7409919a1..2f8f442da 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg6e32ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg6e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg6e64.c b/auto-generated/llvm-overloaded-tests/vlseg6e64.c index 1784d858c..ea6cd972d 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg6e64.c +++ b/auto-generated/llvm-overloaded-tests/vlseg6e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg6e64ff.c b/auto-generated/llvm-overloaded-tests/vlseg6e64ff.c index d60289dc6..1fe49ef31 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg6e64ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg6e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg6e8ff.c b/auto-generated/llvm-overloaded-tests/vlseg6e8ff.c index 6551b8831..4b507acfc 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg6e8ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg6e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg7e16.c b/auto-generated/llvm-overloaded-tests/vlseg7e16.c index 90ed1604f..f24c74d6a 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg7e16.c +++ b/auto-generated/llvm-overloaded-tests/vlseg7e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v 
-target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg7e16ff.c b/auto-generated/llvm-overloaded-tests/vlseg7e16ff.c index 0d184ce4e..f6181cfc8 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg7e16ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg7e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg7e32.c b/auto-generated/llvm-overloaded-tests/vlseg7e32.c index 2f49b46e1..7f98184fd 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg7e32.c +++ b/auto-generated/llvm-overloaded-tests/vlseg7e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg7e32ff.c b/auto-generated/llvm-overloaded-tests/vlseg7e32ff.c index 8bcc421de..dcc55e78d 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg7e32ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg7e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg7e64.c b/auto-generated/llvm-overloaded-tests/vlseg7e64.c index 7f47d8751..1b94baa05 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg7e64.c +++ b/auto-generated/llvm-overloaded-tests/vlseg7e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg7e64ff.c b/auto-generated/llvm-overloaded-tests/vlseg7e64ff.c index b137dea51..4bd127e77 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg7e64ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg7e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg7e8ff.c b/auto-generated/llvm-overloaded-tests/vlseg7e8ff.c index 6ea930980..a348746c2 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg7e8ff.c +++ 
b/auto-generated/llvm-overloaded-tests/vlseg7e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg8e16.c b/auto-generated/llvm-overloaded-tests/vlseg8e16.c index c495ac5fa..b59d41c2d 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg8e16.c +++ b/auto-generated/llvm-overloaded-tests/vlseg8e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg8e16ff.c b/auto-generated/llvm-overloaded-tests/vlseg8e16ff.c index e3a3021f2..59b0d1d44 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg8e16ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg8e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg8e32.c b/auto-generated/llvm-overloaded-tests/vlseg8e32.c index 259117dbc..2c2395d8d 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg8e32.c +++ b/auto-generated/llvm-overloaded-tests/vlseg8e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg8e32ff.c b/auto-generated/llvm-overloaded-tests/vlseg8e32ff.c index af4cf6605..d21fe3ad9 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg8e32ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg8e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg8e64.c b/auto-generated/llvm-overloaded-tests/vlseg8e64.c index 7a4029725..c0531927d 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg8e64.c +++ b/auto-generated/llvm-overloaded-tests/vlseg8e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg8e64ff.c 
b/auto-generated/llvm-overloaded-tests/vlseg8e64ff.c index 53623df1e..0726b8a57 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg8e64ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg8e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlseg8e8ff.c b/auto-generated/llvm-overloaded-tests/vlseg8e8ff.c index 1a1b6f5c1..544085569 100644 --- a/auto-generated/llvm-overloaded-tests/vlseg8e8ff.c +++ b/auto-generated/llvm-overloaded-tests/vlseg8e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlsseg2e16.c b/auto-generated/llvm-overloaded-tests/vlsseg2e16.c index 480ec7aaa..c98966d4e 100644 --- a/auto-generated/llvm-overloaded-tests/vlsseg2e16.c +++ b/auto-generated/llvm-overloaded-tests/vlsseg2e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlsseg2e32.c b/auto-generated/llvm-overloaded-tests/vlsseg2e32.c index e8081e9aa..a175490f9 100644 --- a/auto-generated/llvm-overloaded-tests/vlsseg2e32.c +++ b/auto-generated/llvm-overloaded-tests/vlsseg2e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlsseg2e64.c b/auto-generated/llvm-overloaded-tests/vlsseg2e64.c index ed62c1a50..626ec658f 100644 --- a/auto-generated/llvm-overloaded-tests/vlsseg2e64.c +++ b/auto-generated/llvm-overloaded-tests/vlsseg2e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vlsseg3e16.c b/auto-generated/llvm-overloaded-tests/vlsseg3e16.c index 267d870f7..3132ba71e 100644 --- a/auto-generated/llvm-overloaded-tests/vlsseg3e16.c +++ b/auto-generated/llvm-overloaded-tests/vlsseg3e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S 
-passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg3e32.c b/auto-generated/llvm-overloaded-tests/vlsseg3e32.c
index b48a51834..8104bddb5 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg3e32.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg3e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg3e64.c b/auto-generated/llvm-overloaded-tests/vlsseg3e64.c
index ce39382ce..ebeccd31f 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg3e64.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg3e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg4e16.c b/auto-generated/llvm-overloaded-tests/vlsseg4e16.c
index c39a10cea..1bd9ebd9e 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg4e16.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg4e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg4e32.c b/auto-generated/llvm-overloaded-tests/vlsseg4e32.c
index 796fb6633..a1591a990 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg4e32.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg4e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg4e64.c b/auto-generated/llvm-overloaded-tests/vlsseg4e64.c
index d45a5087e..e1bc84938 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg4e64.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg4e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg5e16.c b/auto-generated/llvm-overloaded-tests/vlsseg5e16.c
index d987c30f9..1b94b0253 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg5e16.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg5e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg5e32.c b/auto-generated/llvm-overloaded-tests/vlsseg5e32.c
index bf2fb2111..35a2d1dfc 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg5e32.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg5e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg5e64.c b/auto-generated/llvm-overloaded-tests/vlsseg5e64.c
index eea6e955c..ae51a236b 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg5e64.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg5e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg6e16.c b/auto-generated/llvm-overloaded-tests/vlsseg6e16.c
index 69707f49d..ce9275b52 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg6e16.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg6e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg6e32.c b/auto-generated/llvm-overloaded-tests/vlsseg6e32.c
index ee56b0c69..f64adb52e 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg6e32.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg6e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg6e64.c b/auto-generated/llvm-overloaded-tests/vlsseg6e64.c
index f45817a6f..1c70f0ea8 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg6e64.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg6e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg7e16.c b/auto-generated/llvm-overloaded-tests/vlsseg7e16.c
index a9bbd8deb..78de01593 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg7e16.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg7e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg7e32.c b/auto-generated/llvm-overloaded-tests/vlsseg7e32.c
index ce9f7b793..2ccf948de 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg7e32.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg7e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg7e64.c b/auto-generated/llvm-overloaded-tests/vlsseg7e64.c
index 2fcaa0d32..0e5fbb332 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg7e64.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg7e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg8e16.c b/auto-generated/llvm-overloaded-tests/vlsseg8e16.c
index e53bef3e4..e21eb3972 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg8e16.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg8e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg8e32.c b/auto-generated/llvm-overloaded-tests/vlsseg8e32.c
index 3e1747877..dc26be3ff 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg8e32.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg8e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vlsseg8e64.c b/auto-generated/llvm-overloaded-tests/vlsseg8e64.c
index ffccd481d..5a5dcbfde 100644
--- a/auto-generated/llvm-overloaded-tests/vlsseg8e64.c
+++ b/auto-generated/llvm-overloaded-tests/vlsseg8e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxei16.c b/auto-generated/llvm-overloaded-tests/vluxei16.c
index d1f5eb350..ef2e6c7ef 100644
--- a/auto-generated/llvm-overloaded-tests/vluxei16.c
+++ b/auto-generated/llvm-overloaded-tests/vluxei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxei32.c b/auto-generated/llvm-overloaded-tests/vluxei32.c
index 26bb8000a..892232733 100644
--- a/auto-generated/llvm-overloaded-tests/vluxei32.c
+++ b/auto-generated/llvm-overloaded-tests/vluxei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxei64.c b/auto-generated/llvm-overloaded-tests/vluxei64.c
index 6b2444226..9ea420314 100644
--- a/auto-generated/llvm-overloaded-tests/vluxei64.c
+++ b/auto-generated/llvm-overloaded-tests/vluxei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxei8.c b/auto-generated/llvm-overloaded-tests/vluxei8.c
index 37bd04886..86e9b665c 100644
--- a/auto-generated/llvm-overloaded-tests/vluxei8.c
+++ b/auto-generated/llvm-overloaded-tests/vluxei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg2ei16.c b/auto-generated/llvm-overloaded-tests/vluxseg2ei16.c
index 3d4105e8f..25c094c8c 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg2ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg2ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg2ei32.c b/auto-generated/llvm-overloaded-tests/vluxseg2ei32.c
index 1450f1853..80b37499d 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg2ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg2ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg2ei64.c b/auto-generated/llvm-overloaded-tests/vluxseg2ei64.c
index 973845b72..d7cb8d1be 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg2ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg2ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg2ei8.c b/auto-generated/llvm-overloaded-tests/vluxseg2ei8.c
index 412eb2629..7ad2fe17a 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg2ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg2ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg3ei16.c b/auto-generated/llvm-overloaded-tests/vluxseg3ei16.c
index ed396d770..91921f422 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg3ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg3ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg3ei32.c b/auto-generated/llvm-overloaded-tests/vluxseg3ei32.c
index e8da081a4..ba7e54483 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg3ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg3ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg3ei64.c b/auto-generated/llvm-overloaded-tests/vluxseg3ei64.c
index 4efeaa592..c130f3146 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg3ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg3ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg3ei8.c b/auto-generated/llvm-overloaded-tests/vluxseg3ei8.c
index 5da504730..76b95b448 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg3ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg3ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg4ei16.c b/auto-generated/llvm-overloaded-tests/vluxseg4ei16.c
index fb79d6588..5bf294cfa 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg4ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg4ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg4ei32.c b/auto-generated/llvm-overloaded-tests/vluxseg4ei32.c
index 2c1b1d84f..16af2fa9c 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg4ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg4ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg4ei64.c b/auto-generated/llvm-overloaded-tests/vluxseg4ei64.c
index 5d0022ce6..c6a7eba92 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg4ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg4ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg4ei8.c b/auto-generated/llvm-overloaded-tests/vluxseg4ei8.c
index 142be874e..9f8aeb0cd 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg4ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg4ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg5ei16.c b/auto-generated/llvm-overloaded-tests/vluxseg5ei16.c
index 346bb6983..7b9845c2d 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg5ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg5ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg5ei32.c b/auto-generated/llvm-overloaded-tests/vluxseg5ei32.c
index 381855531..c1a79dc4f 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg5ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg5ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg5ei64.c b/auto-generated/llvm-overloaded-tests/vluxseg5ei64.c
index 49924b654..8eaad23f9 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg5ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg5ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg5ei8.c b/auto-generated/llvm-overloaded-tests/vluxseg5ei8.c
index 944ec0c73..72ca29b49 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg5ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg5ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg6ei16.c b/auto-generated/llvm-overloaded-tests/vluxseg6ei16.c
index 4582e9ba2..710fe0197 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg6ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg6ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg6ei32.c b/auto-generated/llvm-overloaded-tests/vluxseg6ei32.c
index 052e6ca7d..df279b4c3 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg6ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg6ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg6ei64.c b/auto-generated/llvm-overloaded-tests/vluxseg6ei64.c
index 1a4c10d0c..75c47941a 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg6ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg6ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg6ei8.c b/auto-generated/llvm-overloaded-tests/vluxseg6ei8.c
index a8f6d44ca..10446a70a 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg6ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg6ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg7ei16.c b/auto-generated/llvm-overloaded-tests/vluxseg7ei16.c
index 09303d07e..251e1bcc5 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg7ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg7ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg7ei32.c b/auto-generated/llvm-overloaded-tests/vluxseg7ei32.c
index 9a0b3dd30..41a17c04a 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg7ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg7ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg7ei64.c b/auto-generated/llvm-overloaded-tests/vluxseg7ei64.c
index 4c9898411..b4badc8a3 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg7ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg7ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg7ei8.c b/auto-generated/llvm-overloaded-tests/vluxseg7ei8.c
index 17d9da73d..18ed76397 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg7ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg7ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg8ei16.c b/auto-generated/llvm-overloaded-tests/vluxseg8ei16.c
index b61c5f247..e9424f533 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg8ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg8ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg8ei32.c b/auto-generated/llvm-overloaded-tests/vluxseg8ei32.c
index 7041719e5..79a13a75b 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg8ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg8ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg8ei64.c b/auto-generated/llvm-overloaded-tests/vluxseg8ei64.c
index 5e5a19830..5a645c41c 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg8ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg8ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vluxseg8ei8.c b/auto-generated/llvm-overloaded-tests/vluxseg8ei8.c
index e7658139c..a3734f5b6 100644
--- a/auto-generated/llvm-overloaded-tests/vluxseg8ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vluxseg8ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmacc.c b/auto-generated/llvm-overloaded-tests/vmacc.c
index 01c72cba5..42fc710a9 100644
--- a/auto-generated/llvm-overloaded-tests/vmacc.c
+++ b/auto-generated/llvm-overloaded-tests/vmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmadd.c b/auto-generated/llvm-overloaded-tests/vmadd.c
index c046afb41..189fc2d0e 100644
--- a/auto-generated/llvm-overloaded-tests/vmadd.c
+++ b/auto-generated/llvm-overloaded-tests/vmadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmerge.c b/auto-generated/llvm-overloaded-tests/vmerge.c
index b97674761..badd42e62 100644
--- a/auto-generated/llvm-overloaded-tests/vmerge.c
+++ b/auto-generated/llvm-overloaded-tests/vmerge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmfeq.c b/auto-generated/llvm-overloaded-tests/vmfeq.c
index 56b8c60c8..da2cd8c16 100644
--- a/auto-generated/llvm-overloaded-tests/vmfeq.c
+++ b/auto-generated/llvm-overloaded-tests/vmfeq.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmfge.c b/auto-generated/llvm-overloaded-tests/vmfge.c
index 6527dbd1d..92ca4dd06 100644
--- a/auto-generated/llvm-overloaded-tests/vmfge.c
+++ b/auto-generated/llvm-overloaded-tests/vmfge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmfgt.c b/auto-generated/llvm-overloaded-tests/vmfgt.c
index 3ac8d22fa..952021cb2 100644
--- a/auto-generated/llvm-overloaded-tests/vmfgt.c
+++ b/auto-generated/llvm-overloaded-tests/vmfgt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmfle.c b/auto-generated/llvm-overloaded-tests/vmfle.c
index 489b0fdcc..8e53ffafc 100644
--- a/auto-generated/llvm-overloaded-tests/vmfle.c
+++ b/auto-generated/llvm-overloaded-tests/vmfle.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmflt.c b/auto-generated/llvm-overloaded-tests/vmflt.c
index b1712ce2b..97eb8937c 100644
--- a/auto-generated/llvm-overloaded-tests/vmflt.c
+++ b/auto-generated/llvm-overloaded-tests/vmflt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmfne.c b/auto-generated/llvm-overloaded-tests/vmfne.c
index df921da2e..f465c2593 100644
--- a/auto-generated/llvm-overloaded-tests/vmfne.c
+++ b/auto-generated/llvm-overloaded-tests/vmfne.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmmv.c b/auto-generated/llvm-overloaded-tests/vmmv.c
index 65832b567..f7de0202f 100644
--- a/auto-generated/llvm-overloaded-tests/vmmv.c
+++ b/auto-generated/llvm-overloaded-tests/vmmv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmseq.c b/auto-generated/llvm-overloaded-tests/vmseq.c
index fecfed872..fb4ed06db 100644
--- a/auto-generated/llvm-overloaded-tests/vmseq.c
+++ b/auto-generated/llvm-overloaded-tests/vmseq.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmsge.c b/auto-generated/llvm-overloaded-tests/vmsge.c
index c70c8efbe..09e8d0def 100644
--- a/auto-generated/llvm-overloaded-tests/vmsge.c
+++ b/auto-generated/llvm-overloaded-tests/vmsge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmsgeu.c b/auto-generated/llvm-overloaded-tests/vmsgeu.c
index 82fc1ceb7..8ca98dcc4 100644
--- a/auto-generated/llvm-overloaded-tests/vmsgeu.c
+++ b/auto-generated/llvm-overloaded-tests/vmsgeu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmsgt.c b/auto-generated/llvm-overloaded-tests/vmsgt.c
index 72fe47564..df04a5454 100644
--- a/auto-generated/llvm-overloaded-tests/vmsgt.c
+++ b/auto-generated/llvm-overloaded-tests/vmsgt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmsgtu.c b/auto-generated/llvm-overloaded-tests/vmsgtu.c
index d0a01b291..2377bb157 100644
--- a/auto-generated/llvm-overloaded-tests/vmsgtu.c
+++ b/auto-generated/llvm-overloaded-tests/vmsgtu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmsle.c b/auto-generated/llvm-overloaded-tests/vmsle.c
index 8b2b35c25..b65691e19 100644
--- a/auto-generated/llvm-overloaded-tests/vmsle.c
+++ b/auto-generated/llvm-overloaded-tests/vmsle.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmsleu.c b/auto-generated/llvm-overloaded-tests/vmsleu.c
index 426d8e66d..e6a1058c6 100644
--- a/auto-generated/llvm-overloaded-tests/vmsleu.c
+++ b/auto-generated/llvm-overloaded-tests/vmsleu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmslt.c b/auto-generated/llvm-overloaded-tests/vmslt.c
index 86d1df4d3..45618f3ff 100644
--- a/auto-generated/llvm-overloaded-tests/vmslt.c
+++ b/auto-generated/llvm-overloaded-tests/vmslt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmsltu.c b/auto-generated/llvm-overloaded-tests/vmsltu.c
index 9c64d831a..f275dd3f1 100644
--- a/auto-generated/llvm-overloaded-tests/vmsltu.c
+++ b/auto-generated/llvm-overloaded-tests/vmsltu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmsne.c b/auto-generated/llvm-overloaded-tests/vmsne.c
index 5e85996e3..f0e5507bc 100644
--- a/auto-generated/llvm-overloaded-tests/vmsne.c
+++ b/auto-generated/llvm-overloaded-tests/vmsne.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vmv.c b/auto-generated/llvm-overloaded-tests/vmv.c
index 4ce114b1f..338dfbe63 100644
--- a/auto-generated/llvm-overloaded-tests/vmv.c
+++ b/auto-generated/llvm-overloaded-tests/vmv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vneg.c b/auto-generated/llvm-overloaded-tests/vneg.c
index 15a05dc21..cd484c7b7 100644
--- a/auto-generated/llvm-overloaded-tests/vneg.c
+++ b/auto-generated/llvm-overloaded-tests/vneg.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vnmsac.c b/auto-generated/llvm-overloaded-tests/vnmsac.c
index 8caf88ee3..100ca6ba3 100644
--- a/auto-generated/llvm-overloaded-tests/vnmsac.c
+++ b/auto-generated/llvm-overloaded-tests/vnmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vnmsub.c b/auto-generated/llvm-overloaded-tests/vnmsub.c
index 26b6b4356..1b620b960 100644
--- a/auto-generated/llvm-overloaded-tests/vnmsub.c
+++ b/auto-generated/llvm-overloaded-tests/vnmsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vreinterpret.c b/auto-generated/llvm-overloaded-tests/vreinterpret.c
index 6e1139852..28508e4ee 100644
--- a/auto-generated/llvm-overloaded-tests/vreinterpret.c
+++ b/auto-generated/llvm-overloaded-tests/vreinterpret.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vrgather.c b/auto-generated/llvm-overloaded-tests/vrgather.c
index 204aa45e3..adc9e774f 100644
--- a/auto-generated/llvm-overloaded-tests/vrgather.c
+++ b/auto-generated/llvm-overloaded-tests/vrgather.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vrgatherei16.c b/auto-generated/llvm-overloaded-tests/vrgatherei16.c
index 82f5c81cf..1ff1f2ce2 100644
--- a/auto-generated/llvm-overloaded-tests/vrgatherei16.c
+++ b/auto-generated/llvm-overloaded-tests/vrgatherei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vse16.c b/auto-generated/llvm-overloaded-tests/vse16.c
index 78bf1da3a..d13ce1acd 100644
--- a/auto-generated/llvm-overloaded-tests/vse16.c
+++ b/auto-generated/llvm-overloaded-tests/vse16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vse32.c b/auto-generated/llvm-overloaded-tests/vse32.c
index cff822b03..05c2817e8 100644
--- a/auto-generated/llvm-overloaded-tests/vse32.c
+++ b/auto-generated/llvm-overloaded-tests/vse32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vse64.c b/auto-generated/llvm-overloaded-tests/vse64.c
index 9e642087c..1c5352f6f 100644
--- a/auto-generated/llvm-overloaded-tests/vse64.c
+++ b/auto-generated/llvm-overloaded-tests/vse64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vset.c b/auto-generated/llvm-overloaded-tests/vset.c
index 31f8a26ee..f9f89f98f 100644
--- a/auto-generated/llvm-overloaded-tests/vset.c
+++ b/auto-generated/llvm-overloaded-tests/vset.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vslidedown.c b/auto-generated/llvm-overloaded-tests/vslidedown.c
index 94553793c..6b1fca8f9 100644
--- a/auto-generated/llvm-overloaded-tests/vslidedown.c
+++ b/auto-generated/llvm-overloaded-tests/vslidedown.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vslideup.c b/auto-generated/llvm-overloaded-tests/vslideup.c
index 62db73b80..cad4d9a1d 100644
--- a/auto-generated/llvm-overloaded-tests/vslideup.c
+++ b/auto-generated/llvm-overloaded-tests/vslideup.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxei16.c b/auto-generated/llvm-overloaded-tests/vsoxei16.c
index 9f09dfa95..4876f9fd0 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxei16.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxei32.c b/auto-generated/llvm-overloaded-tests/vsoxei32.c
index 3866e2bc5..5ddd1c98e 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxei32.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxei64.c b/auto-generated/llvm-overloaded-tests/vsoxei64.c
index 4bba2f0d8..0c004590c 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxei64.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxei8.c b/auto-generated/llvm-overloaded-tests/vsoxei8.c
index 74580c22b..234b660c2 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxei8.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg2ei16.c b/auto-generated/llvm-overloaded-tests/vsoxseg2ei16.c
index 42af3003a..25f049a04 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg2ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg2ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg2ei32.c b/auto-generated/llvm-overloaded-tests/vsoxseg2ei32.c
index 7f37a507e..1bd36e728 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg2ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg2ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg2ei64.c b/auto-generated/llvm-overloaded-tests/vsoxseg2ei64.c
index 5412ce0b6..debd8d5a6 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg2ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg2ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg2ei8.c b/auto-generated/llvm-overloaded-tests/vsoxseg2ei8.c
index 6013758b8..d1879dbf5 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg2ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg2ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg3ei16.c b/auto-generated/llvm-overloaded-tests/vsoxseg3ei16.c
index 4c23fa58b..ac68cd348 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg3ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg3ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg3ei32.c b/auto-generated/llvm-overloaded-tests/vsoxseg3ei32.c
index 5cec2c83f..b1d6d25d9 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg3ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg3ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg3ei64.c b/auto-generated/llvm-overloaded-tests/vsoxseg3ei64.c
index bfd5535cb..c101dbb86 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg3ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg3ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg3ei8.c b/auto-generated/llvm-overloaded-tests/vsoxseg3ei8.c
index 7fbd644ea..416c27036 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg3ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg3ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg4ei16.c b/auto-generated/llvm-overloaded-tests/vsoxseg4ei16.c
index c1ea561b7..625d3a86a 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg4ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg4ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg4ei32.c b/auto-generated/llvm-overloaded-tests/vsoxseg4ei32.c
index abb861004..b888f4602 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg4ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg4ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg4ei64.c b/auto-generated/llvm-overloaded-tests/vsoxseg4ei64.c
index 43760bdd1..3baaa140f 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg4ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg4ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg4ei8.c b/auto-generated/llvm-overloaded-tests/vsoxseg4ei8.c
index c608823e1..b1a9ca162 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg4ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg4ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg5ei16.c b/auto-generated/llvm-overloaded-tests/vsoxseg5ei16.c
index 1dde607a8..eddf62310 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg5ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg5ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg5ei32.c b/auto-generated/llvm-overloaded-tests/vsoxseg5ei32.c
index a5011ba4a..16990807e 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg5ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg5ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg5ei64.c b/auto-generated/llvm-overloaded-tests/vsoxseg5ei64.c
index 90d810b61..3d3a1f0dc 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg5ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg5ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg5ei8.c b/auto-generated/llvm-overloaded-tests/vsoxseg5ei8.c
index 741819e1d..d45a0124e 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg5ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg5ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg6ei16.c b/auto-generated/llvm-overloaded-tests/vsoxseg6ei16.c
index e3238a455..a13c73b67 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg6ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg6ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg6ei32.c b/auto-generated/llvm-overloaded-tests/vsoxseg6ei32.c
index 9c29435bd..ede0dcdcc 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg6ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg6ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg6ei64.c b/auto-generated/llvm-overloaded-tests/vsoxseg6ei64.c
index e7d194800..aa069e675 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg6ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg6ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg6ei8.c b/auto-generated/llvm-overloaded-tests/vsoxseg6ei8.c
index 5c11df866..ca9e72b46 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg6ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg6ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg7ei16.c b/auto-generated/llvm-overloaded-tests/vsoxseg7ei16.c
index f3cb2089c..a9c01330e 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg7ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg7ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg7ei32.c b/auto-generated/llvm-overloaded-tests/vsoxseg7ei32.c
index f69cf20c4..8f60967ec 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg7ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg7ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg7ei64.c b/auto-generated/llvm-overloaded-tests/vsoxseg7ei64.c
index b8bcd92d3..160de6109 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg7ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg7ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg7ei8.c b/auto-generated/llvm-overloaded-tests/vsoxseg7ei8.c
index c7b8d3388..ddb4426d5 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg7ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg7ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg8ei16.c b/auto-generated/llvm-overloaded-tests/vsoxseg8ei16.c
index 0e3e8cedb..7c9249261 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg8ei16.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg8ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg8ei32.c b/auto-generated/llvm-overloaded-tests/vsoxseg8ei32.c
index d9a1fe42e..380022c04 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg8ei32.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg8ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg8ei64.c b/auto-generated/llvm-overloaded-tests/vsoxseg8ei64.c
index 08190b607..3cbdb37bd 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg8ei64.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg8ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsoxseg8ei8.c b/auto-generated/llvm-overloaded-tests/vsoxseg8ei8.c
index 3c9fdff15..0b224ff71 100644
--- a/auto-generated/llvm-overloaded-tests/vsoxseg8ei8.c
+++ b/auto-generated/llvm-overloaded-tests/vsoxseg8ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsse16.c b/auto-generated/llvm-overloaded-tests/vsse16.c
index 1cf57e1d5..7f44dc849 100644
--- a/auto-generated/llvm-overloaded-tests/vsse16.c
+++ b/auto-generated/llvm-overloaded-tests/vsse16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/llvm-overloaded-tests/vsse32.c b/auto-generated/llvm-overloaded-tests/vsse32.c
index 2475911e3..baeae97ed 100644
--- a/auto-generated/llvm-overloaded-tests/vsse32.c
+++ b/auto-generated/llvm-overloaded-tests/vsse32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff
--git a/auto-generated/llvm-overloaded-tests/vsse64.c b/auto-generated/llvm-overloaded-tests/vsse64.c index b6aa913f8..e4afe3346 100644 --- a/auto-generated/llvm-overloaded-tests/vsse64.c +++ b/auto-generated/llvm-overloaded-tests/vsse64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg2e16.c b/auto-generated/llvm-overloaded-tests/vsseg2e16.c index 43238cab1..415a55b37 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg2e16.c +++ b/auto-generated/llvm-overloaded-tests/vsseg2e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg2e32.c b/auto-generated/llvm-overloaded-tests/vsseg2e32.c index 5e216dc46..036f11cce 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg2e32.c +++ b/auto-generated/llvm-overloaded-tests/vsseg2e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg2e64.c b/auto-generated/llvm-overloaded-tests/vsseg2e64.c index dcb9ed1f1..35847f988 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg2e64.c +++ b/auto-generated/llvm-overloaded-tests/vsseg2e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg3e16.c b/auto-generated/llvm-overloaded-tests/vsseg3e16.c index 25e0acd41..70e03adb6 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg3e16.c +++ b/auto-generated/llvm-overloaded-tests/vsseg3e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg3e32.c b/auto-generated/llvm-overloaded-tests/vsseg3e32.c index e1dae8c47..f1f6fcc54 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg3e32.c +++ b/auto-generated/llvm-overloaded-tests/vsseg3e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm 
%s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg3e64.c b/auto-generated/llvm-overloaded-tests/vsseg3e64.c index d48c6dc50..a3f33beee 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg3e64.c +++ b/auto-generated/llvm-overloaded-tests/vsseg3e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg4e16.c b/auto-generated/llvm-overloaded-tests/vsseg4e16.c index 6cbaae422..06ec69ba2 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg4e16.c +++ b/auto-generated/llvm-overloaded-tests/vsseg4e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg4e32.c b/auto-generated/llvm-overloaded-tests/vsseg4e32.c index f56835928..fae06b3ee 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg4e32.c +++ b/auto-generated/llvm-overloaded-tests/vsseg4e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg4e64.c b/auto-generated/llvm-overloaded-tests/vsseg4e64.c index 57f43fc22..3dd378731 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg4e64.c +++ b/auto-generated/llvm-overloaded-tests/vsseg4e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg5e16.c b/auto-generated/llvm-overloaded-tests/vsseg5e16.c index cf3ad5532..f3e196ba4 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg5e16.c +++ b/auto-generated/llvm-overloaded-tests/vsseg5e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg5e32.c b/auto-generated/llvm-overloaded-tests/vsseg5e32.c index 956d08e51..1bf384edd 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg5e32.c +++ b/auto-generated/llvm-overloaded-tests/vsseg5e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature 
+experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg5e64.c b/auto-generated/llvm-overloaded-tests/vsseg5e64.c index 2114c8e20..a3a385864 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg5e64.c +++ b/auto-generated/llvm-overloaded-tests/vsseg5e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg6e16.c b/auto-generated/llvm-overloaded-tests/vsseg6e16.c index 6fcd61811..6d42bc4d3 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg6e16.c +++ b/auto-generated/llvm-overloaded-tests/vsseg6e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg6e32.c b/auto-generated/llvm-overloaded-tests/vsseg6e32.c index e6cd0c6a0..ae33fc5de 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg6e32.c +++ b/auto-generated/llvm-overloaded-tests/vsseg6e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg6e64.c b/auto-generated/llvm-overloaded-tests/vsseg6e64.c index aedfabe92..b54daa66a 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg6e64.c +++ b/auto-generated/llvm-overloaded-tests/vsseg6e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg7e16.c b/auto-generated/llvm-overloaded-tests/vsseg7e16.c index ff1f2bed7..ab8bcdc64 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg7e16.c +++ b/auto-generated/llvm-overloaded-tests/vsseg7e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg7e32.c b/auto-generated/llvm-overloaded-tests/vsseg7e32.c index 4de310bad..f618e5326 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg7e32.c +++ b/auto-generated/llvm-overloaded-tests/vsseg7e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // 
RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg7e64.c b/auto-generated/llvm-overloaded-tests/vsseg7e64.c index 5f3266a8c..195410936 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg7e64.c +++ b/auto-generated/llvm-overloaded-tests/vsseg7e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg8e16.c b/auto-generated/llvm-overloaded-tests/vsseg8e16.c index a41ca55e4..127b78cd1 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg8e16.c +++ b/auto-generated/llvm-overloaded-tests/vsseg8e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg8e32.c b/auto-generated/llvm-overloaded-tests/vsseg8e32.c index 19cf5df8d..ae05e8039 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg8e32.c +++ b/auto-generated/llvm-overloaded-tests/vsseg8e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsseg8e64.c b/auto-generated/llvm-overloaded-tests/vsseg8e64.c index e4bdb7bd6..0380df56d 100644 --- a/auto-generated/llvm-overloaded-tests/vsseg8e64.c +++ b/auto-generated/llvm-overloaded-tests/vsseg8e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg2e16.c b/auto-generated/llvm-overloaded-tests/vssseg2e16.c index e3c10da0d..5b2ccfb87 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg2e16.c +++ b/auto-generated/llvm-overloaded-tests/vssseg2e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg2e32.c b/auto-generated/llvm-overloaded-tests/vssseg2e32.c index 41678e400..3e66b0c61 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg2e32.c +++ 
b/auto-generated/llvm-overloaded-tests/vssseg2e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg2e64.c b/auto-generated/llvm-overloaded-tests/vssseg2e64.c index 74096a104..03735b1cf 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg2e64.c +++ b/auto-generated/llvm-overloaded-tests/vssseg2e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg3e16.c b/auto-generated/llvm-overloaded-tests/vssseg3e16.c index b8efd3374..0a5e34722 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg3e16.c +++ b/auto-generated/llvm-overloaded-tests/vssseg3e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg3e32.c b/auto-generated/llvm-overloaded-tests/vssseg3e32.c index dbce8bc59..bb8183c83 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg3e32.c +++ b/auto-generated/llvm-overloaded-tests/vssseg3e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg3e64.c b/auto-generated/llvm-overloaded-tests/vssseg3e64.c index 32d4977bc..d481537ab 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg3e64.c +++ b/auto-generated/llvm-overloaded-tests/vssseg3e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg4e16.c b/auto-generated/llvm-overloaded-tests/vssseg4e16.c index 9b4e16416..80d6e21c3 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg4e16.c +++ b/auto-generated/llvm-overloaded-tests/vssseg4e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg4e32.c 
b/auto-generated/llvm-overloaded-tests/vssseg4e32.c index dc0e89ad1..e6f45c6ba 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg4e32.c +++ b/auto-generated/llvm-overloaded-tests/vssseg4e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg4e64.c b/auto-generated/llvm-overloaded-tests/vssseg4e64.c index 38f19c503..f5644a76f 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg4e64.c +++ b/auto-generated/llvm-overloaded-tests/vssseg4e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg5e16.c b/auto-generated/llvm-overloaded-tests/vssseg5e16.c index 5c8157e6e..ec2e3c51e 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg5e16.c +++ b/auto-generated/llvm-overloaded-tests/vssseg5e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg5e32.c b/auto-generated/llvm-overloaded-tests/vssseg5e32.c index 7f22fbbfd..67db8b2c5 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg5e32.c +++ b/auto-generated/llvm-overloaded-tests/vssseg5e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg5e64.c b/auto-generated/llvm-overloaded-tests/vssseg5e64.c index 789412209..b463bb0bf 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg5e64.c +++ b/auto-generated/llvm-overloaded-tests/vssseg5e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg6e16.c b/auto-generated/llvm-overloaded-tests/vssseg6e16.c index 9a020de8e..129a7bfdd 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg6e16.c +++ b/auto-generated/llvm-overloaded-tests/vssseg6e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S 
-passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg6e32.c b/auto-generated/llvm-overloaded-tests/vssseg6e32.c index 03027fccd..8c40ed263 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg6e32.c +++ b/auto-generated/llvm-overloaded-tests/vssseg6e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg6e64.c b/auto-generated/llvm-overloaded-tests/vssseg6e64.c index 3d009cb04..82b7a75d0 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg6e64.c +++ b/auto-generated/llvm-overloaded-tests/vssseg6e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg7e16.c b/auto-generated/llvm-overloaded-tests/vssseg7e16.c index ce786f3d2..a0035d7fb 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg7e16.c +++ b/auto-generated/llvm-overloaded-tests/vssseg7e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg7e32.c b/auto-generated/llvm-overloaded-tests/vssseg7e32.c index f326e9039..f552d57f7 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg7e32.c +++ b/auto-generated/llvm-overloaded-tests/vssseg7e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg7e64.c b/auto-generated/llvm-overloaded-tests/vssseg7e64.c index 3b805e414..2199614cb 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg7e64.c +++ b/auto-generated/llvm-overloaded-tests/vssseg7e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg8e16.c b/auto-generated/llvm-overloaded-tests/vssseg8e16.c index 656c67d7c..d3dcf76ae 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg8e16.c +++ b/auto-generated/llvm-overloaded-tests/vssseg8e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature 
+experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg8e32.c b/auto-generated/llvm-overloaded-tests/vssseg8e32.c index ed2192598..eb32d1ebb 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg8e32.c +++ b/auto-generated/llvm-overloaded-tests/vssseg8e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vssseg8e64.c b/auto-generated/llvm-overloaded-tests/vssseg8e64.c index f537e5e9b..ab0614ac0 100644 --- a/auto-generated/llvm-overloaded-tests/vssseg8e64.c +++ b/auto-generated/llvm-overloaded-tests/vssseg8e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxei16.c b/auto-generated/llvm-overloaded-tests/vsuxei16.c index 50c969724..ef2ced4ee 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxei16.c +++ b/auto-generated/llvm-overloaded-tests/vsuxei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxei32.c b/auto-generated/llvm-overloaded-tests/vsuxei32.c index 4873fd9d2..d5c96c863 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxei32.c +++ b/auto-generated/llvm-overloaded-tests/vsuxei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxei64.c b/auto-generated/llvm-overloaded-tests/vsuxei64.c index 7d8101f35..22f00cc16 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxei64.c +++ b/auto-generated/llvm-overloaded-tests/vsuxei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxei8.c b/auto-generated/llvm-overloaded-tests/vsuxei8.c index 1f8cfb34a..1efc48bb2 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxei8.c +++ b/auto-generated/llvm-overloaded-tests/vsuxei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: 
%clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg2ei16.c b/auto-generated/llvm-overloaded-tests/vsuxseg2ei16.c index f3120eff2..29252c640 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg2ei16.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg2ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg2ei32.c b/auto-generated/llvm-overloaded-tests/vsuxseg2ei32.c index 701352e5d..a8f7220d4 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg2ei32.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg2ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg2ei64.c b/auto-generated/llvm-overloaded-tests/vsuxseg2ei64.c index 5038f7711..4ad8bb8b9 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg2ei64.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg2ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg2ei8.c b/auto-generated/llvm-overloaded-tests/vsuxseg2ei8.c index 2411b2327..eee7efb27 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg2ei8.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg2ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg3ei16.c b/auto-generated/llvm-overloaded-tests/vsuxseg3ei16.c index 19ec63512..72358b80d 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg3ei16.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg3ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg3ei32.c b/auto-generated/llvm-overloaded-tests/vsuxseg3ei32.c index 0d8ba91f4..047075c73 100644 --- 
a/auto-generated/llvm-overloaded-tests/vsuxseg3ei32.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg3ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg3ei64.c b/auto-generated/llvm-overloaded-tests/vsuxseg3ei64.c index 3d50ceea6..6b0bea1b3 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg3ei64.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg3ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg3ei8.c b/auto-generated/llvm-overloaded-tests/vsuxseg3ei8.c index d0ac86932..82e515cc5 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg3ei8.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg3ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg4ei16.c b/auto-generated/llvm-overloaded-tests/vsuxseg4ei16.c index 349e97489..8b7b46575 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg4ei16.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg4ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg4ei32.c b/auto-generated/llvm-overloaded-tests/vsuxseg4ei32.c index d133f60f2..77be371e3 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg4ei32.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg4ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg4ei64.c b/auto-generated/llvm-overloaded-tests/vsuxseg4ei64.c index b6f36564c..7bda9e24b 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg4ei64.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg4ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck 
--check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg4ei8.c b/auto-generated/llvm-overloaded-tests/vsuxseg4ei8.c index a877dc94b..7659e59d0 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg4ei8.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg4ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg5ei16.c b/auto-generated/llvm-overloaded-tests/vsuxseg5ei16.c index f59d83ceb..0a1811e1c 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg5ei16.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg5ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg5ei32.c b/auto-generated/llvm-overloaded-tests/vsuxseg5ei32.c index 55329980b..5d4154fcb 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg5ei32.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg5ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg5ei64.c b/auto-generated/llvm-overloaded-tests/vsuxseg5ei64.c index 4713f4d87..50ffed379 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg5ei64.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg5ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg5ei8.c b/auto-generated/llvm-overloaded-tests/vsuxseg5ei8.c index 9f94327a8..1e35c8213 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg5ei8.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg5ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg6ei16.c b/auto-generated/llvm-overloaded-tests/vsuxseg6ei16.c index 74432c2d8..84358a7c8 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg6ei16.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg6ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature 
+experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg6ei32.c b/auto-generated/llvm-overloaded-tests/vsuxseg6ei32.c index c240ec510..bfdb9c6aa 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg6ei32.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg6ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg6ei64.c b/auto-generated/llvm-overloaded-tests/vsuxseg6ei64.c index 1bb65ab8b..8a5463a75 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg6ei64.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg6ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg6ei8.c b/auto-generated/llvm-overloaded-tests/vsuxseg6ei8.c index 80d482387..83f1fe8fc 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg6ei8.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg6ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg7ei16.c b/auto-generated/llvm-overloaded-tests/vsuxseg7ei16.c index 61983c35c..251f565d7 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg7ei16.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg7ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg7ei32.c b/auto-generated/llvm-overloaded-tests/vsuxseg7ei32.c index ce187e2c6..f0f316ed7 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg7ei32.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg7ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg7ei64.c b/auto-generated/llvm-overloaded-tests/vsuxseg7ei64.c index b6602d5b1..28c8902ba 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg7ei64.c +++ 
b/auto-generated/llvm-overloaded-tests/vsuxseg7ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg7ei8.c b/auto-generated/llvm-overloaded-tests/vsuxseg7ei8.c index b94f60512..f474b10b5 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg7ei8.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg7ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg8ei16.c b/auto-generated/llvm-overloaded-tests/vsuxseg8ei16.c index 6111162b9..db6cd294e 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg8ei16.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg8ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg8ei32.c b/auto-generated/llvm-overloaded-tests/vsuxseg8ei32.c index eb5f7ee1e..a10f4167c 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg8ei32.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg8ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg8ei64.c b/auto-generated/llvm-overloaded-tests/vsuxseg8ei64.c index 16d60f754..271bfbcdf 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg8ei64.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg8ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vsuxseg8ei8.c b/auto-generated/llvm-overloaded-tests/vsuxseg8ei8.c index 3ae76a7d5..16335b590 100644 --- a/auto-generated/llvm-overloaded-tests/vsuxseg8ei8.c +++ b/auto-generated/llvm-overloaded-tests/vsuxseg8ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git 
a/auto-generated/llvm-overloaded-tests/vwmacc.c b/auto-generated/llvm-overloaded-tests/vwmacc.c index 21b1d61b7..3466cccb3 100644 --- a/auto-generated/llvm-overloaded-tests/vwmacc.c +++ b/auto-generated/llvm-overloaded-tests/vwmacc.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vwmaccsu.c b/auto-generated/llvm-overloaded-tests/vwmaccsu.c index 4f05962d0..50b6a51df 100644 --- a/auto-generated/llvm-overloaded-tests/vwmaccsu.c +++ b/auto-generated/llvm-overloaded-tests/vwmaccsu.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vwmaccu.c b/auto-generated/llvm-overloaded-tests/vwmaccu.c index b0d8ce766..05b09ea69 100644 --- a/auto-generated/llvm-overloaded-tests/vwmaccu.c +++ b/auto-generated/llvm-overloaded-tests/vwmaccu.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/llvm-overloaded-tests/vwmaccus.c b/auto-generated/llvm-overloaded-tests/vwmaccus.c index db831127c..b197b939e 100644 --- a/auto-generated/llvm-overloaded-tests/vwmaccus.c +++ b/auto-generated/llvm-overloaded-tests/vwmaccus.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vcompress.c b/auto-generated/policy_funcs/llvm-api-tests/vcompress.c index a7086168a..ebdee1337 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vcompress.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vcompress.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfabs.c b/auto-generated/policy_funcs/llvm-api-tests/vfabs.c index dcadfa2ba..a903088d8 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vfabs.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vfabs.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: 
-emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfadd.c b/auto-generated/policy_funcs/llvm-api-tests/vfadd.c
index 9109a9bae..fde0ae4c7 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfadd.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfclass.c b/auto-generated/policy_funcs/llvm-api-tests/vfclass.c
index 6243dcb90..e412f9ac6 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfclass.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfclass.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfcvt.c b/auto-generated/policy_funcs/llvm-api-tests/vfcvt.c
index 2a9bf75d8..1e771a644 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfcvt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfcvt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfcvt_rtz.c b/auto-generated/policy_funcs/llvm-api-tests/vfcvt_rtz.c
index 3d5bad3c1..76bb0ac58 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfcvt_rtz.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfcvt_rtz.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfdiv.c b/auto-generated/policy_funcs/llvm-api-tests/vfdiv.c
index 6b39dc20a..288623743 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfdiv.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfdiv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vfmacc.c
index 567b7dadb..c81d55623 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmacc.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmadd.c b/auto-generated/policy_funcs/llvm-api-tests/vfmadd.c
index 0f3e25db7..fbd2569a4 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmadd.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmax.c b/auto-generated/policy_funcs/llvm-api-tests/vfmax.c
index eb2c92fc1..b952d0a2e 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmax.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmax.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmerge.c b/auto-generated/policy_funcs/llvm-api-tests/vfmerge.c
index f86d83a86..00af5a6c3 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmerge.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmerge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmin.c b/auto-generated/policy_funcs/llvm-api-tests/vfmin.c
index 860c77008..ab473680b 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmin.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmin.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmsac.c b/auto-generated/policy_funcs/llvm-api-tests/vfmsac.c
index ad2d0f6db..5eef22162 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmsac.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmsub.c b/auto-generated/policy_funcs/llvm-api-tests/vfmsub.c
index a2a0e463d..97202c3a7 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmsub.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmul.c b/auto-generated/policy_funcs/llvm-api-tests/vfmul.c
index 59c79f7b5..9dbac1615 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmul.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmul.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfmv.c b/auto-generated/policy_funcs/llvm-api-tests/vfmv.c
index ed8d7b31a..e5d156a50 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfmv.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfmv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfncvt.c b/auto-generated/policy_funcs/llvm-api-tests/vfncvt.c
index 8f557fa84..551f7cae3 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfncvt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfncvt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rod.c b/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rod.c
index d0a4d264a..868f29c52 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rod.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rod.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rtz.c b/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rtz.c
index 274f13d60..64d84100c 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rtz.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfncvt_rtz.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfneg.c b/auto-generated/policy_funcs/llvm-api-tests/vfneg.c
index 806b679c2..e4d203d35 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfneg.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfneg.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfnmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vfnmacc.c
index b0320d93d..f46242dd1 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfnmacc.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfnmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfnmadd.c b/auto-generated/policy_funcs/llvm-api-tests/vfnmadd.c
index e3760a218..9c36ecf23 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfnmadd.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfnmadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfnmsac.c b/auto-generated/policy_funcs/llvm-api-tests/vfnmsac.c
index 7f5661068..ab6eddd87 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfnmsac.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfnmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfnmsub.c b/auto-generated/policy_funcs/llvm-api-tests/vfnmsub.c
index 529eed440..8eea091af 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfnmsub.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfnmsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfrdiv.c b/auto-generated/policy_funcs/llvm-api-tests/vfrdiv.c
index e4dad4b5a..df5edd680 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfrdiv.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfrdiv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfrec7.c b/auto-generated/policy_funcs/llvm-api-tests/vfrec7.c
index 6c62463d5..e544f1329 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfrec7.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfrec7.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfredmax.c b/auto-generated/policy_funcs/llvm-api-tests/vfredmax.c
index 96e44055c..fdd9e5313 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfredmax.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfredmax.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfredmin.c b/auto-generated/policy_funcs/llvm-api-tests/vfredmin.c
index 8d68be880..a8c9303de 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfredmin.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfredmin.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfredosum.c b/auto-generated/policy_funcs/llvm-api-tests/vfredosum.c
index 436e01f24..f3f53dedf 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfredosum.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfredosum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfredusum.c b/auto-generated/policy_funcs/llvm-api-tests/vfredusum.c
index 43f54df5f..fbee58ecc 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfredusum.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfredusum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfrsqrt7.c b/auto-generated/policy_funcs/llvm-api-tests/vfrsqrt7.c
index 2f74a970c..626069c1a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfrsqrt7.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfrsqrt7.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfrsub.c b/auto-generated/policy_funcs/llvm-api-tests/vfrsub.c
index 5f1c1ded8..3e4f103cb 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfrsub.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfrsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfsgnj.c b/auto-generated/policy_funcs/llvm-api-tests/vfsgnj.c
index 926e144d0..a8b414d8e 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfsgnj.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfsgnj.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfsgnjn.c b/auto-generated/policy_funcs/llvm-api-tests/vfsgnjn.c
index d1cb37e31..7e59fa644 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfsgnjn.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfsgnjn.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfsgnjx.c b/auto-generated/policy_funcs/llvm-api-tests/vfsgnjx.c
index 63e290da2..71f8a522a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfsgnjx.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfsgnjx.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfslide1down.c b/auto-generated/policy_funcs/llvm-api-tests/vfslide1down.c
index c074c1c6f..e41f03352 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfslide1down.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfslide1down.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfslide1up.c b/auto-generated/policy_funcs/llvm-api-tests/vfslide1up.c
index bc2af8201..d38f65b01 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfslide1up.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfslide1up.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfsqrt.c b/auto-generated/policy_funcs/llvm-api-tests/vfsqrt.c
index 88b1d0760..acdc47d45 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfsqrt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfsqrt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfsub.c b/auto-generated/policy_funcs/llvm-api-tests/vfsub.c
index c8b8985f2..b39f6f678 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfsub.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwadd.c b/auto-generated/policy_funcs/llvm-api-tests/vfwadd.c
index 716dd3d8d..2a42ff056 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwadd.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwcvt.c b/auto-generated/policy_funcs/llvm-api-tests/vfwcvt.c
index 54b67176b..556843fd4 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwcvt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwcvt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwcvt_rtz.c b/auto-generated/policy_funcs/llvm-api-tests/vfwcvt_rtz.c
index 08d1ac345..7f821ec29 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwcvt_rtz.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwcvt_rtz.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vfwmacc.c
index 11a7a3969..c32736de3 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwmacc.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwmsac.c b/auto-generated/policy_funcs/llvm-api-tests/vfwmsac.c
index 3907f0d30..ed0c3e4c4 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwmsac.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwmul.c b/auto-generated/policy_funcs/llvm-api-tests/vfwmul.c
index 4cde11fc2..4be35ffdb 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwmul.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwmul.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwnmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vfwnmacc.c
index 77ae1e831..0612089f9 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwnmacc.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwnmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwnmsac.c b/auto-generated/policy_funcs/llvm-api-tests/vfwnmsac.c
index 0b2811533..fcefa8655 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwnmsac.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwnmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwredosum.c b/auto-generated/policy_funcs/llvm-api-tests/vfwredosum.c
index ddcf526c2..269edd92e 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwredosum.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwredosum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwredusum.c b/auto-generated/policy_funcs/llvm-api-tests/vfwredusum.c
index e21d165dd..eb1d79b82 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwredusum.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwredusum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vfwsub.c b/auto-generated/policy_funcs/llvm-api-tests/vfwsub.c
index c02377035..15f01a51d 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vfwsub.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vfwsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle16.c b/auto-generated/policy_funcs/llvm-api-tests/vle16.c
index 753f3077c..8f36eb6e7 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vle16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vle16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vle16ff.c
index ce6044451..77687ce39 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vle16ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vle16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle32.c b/auto-generated/policy_funcs/llvm-api-tests/vle32.c
index 0f03f0331..f0ac63a3f 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vle32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vle32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vle32ff.c
index 7a14a5cce..56a12a3ad 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vle32ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vle32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle64.c b/auto-generated/policy_funcs/llvm-api-tests/vle64.c
index 8582f9410..ff66485b2 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vle64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vle64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vle64ff.c
index 4f33528d0..94473b1b8 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vle64ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vle64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle8.c b/auto-generated/policy_funcs/llvm-api-tests/vle8.c
index cb9d473b4..a857f69ae 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vle8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vle8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vle8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vle8ff.c
index b95ffecd4..4f257b992 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vle8ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vle8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxei16.c
index a03b3dd75..f3c3b4406 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxei32.c
index 0fa2a2b4b..3d38f6d41 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxei64.c
index fdfe8e95a..3df3a9aca 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxei8.c
index 229f3c1fe..fc2dccc91 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei16.c
index b9dc6f126..5f7c02fb0 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei32.c
index a589d196d..8979a6d96 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei64.c
index 7dc64ef86..b21767296 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei8.c
index f1b570df5..9fec44f14 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg2ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei16.c
index 65a3e75b9..282893561 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei32.c
index a5a89b2a4..9d04fdf23 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei64.c
index 914d57f2f..84d0507f1 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei8.c
index a5cf8f583..35b4993f7 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg3ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei16.c
index 8ec8bd259..7e8f5bffc 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei32.c
index bf1b4cc94..94704fca0 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei64.c
index 323f4aed5..68b557ed3 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei8.c
index aa0132a1e..1901fc16f 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg4ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei16.c
index 60bf7b383..7f921ccb0 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei32.c
index 7d8ea3433..02aabba41 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei64.c
index 7132d189e..b5d4130ff 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei8.c
index b1c2addb9..13c16988a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg5ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei16.c
index 25627fb57..5ef719e68 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei32.c
index 908fc5745..a80125ce8 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei64.c
index 3ff9b7bfb..0b150b8cd 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei8.c
index 4504fd1a2..21d1d7b96 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg6ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei16.c
index 7b7d9eb09..a5f3e9cbc 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei32.c
index ca3f42e67..e1a0b5c79 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei64.c
index 3e847e4ff..d6360f176 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei8.c
index d9bc1a88e..b18436bba 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg7ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei16.c
index c766ace0a..789117ac3 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei32.c
index cedbfadc3..3f65b9870 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei64.c
index 2081c7820..86ff4b431 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei8.c
index a7851c56e..231e3055d 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei8.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vloxseg8ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlse16.c b/auto-generated/policy_funcs/llvm-api-tests/vlse16.c
index 0ed276748..8901df015 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlse16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlse16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlse32.c b/auto-generated/policy_funcs/llvm-api-tests/vlse32.c
index 92f436fec..cf9f02327 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlse32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlse32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlse64.c b/auto-generated/policy_funcs/llvm-api-tests/vlse64.c
index 9f84d0720..bd16581ff 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlse64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlse64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16.c
index ae4ef1668..7194f8fbc 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16ff.c
index c2fbe42a3..3800f34da 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32.c
index a7a18aeb4..f2dd37aba 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32ff.c
index e7ecbc469..3abcf741e 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64.c
index 6516085ef..c07f97d98 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64ff.c
index a7218bef3..ec4b79c3b 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8ff.c
index 5e22ef490..e817f602d 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg2e8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16.c
index 646cd3b4d..a7389a98c 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16ff.c
index 42fcaf2b5..b785d2e44 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32.c
index a2592e6db..f2e30c4b8 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32ff.c
index 27f06ccc0..7abe7bec7 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64.c
index 1c2894a1c..a819342b5 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64ff.c
index a6643f8f5..520a58409 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8ff.c
index 2f50d75dc..1958782e8 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg3e8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16.c
index 6bd4e1e57..41091d791 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16ff.c
index 42d4c3620..6fa59034c 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32.c
index 4e79801e2..9a3f61bbd 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32ff.c
index 82e9c23df..1b526432a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64.c
index c7ab055fb..c0047ad94 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64ff.c
index ab4d14078..04eacb04a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8ff.c
index 9adca2196..e2d3a6ecd 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg4e8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16.c
index 0e69ae53a..ba31772b7 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16ff.c
index 8ec1c2ebd..238570b7a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32.c
index c9285533e..ff7ae0c39 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32ff.c
index 64a0ffe92..416d00030 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64.c
index d73abc8c7..0e6defe74 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64ff.c
index 1f0463160..c8db7a68b 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64ff.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN:
-target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8ff.c index 907ed8482..419500c2d 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg5e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16.c index 6fb72e1a7..a91ad7254 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16ff.c index fb9222849..e79ca2fed 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32.c index 573299c38..ed0dc5cd7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32ff.c index 7579ff78f..6b0bf3b27 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64.c index f4f88cb18..33314bb1e 
100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64ff.c index 4a919074d..75e9dd0f7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8ff.c index f9f74e740..55995c32f 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg6e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16.c index 204a30dd7..35e04558e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16ff.c index 28920e5be..127b9c80b 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32.c index 81031cf79..062a9d026 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh 
-disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32ff.c index dee7b7717..1ac2d18aa 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64.c index 9cbc46ff3..ea49db942 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64ff.c index a2e28f5a7..c1722bd81 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8ff.c index b962005fc..d2fa804e5 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg7e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16.c index 20668ac56..bc7eb6b96 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16ff.c index eb3098659..3357428a2 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16ff.c +++ 
b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32.c index 5dc0cb18e..b03b18dcb 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32ff.c index 90329199e..1e706c9b0 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64.c index f0d04f2c2..555d44d65 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64ff.c index 113a017a4..aaade51e3 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8ff.c b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8ff.c index 8adcc574f..f7db3628c 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8ff.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlseg8e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // 
RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e16.c index de50cd9fb..69a53246b 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e32.c index c77dd94ab..2708bce24 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e64.c index 2910058f7..606957c48 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg2e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e16.c index 730a9b93e..0f86e848f 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e32.c index 2a2e627d2..8511dc04d 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e64.c index 678239a3b..133fbd281 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg3e64.c @@ -1,6 +1,6 @@ // REQUIRES: 
riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e16.c index 277c2312a..ee44044c0 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e32.c index 0c2d5ef3e..c9a6a6c66 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e64.c index 56a04c418..934bc5cf9 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg4e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e16.c index b72f11ea7..aa1ee1912 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e32.c index 862192a5d..52927a6eb 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git 
a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e64.c index 9193a7861..109f296ab 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg5e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e16.c index b083be31e..0fd6217af 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e32.c index 80af6a712..e0285e150 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e64.c index 3bda96003..f5e86e1a4 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg6e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e16.c index 354f87241..e7af8989e 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e32.c index dc78b6aae..3e888e747 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 
-target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e64.c index c49153e51..d451be79a 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg7e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e16.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e16.c index 4c224afd9..33e20d89b 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e32.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e32.c index 26d718bc1..8f35c3615 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e64.c b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e64.c index 96162fe24..6294317ae 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vlsseg8e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxei16.c index 5b1122b4e..9e4ac73c4 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxei32.c 
b/auto-generated/policy_funcs/llvm-api-tests/vluxei32.c index 04ce6c287..f0b62b910 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxei64.c index 85491fc0e..c22a2f4ec 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxei8.c index 2ee6f69fa..212bc1c77 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei16.c index 2eadd13d2..f07a5165f 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei32.c index 7725f4999..0dcebb83f 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei64.c index 5090820ae..e02cb3778 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature 
+experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei8.c index fcf0f0b4c..18181c3df 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg2ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei16.c index b8fa21287..8614a187d 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei32.c index e4df9e5af..b6ecc6fb7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei64.c index df1e65412..2f594f2b6 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei8.c index 2ac3792b8..1deb31ad0 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg3ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei16.c index 
d3cb2a848..f6bbc067b 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei32.c index 00e75404a..7f194bff7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei64.c index 845186364..dbdcf8692 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei8.c index d6d67ff8d..b917c432b 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg4ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei16.c index 79752eda2..db7fc7016 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei32.c index 870f07882..2a95b47d4 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh 
-disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei64.c index 5ac7ae1aa..9e7c03347 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei8.c index a783f6343..57b3dc986 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg5ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei16.c index 05a3ecb7f..e74a7cd0b 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei32.c index 8d6006c07..f5c4a8379 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei64.c index 238da64dc..9cbc1cadf 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei8.c index 3e6141bc1..57ba41d84 
100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg6ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei16.c index a5bd9125b..bc5521e82 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei32.c index 7b82e6bda..0dbb9fcce 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei64.c index 8c6449aa3..a73ef59d3 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei8.c index bce16c276..214016b55 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg7ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei16.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei16.c index 08f5f6001..3d0951456 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei16.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// 
RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei32.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei32.c index efe20db35..557005a61 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei32.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei64.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei64.c index 249ecb665..e11b05312 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei64.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei8.c b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei8.c index 62525eee0..10e67ccd1 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei8.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vluxseg8ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vmacc.c index 224679222..48df2176d 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmacc.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmacc.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmadd.c b/auto-generated/policy_funcs/llvm-api-tests/vmadd.c index 8cbd12ce1..097444941 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmadd.c +++ b/auto-generated/policy_funcs/llvm-api-tests/vmadd.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmerge.c b/auto-generated/policy_funcs/llvm-api-tests/vmerge.c index 03a64d7ad..2c399bce7 100644 --- a/auto-generated/policy_funcs/llvm-api-tests/vmerge.c +++ 
b/auto-generated/policy_funcs/llvm-api-tests/vmerge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmfeq.c b/auto-generated/policy_funcs/llvm-api-tests/vmfeq.c
index 9fcddbdcc..39a6bbd17 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmfeq.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmfeq.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmfge.c b/auto-generated/policy_funcs/llvm-api-tests/vmfge.c
index b4e0f8d02..fe6a9d0f9 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmfge.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmfge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmfgt.c b/auto-generated/policy_funcs/llvm-api-tests/vmfgt.c
index 21595f6ef..f30fee119 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmfgt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmfgt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmfle.c b/auto-generated/policy_funcs/llvm-api-tests/vmfle.c
index 1b1f8e1eb..e19fce811 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmfle.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmfle.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmflt.c b/auto-generated/policy_funcs/llvm-api-tests/vmflt.c
index 121587e64..847299193 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmflt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmflt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmfne.c b/auto-generated/policy_funcs/llvm-api-tests/vmfne.c
index 8529ce921..8a38e2fff 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmfne.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmfne.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmseq.c b/auto-generated/policy_funcs/llvm-api-tests/vmseq.c
index 7bd8df825..505150283 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmseq.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmseq.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsge.c b/auto-generated/policy_funcs/llvm-api-tests/vmsge.c
index 3a35b27de..7a93cf9f9 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmsge.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmsge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsgeu.c b/auto-generated/policy_funcs/llvm-api-tests/vmsgeu.c
index 10a6b2f82..191666ccf 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmsgeu.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmsgeu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsgt.c b/auto-generated/policy_funcs/llvm-api-tests/vmsgt.c
index 6ae082474..dffc04b9c 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmsgt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmsgt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsgtu.c b/auto-generated/policy_funcs/llvm-api-tests/vmsgtu.c
index 1bd0be85d..96b221070 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmsgtu.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmsgtu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsle.c b/auto-generated/policy_funcs/llvm-api-tests/vmsle.c
index 24285d266..4998b6b58 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmsle.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmsle.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsleu.c b/auto-generated/policy_funcs/llvm-api-tests/vmsleu.c
index 98b7c7af0..e0911f3e1 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmsleu.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmsleu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmslt.c b/auto-generated/policy_funcs/llvm-api-tests/vmslt.c
index d21a5009f..3f05df7e4 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmslt.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmslt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsltu.c b/auto-generated/policy_funcs/llvm-api-tests/vmsltu.c
index 717394b85..2191e7cc9 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmsltu.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmsltu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmsne.c b/auto-generated/policy_funcs/llvm-api-tests/vmsne.c
index 59479abb2..d3d5b04d6 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmsne.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmsne.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vmv.c b/auto-generated/policy_funcs/llvm-api-tests/vmv.c
index bc366e749..22cfb2bc5 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vmv.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vmv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vneg.c b/auto-generated/policy_funcs/llvm-api-tests/vneg.c
index a2c70eb7c..2a722c3a9 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vneg.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vneg.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vnmsac.c b/auto-generated/policy_funcs/llvm-api-tests/vnmsac.c
index 0efa87093..2870097c0 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vnmsac.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vnmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vnmsub.c b/auto-generated/policy_funcs/llvm-api-tests/vnmsub.c
index c391395b2..88ee02668 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vnmsub.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vnmsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vrgather.c b/auto-generated/policy_funcs/llvm-api-tests/vrgather.c
index 4d719d97e..b1b90ee12 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vrgather.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vrgather.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vrgatherei16.c b/auto-generated/policy_funcs/llvm-api-tests/vrgatherei16.c
index 769b627ba..8e42b19fc 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vrgatherei16.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vrgatherei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vslidedown.c b/auto-generated/policy_funcs/llvm-api-tests/vslidedown.c
index 223ab188e..370264ff1 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vslidedown.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vslidedown.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vslideup.c b/auto-generated/policy_funcs/llvm-api-tests/vslideup.c
index ae9f874e9..7dae87ea7 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vslideup.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vslideup.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwmacc.c b/auto-generated/policy_funcs/llvm-api-tests/vwmacc.c
index f88c26dcd..9e87f026a 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vwmacc.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vwmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwmaccsu.c b/auto-generated/policy_funcs/llvm-api-tests/vwmaccsu.c
index 072b9ba12..94b6af522 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vwmaccsu.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vwmaccsu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwmaccu.c b/auto-generated/policy_funcs/llvm-api-tests/vwmaccu.c
index 4dbbc6107..dffe2a64d 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vwmaccu.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vwmaccu.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-api-tests/vwmaccus.c b/auto-generated/policy_funcs/llvm-api-tests/vwmaccus.c
index 56be15183..21ee75c6b 100644
--- a/auto-generated/policy_funcs/llvm-api-tests/vwmaccus.c
+++ b/auto-generated/policy_funcs/llvm-api-tests/vwmaccus.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vcompress.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vcompress.c
index b6a4ef19a..aff9afd33 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vcompress.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vcompress.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfabs.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfabs.c
index 6ffbd7a63..9c5a9851d 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfabs.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfabs.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfadd.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfadd.c
index 886f0eda8..f09e8a69c 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfadd.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfclass.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfclass.c
index 79128aea9..a104f6762 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfclass.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfclass.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfcvt.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfcvt.c
index 39640927d..c8dbd1931 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfcvt.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfcvt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfcvt_rtz.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfcvt_rtz.c
index dc7e3e646..eba687636 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfcvt_rtz.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfcvt_rtz.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfdiv.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfdiv.c
index 1b0bd9d8c..b6b6e1ae5 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfdiv.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfdiv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmacc.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmacc.c
index 718e8db82..1e40ba60f 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmacc.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmadd.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmadd.c
index c154592a7..56eb76d7e 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmadd.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmax.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmax.c
index 703490a77..36f423f35 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmax.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmax.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmerge.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmerge.c
index 9e31f3039..df748485a 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmerge.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmerge.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmin.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmin.c
index b9d268770..b9e39913e 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmin.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmin.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmsac.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmsac.c
index 7d3fdb4f3..c5c6da74c 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmsac.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmsub.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmsub.c
index 427d12024..7461bea5c 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmsub.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmul.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmul.c
index e6ad40115..1cf9e0274 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmul.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmul.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmv.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmv.c
index 8566da9fd..91038fd65 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfmv.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfmv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt.c
index 0dee6aa75..cf64ceeb0 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt_rod.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt_rod.c
index 8409807a9..0859315d2 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt_rod.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt_rod.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt_rtz.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt_rtz.c
index 689a3ac8d..78ca40d8f 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt_rtz.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfncvt_rtz.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfneg.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfneg.c
index ff04891db..ccfec5c7f 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfneg.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfneg.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmacc.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmacc.c
index 503513ae1..749a7081c 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmacc.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmadd.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmadd.c
index d53625403..8697c6575 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmadd.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmsac.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmsac.c
index 7ca40def0..9d172fc67 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmsac.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmsub.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmsub.c
index 96f2f0066..bccfd5966 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmsub.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfnmsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfrdiv.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfrdiv.c
index 850c399df..1a60fc3e9 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfrdiv.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfrdiv.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfrec7.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfrec7.c
index 7106612b1..fee19f6ab 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfrec7.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfrec7.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfredmax.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfredmax.c
index 303024300..0cc4a4040 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfredmax.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfredmax.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfredmin.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfredmin.c
index 699643c1a..0fc2fda1d 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfredmin.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfredmin.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfredosum.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfredosum.c
index a1dc3d992..d861e8ff7 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfredosum.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfredosum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfredusum.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfredusum.c
index dd5a3b915..ae92f6c73 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfredusum.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfredusum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfrsqrt7.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfrsqrt7.c
index c02e6426f..161f6fd71 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfrsqrt7.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfrsqrt7.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfrsub.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfrsub.c
index 374979b86..9e117f73b 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfrsub.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfrsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnj.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnj.c
index e4b54bf08..984f3990c 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnj.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnj.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnjn.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnjn.c
index 2d3264905..b6b8148c5 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnjn.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnjn.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnjx.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnjx.c
index 9cd3ff655..116a9f1eb 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnjx.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfsgnjx.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfslide1down.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfslide1down.c
index 97fd8a8e3..a81895634 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfslide1down.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfslide1down.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfslide1up.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfslide1up.c
index 7bdd6f434..778d3905f 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfslide1up.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfslide1up.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfsqrt.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfsqrt.c
index a193deb01..0f7a8a365 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfsqrt.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfsqrt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfsub.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfsub.c
index a435f598e..1c7293cdc 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfsub.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwadd.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwadd.c
index a442510aa..e8e9fcd57 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwadd.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwadd.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwcvt.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwcvt.c
index 456e0a331..62719e787 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwcvt.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwcvt.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwcvt_rtz.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwcvt_rtz.c
index 2e01384b7..54903e543 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwcvt_rtz.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwcvt_rtz.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmacc.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmacc.c
index 1fa53ed3b..52811c1d9 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmacc.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmsac.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmsac.c
index 35fe067a6..32bb460bf 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmsac.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmul.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmul.c
index ad5b2ed2d..84ad7ec30 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmul.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwmul.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwnmacc.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwnmacc.c
index dc2c53056..02d9e2231 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwnmacc.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwnmacc.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwnmsac.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwnmsac.c
index 26d4e3367..0c4713abb 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwnmsac.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwnmsac.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwredosum.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwredosum.c
index ff84c4aea..1ede7ddee 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwredosum.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwredosum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwredusum.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwredusum.c
index d8f8d12c8..a196c15ed 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwredusum.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwredusum.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwsub.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwsub.c
index 476c1f0f7..f1b5720d2 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vfwsub.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vfwsub.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vle16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vle16.c
index bc58bf6a5..92f0d41c6 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vle16.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vle16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vle16ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vle16ff.c
index cc1088f82..4821933f4 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vle16ff.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vle16ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vle32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vle32.c
index b52d00c8a..fb9f4cec6 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vle32.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vle32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vle32ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vle32ff.c
index a5555ac8f..f0385e548 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vle32ff.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vle32ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vle64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vle64.c
index 3b5ad37ea..dd94c6cb5 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vle64.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vle64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vle64ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vle64ff.c
index 5336fa372..f6bf02407 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vle64ff.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vle64ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vle8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vle8.c
index 8f1f7d53d..7569c4bad 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vle8.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vle8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vle8ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vle8ff.c
index dd1ae5d76..4d4860688 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vle8ff.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vle8ff.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei16.c
index 68900a9f8..f754f4ebc 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei16.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei32.c
index df4b06c57..f9c53643e 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei32.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei64.c
index b46ebded4..f1165b1cf 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei64.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei8.c
index e485fea80..bd90325d9 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei8.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c
index 9b31232d2..5d01d7a9c 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei32.c
index 885be317d..89d3ccfae 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei32.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei64.c
index 4326aff4a..a4849568a 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei64.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei8.c
index 2dd764779..9c1c879db 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei8.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg2ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c
index 419a9725a..8c1929e7a 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei32.c
index a3e875cfa..b599353de 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei32.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei64.c
index f3830edf5..83ae974a5 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei64.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei8.c
index a35ba5133..a756b75f9 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei8.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg3ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c
index 3f028e3be..e9f55962c 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei32.c
index d23de5d1d..49dc32d0f 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei32.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei64.c
index 5e9bacab6..1ba0074d5 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei64.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei8.c
index d4f465301..775d3ec0c 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei8.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg4ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c
index 177dd9a8f..b052379c6 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei32.c
index 956127665..441ae232a 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei32.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei64.c
index 469c280d8..8baccbb42 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei64.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei8.c
index dd07162d3..94635b110 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei8.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg5ei8.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c
index e5d97dc14..8837aa9db 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei16.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei32.c
index e3cfec40f..48f262498 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei32.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei32.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \
-// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \
+// RUN: -target-feature +zvfh -disable-O0-optnone \
 // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN: FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei64.c
index 6e44c72ce..1d08dafd7 100644
--- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei64.c
+++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei64.c
@@ -1,6 +1,6 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature
+zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei8.c index 7ba1fc686..861e32188 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei8.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg6ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c index d796627e8..62daafe80 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei32.c index 9709aaa73..c91b23ae1 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei64.c index a31726995..e979d407e 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei8.c index bdf132c7e..8f82146b9 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei8.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg7ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: 
FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c index c45f19037..810b69125 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei32.c index cbe5def44..0c0eca097 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei64.c index 442a6821f..8e4eb6c80 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei8.c index 5504d6dd3..e36736b07 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei8.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vloxseg8ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlse16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlse16.c index 6f6ca153d..5942907a8 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlse16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlse16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlse32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlse32.c index 281cf2829..ba7f6c122 100644 --- 
a/auto-generated/policy_funcs/llvm-overloaded-tests/vlse32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlse32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlse64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlse64.c index 774d4ff6a..15d4d2137 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlse64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlse64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e16.c index bad1ff817..90a0735fa 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c index 0af913be8..6a266712e 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e32.c index f558c3de0..c3bcbe4a4 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e32ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e32ff.c index b80db315d..0cd7a7fbb 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e32ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v 
-target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e64.c index 66e67b8c0..b4b357079 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e64ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e64ff.c index 6fa9ac827..ee2988ab5 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e64ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e8ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e8ff.c index 075fe72de..b7920bb10 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e8ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg2e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e16.c index 26f14a608..68630705d 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c index 8dafe1b68..f73df02fe 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck 
--check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e32.c index 1bd7bbdeb..a4908ccd4 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e32ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e32ff.c index 8c92a0ce2..596cb62db 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e32ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e64.c index ed63c34cc..808d921b1 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e64ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e64ff.c index c3a49b399..0e32afca7 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e64ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e8ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e8ff.c index 793a75fd5..5679e02fc 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e8ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg3e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e16.c index 07ffe2ed9..8bd1880c0 100644 --- 
a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c index abb48b7a6..e31d52f53 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e32.c index ce0484ffc..d3c7a215a 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e32ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e32ff.c index dd673e4d8..1a677edaf 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e32ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e64.c index 8ed6f2a53..ff031cb79 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e64ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e64ff.c index 84dbf9886..490f8e50e 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e64ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 
-target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e8ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e8ff.c index fe0b21e61..4d06a608a 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e8ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg4e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e16.c index c10a479d6..a90f68010 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c index fa489e9cb..77acd1f59 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e32.c index f3cccdd2b..08caaa083 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e32ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e32ff.c index c12aaeff3..555dba2d4 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e32ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: 
FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e64.c index 8a1f4a2e0..1959c266e 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e64ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e64ff.c index 2f1b00880..007215bf4 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e64ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e8ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e8ff.c index 978e8bb0c..254b6ad74 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e8ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg5e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e16.c index 147ee0e06..1c5533746 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c index a3a9eb43f..bcd616b39 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e32.c index ee038c04f..ac212476c 100644 --- 
a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e32ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e32ff.c index 6e60793a1..e394a7018 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e32ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e64.c index 8b9f8d031..80885eaf7 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e64ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e64ff.c index 87fbefe64..693a01220 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e64ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e8ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e8ff.c index 13774adc9..ec2d2b477 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e8ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg6e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e16.c index 80e77ccc9..8dc5c325d 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 
-target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c index 44579d3c4..738b94ad7 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e32.c index c01fea22f..0fb9776ab 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e32ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e32ff.c index b6d825bf2..1050dab06 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e32ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e64.c index afa53e4ed..fcb433e79 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e64ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e64ff.c index 8f3e9f5c3..99bb3e073 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e64ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: 
FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e8ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e8ff.c index 7def83e8d..bc4edf438 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e8ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg7e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e16.c index e7fbced6b..9344c1426 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c index cf1d563d6..f84a3f990 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e16ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e32.c index 6a41f6295..27a08d539 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e32ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e32ff.c index 2ec917025..304c902fd 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e32ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e32ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e64.c index 0bc7decfb..d9d28bacf 100644 --- 
a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e64ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e64ff.c index c1e255a6e..918b4b8e7 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e64ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e64ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e8ff.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e8ff.c index 3d50031bb..feb9a5f1e 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e8ff.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlseg8e8ff.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c index a0e1396ab..570f2dbfe 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e32.c index b0bf3a7ca..03d192acd 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e64.c index d21d05482..6faddbcea 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg2e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 
-target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c index 94cf3b903..d80e838cf 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e32.c index 6ddda2f0d..ffe819f1e 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e64.c index 9ef6f640d..923934939 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg3e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c index 271a37d0e..651e263f5 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e32.c index 17a27d148..e8ab6dae8 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: 
FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e64.c index cb2b52177..c9d6bdfd6 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg4e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c index 57c3bc501..f3ffd10a1 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e32.c index 616d71d37..b14223d91 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e64.c index 1e09d5609..10e896cad 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg5e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c index 8665cc75a..7b33060b0 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e32.c index 01f488283..c757ab332 100644 --- 
a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e64.c index 4de71167f..13580c9b4 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg6e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c index 64565ce47..ac22bb90b 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e32.c index 9bfc6bd86..a5c204c42 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e64.c index 2d8537f74..696029072 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg7e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c index 4b461fc8e..e4f98578a 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 
-target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e32.c index 107afa848..927a0047d 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e64.c index 4b8ad78ef..6e6bd3f4b 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vlsseg8e64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei16.c index 899305333..aa6d6bb1c 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei32.c index 1292e0cd5..1baca4f94 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei64.c index 99204ac08..ca77cb40c 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck 
--check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei8.c index fa6508e05..10f1a1fd9 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei8.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c index feb68e37f..d8170c810 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei32.c index 7856827a9..adf15ee83 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei64.c index f349c8557..737faa8a8 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei8.c index f9b20297b..6872ecba1 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei8.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg2ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c index 12be9a2ef..309a72d9b 100644 --- 
a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei32.c index f0cc7728f..0a443d5a3 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei64.c index f133a43ae..b06a9b1d9 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei8.c index 15fee21d0..021e78325 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei8.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg3ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c index 0887b0777..4708aefb1 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei32.c index ac3e8fa87..e8d15d0be 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // 
RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei64.c index e8441c0fa..802437de9 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei8.c index da4921663..c7dd73cb8 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei8.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg4ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c index 75b9ff94a..772c92166 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei32.c index d6a1ce318..07ff1c519 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei64.c index d81903000..4e5891f03 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ 
// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei8.c index 0c810f109..44df0e290 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei8.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg5ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c index 408f1266d..7438ad5f9 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei32.c index 3cfb61b22..50452c32c 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei64.c index ce0bc0ba8..d1d5f6629 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei8.c index c2b84f581..52f7c8a88 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei8.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg6ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c 
b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c index 51f2a3de6..8c35f499f 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei32.c index 81e2baea3..b2f3a6411 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei32.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei64.c index 859395b33..8274b160f 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei8.c index a6e1f5e0b..2d0c4ec10 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei8.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg7ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c index fe54c48ed..bef231c8f 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei32.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei32.c index 6b1dce928..8a6cafe4c 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei32.c +++ 
b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei32.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei64.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei64.c index e9d3285d6..22b164947 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei64.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei64.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei8.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei8.c index 89aa06500..423527a7c 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei8.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vluxseg8ei8.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmacc.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmacc.c index 58a8328ce..a82504f14 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmacc.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmacc.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmadd.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmadd.c index b5e4ff356..fc2e9768d 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmadd.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmadd.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmerge.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmerge.c index afd49ee32..1dc48c162 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmerge.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmerge.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: 
-target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmfeq.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmfeq.c index 01b3634e3..acfa17f71 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmfeq.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmfeq.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmfge.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmfge.c index 5ab824dc0..367e75037 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmfge.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmfge.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmfgt.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmfgt.c index 9e131cf80..d16d9440a 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmfgt.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmfgt.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmfle.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmfle.c index a58fbf3c6..e77c6a7c4 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmfle.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmfle.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmflt.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmflt.c index a73ba48d5..4accb49b4 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmflt.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmflt.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmfne.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmfne.c index 96812be07..71d7b82db 100644 --- 
a/auto-generated/policy_funcs/llvm-overloaded-tests/vmfne.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmfne.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmseq.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmseq.c index bf82293e0..ff6da3992 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmseq.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmseq.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsge.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsge.c index 6af191473..0dd52c932 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsge.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsge.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgeu.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgeu.c index 45b51317b..f123546be 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgeu.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgeu.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgt.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgt.c index 7587e6854..e89a32bf6 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgt.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgt.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgtu.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgtu.c index 786b48b2a..529891165 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgtu.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsgtu.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ 
+// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsle.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsle.c index ee16b7151..fee6e23eb 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsle.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsle.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsleu.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsleu.c index 3be414c02..9b1d91935 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsleu.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsleu.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmslt.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmslt.c index 8ee4150dd..e17b3d3cd 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmslt.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmslt.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsltu.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsltu.c index 0c8c03ce5..e8e75648b 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsltu.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsltu.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsne.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsne.c index a55c04b4e..7ee879bd3 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vmsne.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmsne.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vmv.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vmv.c index 16a4e3bf1..5780f04d1 100644 --- 
a/auto-generated/policy_funcs/llvm-overloaded-tests/vmv.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vmv.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vneg.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vneg.c index c7701b57a..e2f9e7ab6 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vneg.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vneg.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vnmsac.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vnmsac.c index 22186e82a..4e569afac 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vnmsac.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vnmsac.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vnmsub.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vnmsub.c index 39372bbdf..0fe74eee1 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vnmsub.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vnmsub.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vrgather.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vrgather.c index cd49beac3..35cbde176 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vrgather.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vrgather.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vrgatherei16.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vrgatherei16.c index 202934e97..e6d490c4f 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vrgatherei16.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vrgatherei16.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature 
+experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vslidedown.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vslidedown.c index cff482041..f02c08159 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vslidedown.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vslidedown.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vslideup.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vslideup.c index 6b8627c36..d9a17dc8b 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vslideup.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vslideup.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vwmacc.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vwmacc.c index 36da79b9b..a3c628743 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vwmacc.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vwmacc.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccsu.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccsu.c index 5f62ac3e5..758aeecf0 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccsu.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccsu.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccu.c b/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccu.c index 4de89dc83..dcfae3248 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccu.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccu.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccus.c 
b/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccus.c index 6f0468ccf..61fbd21bb 100644 --- a/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccus.c +++ b/auto-generated/policy_funcs/llvm-overloaded-tests/vwmaccus.c @@ -1,6 +1,6 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v -target-feature +zfh \ -// RUN: -target-feature +experimental-zvfh -disable-O0-optnone \ +// RUN: -target-feature +zvfh -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s From 04abd6fef1a3945caac275882717f578c2b3af4c Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Mon, 9 Sep 2024 23:33:34 -0700 Subject: [PATCH 130/151] github: update action version for generator - setup-python@v5 - checkout@v4 Signed-off-by: Jerry Zhang Jian --- .github/workflows/generator.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/generator.yml b/.github/workflows/generator.yml index c00aba842..c1c2b9aff 100644 --- a/.github/workflows/generator.yml +++ b/.github/workflows/generator.yml @@ -15,9 +15,9 @@ jobs: matrix: python-version: ["3.9", "3.10", "3.11"] steps: - - uses: actions/checkout@v3 + - uses: actions/checkout@v4 - name: Set up Python ${{ matrix.python-version }} - uses: actions/setup-python@v3 + uses: actions/setup-python@v5 with: python-version: ${{ matrix.python-version }} - name: Prerequisites From ece401e090f8eda2bbd2167447902d3e7d31b0b5 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Mon, 9 Sep 2024 23:36:14 -0700 Subject: [PATCH 131/151] github: update action version for gcc-compilation - checkout@v4 Signed-off-by: Jerry Zhang Jian --- .github/workflows/gcc-compilation.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/gcc-compilation.yml b/.github/workflows/gcc-compilation.yml index 211591c24..028c80633 100644 --- a/.github/workflows/gcc-compilation.yml +++ b/.github/workflows/gcc-compilation.yml @@ -10,7 +10,7 @@ jobs: build: runs-on: ubuntu-latest steps: - - uses: actions/checkout@v3 + - uses: actions/checkout@v4 - name: Prerequisites run: sudo apt-get install autoconf automake autotools-dev curl python3 python3-pip From 6ee26d7ee2cba1e9d8a2ac5ecf6353db87854896 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Mon, 9 Sep 2024 23:37:20 -0700 Subject: [PATCH 132/151] github: update action version for clang-compilation - checkout@v4 Signed-off-by: Jerry Zhang Jian --- .github/workflows/clang-compilation.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/clang-compilation.yml b/.github/workflows/clang-compilation.yml index decd40140..e2de8c930 100644 --- a/.github/workflows/clang-compilation.yml +++ b/.github/workflows/clang-compilation.yml @@ -5,7 +5,7 @@ jobs: build: runs-on: ubuntu-latest steps: - - uses: actions/checkout@v3 + - uses: actions/checkout@v4 - name: Prerequisites run: | sudo apt-get install autoconf automake autotools-dev curl python3 python3-pip libmpc-dev libmpfr-dev libgmp-dev gawk build-essential bison flex texinfo gperf libtool patchutils bc zlib1g-dev libexpat-dev ninja-build git cmake libglib2.0-dev dejagnu From 97cb963152319373117058691c0902bba6e6259b Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Mon, 9 Sep 2024 23:51:31 -0700 Subject: [PATCH 133/151] github: update action version for build-pdf - upload-artifact@v4 - download-artifact@v4 Signed-off-by: Jerry Zhang Jian --- .github/workflows/build-pdf.yml | 4 ++-- 1 file changed, 2 
insertions(+), 2 deletions(-)

diff --git a/.github/workflows/build-pdf.yml b/.github/workflows/build-pdf.yml
index b0a75da6f..168bcfcef 100644
--- a/.github/workflows/build-pdf.yml
+++ b/.github/workflows/build-pdf.yml
@@ -25,7 +25,7 @@ jobs:
         run: make -C doc build
 
       - name: Upload artifact
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v4
         with:
           name: v-intrinsic-spec.pdf
           path: doc/v-intrinsic-spec.pdf
@@ -73,7 +73,7 @@
     runs-on: ubuntu-latest
     steps:
       - name: Download artifact
-        uses: actions/download-artifact@v2
+        uses: actions/download-artifact@v4
        with:
           name: v-intrinsic-spec.pdf
           path: ./doc

From 01db6b13dae2122ba146e082db155449e6c3630e Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Thu, 12 Sep 2024 02:38:07 -0700
Subject: [PATCH 134/151] [NFC] Put bfloat type into constants

Signed-off-by: Jerry Zhang Jian
---
 .../rvv_intrinsic_gen/bfloat16_inst.py | 57 ++++++++++---------
 .../rvv_intrinsic_gen/constants.py     |  1 +
 2 files changed, 30 insertions(+), 28 deletions(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py
index 7f4a79b3d..9ab9918ef 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py
@@ -31,11 +31,10 @@
 from templates import misc_op_template
 from templates import cvt_op_template
 from templates import mac_template
-from constants import LMULS, WLMULS, NCVTLMULS
+from constants import LMULS, WLMULS, NCVTLMULS, BFTYPES
 
 SEWS = [16]
 NSEWS = [32]
-TYPES = ["bfloat"]
 
 
 def gen(g):
@@ -47,33 +46,33 @@ def gen(g):
   g.start_group("BFloat16 Vector Loads and Stores Intrinsics")
 
   g.function_group(load_template, "Vector Unit-Stride Load Intrinsics",
-                   "bf16-vector-unit-stride-load", ["vle"], TYPES, SEWS, LMULS,
-                   decorators.has_masking_maskedoff_policy)
+                   "bf16-vector-unit-stride-load", ["vle"], BFTYPES, SEWS,
+                   LMULS, decorators.has_masking_maskedoff_policy)
 
   g.function_group(store_template, "Vector Unit-Stride Store Intrinsics",
-                   "bf16-vector-unit-stride-store", ["vse"], TYPES, SEWS, LMULS,
-                   decorators.has_masking_no_maskedoff)
+                   "bf16-vector-unit-stride-store", ["vse"], BFTYPES, SEWS,
+                   LMULS, decorators.has_masking_no_maskedoff)
 
   g.function_group(load_template, "Vector Strided Load Intrinsics",
-                   "vector-strided-load", ["vlse"], TYPES, SEWS, LMULS,
+                   "vector-strided-load", ["vlse"], BFTYPES, SEWS, LMULS,
                    decorators.has_masking_maskedoff_policy)
 
   g.function_group(store_template, "Vector Strided Store Intrinsics",
-                   "vector-strided-store", ["vsse"], TYPES, SEWS, LMULS,
+                   "vector-strided-store", ["vsse"], BFTYPES, SEWS, LMULS,
                    decorators.has_masking_no_maskedoff)
 
   g.function_group(load_template, "Vector Indexed Load Intrinsics",
-                   "vector-indexed-load", ["vloxei", "vluxei"], TYPES, SEWS,
+                   "vector-indexed-load", ["vloxei", "vluxei"], BFTYPES, SEWS,
                    LMULS, decorators.has_masking_maskedoff_policy)
 
   g.function_group(store_template, "Vector Indexed Store Intrinsics",
-                   "vector-indexed-store", ["vsoxei", "vsuxei"], TYPES, SEWS,
+                   "vector-indexed-store", ["vsoxei", "vsuxei"], BFTYPES, SEWS,
                    LMULS, decorators.has_masking_no_maskedoff)
 
   g.function_group(load_template,
                    "Unit-stride Fault-Only-First Loads Intrinsics",
-                   "unit-stride-fault-only-first-loads", ["vleff"], TYPES, SEWS,
-                   LMULS, decorators.has_masking_maskedoff_policy)
+                   "unit-stride-fault-only-first-loads", ["vleff"], BFTYPES,
+                   SEWS, LMULS, decorators.has_masking_maskedoff_policy)
 
   ####################################################################
   g.start_group("BFloat16 Vector Loads and Stores Segment Intrinsics")
@@ -81,30 +80,32 @@ def gen(g):
   g.function_group(seg_load_template,
                    "Vector Unit-Stride Segment Load Intrinsics",
                    "vector-unit-stride-segment-load", ["vlseg", "vlsegff"],
-                   TYPES, SEWS, LMULS, decorators.has_masking_maskedoff_policy)
+                   BFTYPES, SEWS, LMULS,
+                   decorators.has_masking_maskedoff_policy)
 
   g.function_group(seg_store_template,
                    "Vector Unit-Stride Segment Store Intrinsics",
-                   "vecrtor-unit-stride-segment-store", ["vsseg"], TYPES, SEWS,
-                   LMULS, decorators.has_masking_no_maskedoff)
+                   "vecrtor-unit-stride-segment-store", ["vsseg"], BFTYPES,
+                   SEWS, LMULS, decorators.has_masking_no_maskedoff)
 
   g.function_group(seg_load_template, "Vector Strided Segment Load Intrinsics",
-                   "vector-strided-segment-load", ["vlsseg"], TYPES, SEWS,
+                   "vector-strided-segment-load", ["vlsseg"], BFTYPES, SEWS,
                    LMULS, decorators.has_masking_maskedoff_policy)
 
   g.function_group(seg_store_template,
                    "Vector Strided Segment Store Intrinsics",
-                   "vector-strided-segment-store", ["vssseg"], TYPES, SEWS,
+                   "vector-strided-segment-store", ["vssseg"], BFTYPES, SEWS,
                    LMULS, decorators.has_masking_no_maskedoff)
 
   g.function_group(seg_load_template,
                    "Vector Indexed Segment Load Intrinsics",
-                   "vector-indexed-segment-load", ["vloxseg", "vluxseg"], TYPES,
-                   SEWS, LMULS, decorators.has_masking_maskedoff_policy)
+                   "vector-indexed-segment-load", ["vloxseg", "vluxseg"],
+                   BFTYPES, SEWS, LMULS,
+                   decorators.has_masking_maskedoff_policy)
 
   g.function_group(seg_store_template,
                    "Vector Indexed Segment Store Intrinsics",
                    "vector-indexed-segment-store", ["vsoxseg", "vsuxseg"],
-                   TYPES, SEWS, LMULS, decorators.has_masking_no_maskedoff)
+                   BFTYPES, SEWS, LMULS, decorators.has_masking_no_maskedoff)
 
   ####################################################################
   g.start_group("BFloat16 Convert Intrinsics")
@@ -123,7 +124,7 @@ def gen(g):
 
   g.function_group(mac_template,
                    "Vector Widening Multiply-Accumulate Intrinsics",
-                   "bf16-widening-multiply-accumulate", ["wmaccbf16"], TYPES,
+                   "bf16-widening-multiply-accumulate", ["wmaccbf16"], BFTYPES,
                    SEWS, WLMULS, decorators.has_masking_no_maskedoff_policy_frm)
 
   ####################################################################
@@ -134,27 +135,27 @@ def gen(g):
                    SEWS, LMULS, decorators.has_no_masking)
 
   g.function_group(misc_op_template, "Vector LMUL Extension Intrinsics",
-                   "vector-lmul-extensionn", ["vlmul_ext_v"], TYPES, SEWS,
+                   "vector-lmul-extensionn", ["vlmul_ext_v"], BFTYPES, SEWS,
                    LMULS, decorators.has_no_masking)
 
   g.function_group(misc_op_template, "Vector LMUL Truncation Intrinsics",
-                   "vector-lmul-truncation", ["vlmul_trunc_v"], TYPES, SEWS,
+                   "vector-lmul-truncation", ["vlmul_trunc_v"], BFTYPES, SEWS,
                    LMULS, decorators.has_no_masking)
 
   g.function_group(misc_op_template, "Vector Initialization Intrinsics",
-                   "vector-initialization", ["vundefined"], TYPES, SEWS, LMULS,
-                   decorators.has_no_masking)
+                   "vector-initialization", ["vundefined"], BFTYPES, SEWS,
+                   LMULS, decorators.has_no_masking)
 
   g.function_group(get_set_diff_lmul_op_template, "Vector Insertion Intrinsics",
-                   "vector-insertion", ["vset"], TYPES, SEWS, LMULS,
+                   "vector-insertion", ["vset"], BFTYPES, SEWS, LMULS,
                    decorators.has_no_masking)
 
   g.function_group(get_set_diff_lmul_op_template,
                    "Vector Extraction Intrinsics", "vector-extraction",
-                   ["vget"], TYPES, SEWS, LMULS, decorators.has_no_masking)
+                   ["vget"], BFTYPES, SEWS, LMULS, decorators.has_no_masking)
 
   g.function_group(misc_op_template, "Vector Creation Intrinsics",
-                   "vector-creation", ["vcreate"], TYPES, SEWS, LMULS,
+                   "vector-creation", ["vcreate"], BFTYPES, SEWS, LMULS,
                    decorators.has_no_masking)
 
   ####################################################################
diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/constants.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/constants.py
index 5d3f20c6c..0895181eb 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/constants.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/constants.py
@@ -30,5 +30,6 @@
 ITYPES = ["int", "uint"]
 UITYPE = ["uint"]
 FTYPES = ["float"]
+BFTYPES = ["bfloat"]
 MTYPES = ["bool"]
 MLENS = [1, 2, 4, 8, 16, 32, 64]

From 39926314598b7ff3b45e73914f4d1b84d44aacc2 Mon Sep 17 00:00:00 2001
From: Brandon Wu
Date: Fri, 2 Aug 2024 04:34:45 -0700
Subject: [PATCH 135/151] Support vmv.v.v, vfmv.v.f and vmerge.vvm for bf16
 with `zvfbfmin`

---
 .../rvv_intrinsic_gen/bfloat16_inst.py | 8 ++++++++
 .../templates/unary_op_template.py     | 9 +++++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py
index 9ab9918ef..dbbd92cad 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py
@@ -29,6 +29,7 @@
 from templates import reint_op_template
 from templates import get_set_diff_lmul_op_template
 from templates import misc_op_template
+from templates import unary_op_template
 from templates import cvt_op_template
 from templates import mac_template
 from constants import LMULS, WLMULS, NCVTLMULS, BFTYPES
@@ -126,6 +127,13 @@ def gen(g):
                    "Vector Widening Multiply-Accumulate Intrinsics",
                    "bf16-widening-multiply-accumulate", ["wmaccbf16"], BFTYPES,
                    SEWS, WLMULS, decorators.has_masking_no_maskedoff_policy_frm)
+  g.function_group(unary_op_template, "Vector BFloat16 Move Intrinsics",
+                   "vector-bf16-move", ["mv"], BFTYPES, SEWS, LMULS,
+                   decorators.has_no_masking_policy)
+
+  g.function_group(unary_op_template, "Vector BFloat16 Merge Intrinsics",
+                   "vector-bf16-merge", ["merge"], BFTYPES, SEWS, LMULS,
+                   decorators.has_no_masking_policy)
 
   ####################################################################
   g.start_group("BFloat16 Miscellaneous Vector Utility Intrinsics")
diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py
index c0eef1f0f..de3515061 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py
@@ -49,7 +49,7 @@ def render(G,
     if op in ["zext", "sext"]:
       break
 
-    if data_type == "float":
+    if data_type == "float" or data_type == "bfloat":
       args["S_TYPE"] = "f"
       args["OP"] = "f" + args["OP"]
       inst_type_vvsm = InstType.VVFM
@@ -91,7 +91,8 @@ def render(G,
       # for float type, accrdoing current naming scheming it
       # should be vmv_v_v, same for vmerge.vvm.
       vv_args = args
-      if data_type == "float" and op in ["mv", "merge"]:
+      if (data_type == "float" or
+          data_type == "bfloat") and op in ["mv", "merge"]:
         vv_args = copy.deepcopy(args)
         vv_args["OP"] = "v" + op
 
@@ -111,6 +112,10 @@ def render(G,
           vs1=type_helper.v,
           v0=type_helper.m,
           vl=type_helper.size_t)
+
+      if data_type == "bfloat":
+        continue
+
       G.func(
           inst_info_vvsm,
           name="{OP}_v{S_TYPE}m_{TYPE}{SEW}m{LMUL}".format_map(args) +

From a609a54cec77ff990fbecb96ab18e0a867451fca Mon Sep 17 00:00:00 2001
From: Brandon Wu
Date: Fri, 2 Aug 2024 04:38:27 -0700
Subject: [PATCH 136/151] [Auto-gen] Update bfloat16 tests under
 ../auto-generated. (make git-commit-autogen-bf16-test)

---
 auto-generated/bfloat16/api-testing/vfmv.c    | 26 +++++++++++
 auto-generated/bfloat16/api-testing/vmerge.c  | 32 ++++++++++++++
 auto-generated/bfloat16/api-testing/vmv.c     | 26 +++++++++++
 auto-generated/bfloat16/llvm-api-tests/vfmv.c | 32 ++++++++++++++
 .../bfloat16/llvm-api-tests/vmerge.c          | 38 ++++++++++++++++
 auto-generated/bfloat16/llvm-api-tests/vmv.c  | 32 ++++++++++++++
 .../bfloat16/llvm-overloaded-tests/vmerge.c   | 38 ++++++++++++++++
 .../bfloat16/llvm-overloaded-tests/vmv.c      | 32 ++++++++++++++
 .../bfloat16/overloaded-api-testing/vmerge.c  | 32 ++++++++++++++
 .../bfloat16/overloaded-api-testing/vmv.c     | 26 +++++++++++
 .../bfloat16/policy_funcs/api-testing/vfmv.c  | 28 ++++++++++++
 .../policy_funcs/api-testing/vmerge.c         | 38 ++++++++++++++++
 .../bfloat16/policy_funcs/api-testing/vmv.c   | 32 ++++++++++++++
 .../policy_funcs/llvm-api-tests/vfmv.c        | 32 ++++++++++++++
 .../policy_funcs/llvm-api-tests/vmerge.c      | 32 ++++++++++++++
 .../policy_funcs/llvm-api-tests/vmv.c         | 32 ++++++++++++++
 .../policy_funcs/llvm-overloaded-tests/vfmv.c | 34 ++++++++++++++
 .../llvm-overloaded-tests/vmerge.c            | 44 +++++++++++++++++++
 .../policy_funcs/llvm-overloaded-tests/vmv.c  | 38 ++++++++++++++++
 .../overloaded-api-testing/vfmv.c             | 28 ++++++++++++
 .../overloaded-api-testing/vmerge.c           | 38 ++++++++++++++++
 .../policy_funcs/overloaded-api-testing/vmv.c | 32 ++++++++++++++
 22 files changed, 722 insertions(+)
 create mode 100644 auto-generated/bfloat16/api-testing/vfmv.c
 create mode 100644 auto-generated/bfloat16/api-testing/vmerge.c
 create mode 100644 auto-generated/bfloat16/api-testing/vmv.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vfmv.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vmerge.c
 create mode 100644 auto-generated/bfloat16/llvm-api-tests/vmv.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vmerge.c
 create mode 100644 auto-generated/bfloat16/llvm-overloaded-tests/vmv.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vmerge.c
 create mode 100644 auto-generated/bfloat16/overloaded-api-testing/vmv.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vfmv.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vmerge.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vmv.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmerge.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmv.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmerge.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmv.c
 create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfmv.c
create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vmerge.c create mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vmv.c diff --git a/auto-generated/bfloat16/api-testing/vfmv.c b/auto-generated/bfloat16/api-testing/vfmv.c new file mode 100644 index 000000000..91a330cda --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vfmv.c @@ -0,0 +1,26 @@ +#include +#include + +vbfloat16mf4_t test_vfmv_v_f_bf16mf4(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16mf4(rs1, vl); +} + +vbfloat16mf2_t test_vfmv_v_f_bf16mf2(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16mf2(rs1, vl); +} + +vbfloat16m1_t test_vfmv_v_f_bf16m1(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m1(rs1, vl); +} + +vbfloat16m2_t test_vfmv_v_f_bf16m2(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m2(rs1, vl); +} + +vbfloat16m4_t test_vfmv_v_f_bf16m4(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m4(rs1, vl); +} + +vbfloat16m8_t test_vfmv_v_f_bf16m8(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m8(rs1, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vmerge.c b/auto-generated/bfloat16/api-testing/vmerge.c new file mode 100644 index 000000000..871f44021 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vmerge.c @@ -0,0 +1,32 @@ +#include +#include + +vbfloat16mf4_t test_vmerge_vvm_bf16mf4(vbfloat16mf4_t vs2, vbfloat16mf4_t vs1, + vbool64_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16mf4(vs2, vs1, v0, vl); +} + +vbfloat16mf2_t test_vmerge_vvm_bf16mf2(vbfloat16mf2_t vs2, vbfloat16mf2_t vs1, + vbool32_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16mf2(vs2, vs1, v0, vl); +} + +vbfloat16m1_t test_vmerge_vvm_bf16m1(vbfloat16m1_t vs2, vbfloat16m1_t vs1, + vbool16_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m1(vs2, vs1, v0, vl); +} + +vbfloat16m2_t test_vmerge_vvm_bf16m2(vbfloat16m2_t vs2, vbfloat16m2_t vs1, + vbool8_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m2(vs2, vs1, v0, vl); +} + +vbfloat16m4_t test_vmerge_vvm_bf16m4(vbfloat16m4_t vs2, vbfloat16m4_t vs1, + vbool4_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m4(vs2, vs1, v0, vl); +} + +vbfloat16m8_t test_vmerge_vvm_bf16m8(vbfloat16m8_t vs2, vbfloat16m8_t vs1, + vbool2_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m8(vs2, vs1, v0, vl); +} diff --git a/auto-generated/bfloat16/api-testing/vmv.c b/auto-generated/bfloat16/api-testing/vmv.c new file mode 100644 index 000000000..a9f0cfb12 --- /dev/null +++ b/auto-generated/bfloat16/api-testing/vmv.c @@ -0,0 +1,26 @@ +#include +#include + +vbfloat16mf4_t test_vmv_v_v_bf16mf4(vbfloat16mf4_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16mf4(vs1, vl); +} + +vbfloat16mf2_t test_vmv_v_v_bf16mf2(vbfloat16mf2_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16mf2(vs1, vl); +} + +vbfloat16m1_t test_vmv_v_v_bf16m1(vbfloat16m1_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m1(vs1, vl); +} + +vbfloat16m2_t test_vmv_v_v_bf16m2(vbfloat16m2_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m2(vs1, vl); +} + +vbfloat16m4_t test_vmv_v_v_bf16m4(vbfloat16m4_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m4(vs1, vl); +} + +vbfloat16m8_t test_vmv_v_v_bf16m8(vbfloat16m8_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m8(vs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vfmv.c b/auto-generated/bfloat16/llvm-api-tests/vfmv.c new file mode 100644 index 000000000..4aa30e018 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vfmv.c @@ -0,0 +1,32 @@ +// REQUIRES: riscv-registered-target +// RUN: 
%clang_cc1 -triple riscv64 -target-feature +v \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vfmv_v_f_bf16mf4(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16mf4(rs1, vl); +} + +vbfloat16mf2_t test_vfmv_v_f_bf16mf2(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16mf2(rs1, vl); +} + +vbfloat16m1_t test_vfmv_v_f_bf16m1(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m1(rs1, vl); +} + +vbfloat16m2_t test_vfmv_v_f_bf16m2(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m2(rs1, vl); +} + +vbfloat16m4_t test_vfmv_v_f_bf16m4(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m4(rs1, vl); +} + +vbfloat16m8_t test_vfmv_v_f_bf16m8(__bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m8(rs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vmerge.c b/auto-generated/bfloat16/llvm-api-tests/vmerge.c new file mode 100644 index 000000000..87dd321c9 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vmerge.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vmerge_vvm_bf16mf4(vbfloat16mf4_t vs2, vbfloat16mf4_t vs1, + vbool64_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16mf4(vs2, vs1, v0, vl); +} + +vbfloat16mf2_t test_vmerge_vvm_bf16mf2(vbfloat16mf2_t vs2, vbfloat16mf2_t vs1, + vbool32_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16mf2(vs2, vs1, v0, vl); +} + +vbfloat16m1_t test_vmerge_vvm_bf16m1(vbfloat16m1_t vs2, vbfloat16m1_t vs1, + vbool16_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m1(vs2, vs1, v0, vl); +} + +vbfloat16m2_t test_vmerge_vvm_bf16m2(vbfloat16m2_t vs2, vbfloat16m2_t vs1, + vbool8_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m2(vs2, vs1, v0, vl); +} + +vbfloat16m4_t test_vmerge_vvm_bf16m4(vbfloat16m4_t vs2, vbfloat16m4_t vs1, + vbool4_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m4(vs2, vs1, v0, vl); +} + +vbfloat16m8_t test_vmerge_vvm_bf16m8(vbfloat16m8_t vs2, vbfloat16m8_t vs1, + vbool2_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m8(vs2, vs1, v0, vl); +} diff --git a/auto-generated/bfloat16/llvm-api-tests/vmv.c b/auto-generated/bfloat16/llvm-api-tests/vmv.c new file mode 100644 index 000000000..0e059a186 --- /dev/null +++ b/auto-generated/bfloat16/llvm-api-tests/vmv.c @@ -0,0 +1,32 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vmv_v_v_bf16mf4(vbfloat16mf4_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16mf4(vs1, vl); +} + +vbfloat16mf2_t test_vmv_v_v_bf16mf2(vbfloat16mf2_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16mf2(vs1, vl); +} + +vbfloat16m1_t test_vmv_v_v_bf16m1(vbfloat16m1_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m1(vs1, vl); +} + +vbfloat16m2_t test_vmv_v_v_bf16m2(vbfloat16m2_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m2(vs1, vl); +} + 
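// Editor's illustration, not rvv_intrinsic_gen output: one way the move
// intrinsic above composes with the bf16 load/store intrinsics defined
// earlier in this series. Strip-mined copy; assumes only V plus Zvfbfmin.
void bf16_copy_example(const __bf16 *src, __bf16 *dst, size_t n) {
  for (size_t vl; n > 0; n -= vl, src += vl, dst += vl) {
    vl = __riscv_vsetvl_e16m1(n);
    vbfloat16m1_t v = __riscv_vle16_v_bf16m1(src, vl);
    // vmv.v.v copies the first vl elements of v into the result register.
    __riscv_vse16_v_bf16m1(dst, __riscv_vmv_v_v_bf16m1(v, vl), vl);
  }
}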
+vbfloat16m4_t test_vmv_v_v_bf16m4(vbfloat16m4_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m4(vs1, vl); +} + +vbfloat16m8_t test_vmv_v_v_bf16m8(vbfloat16m8_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m8(vs1, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vmerge.c b/auto-generated/bfloat16/llvm-overloaded-tests/vmerge.c new file mode 100644 index 000000000..b594ab655 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vmerge.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vmerge_vvm_bf16mf4(vbfloat16mf4_t vs2, vbfloat16mf4_t vs1, + vbool64_t v0, size_t vl) { + return __riscv_vmerge(vs2, vs1, v0, vl); +} + +vbfloat16mf2_t test_vmerge_vvm_bf16mf2(vbfloat16mf2_t vs2, vbfloat16mf2_t vs1, + vbool32_t v0, size_t vl) { + return __riscv_vmerge(vs2, vs1, v0, vl); +} + +vbfloat16m1_t test_vmerge_vvm_bf16m1(vbfloat16m1_t vs2, vbfloat16m1_t vs1, + vbool16_t v0, size_t vl) { + return __riscv_vmerge(vs2, vs1, v0, vl); +} + +vbfloat16m2_t test_vmerge_vvm_bf16m2(vbfloat16m2_t vs2, vbfloat16m2_t vs1, + vbool8_t v0, size_t vl) { + return __riscv_vmerge(vs2, vs1, v0, vl); +} + +vbfloat16m4_t test_vmerge_vvm_bf16m4(vbfloat16m4_t vs2, vbfloat16m4_t vs1, + vbool4_t v0, size_t vl) { + return __riscv_vmerge(vs2, vs1, v0, vl); +} + +vbfloat16m8_t test_vmerge_vvm_bf16m8(vbfloat16m8_t vs2, vbfloat16m8_t vs1, + vbool2_t v0, size_t vl) { + return __riscv_vmerge(vs2, vs1, v0, vl); +} diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vmv.c b/auto-generated/bfloat16/llvm-overloaded-tests/vmv.c new file mode 100644 index 000000000..8fb01aa64 --- /dev/null +++ b/auto-generated/bfloat16/llvm-overloaded-tests/vmv.c @@ -0,0 +1,32 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vmv_v_v_bf16mf4(vbfloat16mf4_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} + +vbfloat16mf2_t test_vmv_v_v_bf16mf2(vbfloat16mf2_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} + +vbfloat16m1_t test_vmv_v_v_bf16m1(vbfloat16m1_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} + +vbfloat16m2_t test_vmv_v_v_bf16m2(vbfloat16m2_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} + +vbfloat16m4_t test_vmv_v_v_bf16m4(vbfloat16m4_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} + +vbfloat16m8_t test_vmv_v_v_bf16m8(vbfloat16m8_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} diff --git a/auto-generated/bfloat16/overloaded-api-testing/vmerge.c b/auto-generated/bfloat16/overloaded-api-testing/vmerge.c new file mode 100644 index 000000000..6f5617436 --- /dev/null +++ b/auto-generated/bfloat16/overloaded-api-testing/vmerge.c @@ -0,0 +1,32 @@ +#include +#include + +vbfloat16mf4_t test_vmerge_vvm_bf16mf4(vbfloat16mf4_t vs2, vbfloat16mf4_t vs1, + vbool64_t v0, size_t vl) { + return __riscv_vmerge(vs2, vs1, v0, vl); +} + +vbfloat16mf2_t test_vmerge_vvm_bf16mf2(vbfloat16mf2_t vs2, vbfloat16mf2_t vs1, + vbool32_t v0, size_t vl) { + return __riscv_vmerge(vs2, 
vs1, v0, vl); +} + +vbfloat16m1_t test_vmerge_vvm_bf16m1(vbfloat16m1_t vs2, vbfloat16m1_t vs1, + vbool16_t v0, size_t vl) { + return __riscv_vmerge(vs2, vs1, v0, vl); +} + +vbfloat16m2_t test_vmerge_vvm_bf16m2(vbfloat16m2_t vs2, vbfloat16m2_t vs1, + vbool8_t v0, size_t vl) { + return __riscv_vmerge(vs2, vs1, v0, vl); +} + +vbfloat16m4_t test_vmerge_vvm_bf16m4(vbfloat16m4_t vs2, vbfloat16m4_t vs1, + vbool4_t v0, size_t vl) { + return __riscv_vmerge(vs2, vs1, v0, vl); +} + +vbfloat16m8_t test_vmerge_vvm_bf16m8(vbfloat16m8_t vs2, vbfloat16m8_t vs1, + vbool2_t v0, size_t vl) { + return __riscv_vmerge(vs2, vs1, v0, vl); +} diff --git a/auto-generated/bfloat16/overloaded-api-testing/vmv.c b/auto-generated/bfloat16/overloaded-api-testing/vmv.c new file mode 100644 index 000000000..0c227a944 --- /dev/null +++ b/auto-generated/bfloat16/overloaded-api-testing/vmv.c @@ -0,0 +1,26 @@ +#include +#include + +vbfloat16mf4_t test_vmv_v_v_bf16mf4(vbfloat16mf4_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} + +vbfloat16mf2_t test_vmv_v_v_bf16mf2(vbfloat16mf2_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} + +vbfloat16m1_t test_vmv_v_v_bf16m1(vbfloat16m1_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} + +vbfloat16m2_t test_vmv_v_v_bf16m2(vbfloat16m2_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} + +vbfloat16m4_t test_vmv_v_v_bf16m4(vbfloat16m4_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} + +vbfloat16m8_t test_vmv_v_v_bf16m8(vbfloat16m8_t vs1, size_t vl) { + return __riscv_vmv_v(vs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vfmv.c b/auto-generated/bfloat16/policy_funcs/api-testing/vfmv.c new file mode 100644 index 000000000..60bf77dec --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vfmv.c @@ -0,0 +1,28 @@ +#include +#include + +vbfloat16mf4_t test_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1, + size_t vl) { + return __riscv_vfmv_v_f_bf16mf4_tu(vd, rs1, vl); +} + +vbfloat16mf2_t test_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1, + size_t vl) { + return __riscv_vfmv_v_f_bf16mf2_tu(vd, rs1, vl); +} + +vbfloat16m1_t test_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m1_tu(vd, rs1, vl); +} + +vbfloat16m2_t test_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m2_tu(vd, rs1, vl); +} + +vbfloat16m4_t test_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m4_tu(vd, rs1, vl); +} + +vbfloat16m8_t test_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m8_tu(vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vmerge.c b/auto-generated/bfloat16/policy_funcs/api-testing/vmerge.c new file mode 100644 index 000000000..9e28ee542 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vmerge.c @@ -0,0 +1,38 @@ +#include +#include + +vbfloat16mf4_t test_vmerge_vvm_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs2, + vbfloat16mf4_t vs1, vbool64_t v0, + size_t vl) { + return __riscv_vmerge_vvm_bf16mf4_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16mf2_t test_vmerge_vvm_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs2, + vbfloat16mf2_t vs1, vbool32_t v0, + size_t vl) { + return __riscv_vmerge_vvm_bf16mf2_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m1_t test_vmerge_vvm_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs2, + vbfloat16m1_t vs1, vbool16_t v0, + size_t vl) { + return __riscv_vmerge_vvm_bf16m1_tu(vd, vs2, vs1, v0, vl); +} + 
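// Editor's illustration, not generated output: because the _tu variant
// leaves lanes at indices >= vl undisturbed in vd, passing the accumulator
// as both vd and vs2 updates only the mask-set lanes below vl and keeps
// everything else, including the tail, intact.
vbfloat16m1_t vmerge_tu_example(vbfloat16m1_t acc, vbfloat16m1_t newval,
                                vbool16_t take_new, size_t vl) {
  return __riscv_vmerge_vvm_bf16m1_tu(acc, acc, newval, take_new, vl);
}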
+vbfloat16m2_t test_vmerge_vvm_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs2, + vbfloat16m2_t vs1, vbool8_t v0, + size_t vl) { + return __riscv_vmerge_vvm_bf16m2_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m4_t test_vmerge_vvm_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs2, + vbfloat16m4_t vs1, vbool4_t v0, + size_t vl) { + return __riscv_vmerge_vvm_bf16m4_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m8_t test_vmerge_vvm_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs2, + vbfloat16m8_t vs1, vbool2_t v0, + size_t vl) { + return __riscv_vmerge_vvm_bf16m8_tu(vd, vs2, vs1, v0, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vmv.c b/auto-generated/bfloat16/policy_funcs/api-testing/vmv.c new file mode 100644 index 000000000..c1bb53556 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/api-testing/vmv.c @@ -0,0 +1,32 @@ +#include +#include + +vbfloat16mf4_t test_vmv_v_v_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1, + size_t vl) { + return __riscv_vmv_v_v_bf16mf4_tu(vd, vs1, vl); +} + +vbfloat16mf2_t test_vmv_v_v_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1, + size_t vl) { + return __riscv_vmv_v_v_bf16mf2_tu(vd, vs1, vl); +} + +vbfloat16m1_t test_vmv_v_v_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1, + size_t vl) { + return __riscv_vmv_v_v_bf16m1_tu(vd, vs1, vl); +} + +vbfloat16m2_t test_vmv_v_v_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1, + size_t vl) { + return __riscv_vmv_v_v_bf16m2_tu(vd, vs1, vl); +} + +vbfloat16m4_t test_vmv_v_v_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1, + size_t vl) { + return __riscv_vmv_v_v_bf16m4_tu(vd, vs1, vl); +} + +vbfloat16m8_t test_vmv_v_v_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1, + size_t vl) { + return __riscv_vmv_v_v_bf16m8_tu(vd, vs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c new file mode 100644 index 000000000..a4d82c885 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c @@ -0,0 +1,32 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16mf4_tu(vd, rs1, vl); +} + +vbfloat16mf2_t test_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16mf2_tu(vd, rs1, vl); +} + +vbfloat16m1_t test_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m1_tu(vd, rs1, vl); +} + +vbfloat16m2_t test_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m2_tu(vd, rs1, vl); +} + +vbfloat16m4_t test_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m4_tu(vd, rs1, vl); +} + +vbfloat16m8_t test_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_f_bf16m8_tu(vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmerge.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmerge.c new file mode 100644 index 000000000..e38fd03ab --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmerge.c @@ -0,0 +1,32 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ 
+// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vmerge_vvm_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs2, vbfloat16mf4_t vs1, vbool64_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16mf4_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16mf2_t test_vmerge_vvm_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs2, vbfloat16mf2_t vs1, vbool32_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16mf2_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m1_t test_vmerge_vvm_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs2, vbfloat16m1_t vs1, vbool16_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m1_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m2_t test_vmerge_vvm_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs2, vbfloat16m2_t vs1, vbool8_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m2_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m4_t test_vmerge_vvm_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs2, vbfloat16m4_t vs1, vbool4_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m4_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m8_t test_vmerge_vvm_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs2, vbfloat16m8_t vs1, vbool2_t v0, size_t vl) { + return __riscv_vmerge_vvm_bf16m8_tu(vd, vs2, vs1, v0, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmv.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmv.c new file mode 100644 index 000000000..404840847 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmv.c @@ -0,0 +1,32 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vmv_v_v_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16mf4_tu(vd, vs1, vl); +} + +vbfloat16mf2_t test_vmv_v_v_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16mf2_tu(vd, vs1, vl); +} + +vbfloat16m1_t test_vmv_v_v_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m1_tu(vd, vs1, vl); +} + +vbfloat16m2_t test_vmv_v_v_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m2_tu(vd, vs1, vl); +} + +vbfloat16m4_t test_vmv_v_v_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m4_tu(vd, vs1, vl); +} + +vbfloat16m8_t test_vmv_v_v_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1, size_t vl) { + return __riscv_vmv_v_v_bf16m8_tu(vd, vs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c new file mode 100644 index 000000000..81928bd0b --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c @@ -0,0 +1,34 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 
rs1, + size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} + +vbfloat16mf2_t test_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1, + size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} + +vbfloat16m1_t test_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} + +vbfloat16m2_t test_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} + +vbfloat16m4_t test_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} + +vbfloat16m8_t test_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmerge.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmerge.c new file mode 100644 index 000000000..75b25e3dd --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmerge.c @@ -0,0 +1,44 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vmerge_vvm_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs2, + vbfloat16mf4_t vs1, vbool64_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16mf2_t test_vmerge_vvm_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs2, + vbfloat16mf2_t vs1, vbool32_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m1_t test_vmerge_vvm_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs2, + vbfloat16m1_t vs1, vbool16_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m2_t test_vmerge_vvm_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs2, + vbfloat16m2_t vs1, vbool8_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m4_t test_vmerge_vvm_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs2, + vbfloat16m4_t vs1, vbool4_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m8_t test_vmerge_vvm_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs2, + vbfloat16m8_t vs1, vbool2_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmv.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmv.c new file mode 100644 index 000000000..fffe66a5a --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmv.c @@ -0,0 +1,38 @@ +// REQUIRES: riscv-registered-target +// RUN: %clang_cc1 -triple riscv64 -target-feature +v \ +// RUN: -target-feature +experimental-zvfbfmin \ +// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ +// RUN: FileCheck --check-prefix=CHECK-RV64 %s + +#include + +vbfloat16mf4_t test_vmv_v_v_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1, + size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} + +vbfloat16mf2_t test_vmv_v_v_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1, + size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} + +vbfloat16m1_t test_vmv_v_v_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1, + size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} + +vbfloat16m2_t test_vmv_v_v_bf16m2_tu(vbfloat16m2_t vd, 
vbfloat16m2_t vs1, + size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} + +vbfloat16m4_t test_vmv_v_v_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1, + size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} + +vbfloat16m8_t test_vmv_v_v_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1, + size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfmv.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfmv.c new file mode 100644 index 000000000..220bd16fa --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfmv.c @@ -0,0 +1,28 @@ +#include +#include + +vbfloat16mf4_t test_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1, + size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} + +vbfloat16mf2_t test_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1, + size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} + +vbfloat16m1_t test_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} + +vbfloat16m2_t test_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} + +vbfloat16m4_t test_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} + +vbfloat16m8_t test_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl) { + return __riscv_vfmv_v_tu(vd, rs1, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vmerge.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vmerge.c new file mode 100644 index 000000000..73c4cffb7 --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vmerge.c @@ -0,0 +1,38 @@ +#include +#include + +vbfloat16mf4_t test_vmerge_vvm_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs2, + vbfloat16mf4_t vs1, vbool64_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16mf2_t test_vmerge_vvm_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs2, + vbfloat16mf2_t vs1, vbool32_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m1_t test_vmerge_vvm_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs2, + vbfloat16m1_t vs1, vbool16_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m2_t test_vmerge_vvm_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs2, + vbfloat16m2_t vs1, vbool8_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m4_t test_vmerge_vvm_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs2, + vbfloat16m4_t vs1, vbool4_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} + +vbfloat16m8_t test_vmerge_vvm_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs2, + vbfloat16m8_t vs1, vbool2_t v0, + size_t vl) { + return __riscv_vmerge_tu(vd, vs2, vs1, v0, vl); +} diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vmv.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vmv.c new file mode 100644 index 000000000..dfc0514fc --- /dev/null +++ b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vmv.c @@ -0,0 +1,32 @@ +#include +#include + +vbfloat16mf4_t test_vmv_v_v_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1, + size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} + +vbfloat16mf2_t test_vmv_v_v_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1, + size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} + +vbfloat16m1_t test_vmv_v_v_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1, + 
size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} + +vbfloat16m2_t test_vmv_v_v_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1, + size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} + +vbfloat16m4_t test_vmv_v_v_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1, + size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} + +vbfloat16m8_t test_vmv_v_v_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1, + size_t vl) { + return __riscv_vmv_v_tu(vd, vs1, vl); +} From 2b22d9be996ca20d8b6b5be40d5a044adc70fd1c Mon Sep 17 00:00:00 2001 From: Brandon Wu Date: Fri, 2 Aug 2024 04:38:32 -0700 Subject: [PATCH 137/151] [Auto-gen] Update bfloat16 documents under ../auto-generated. (make git-commit-autogen-bf16-doc) --- auto-generated/bfloat16/intrinsic_funcs.adoc | 40 +++++++++++++ .../03_bfloat16_arithmetic_intrinsics.adoc | 40 +++++++++++++ .../bfloat16/overloaded_intrinsic_funcs.adoc | 32 ++++++++++ .../03_bfloat16_arithmetic_intrinsics.adoc | 32 ++++++++++ .../policy_funcs/intrinsic_funcs.adoc | 58 +++++++++++++++++++ .../03_bfloat16_arithmetic_intrinsics.adoc | 58 +++++++++++++++++++ .../overloaded_intrinsic_funcs.adoc | 40 +++++++++++++ .../03_bfloat16_arithmetic_intrinsics.adoc | 40 +++++++++++++ 8 files changed, 340 insertions(+) diff --git a/auto-generated/bfloat16/intrinsic_funcs.adoc b/auto-generated/bfloat16/intrinsic_funcs.adoc index 3bd1a4222..b649b9570 100644 --- a/auto-generated/bfloat16/intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/intrinsic_funcs.adoc @@ -1543,6 +1543,46 @@ vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd, unsigned int frm, size_t vl); ---- +[[vector-bf16-move]] +==== Vector BFloat16 Move Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmv_v_v_bf16mf4(vbfloat16mf4_t vs1, size_t vl); +vbfloat16mf4_t __riscv_vfmv_v_f_bf16mf4(__bf16 rs1, size_t vl); +vbfloat16mf2_t __riscv_vmv_v_v_bf16mf2(vbfloat16mf2_t vs1, size_t vl); +vbfloat16mf2_t __riscv_vfmv_v_f_bf16mf2(__bf16 rs1, size_t vl); +vbfloat16m1_t __riscv_vmv_v_v_bf16m1(vbfloat16m1_t vs1, size_t vl); +vbfloat16m1_t __riscv_vfmv_v_f_bf16m1(__bf16 rs1, size_t vl); +vbfloat16m2_t __riscv_vmv_v_v_bf16m2(vbfloat16m2_t vs1, size_t vl); +vbfloat16m2_t __riscv_vfmv_v_f_bf16m2(__bf16 rs1, size_t vl); +vbfloat16m4_t __riscv_vmv_v_v_bf16m4(vbfloat16m4_t vs1, size_t vl); +vbfloat16m4_t __riscv_vfmv_v_f_bf16m4(__bf16 rs1, size_t vl); +vbfloat16m8_t __riscv_vmv_v_v_bf16m8(vbfloat16m8_t vs1, size_t vl); +vbfloat16m8_t __riscv_vfmv_v_f_bf16m8(__bf16 rs1, size_t vl); +---- + +[[vector-bf16-merge]] +==== Vector BFloat16 Merge Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmerge_vvm_bf16mf4(vbfloat16mf4_t vs2, + vbfloat16mf4_t vs1, vbool64_t v0, + size_t vl); +vbfloat16mf2_t __riscv_vmerge_vvm_bf16mf2(vbfloat16mf2_t vs2, + vbfloat16mf2_t vs1, vbool32_t v0, + size_t vl); +vbfloat16m1_t __riscv_vmerge_vvm_bf16m1(vbfloat16m1_t vs2, vbfloat16m1_t vs1, + vbool16_t v0, size_t vl); +vbfloat16m2_t __riscv_vmerge_vvm_bf16m2(vbfloat16m2_t vs2, vbfloat16m2_t vs1, + vbool8_t v0, size_t vl); +vbfloat16m4_t __riscv_vmerge_vvm_bf16m4(vbfloat16m4_t vs2, vbfloat16m4_t vs1, + vbool4_t v0, size_t vl); +vbfloat16m8_t __riscv_vmerge_vvm_bf16m8(vbfloat16m8_t vs2, vbfloat16m8_t vs1, + vbool2_t v0, size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics [[reinterpret-cast-conversion]] diff --git a/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc b/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc index 830e11a4b..87c32b581 100644 --- 
a/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc +++ b/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc @@ -127,3 +127,43 @@ vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, unsigned int frm, size_t vl); ---- + +[[vector-bf16-move]] +==== Vector BFloat16 Move Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmv_v_v_bf16mf4(vbfloat16mf4_t vs1, size_t vl); +vbfloat16mf4_t __riscv_vfmv_v_f_bf16mf4(__bf16 rs1, size_t vl); +vbfloat16mf2_t __riscv_vmv_v_v_bf16mf2(vbfloat16mf2_t vs1, size_t vl); +vbfloat16mf2_t __riscv_vfmv_v_f_bf16mf2(__bf16 rs1, size_t vl); +vbfloat16m1_t __riscv_vmv_v_v_bf16m1(vbfloat16m1_t vs1, size_t vl); +vbfloat16m1_t __riscv_vfmv_v_f_bf16m1(__bf16 rs1, size_t vl); +vbfloat16m2_t __riscv_vmv_v_v_bf16m2(vbfloat16m2_t vs1, size_t vl); +vbfloat16m2_t __riscv_vfmv_v_f_bf16m2(__bf16 rs1, size_t vl); +vbfloat16m4_t __riscv_vmv_v_v_bf16m4(vbfloat16m4_t vs1, size_t vl); +vbfloat16m4_t __riscv_vfmv_v_f_bf16m4(__bf16 rs1, size_t vl); +vbfloat16m8_t __riscv_vmv_v_v_bf16m8(vbfloat16m8_t vs1, size_t vl); +vbfloat16m8_t __riscv_vfmv_v_f_bf16m8(__bf16 rs1, size_t vl); +---- + +[[vector-bf16-merge]] +==== Vector BFloat16 Merge Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmerge_vvm_bf16mf4(vbfloat16mf4_t vs2, + vbfloat16mf4_t vs1, vbool64_t v0, + size_t vl); +vbfloat16mf2_t __riscv_vmerge_vvm_bf16mf2(vbfloat16mf2_t vs2, + vbfloat16mf2_t vs1, vbool32_t v0, + size_t vl); +vbfloat16m1_t __riscv_vmerge_vvm_bf16m1(vbfloat16m1_t vs2, vbfloat16m1_t vs1, + vbool16_t v0, size_t vl); +vbfloat16m2_t __riscv_vmerge_vvm_bf16m2(vbfloat16m2_t vs2, vbfloat16m2_t vs1, + vbool8_t v0, size_t vl); +vbfloat16m4_t __riscv_vmerge_vvm_bf16m4(vbfloat16m4_t vs2, vbfloat16m4_t vs1, + vbool4_t v0, size_t vl); +vbfloat16m8_t __riscv_vmerge_vvm_bf16m8(vbfloat16m8_t vs2, vbfloat16m8_t vs1, + vbool2_t v0, size_t vl); +---- diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc index b5200a485..270a88ffa 100644 --- a/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs.adoc @@ -1123,6 +1123,38 @@ vfloat32m8_t __riscv_vfwmaccbf16(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, size_t vl); ---- +[[overloaded-vector-bf16-move]] +==== Vector BFloat16 Move Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmv_v(vbfloat16mf4_t vs1, size_t vl); +vbfloat16mf2_t __riscv_vmv_v(vbfloat16mf2_t vs1, size_t vl); +vbfloat16m1_t __riscv_vmv_v(vbfloat16m1_t vs1, size_t vl); +vbfloat16m2_t __riscv_vmv_v(vbfloat16m2_t vs1, size_t vl); +vbfloat16m4_t __riscv_vmv_v(vbfloat16m4_t vs1, size_t vl); +vbfloat16m8_t __riscv_vmv_v(vbfloat16m8_t vs1, size_t vl); +---- + +[[overloaded-vector-bf16-merge]] +==== Vector BFloat16 Merge Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmerge(vbfloat16mf4_t vs2, vbfloat16mf4_t vs1, + vbool64_t v0, size_t vl); +vbfloat16mf2_t __riscv_vmerge(vbfloat16mf2_t vs2, vbfloat16mf2_t vs1, + vbool32_t v0, size_t vl); +vbfloat16m1_t __riscv_vmerge(vbfloat16m1_t vs2, vbfloat16m1_t vs1, vbool16_t v0, + size_t vl); +vbfloat16m2_t __riscv_vmerge(vbfloat16m2_t vs2, vbfloat16m2_t vs1, vbool8_t v0, + size_t vl); +vbfloat16m4_t __riscv_vmerge(vbfloat16m4_t vs2, vbfloat16m4_t vs1, vbool4_t v0, + size_t vl); +vbfloat16m8_t __riscv_vmerge(vbfloat16m8_t vs2, vbfloat16m8_t vs1, vbool2_t v0, + size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics 
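NOTE: Editor's illustration, not part of the generated listing: the
overloaded move and merge spellings above resolve on argument types alone,
so one spelling serves every LMUL. A minimal sketch, assuming `zvfbfmin`:

[,c]
----
vbfloat16m2_t pick(vbfloat16m2_t a, vbfloat16m2_t b, vbool8_t m, size_t vl) {
  vbfloat16m2_t c = __riscv_vmv_v(a, vl); // resolves to __riscv_vmv_v_v_bf16m2
  return __riscv_vmerge(c, b, m, vl);     // resolves to __riscv_vmerge_vvm_bf16m2
}
----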
[[overloaded-reinterpret-cast-conversion]] diff --git a/auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc b/auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc index f62b14fba..01a26a747 100644 --- a/auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc +++ b/auto-generated/bfloat16/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc @@ -111,3 +111,35 @@ vfloat32m8_t __riscv_vfwmaccbf16(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, unsigned int frm, size_t vl); ---- + +[[overloaded-vector-bf16-move]] +==== Vector BFloat16 Move Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmv_v(vbfloat16mf4_t vs1, size_t vl); +vbfloat16mf2_t __riscv_vmv_v(vbfloat16mf2_t vs1, size_t vl); +vbfloat16m1_t __riscv_vmv_v(vbfloat16m1_t vs1, size_t vl); +vbfloat16m2_t __riscv_vmv_v(vbfloat16m2_t vs1, size_t vl); +vbfloat16m4_t __riscv_vmv_v(vbfloat16m4_t vs1, size_t vl); +vbfloat16m8_t __riscv_vmv_v(vbfloat16m8_t vs1, size_t vl); +---- + +[[overloaded-vector-bf16-merge]] +==== Vector BFloat16 Merge Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmerge(vbfloat16mf4_t vs2, vbfloat16mf4_t vs1, + vbool64_t v0, size_t vl); +vbfloat16mf2_t __riscv_vmerge(vbfloat16mf2_t vs2, vbfloat16mf2_t vs1, + vbool32_t v0, size_t vl); +vbfloat16m1_t __riscv_vmerge(vbfloat16m1_t vs2, vbfloat16m1_t vs1, vbool16_t v0, + size_t vl); +vbfloat16m2_t __riscv_vmerge(vbfloat16m2_t vs2, vbfloat16m2_t vs1, vbool8_t v0, + size_t vl); +vbfloat16m4_t __riscv_vmerge(vbfloat16m4_t vs2, vbfloat16m4_t vs1, vbool4_t v0, + size_t vl); +vbfloat16m8_t __riscv_vmerge(vbfloat16m8_t vs2, vbfloat16m8_t vs1, vbool2_t v0, + size_t vl); +---- diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc index 37161ceff..d9d08e3a6 100644 --- a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc @@ -2855,6 +2855,64 @@ vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, unsigned int frm, size_t vl); ---- +[[policy-variant-vector-bf16-move]] +==== Vector BFloat16 Move Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmv_v_v_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1, + size_t vl); +vbfloat16mf4_t __riscv_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1, + size_t vl); +vbfloat16mf2_t __riscv_vmv_v_v_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1, + size_t vl); +vbfloat16mf2_t __riscv_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1, + size_t vl); +vbfloat16m1_t __riscv_vmv_v_v_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1, + size_t vl); +vbfloat16m1_t __riscv_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1, + size_t vl); +vbfloat16m2_t __riscv_vmv_v_v_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1, + size_t vl); +vbfloat16m2_t __riscv_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1, + size_t vl); +vbfloat16m4_t __riscv_vmv_v_v_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1, + size_t vl); +vbfloat16m4_t __riscv_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1, + size_t vl); +vbfloat16m8_t __riscv_vmv_v_v_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1, + size_t vl); +vbfloat16m8_t __riscv_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1, + size_t vl); +---- + +[[policy-variant-vector-bf16-merge]] +==== Vector BFloat16 Merge Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmerge_vvm_bf16mf4_tu(vbfloat16mf4_t vd, + vbfloat16mf4_t vs2, + vbfloat16mf4_t vs1, 
vbool64_t v0, + size_t vl); +vbfloat16mf2_t __riscv_vmerge_vvm_bf16mf2_tu(vbfloat16mf2_t vd, + vbfloat16mf2_t vs2, + vbfloat16mf2_t vs1, vbool32_t v0, + size_t vl); +vbfloat16m1_t __riscv_vmerge_vvm_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs2, + vbfloat16m1_t vs1, vbool16_t v0, + size_t vl); +vbfloat16m2_t __riscv_vmerge_vvm_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs2, + vbfloat16m2_t vs1, vbool8_t v0, + size_t vl); +vbfloat16m4_t __riscv_vmerge_vvm_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs2, + vbfloat16m4_t vs1, vbool4_t v0, + size_t vl); +vbfloat16m8_t __riscv_vmerge_vvm_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs2, + vbfloat16m8_t vs1, vbool2_t v0, + size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics [[policy-variant-reinterpret-cast-conversion]] diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc index 15acd4a2c..4889c6e03 100644 --- a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc +++ b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc @@ -271,3 +271,61 @@ vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, unsigned int frm, size_t vl); ---- + +[[policy-variant-vector-bf16-move]] +==== Vector BFloat16 Move Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmv_v_v_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1, + size_t vl); +vbfloat16mf4_t __riscv_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1, + size_t vl); +vbfloat16mf2_t __riscv_vmv_v_v_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1, + size_t vl); +vbfloat16mf2_t __riscv_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1, + size_t vl); +vbfloat16m1_t __riscv_vmv_v_v_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1, + size_t vl); +vbfloat16m1_t __riscv_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1, + size_t vl); +vbfloat16m2_t __riscv_vmv_v_v_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1, + size_t vl); +vbfloat16m2_t __riscv_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1, + size_t vl); +vbfloat16m4_t __riscv_vmv_v_v_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1, + size_t vl); +vbfloat16m4_t __riscv_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1, + size_t vl); +vbfloat16m8_t __riscv_vmv_v_v_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1, + size_t vl); +vbfloat16m8_t __riscv_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1, + size_t vl); +---- + +[[policy-variant-vector-bf16-merge]] +==== Vector BFloat16 Merge Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmerge_vvm_bf16mf4_tu(vbfloat16mf4_t vd, + vbfloat16mf4_t vs2, + vbfloat16mf4_t vs1, vbool64_t v0, + size_t vl); +vbfloat16mf2_t __riscv_vmerge_vvm_bf16mf2_tu(vbfloat16mf2_t vd, + vbfloat16mf2_t vs2, + vbfloat16mf2_t vs1, vbool32_t v0, + size_t vl); +vbfloat16m1_t __riscv_vmerge_vvm_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs2, + vbfloat16m1_t vs1, vbool16_t v0, + size_t vl); +vbfloat16m2_t __riscv_vmerge_vvm_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs2, + vbfloat16m2_t vs1, vbool8_t v0, + size_t vl); +vbfloat16m4_t __riscv_vmerge_vvm_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs2, + vbfloat16m4_t vs1, vbool4_t v0, + size_t vl); +vbfloat16m8_t __riscv_vmerge_vvm_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs2, + vbfloat16m8_t vs1, vbool2_t v0, + size_t vl); +---- diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc 
b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc index 266e06b4c..2b6578d84 100644 --- a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc +++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc @@ -2069,6 +2069,46 @@ vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, size_t vl); ---- +[[policy-variant-overloadedvector-bf16-move]] +==== Vector BFloat16 Move Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmv_v_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1, + size_t vl); +vbfloat16mf4_t __riscv_vfmv_v_tu(vbfloat16mf4_t vd, __bf16 rs1, size_t vl); +vbfloat16mf2_t __riscv_vmv_v_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1, + size_t vl); +vbfloat16mf2_t __riscv_vfmv_v_tu(vbfloat16mf2_t vd, __bf16 rs1, size_t vl); +vbfloat16m1_t __riscv_vmv_v_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1, size_t vl); +vbfloat16m1_t __riscv_vfmv_v_tu(vbfloat16m1_t vd, __bf16 rs1, size_t vl); +vbfloat16m2_t __riscv_vmv_v_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1, size_t vl); +vbfloat16m2_t __riscv_vfmv_v_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl); +vbfloat16m4_t __riscv_vmv_v_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1, size_t vl); +vbfloat16m4_t __riscv_vfmv_v_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl); +vbfloat16m8_t __riscv_vmv_v_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1, size_t vl); +vbfloat16m8_t __riscv_vfmv_v_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl); +---- + +[[policy-variant-overloadedvector-bf16-merge]] +==== Vector BFloat16 Merge Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmerge_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs2, + vbfloat16mf4_t vs1, vbool64_t v0, size_t vl); +vbfloat16mf2_t __riscv_vmerge_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs2, + vbfloat16mf2_t vs1, vbool32_t v0, size_t vl); +vbfloat16m1_t __riscv_vmerge_tu(vbfloat16m1_t vd, vbfloat16m1_t vs2, + vbfloat16m1_t vs1, vbool16_t v0, size_t vl); +vbfloat16m2_t __riscv_vmerge_tu(vbfloat16m2_t vd, vbfloat16m2_t vs2, + vbfloat16m2_t vs1, vbool8_t v0, size_t vl); +vbfloat16m4_t __riscv_vmerge_tu(vbfloat16m4_t vd, vbfloat16m4_t vs2, + vbfloat16m4_t vs1, vbool4_t v0, size_t vl); +vbfloat16m8_t __riscv_vmerge_tu(vbfloat16m8_t vd, vbfloat16m8_t vs2, + vbfloat16m8_t vs1, vbool2_t v0, size_t vl); +---- + === BFloat16 Miscellaneous Vector Utility Intrinsics [[policy-variant-overloadedreinterpret-cast-conversion]] diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc index 64c886112..3f586b00a 100644 --- a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc +++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc @@ -230,3 +230,43 @@ vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1, vbfloat16m4_t vs2, unsigned int frm, size_t vl); ---- + +[[policy-variant-overloadedvector-bf16-move]] +==== Vector BFloat16 Move Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmv_v_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1, + size_t vl); +vbfloat16mf4_t __riscv_vfmv_v_tu(vbfloat16mf4_t vd, __bf16 rs1, size_t vl); +vbfloat16mf2_t __riscv_vmv_v_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1, + size_t vl); +vbfloat16mf2_t __riscv_vfmv_v_tu(vbfloat16mf2_t vd, __bf16 rs1, size_t vl); +vbfloat16m1_t __riscv_vmv_v_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1, size_t vl); +vbfloat16m1_t __riscv_vfmv_v_tu(vbfloat16m1_t vd, __bf16 rs1, 
size_t vl); +vbfloat16m2_t __riscv_vmv_v_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1, size_t vl); +vbfloat16m2_t __riscv_vfmv_v_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl); +vbfloat16m4_t __riscv_vmv_v_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1, size_t vl); +vbfloat16m4_t __riscv_vfmv_v_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl); +vbfloat16m8_t __riscv_vmv_v_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1, size_t vl); +vbfloat16m8_t __riscv_vfmv_v_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl); +---- + +[[policy-variant-overloadedvector-bf16-merge]] +==== Vector BFloat16 Merge Intrinsics + +[,c] +---- +vbfloat16mf4_t __riscv_vmerge_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs2, + vbfloat16mf4_t vs1, vbool64_t v0, size_t vl); +vbfloat16mf2_t __riscv_vmerge_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs2, + vbfloat16mf2_t vs1, vbool32_t v0, size_t vl); +vbfloat16m1_t __riscv_vmerge_tu(vbfloat16m1_t vd, vbfloat16m1_t vs2, + vbfloat16m1_t vs1, vbool16_t v0, size_t vl); +vbfloat16m2_t __riscv_vmerge_tu(vbfloat16m2_t vd, vbfloat16m2_t vs2, + vbfloat16m2_t vs1, vbool8_t v0, size_t vl); +vbfloat16m4_t __riscv_vmerge_tu(vbfloat16m4_t vd, vbfloat16m4_t vs2, + vbfloat16m4_t vs1, vbool4_t v0, size_t vl); +vbfloat16m8_t __riscv_vmerge_tu(vbfloat16m8_t vd, vbfloat16m8_t vs2, + vbfloat16m8_t vs1, vbool2_t v0, size_t vl); +---- From 81cb7ab6e3b93131c4cd81e0bd4e3dd110200222 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Tue, 1 Oct 2024 15:26:56 +0800 Subject: [PATCH 138/151] bfloat16: TYPES -> BFTYPES Signed-off-by: Jerry Zhang Jian --- rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py index dbbd92cad..77e47908c 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/bfloat16_inst.py @@ -128,11 +128,11 @@ def gen(g): "bf16-widening-multiply-accumulate", ["wmaccbf16"], BFTYPES, SEWS, WLMULS, decorators.has_masking_no_maskedoff_policy_frm) g.function_group(unary_op_template, "Vector BFloat16 Move Intrinsics", - "vector-bf16-move", ["mv"], TYPES, SEWS, LMULS, + "vector-bf16-move", ["mv"], BFTYPES, SEWS, LMULS, decorators.has_no_masking_policy) g.function_group(unary_op_template, "Vector BFloat16 Merge Intrinsics", - "vector-bf16-merge", ["merge"], TYPES, SEWS, LMULS, + "vector-bf16-merge", ["merge"], BFTYPES, SEWS, LMULS, decorators.has_no_masking_policy) #################################################################### From 4f4724b3ac3667e4558ccfb649b7a9bf883189a2 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Tue, 1 Oct 2024 15:31:41 +0800 Subject: [PATCH 139/151] template/unary_op: add op assertion to address type-check failure Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/templates/unary_op_template.py | 1 + 1 file changed, 1 insertion(+) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py index de3515061..c5989579a 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py @@ -43,6 +43,7 @@ def render(G, for decorator in decorator_list: decorator.write_text_header(G) for args in prod(OP=op_list, TYPE=type_list, SEW=sew_list, LMUL=lmul_list): + assert args["OP"] is not None data_type = args["TYPE"] op = args["OP"] From 
4e5bcda639443981eb666ed920d75a467d69b2e5 Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Tue, 1 Oct 2024 15:32:24 +0800 Subject: [PATCH 140/151] [NFC] template/unary_op: address pylint failures Signed-off-by: Jerry Zhang Jian --- .../rvv_intrinsic_gen/templates/unary_op_template.py | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py index c5989579a..b950c33d2 100644 --- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py +++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py @@ -50,7 +50,7 @@ def render(G, if op in ["zext", "sext"]: break - if data_type == "float" or data_type == "bfloat": + if data_type in ["float", "bfloat"]: args["S_TYPE"] = "f" args["OP"] = "f" + args["OP"] inst_type_vvsm = InstType.VVFM @@ -92,8 +92,7 @@ def render(G, # for float type, accrdoing current naming scheming it # should be vmv_v_v, same for vmerge.vvm. vv_args = args - if (data_type == "float" or - data_type == "bfloat") and op in ["mv", "merge"]: + if data_type in ["float", "bfloat"] and op in ["mv", "merge"]: vv_args = copy.deepcopy(args) vv_args["OP"] = "v" + op From f2463160f85c35d3c30e6da6c5dd64bb8e47c7fd Mon Sep 17 00:00:00 2001 From: Jerry Zhang Jian Date: Tue, 1 Oct 2024 15:33:28 +0800 Subject: [PATCH 141/151] [Auto-gen] Update bfloat16 documents under ../auto-generated. (make git-commit-autogen-bf16-doc) --- auto-generated/bfloat16/llvm-api-tests/vfmv.c | 4 +-- .../bfloat16/llvm-api-tests/vmerge.c | 4 +-- auto-generated/bfloat16/llvm-api-tests/vmv.c | 4 +-- .../bfloat16/llvm-overloaded-tests/vmerge.c | 4 +-- .../bfloat16/llvm-overloaded-tests/vmv.c | 4 +-- .../policy_funcs/llvm-api-tests/vfmv.c | 10 ++++--- .../policy_funcs/llvm-api-tests/vmerge.c | 28 +++++++++++++------ .../policy_funcs/llvm-api-tests/vmv.c | 22 +++++++++------ .../policy_funcs/llvm-overloaded-tests/vfmv.c | 4 +-- .../llvm-overloaded-tests/vmerge.c | 4 +-- .../policy_funcs/llvm-overloaded-tests/vmv.c | 4 +-- 11 files changed, 56 insertions(+), 36 deletions(-) diff --git a/auto-generated/bfloat16/llvm-api-tests/vfmv.c b/auto-generated/bfloat16/llvm-api-tests/vfmv.c index 4aa30e018..127e9508e 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vfmv.c +++ b/auto-generated/bfloat16/llvm-api-tests/vfmv.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git a/auto-generated/bfloat16/llvm-api-tests/vmerge.c b/auto-generated/bfloat16/llvm-api-tests/vmerge.c index 87dd321c9..c2962bf98 100644 --- a/auto-generated/bfloat16/llvm-api-tests/vmerge.c +++ b/auto-generated/bfloat16/llvm-api-tests/vmerge.c @@ -1,7 +1,7 @@ // REQUIRES: riscv-registered-target // RUN: %clang_cc1 -triple riscv64 -target-feature +v \ -// RUN: -target-feature +experimental-zvfbfmin \ -// RUN: -target-feature +experimental-zvfbfwma -disable-O0-optnone \ +// RUN: -target-feature +zvfbfmin \ +// RUN: -target-feature +zvfbfwma -disable-O0-optnone \ // RUN: -emit-llvm %s -o - | opt -S -passes=mem2reg | \ // RUN: FileCheck --check-prefix=CHECK-RV64 %s diff --git 
a/auto-generated/bfloat16/llvm-api-tests/vmv.c b/auto-generated/bfloat16/llvm-api-tests/vmv.c
index 0e059a186..b0b0f2bd7 100644
--- a/auto-generated/bfloat16/llvm-api-tests/vmv.c
+++ b/auto-generated/bfloat16/llvm-api-tests/vmv.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +experimental-zvfbfmin \
-// RUN:   -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN:   -target-feature +zvfbfmin \
+// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vmerge.c b/auto-generated/bfloat16/llvm-overloaded-tests/vmerge.c
index b594ab655..d0056e2b5 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vmerge.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vmerge.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +experimental-zvfbfmin \
-// RUN:   -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN:   -target-feature +zvfbfmin \
+// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/llvm-overloaded-tests/vmv.c b/auto-generated/bfloat16/llvm-overloaded-tests/vmv.c
index 8fb01aa64..43e6807cf 100644
--- a/auto-generated/bfloat16/llvm-overloaded-tests/vmv.c
+++ b/auto-generated/bfloat16/llvm-overloaded-tests/vmv.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +experimental-zvfbfmin \
-// RUN:   -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN:   -target-feature +zvfbfmin \
+// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c
index a4d82c885..c1537f987 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c
@@ -1,17 +1,19 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +experimental-zvfbfmin \
-// RUN:   -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN:   -target-feature +zvfbfmin \
+// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vbfloat16mf4_t test_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1, size_t vl) {
+vbfloat16mf4_t test_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1,
+                                        size_t vl) {
   return __riscv_vfmv_v_f_bf16mf4_tu(vd, rs1, vl);
 }
 
-vbfloat16mf2_t test_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1, size_t vl) {
+vbfloat16mf2_t test_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1,
+                                        size_t vl) {
   return __riscv_vfmv_v_f_bf16mf2_tu(vd, rs1, vl);
 }

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmerge.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmerge.c
index e38fd03ab..d8381141a 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmerge.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmerge.c
@@ -1,32 +1,44 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +experimental-zvfbfmin \
-// RUN:   -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN:   -target-feature +zvfbfmin \
+// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vbfloat16mf4_t test_vmerge_vvm_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs2, vbfloat16mf4_t vs1, vbool64_t v0, size_t vl) {
+vbfloat16mf4_t test_vmerge_vvm_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs2,
+                                          vbfloat16mf4_t vs1, vbool64_t v0,
+                                          size_t vl) {
   return __riscv_vmerge_vvm_bf16mf4_tu(vd, vs2, vs1, v0, vl);
 }
 
-vbfloat16mf2_t test_vmerge_vvm_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs2, vbfloat16mf2_t vs1, vbool32_t v0, size_t vl) {
+vbfloat16mf2_t test_vmerge_vvm_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs2,
+                                          vbfloat16mf2_t vs1, vbool32_t v0,
+                                          size_t vl) {
   return __riscv_vmerge_vvm_bf16mf2_tu(vd, vs2, vs1, v0, vl);
 }
 
-vbfloat16m1_t test_vmerge_vvm_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs2, vbfloat16m1_t vs1, vbool16_t v0, size_t vl) {
+vbfloat16m1_t test_vmerge_vvm_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs2,
+                                        vbfloat16m1_t vs1, vbool16_t v0,
+                                        size_t vl) {
   return __riscv_vmerge_vvm_bf16m1_tu(vd, vs2, vs1, v0, vl);
 }
 
-vbfloat16m2_t test_vmerge_vvm_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs2, vbfloat16m2_t vs1, vbool8_t v0, size_t vl) {
+vbfloat16m2_t test_vmerge_vvm_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs2,
+                                        vbfloat16m2_t vs1, vbool8_t v0,
+                                        size_t vl) {
   return __riscv_vmerge_vvm_bf16m2_tu(vd, vs2, vs1, v0, vl);
 }
 
-vbfloat16m4_t test_vmerge_vvm_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs2, vbfloat16m4_t vs1, vbool4_t v0, size_t vl) {
+vbfloat16m4_t test_vmerge_vvm_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs2,
+                                        vbfloat16m4_t vs1, vbool4_t v0,
+                                        size_t vl) {
   return __riscv_vmerge_vvm_bf16m4_tu(vd, vs2, vs1, v0, vl);
 }
 
-vbfloat16m8_t test_vmerge_vvm_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs2, vbfloat16m8_t vs1, vbool2_t v0, size_t vl) {
+vbfloat16m8_t test_vmerge_vvm_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs2,
+                                        vbfloat16m8_t vs1, vbool2_t v0,
+                                        size_t vl) {
   return __riscv_vmerge_vvm_bf16m8_tu(vd, vs2, vs1, v0, vl);
 }

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmv.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmv.c
index 404840847..b4d091c72 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmv.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vmv.c
@@ -1,32 +1,38 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +experimental-zvfbfmin \
-// RUN:   -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN:   -target-feature +zvfbfmin \
+// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s
 
 #include <riscv_vector.h>
 
-vbfloat16mf4_t test_vmv_v_v_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1, size_t vl) {
+vbfloat16mf4_t test_vmv_v_v_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1,
+                                       size_t vl) {
   return __riscv_vmv_v_v_bf16mf4_tu(vd, vs1, vl);
 }
 
-vbfloat16mf2_t test_vmv_v_v_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1, size_t vl) {
+vbfloat16mf2_t test_vmv_v_v_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1,
+                                       size_t vl) {
   return __riscv_vmv_v_v_bf16mf2_tu(vd, vs1, vl);
 }
 
-vbfloat16m1_t test_vmv_v_v_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1, size_t vl) {
+vbfloat16m1_t test_vmv_v_v_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1,
+                                     size_t vl) {
   return __riscv_vmv_v_v_bf16m1_tu(vd, vs1, vl);
 }
 
-vbfloat16m2_t test_vmv_v_v_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1, size_t vl) {
+vbfloat16m2_t test_vmv_v_v_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1,
+                                     size_t vl) {
   return __riscv_vmv_v_v_bf16m2_tu(vd, vs1, vl);
 }
 
-vbfloat16m4_t test_vmv_v_v_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1, size_t vl) {
+vbfloat16m4_t test_vmv_v_v_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1,
+                                     size_t vl) {
   return __riscv_vmv_v_v_bf16m4_tu(vd, vs1, vl);
 }
 
-vbfloat16m8_t test_vmv_v_v_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1, size_t vl) {
+vbfloat16m8_t test_vmv_v_v_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1,
+                                     size_t vl) {
   return __riscv_vmv_v_v_bf16m8_tu(vd, vs1, vl);
 }

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c
index 81928bd0b..cf30abe49 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +experimental-zvfbfmin \
-// RUN:   -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN:   -target-feature +zvfbfmin \
+// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmerge.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmerge.c
index 75b25e3dd..d4b73a2fa 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmerge.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmerge.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +experimental-zvfbfmin \
-// RUN:   -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN:   -target-feature +zvfbfmin \
+// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmv.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmv.c
index fffe66a5a..36f83611f 100644
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmv.c
+++ b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vmv.c
@@ -1,7 +1,7 @@
 // REQUIRES: riscv-registered-target
 // RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +experimental-zvfbfmin \
-// RUN:   -target-feature +experimental-zvfbfwma -disable-O0-optnone \
+// RUN:   -target-feature +zvfbfmin \
+// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
 // RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
 // RUN:   FileCheck --check-prefix=CHECK-RV64 %s

From 7fc89c433b6244272d11e3c006bb77425fb4088f Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Mon, 7 Oct 2024 21:17:21 +0800
Subject: [PATCH 142/151] github: fix clang-compilation ci (#371)

Signed-off-by: Jerry Zhang Jian
---
 .github/workflows/clang-compilation.yml | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/.github/workflows/clang-compilation.yml b/.github/workflows/clang-compilation.yml
index e2de8c930..74a114829 100644
--- a/.github/workflows/clang-compilation.yml
+++ b/.github/workflows/clang-compilation.yml
@@ -6,13 +6,15 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
+        with:
+          python-version: '3.11'
       - name: Prerequisites
         run: |
-          sudo apt-get install autoconf automake autotools-dev curl python3 python3-pip libmpc-dev libmpfr-dev libgmp-dev gawk build-essential bison flex texinfo gperf libtool patchutils bc zlib1g-dev libexpat-dev ninja-build git cmake libglib2.0-dev dejagnu
+          sudo apt-get install autoconf automake autotools-dev curl libmpc-dev libmpfr-dev libgmp-dev gawk build-essential bison flex texinfo gperf libtool patchutils bc zlib1g-dev libexpat-dev ninja-build git cmake libglib2.0-dev dejagnu
       - name: Install dependencies
         run: |
-          python -m pip install --upgrade pip
-          pip install junitparser
+          pip install --user junitparser
       - name: Download LLVM
         run: |
           cd ..

From 8e9304d2f6c8f8553753e3e18e9e11aa7220bd40 Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Wed, 9 Oct 2024 01:16:47 -0700
Subject: [PATCH 143/151] unary_op: remove vfmv for Bfloat16

Signed-off-by: Jerry Zhang Jian
---
 .../rvv_intrinsic_gen/templates/unary_op_template.py | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py
index b950c33d2..7f657bbe5 100644
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/templates/unary_op_template.py
@@ -50,7 +50,7 @@ def render(G,
     if op in ["zext", "sext"]:
       break
 
-    if data_type in ["float", "bfloat"]:
+    if data_type in ["float"]:
       args["S_TYPE"] = "f"
       args["OP"] = "f" + args["OP"]
       inst_type_vvsm = InstType.VVFM
@@ -137,6 +137,8 @@ def render(G,
         **decorator.tu_dest_args(type_helper.v),
         vs1=type_helper.v,
         vl=type_helper.size_t)
+    if data_type == "bfloat":
+      continue
     G.func(
         inst_info_vs,
         name="{OP}_v_{S_TYPE}_{TYPE}{SEW}m{LMUL}".format_map(args) +

From a5a5bd67805da15be2bb790f08c92ee581feb845 Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Wed, 9 Oct 2024 01:18:50 -0700
Subject: [PATCH 144/151] [Auto-gen] Update bfloat16 documents under
 ../auto-generated.
 (make git-commit-autogen-bf16-doc)
---
 auto-generated/bfloat16/api-testing/vfmv.c    | 26 --------------
 auto-generated/bfloat16/intrinsic_funcs.adoc  |  6 ----
 .../03_bfloat16_arithmetic_intrinsics.adoc    |  6 ----
 auto-generated/bfloat16/llvm-api-tests/vfmv.c | 32 -----------------
 .../bfloat16/policy_funcs/api-testing/vfmv.c  | 28 ---------------
 .../policy_funcs/intrinsic_funcs.adoc         | 12 -------
 .../03_bfloat16_arithmetic_intrinsics.adoc    | 12 -------
 .../policy_funcs/llvm-api-tests/vfmv.c        | 34 -------------------
 .../policy_funcs/llvm-overloaded-tests/vfmv.c | 34 -------------------
 .../overloaded-api-testing/vfmv.c             | 28 ---------------
 .../overloaded_intrinsic_funcs.adoc           |  6 ----
 .../03_bfloat16_arithmetic_intrinsics.adoc    |  6 ----
 12 files changed, 230 deletions(-)
 delete mode 100644 auto-generated/bfloat16/api-testing/vfmv.c
 delete mode 100644 auto-generated/bfloat16/llvm-api-tests/vfmv.c
 delete mode 100644 auto-generated/bfloat16/policy_funcs/api-testing/vfmv.c
 delete mode 100644 auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c
 delete mode 100644 auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c
 delete mode 100644 auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfmv.c

diff --git a/auto-generated/bfloat16/api-testing/vfmv.c b/auto-generated/bfloat16/api-testing/vfmv.c
deleted file mode 100644
index 91a330cda..000000000
--- a/auto-generated/bfloat16/api-testing/vfmv.c
+++ /dev/null
@@ -1,26 +0,0 @@
-#include <riscv_vector.h>
-#include <stdint.h>
-
-vbfloat16mf4_t test_vfmv_v_f_bf16mf4(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16mf4(rs1, vl);
-}
-
-vbfloat16mf2_t test_vfmv_v_f_bf16mf2(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16mf2(rs1, vl);
-}
-
-vbfloat16m1_t test_vfmv_v_f_bf16m1(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m1(rs1, vl);
-}
-
-vbfloat16m2_t test_vfmv_v_f_bf16m2(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m2(rs1, vl);
-}
-
-vbfloat16m4_t test_vfmv_v_f_bf16m4(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m4(rs1, vl);
-}
-
-vbfloat16m8_t test_vfmv_v_f_bf16m8(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m8(rs1, vl);
-}

diff --git a/auto-generated/bfloat16/intrinsic_funcs.adoc b/auto-generated/bfloat16/intrinsic_funcs.adoc
index b649b9570..d1f272329 100644
--- a/auto-generated/bfloat16/intrinsic_funcs.adoc
+++ b/auto-generated/bfloat16/intrinsic_funcs.adoc
@@ -1549,17 +1549,11 @@ vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd,
 [,c]
 ----
 vbfloat16mf4_t __riscv_vmv_v_v_bf16mf4(vbfloat16mf4_t vs1, size_t vl);
-vbfloat16mf4_t __riscv_vfmv_v_f_bf16mf4(__bf16 rs1, size_t vl);
 vbfloat16mf2_t __riscv_vmv_v_v_bf16mf2(vbfloat16mf2_t vs1, size_t vl);
-vbfloat16mf2_t __riscv_vfmv_v_f_bf16mf2(__bf16 rs1, size_t vl);
 vbfloat16m1_t __riscv_vmv_v_v_bf16m1(vbfloat16m1_t vs1, size_t vl);
-vbfloat16m1_t __riscv_vfmv_v_f_bf16m1(__bf16 rs1, size_t vl);
 vbfloat16m2_t __riscv_vmv_v_v_bf16m2(vbfloat16m2_t vs1, size_t vl);
-vbfloat16m2_t __riscv_vfmv_v_f_bf16m2(__bf16 rs1, size_t vl);
 vbfloat16m4_t __riscv_vmv_v_v_bf16m4(vbfloat16m4_t vs1, size_t vl);
-vbfloat16m4_t __riscv_vfmv_v_f_bf16m4(__bf16 rs1, size_t vl);
 vbfloat16m8_t __riscv_vmv_v_v_bf16m8(vbfloat16m8_t vs1, size_t vl);
-vbfloat16m8_t __riscv_vfmv_v_f_bf16m8(__bf16 rs1, size_t vl);
 ----
 
 [[vector-bf16-merge]]

diff --git a/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc b/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc
index 87c32b581..558919dae 100644
--- a/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc
+++ b/auto-generated/bfloat16/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc
@@ -134,17 +134,11 @@ vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_m(vbool4_t vm, vfloat32m8_t vd,
 [,c]
 ----
 vbfloat16mf4_t __riscv_vmv_v_v_bf16mf4(vbfloat16mf4_t vs1, size_t vl);
-vbfloat16mf4_t __riscv_vfmv_v_f_bf16mf4(__bf16 rs1, size_t vl);
 vbfloat16mf2_t __riscv_vmv_v_v_bf16mf2(vbfloat16mf2_t vs1, size_t vl);
-vbfloat16mf2_t __riscv_vfmv_v_f_bf16mf2(__bf16 rs1, size_t vl);
 vbfloat16m1_t __riscv_vmv_v_v_bf16m1(vbfloat16m1_t vs1, size_t vl);
-vbfloat16m1_t __riscv_vfmv_v_f_bf16m1(__bf16 rs1, size_t vl);
 vbfloat16m2_t __riscv_vmv_v_v_bf16m2(vbfloat16m2_t vs1, size_t vl);
-vbfloat16m2_t __riscv_vfmv_v_f_bf16m2(__bf16 rs1, size_t vl);
 vbfloat16m4_t __riscv_vmv_v_v_bf16m4(vbfloat16m4_t vs1, size_t vl);
-vbfloat16m4_t __riscv_vfmv_v_f_bf16m4(__bf16 rs1, size_t vl);
 vbfloat16m8_t __riscv_vmv_v_v_bf16m8(vbfloat16m8_t vs1, size_t vl);
-vbfloat16m8_t __riscv_vfmv_v_f_bf16m8(__bf16 rs1, size_t vl);
 ----
 
 [[vector-bf16-merge]]

diff --git a/auto-generated/bfloat16/llvm-api-tests/vfmv.c b/auto-generated/bfloat16/llvm-api-tests/vfmv.c
deleted file mode 100644
index 127e9508e..000000000
--- a/auto-generated/bfloat16/llvm-api-tests/vfmv.c
+++ /dev/null
@@ -1,32 +0,0 @@
-// REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +zvfbfmin \
-// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
-// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
-// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
-
-#include <riscv_vector.h>
-
-vbfloat16mf4_t test_vfmv_v_f_bf16mf4(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16mf4(rs1, vl);
-}
-
-vbfloat16mf2_t test_vfmv_v_f_bf16mf2(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16mf2(rs1, vl);
-}
-
-vbfloat16m1_t test_vfmv_v_f_bf16m1(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m1(rs1, vl);
-}
-
-vbfloat16m2_t test_vfmv_v_f_bf16m2(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m2(rs1, vl);
-}
-
-vbfloat16m4_t test_vfmv_v_f_bf16m4(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m4(rs1, vl);
-}
-
-vbfloat16m8_t test_vfmv_v_f_bf16m8(__bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m8(rs1, vl);
-}

diff --git a/auto-generated/bfloat16/policy_funcs/api-testing/vfmv.c b/auto-generated/bfloat16/policy_funcs/api-testing/vfmv.c
deleted file mode 100644
index 60bf77dec..000000000
--- a/auto-generated/bfloat16/policy_funcs/api-testing/vfmv.c
+++ /dev/null
@@ -1,28 +0,0 @@
-#include <riscv_vector.h>
-#include <stdint.h>
-
-vbfloat16mf4_t test_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1,
-                                        size_t vl) {
-  return __riscv_vfmv_v_f_bf16mf4_tu(vd, rs1, vl);
-}
-
-vbfloat16mf2_t test_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1,
-                                        size_t vl) {
-  return __riscv_vfmv_v_f_bf16mf2_tu(vd, rs1, vl);
-}
-
-vbfloat16m1_t test_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m1_tu(vd, rs1, vl);
-}
-
-vbfloat16m2_t test_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m2_tu(vd, rs1, vl);
-}
-
-vbfloat16m4_t test_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m4_tu(vd, rs1, vl);
-}
-
-vbfloat16m8_t test_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m8_tu(vd, rs1, vl);
-}

diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc
index d9d08e3a6..15ebdc590 100644
--- a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc
+++ b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs.adoc
@@ -2862,28 +2862,16 @@ vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
 ----
 vbfloat16mf4_t __riscv_vmv_v_v_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1,
                                           size_t vl);
-vbfloat16mf4_t __riscv_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1,
-                                           size_t vl);
 vbfloat16mf2_t __riscv_vmv_v_v_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1,
                                           size_t vl);
-vbfloat16mf2_t __riscv_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1,
-                                           size_t vl);
 vbfloat16m1_t __riscv_vmv_v_v_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1,
                                         size_t vl);
-vbfloat16m1_t __riscv_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1,
-                                         size_t vl);
 vbfloat16m2_t __riscv_vmv_v_v_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1,
                                         size_t vl);
-vbfloat16m2_t __riscv_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1,
-                                         size_t vl);
 vbfloat16m4_t __riscv_vmv_v_v_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1,
                                         size_t vl);
-vbfloat16m4_t __riscv_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1,
-                                         size_t vl);
 vbfloat16m8_t __riscv_vmv_v_v_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1,
                                         size_t vl);
-vbfloat16m8_t __riscv_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1,
-                                         size_t vl);
 ----
 
 [[policy-variant-vector-bf16-merge]]

diff --git a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc
index 4889c6e03..0c92dcf2e 100644
--- a/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc
+++ b/auto-generated/bfloat16/policy_funcs/intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc
@@ -279,28 +279,16 @@ vfloat32m8_t __riscv_vfwmaccbf16_vf_f32m8_rm_mu(vbool4_t vm, vfloat32m8_t vd,
 ----
 vbfloat16mf4_t __riscv_vmv_v_v_bf16mf4_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1,
                                           size_t vl);
-vbfloat16mf4_t __riscv_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1,
-                                           size_t vl);
 vbfloat16mf2_t __riscv_vmv_v_v_bf16mf2_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1,
                                           size_t vl);
-vbfloat16mf2_t __riscv_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1,
-                                           size_t vl);
 vbfloat16m1_t __riscv_vmv_v_v_bf16m1_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1,
                                         size_t vl);
-vbfloat16m1_t __riscv_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1,
-                                         size_t vl);
 vbfloat16m2_t __riscv_vmv_v_v_bf16m2_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1,
                                         size_t vl);
-vbfloat16m2_t __riscv_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1,
-                                         size_t vl);
 vbfloat16m4_t __riscv_vmv_v_v_bf16m4_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1,
                                         size_t vl);
-vbfloat16m4_t __riscv_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1,
-                                         size_t vl);
 vbfloat16m8_t __riscv_vmv_v_v_bf16m8_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1,
                                         size_t vl);
-vbfloat16m8_t __riscv_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1,
-                                         size_t vl);
 ----
 
 [[policy-variant-vector-bf16-merge]]

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c b/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c
deleted file mode 100644
index c1537f987..000000000
--- a/auto-generated/bfloat16/policy_funcs/llvm-api-tests/vfmv.c
+++ /dev/null
@@ -1,34 +0,0 @@
-// REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +zvfbfmin \
-// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
-// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
-// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
-
-#include <riscv_vector.h>
-
-vbfloat16mf4_t test_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1,
-                                        size_t vl) {
-  return __riscv_vfmv_v_f_bf16mf4_tu(vd, rs1, vl);
-}
-
-vbfloat16mf2_t test_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1,
-                                        size_t vl) {
-  return __riscv_vfmv_v_f_bf16mf2_tu(vd, rs1, vl);
-}
-
-vbfloat16m1_t test_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m1_tu(vd, rs1, vl);
-}
-
-vbfloat16m2_t test_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m2_tu(vd, rs1, vl);
-}
-
-vbfloat16m4_t test_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m4_tu(vd, rs1, vl);
-}
-
-vbfloat16m8_t test_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_f_bf16m8_tu(vd, rs1, vl);
-}

diff --git a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c b/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c
deleted file mode 100644
index cf30abe49..000000000
--- a/auto-generated/bfloat16/policy_funcs/llvm-overloaded-tests/vfmv.c
+++ /dev/null
@@ -1,34 +0,0 @@
-// REQUIRES: riscv-registered-target
-// RUN: %clang_cc1 -triple riscv64 -target-feature +v \
-// RUN:   -target-feature +zvfbfmin \
-// RUN:   -target-feature +zvfbfwma -disable-O0-optnone \
-// RUN:   -emit-llvm %s -o - | opt -S -passes=mem2reg | \
-// RUN:   FileCheck --check-prefix=CHECK-RV64 %s
-
-#include <riscv_vector.h>
-
-vbfloat16mf4_t test_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1,
-                                        size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}
-
-vbfloat16mf2_t test_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1,
-                                        size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}
-
-vbfloat16m1_t test_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}
-
-vbfloat16m2_t test_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}
-
-vbfloat16m4_t test_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}
-
-vbfloat16m8_t test_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}

diff --git a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfmv.c b/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfmv.c
deleted file mode 100644
index 220bd16fa..000000000
--- a/auto-generated/bfloat16/policy_funcs/overloaded-api-testing/vfmv.c
+++ /dev/null
@@ -1,28 +0,0 @@
-#include <riscv_vector.h>
-#include <stdint.h>
-
-vbfloat16mf4_t test_vfmv_v_f_bf16mf4_tu(vbfloat16mf4_t vd, __bf16 rs1,
-                                        size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}
-
-vbfloat16mf2_t test_vfmv_v_f_bf16mf2_tu(vbfloat16mf2_t vd, __bf16 rs1,
-                                        size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}
-
-vbfloat16m1_t test_vfmv_v_f_bf16m1_tu(vbfloat16m1_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}
-
-vbfloat16m2_t test_vfmv_v_f_bf16m2_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}
-
-vbfloat16m4_t test_vfmv_v_f_bf16m4_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}
-
-vbfloat16m8_t test_vfmv_v_f_bf16m8_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl) {
-  return __riscv_vfmv_v_tu(vd, rs1, vl);
-}

diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc
index 2b6578d84..e4259bce4 100644
--- a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc
+++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs.adoc
@@ -2076,18 +2076,12 @@ vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1,
 ----
 vbfloat16mf4_t __riscv_vmv_v_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1,
                                 size_t vl);
-vbfloat16mf4_t __riscv_vfmv_v_tu(vbfloat16mf4_t vd, __bf16 rs1, size_t vl);
 vbfloat16mf2_t __riscv_vmv_v_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1,
                                 size_t vl);
-vbfloat16mf2_t __riscv_vfmv_v_tu(vbfloat16mf2_t vd, __bf16 rs1, size_t vl);
 vbfloat16m1_t __riscv_vmv_v_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1, size_t vl);
-vbfloat16m1_t __riscv_vfmv_v_tu(vbfloat16m1_t vd, __bf16 rs1, size_t vl);
 vbfloat16m2_t __riscv_vmv_v_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1, size_t vl);
-vbfloat16m2_t __riscv_vfmv_v_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl);
 vbfloat16m4_t __riscv_vmv_v_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1, size_t vl);
-vbfloat16m4_t __riscv_vfmv_v_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl);
 vbfloat16m8_t __riscv_vmv_v_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1, size_t vl);
-vbfloat16m8_t __riscv_vfmv_v_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl);
 ----
 
 [[policy-variant-overloadedvector-bf16-merge]]

diff --git a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc
index 3f586b00a..05048e1c7 100644
--- a/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc
+++ b/auto-generated/bfloat16/policy_funcs/overloaded_intrinsic_funcs/03_bfloat16_arithmetic_intrinsics.adoc
@@ -238,18 +238,12 @@ vfloat32m8_t __riscv_vfwmaccbf16_mu(vbool4_t vm, vfloat32m8_t vd, __bf16 vs1,
 ----
 vbfloat16mf4_t __riscv_vmv_v_tu(vbfloat16mf4_t vd, vbfloat16mf4_t vs1,
                                 size_t vl);
-vbfloat16mf4_t __riscv_vfmv_v_tu(vbfloat16mf4_t vd, __bf16 rs1, size_t vl);
 vbfloat16mf2_t __riscv_vmv_v_tu(vbfloat16mf2_t vd, vbfloat16mf2_t vs1,
                                 size_t vl);
-vbfloat16mf2_t __riscv_vfmv_v_tu(vbfloat16mf2_t vd, __bf16 rs1, size_t vl);
 vbfloat16m1_t __riscv_vmv_v_tu(vbfloat16m1_t vd, vbfloat16m1_t vs1, size_t vl);
-vbfloat16m1_t __riscv_vfmv_v_tu(vbfloat16m1_t vd, __bf16 rs1, size_t vl);
 vbfloat16m2_t __riscv_vmv_v_tu(vbfloat16m2_t vd, vbfloat16m2_t vs1, size_t vl);
-vbfloat16m2_t __riscv_vfmv_v_tu(vbfloat16m2_t vd, __bf16 rs1, size_t vl);
 vbfloat16m4_t __riscv_vmv_v_tu(vbfloat16m4_t vd, vbfloat16m4_t vs1, size_t vl);
-vbfloat16m4_t __riscv_vfmv_v_tu(vbfloat16m4_t vd, __bf16 rs1, size_t vl);
 vbfloat16m8_t __riscv_vmv_v_tu(vbfloat16m8_t vd, vbfloat16m8_t vs1, size_t vl);
-vbfloat16m8_t __riscv_vfmv_v_tu(vbfloat16m8_t vd, __bf16 rs1, size_t vl);
 ----
 
 [[policy-variant-overloadedvector-bf16-merge]]

From d5cd0945ce6c10d9d00a8ae20dde25eac1fb775d Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Wed, 16 Oct 2024 16:25:16 +0800
Subject: [PATCH 145/151] report: fix wrong categorization logic causing
 unexpected pass (#373)

Signed-off-by: Jerry Zhang Jian
---
 rvv-intrinsic-generator/rvv_intrinsic_gen/testing-report | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/rvv-intrinsic-generator/rvv_intrinsic_gen/testing-report b/rvv-intrinsic-generator/rvv_intrinsic_gen/testing-report
index 68024cd3f..73cfd7f2e 100755
--- a/rvv-intrinsic-generator/rvv_intrinsic_gen/testing-report
+++ b/rvv-intrinsic-generator/rvv_intrinsic_gen/testing-report
@@ -90,8 +90,8 @@ def api_testing_report(stats, opts):
       # passed if the number of warning equal to line number
       with open(log, 'r') as fp:
         last_line = fp.readlines()[-1]
-        if last_line.count('error') == 0:
-          if last_line.count('warning') != 0 and opts.warning_as_error:
+        if opts.warning_as_error:
+          if 'error' not in last_line and 'warning' in last_line:
             stats[grp][subgrp].failed_list.append(testname)
             result.failed_list.append(testname)
             test_case.result = [Error("Treat warning as error", "warning")]

From 61090d5828e64f0252a7e70a4d39e26d7eb8664e Mon Sep 17 00:00:00 2001
From: Camel Coder
Date: Mon, 14 Oct 2024 18:07:33 +0200
Subject: [PATCH 146/151] Encourage versioned compatibility check

Signed-off-by: Camel Coder
---
 doc/rvv-intrinsic-spec.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/rvv-intrinsic-spec.adoc b/doc/rvv-intrinsic-spec.adoc
index 7c9c5570e..e3dbed859 100644
--- a/doc/rvv-intrinsic-spec.adoc
+++ b/doc/rvv-intrinsic-spec.adoc
@@ -22,7 +22,7 @@ To leverage the intrinsics in the toolchain, the header `<riscv_vector.h>` needs
 
 [,c]
 ----
-#ifdef __riscv_v_intrinsic
+#if __riscv_v_intrinsic >= 1000000
 #include <riscv_vector.h>
 #endif /* __riscv_v_intrinsic */
 ----

From 1e9f86b9f5110f11664d0de6381e5baf68f0d7d2 Mon Sep 17 00:00:00 2001
From: Kito Cheng
Date: Thu, 17 Oct 2024 15:11:42 +0800
Subject: [PATCH 147/151] Update name for Olaf in preface

I guess we didn't map the GitHub account to the name correctly when
making preface.adoc :P
---
 doc/preface.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/preface.adoc b/doc/preface.adoc
index bf0d5aa66..5c161e570 100644
--- a/doc/preface.adoc
+++ b/doc/preface.adoc
@@ -17,7 +17,6 @@ This RISC-V specification has been contributed to directly or indirectly by (in
 Contributors to all versions of the spec in alphabetical order:
 Brandon Wu,
-Camel Coder,
 Craig Topper,
 Eop Chen,
 HanKuan Chen,
@@ -25,6 +24,7 @@ HsiangKai Wang,
 Jerry Zhang Jian,
 Kito Cheng,
 Nick Knight,
+Olaf Bernstein,
 Roger Ferrer Ibanez,
 Yi-Hsiu Hsu,
 Zakk Chen

From d0669e207f95afab589daec62582b1e315399a9c Mon Sep 17 00:00:00 2001
From: Jerry Zhang Jian
Date: Thu, 17 Oct 2024 15:19:35 +0800
Subject: [PATCH 148/151] github: setup test matrix for LLVM API tests (#372)

* github: setup test matrix for LLVM API tests

- Add test matrix to run LLVM API tests with the following two LLVM
  versions
  - Latest release tag
  - Latest trunk commit

Signed-off-by: Jerry Zhang Jian

* github: make full use of vCPU when building LLVM

Signed-off-by: Jerry Zhang Jian

---------

Signed-off-by: Jerry Zhang Jian
---
 .github/workflows/clang-compilation.yml | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/.github/workflows/clang-compilation.yml b/.github/workflows/clang-compilation.yml
index 74a114829..dae68e44f 100644
--- a/.github/workflows/clang-compilation.yml
+++ b/.github/workflows/clang-compilation.yml
@@ -4,6 +4,9 @@ on: [push]
 jobs:
   build:
     runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        llvm-version: ["main", "latest-rel"]
     steps:
       - uses: actions/checkout@v4
       - uses: actions/setup-python@v5
       - name: Download LLVM
         run: |
           cd ..
           rm -rf llvm-project
-          git clone https://github.com/llvm/llvm-project
+          git clone https://github.com/llvm/llvm-project -j `nproc`
+      - name: Checkout LLVM version
+        run: |
+          cd ../llvm-project
+          if [ "${{ matrix.llvm-version }}" = "latest-rel" ]; then
+            latestTag=$(git describe --tags `git rev-list --tags --max-count=1`)
+            git checkout $latestTag
+          fi
       - name: Build LLVM with Ninja
         run: |
           cd ../llvm-project
@@ -34,7 +44,7 @@
           -DLLVM_DEFAULT_TARGET_TRIPLE="riscv64-unknown-linux-gnu" \
           -DLLVM_ENABLE_PROJECTS="clang;lld" \
           ../llvm
-          ninja -j 4
+          ninja -j `nproc`
           echo $(pwd)
           ls bin
       - name: Run compilation test, non-overloaded intrinsics (default (TAMA) policy)

From 3d81513dc217f91130e0cf86ec2eb82432f73fd8 Mon Sep 17 00:00:00 2001
From: Kito Cheng
Date: Tue, 19 Nov 2024 10:40:10 +0800
Subject: [PATCH 149/151] Update README for the compiler support

Fix #361
---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 09145f67b..c0e80dbbe 100644
--- a/README.md
+++ b/README.md
@@ -6,6 +6,8 @@ Working draft for the RISC-V vector specification are under [doc/](doc/), intrin
 Please check out the latest intrinsics specification under
 [Releases](https://github.com/riscv-non-isa/rvv-intrinsic-doc/releases).
 
+[Clang 19](https://github.com/llvm/llvm-project/blob/llvmorg-19.1.0/llvm/docs/RISCV/RISCVVectorExtension.rst) and [GCC 14](https://github.com/gcc-mirror/gcc/tree/releases/gcc-14) support the [v1.0](https://github.com/riscv-non-isa/rvv-intrinsic-doc/tree/v1.0.x) version.
+
 [Clang 17](https://releases.llvm.org/17.0.1/tools/clang/docs/ReleaseNotes.html) and
 [GCC trunk](https://github.com/gcc-mirror/gcc/tree/master) supports the
 [v0.12](https://github.com/riscv-non-isa/rvv-intrinsic-doc/releases/tag/v0.12.0) version, no more incompatibility will be introduced.
 
 [Clang 16](https://releases.llvm.org/16.0.0/tools/clang/docs/ReleaseNotes.html) and

From 39bd4ba464b0acf675bd96756f3a6831967f17f0 Mon Sep 17 00:00:00 2001
From: Kito Cheng
Date: Tue, 19 Nov 2024 10:25:09 +0800
Subject: [PATCH 150/151] `__riscv_v_intrinsic` should always be defined if
 the compiler supports RVV intrinsics.

Fix #376
---
 doc/rvv-intrinsic-spec.adoc | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/rvv-intrinsic-spec.adoc b/doc/rvv-intrinsic-spec.adoc
index e3dbed859..5f9260805 100644
--- a/doc/rvv-intrinsic-spec.adoc
+++ b/doc/rvv-intrinsic-spec.adoc
@@ -8,6 +8,8 @@ This document uses the term "RVV" as an abbreviation for the RISC-V "V" extensio
 
 The `__riscv_v_intrinsic` macro is the C macro to test the compiler's support for the RISC-V "V" extension intrinsics.
 
+This macro should be defined even if the vector extension is not enabled.
+
 The value of the test macro is defined as its version, which is computed using the following formula. The formula is identical to what is defined in the RISC-V C API specification cite:[riscv-c-api].
 ----

From b89859be2d016c28b6831e557846f06db06d14dd Mon Sep 17 00:00:00 2001
From: Kevin Broch
Date: Tue, 16 Jul 2024 14:23:07 -0700
Subject: [PATCH 151/151] add dependabot to create PRs to update the
 submodules to the latest versions

Signed-off-by: Kevin Broch
---
 .github/dependabot.yml | 8 ++++++++
 1 file changed, 8 insertions(+)
 create mode 100644 .github/dependabot.yml

diff --git a/.github/dependabot.yml b/.github/dependabot.yml
new file mode 100644
index 000000000..34d92e47e
--- /dev/null
+++ b/.github/dependabot.yml
@@ -0,0 +1,8 @@
+---
+# https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#package-ecosystem
+version: 2
+updates:
+  - package-ecosystem: gitsubmodule
+    directory: /
+    schedule:
+      interval: daily
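
Taken together, PATCH 146 and PATCH 150 establish the portable detection pattern: `__riscv_v_intrinsic` is defined whenever the compiler supports the RVV intrinsics, and its value follows the RISC-V C API version formula (major * 1,000,000 + minor * 1,000 + patch-level, so v1.0 is 1000000). The following is a minimal sketch of the versioned guard those patches encourage; the `HAVE_RVV_INTRINSICS` helper macro and the scalar-fallback convention are illustrative assumptions, not part of the specification:

[,c]
----
/* Versioned feature-test guard for the RVV intrinsics.
 * 1000000 encodes v1.0 of the intrinsics specification
 * (major * 1000000 + minor * 1000 + patch-level). */
#if defined(__riscv_v_intrinsic) && __riscv_v_intrinsic >= 1000000
#include <riscv_vector.h>
#define HAVE_RVV_INTRINSICS 1 /* vector code paths may be compiled */
#else
#define HAVE_RVV_INTRINSICS 0 /* illustrative: fall back to scalar code */
#endif
----

The extra `defined()` test is redundant on compilers that implement PATCH 150, since the macro is then always defined, but it keeps the guard safe on older toolchains where the macro may be absent.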